
Forum Replies Created

Viewing 1 - 15 of 67 posts
  • JR-TutorialsDojo

    Administrator
    April 9, 2024 at 8:22 pm

    Hello Francesca Andreoni,

    Thank you for bringing this to our attention.

    You’re correct; server-side encryption with Amazon S3 managed keys (SSE-S3) is enabled by default. We will make the necessary updates, and this should be reflected on the portal as soon as possible.

    Best Regards,
    JR @ Tutorials Dojo

  • JR-TutorialsDojo

    Administrator
    April 2, 2024 at 1:27 pm

    Hello juano1985,

    It would be better if you could provide a snippet of the question so we can look it up. But yes, you’re correct. Spot Instances offer a cost-effective option if you can adjust the running times of your applications and if they can tolerate interruptions. However, Spot capacity might be unavailable during periods of extremely high demand, which can delay Amazon EC2 Spot Instance launches when the overall demand for capacity increases.

    To mitigate the risk of downtime, Amazon ECS allows a container instance to be transitioned to DRAINING status. While in DRAINING status, Amazon ECS prevents new tasks from being scheduled on the container instance, and the service scheduler launches replacement tasks if the cluster has sufficient container instance capacity. If capacity is insufficient, a service event message is emitted indicating the issue.
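    If it helps, here is a rough sketch of how a container instance can be set to DRAINING with the AWS SDK for Python (boto3). The cluster name and instance ARN are placeholders, and the actual API call is shown as a comment since it requires AWS credentials:

    ```python
    # Placeholder identifiers - substitute your own cluster and instance ARN.
    CLUSTER = "my-ecs-cluster"
    CONTAINER_INSTANCE_ARN = (
        "arn:aws:ecs:us-east-1:123456789012:"
        "container-instance/my-ecs-cluster/abcd1234"
    )

    def build_draining_request(cluster: str, instance_arn: str) -> dict:
        """Parameters for the ECS UpdateContainerInstancesState API call.

        Setting the status to DRAINING stops new tasks from being placed
        on the instance while the service scheduler starts replacement
        tasks elsewhere in the cluster, capacity permitting.
        """
        return {
            "cluster": cluster,
            "containerInstances": [instance_arn],
            "status": "DRAINING",
        }

    request = build_draining_request(CLUSTER, CONTAINER_INSTANCE_ARN)

    # With boto3 installed and credentials configured, the call itself is:
    #   import boto3
    #   ecs = boto3.client("ecs")
    #   ecs.update_container_instances_state(**request)
    ```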

    I hope this helps. Let us know if you have any further questions.

    Regards,
    JR @ Tutorials Dojo

  • JR-TutorialsDojo

    Administrator
    March 7, 2024 at 1:54 pm

    Hi juano1985,

    Thanks for your feedback.

    IAM policies and SCPs serve different purposes but can complement each other for enhanced security and management in AWS.

    • IAM Policy: An IAM policy is more granular and applies to users, groups, and roles within a specific AWS account. It allows or denies permissions to specific AWS services and resources and provides fine-grained control within an AWS account.

    • Service Control Policies (SCPs): An SCP is used at the organization level to set permission boundaries for all AWS accounts within the organization. It’s a way to centrally control the maximum available permissions for all accounts in your organization.

    Having both allows for layered security – SCPs ensure organization-wide compliance with certain restrictions, while IAM policies provide detailed permissions within each account. So, even if an SCP restricts the ec2:RunInstances action across all accounts, having an IAM policy in each AWS account provides an additional layer of security by ensuring that the required tags are added at the account level. This way, even if the SCP were to be modified or removed, the IAM policy would still enforce the tagging requirement. Therefore, using SCPs and IAM policies together provides a more robust and flexible security configuration.
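    For illustration only, a minimal SCP along these lines (expressed here as a Python dict; the Project tag key is an assumed example) would deny ec2:RunInstances for any instance launched without that tag:

    ```python
    import json

    # Sketch of an SCP that denies launching EC2 instances without a
    # "Project" tag. The aws:RequestTag condition key is evaluated at
    # launch time, and the "Null" operator is true when the tag is
    # absent from the request, so the Deny takes effect.
    SCP_REQUIRE_PROJECT_TAG = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "DenyRunInstancesWithoutProjectTag",
                "Effect": "Deny",
                "Action": "ec2:RunInstances",
                "Resource": "arn:aws:ec2:*:*:instance/*",
                "Condition": {
                    "Null": {"aws:RequestTag/Project": "true"}
                },
            }
        ],
    }

    print(json.dumps(SCP_REQUIRE_PROJECT_TAG, indent=2))
    ```

    A similarly shaped identity-based IAM policy attached in each member account enforces the same rule at the account level even if the SCP is later detached.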

    Please refer to this: https://tutorialsdojo.com/service-control-policies-scp-vs-iam-policies/

    I hope this helps! Let me know if you have any further questions.

    Best Regards.

  • JR-TutorialsDojo

    Administrator
    February 14, 2024 at 6:55 pm

    Hi VitalyKr,

    Thank you for your feedback.

    The options provided in the question are indeed based on different ways to implement identity federation between on-premises systems and AWS. The correct options describe scenarios where a broker service, either the web application itself or a separate identity broker, is used to authenticate against the on-premises LDAP server and then call AWS STS to get temporary credentials. These are valid scenarios, but I understand that the wording might have caused some confusion.

    Regarding the diagram, it was not intended to be misleading or to make the question a trick one. The diagram provided is a broad representation of identity federation implementations, encompassing both SAML and non-SAML solutions. It’s designed to illustrate the general process rather than match every specific scenario.

    We value your input and will make the necessary updates to improve the clarity of the question and its options. These changes should be reflected on the portal as soon as possible. Thank you again for helping us improve our service.

    Best Regards.

  • JR-TutorialsDojo

    Administrator
    February 6, 2024 at 2:11 pm

    Hi AWSPro21,

    Thanks for your feedback.

    I understand your confusion. The answer does not mention SNS (Simple Notification Service), but it is implied in the context of mobile push notifications, which are a feature of Amazon SNS. We will make sure to specify Amazon SNS to prevent any ambiguity.

    As for SQS (Simple Queue Service), the mobile app can indeed send messages directly to an Amazon SQS queue. This can be done by using the AWS SDK within the mobile app to call the SendMessage API of Amazon SQS. This would place the message directly into the queue.
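    As a rough sketch (the queue URL is a placeholder), building and sending such a message with the AWS SDK for Python looks like this; the actual network call is shown as a comment since it requires credentials:

    ```python
    import json

    # Placeholder queue URL - use your own queue's URL here.
    QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/mobile-app-queue"

    def build_send_message_params(payload: dict) -> dict:
        """Parameters for the SQS SendMessage API call."""
        return {
            "QueueUrl": QUEUE_URL,
            "MessageBody": json.dumps(payload),
        }

    params = build_send_message_params({"event": "order_placed", "order_id": "1001"})

    # With boto3 installed and credentials configured:
    #   import boto3
    #   sqs = boto3.client("sqs")
    #   response = sqs.send_message(**params)
    #   print(response["MessageId"])
    ```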

    I hope this helps! If you have any other questions or need further clarification, feel free to ask.

    Best Regards,
    JR @ Tutorials Dojo

  • JR-TutorialsDojo

    Administrator
    February 5, 2024 at 1:22 pm

    Hi alexander.friesen,

    Could you please post the complete question so we can look it up?

  • JR-TutorialsDojo

    Administrator
    January 24, 2024 at 9:26 pm

    Hi AWSPro21,

    Thanks for your feedback.

    In the given scenario, the company has a large amount of video data stored on tape drives in their on-premises data center. They want to move this data to AWS and also analyze the data to build a metadata library. The scenario also does not specify that the on-premises tape library needs to be replaced. It’s important to understand the difference between the AWS Storage Gateway’s File Gateway and Tape Gateway options and how they align with the company’s requirements.

    The AWS Storage Gateway service provides a way to securely move on-premises data to AWS. It offers different types of gateways, including File Gateway and Tape Gateway, each suited for different use cases.

    The Tape Gateway is designed to help you move away from physical tapes towards a virtual tape library in the cloud. It’s a great solution if you’re looking to archive data in AWS for long-term retention, typically using Amazon S3 Glacier or Glacier Deep Archive. However, it’s not designed for frequent data access or analysis.

    On the other hand, the File Gateway provides a seamless way to connect to the cloud to store application data files and backup images as durable objects on Amazon S3. It provides low-latency access to data through transparent local caching. This means that frequently accessed data is cached on-premises, providing your applications with low-latency access, while securely and durably storing your data in Amazon S3.

    In the context of the given question, the company not only wants to move the data to AWS, but they also want to frequently access and analyze the data using Amazon Rekognition. Therefore, the File Gateway would be a more suitable solution because it provides the low-latency access needed for frequent data analysis.

    I hope this helps! If you have any more questions, feel free to ask.

    Best Regards,
    JR @ Tutorials Dojo

  • JR-TutorialsDojo

    Administrator
    January 16, 2024 at 2:14 pm

    Hi fancypants,

    Thank you for bringing this to our attention. We will make the necessary updates, and this should be reflected in our practice test as soon as possible.

    Best Regards,
    JR @ Tutorials Dojo

  • JR-TutorialsDojo

    Administrator
    January 15, 2024 at 5:50 pm

    Hi azurj,

    I apologize for the delay in responding to your query. I understand that you’ve encountered an issue with our learning portal, specifically with a quiz that you’ve taken.

    Our learning portal underwent a maintenance update, which unfortunately resulted in some latency issues. These issues may have affected the performance of the quizzes, including the one you took. However, I want to assure you that we resolved these issues promptly.

    You can visit our public status page for more information.
    http://status.tutorialsdojo.com/

    Please don’t hesitate to reach out if you’re still experiencing any issues or if you have any other concerns.

    Best Regards,
    JR @ Tutorials Dojo

  • JR-TutorialsDojo

    Administrator
    January 15, 2024 at 12:17 pm

    Hi sac,

    Thanks for your feedback.

    The key difference between the two approaches lies in the enforcement of the rules and the timing of their application.

    1. IAM Policy: An IAM policy that denies the ec2:RunInstances action if the Project tag is not applied is enforced at the time of resource creation. This means that if a user tries to create an EC2 instance without the required Project tag, the action will be denied and the resource will not be created. This is a proactive approach that prevents the creation of improperly tagged resources in the first place.

    2. AWS Config Rule: On the other hand, an AWS Config rule that flags any resources if the Project tag is not applied is a reactive approach. The rule checks for compliance after the resource has been created. If a resource is found to be non-compliant (i.e., it does not have the required Project tag), it is flagged for review. However, the resource is still created and may incur costs until it is reviewed and corrected.

    In this scenario, the goal is to prevent the creation of resources that do not have the required Project tag to ensure accurate cost reports. Therefore, applying an IAM policy that denies the creation of such resources is a more effective solution. The AWS Config rule, while useful for identifying non-compliant resources, does not prevent their creation and may result in inaccurate cost reports until the non-compliance is rectified.
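    To make the proactive approach concrete, here is a sketch of such an identity-based policy, expressed as a Python dict (the Project tag key comes from the scenario; the resource ARN pattern is an assumption):

    ```python
    import json

    # Sketch of an IAM policy that denies ec2:RunInstances when the
    # request does not include a "Project" tag. The "Null" condition
    # operator is true when the aws:RequestTag/Project key is missing,
    # so the Deny applies and the launch is blocked before the instance
    # (and any cost) is created.
    DENY_UNTAGGED_RUN_INSTANCES = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Deny",
                "Action": "ec2:RunInstances",
                "Resource": "arn:aws:ec2:*:*:instance/*",
                "Condition": {
                    "Null": {"aws:RequestTag/Project": "true"}
                },
            }
        ],
    }

    print(json.dumps(DENY_UNTAGGED_RUN_INSTANCES, indent=2))
    ```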

    I hope this helps! Let me know if you have any further questions.

    Best Regards,
    JR @ Tutorials Dojo

  • JR-TutorialsDojo

    Administrator
    January 12, 2024 at 9:01 pm

    Hello,

    Thank you for bringing this to our attention. We apologize for the inconvenience you’re experiencing with our site.

    Could you please let us know if you are still experiencing this issue?

    Best Regards,
    JR @ Tutorials Dojo

  • JR-TutorialsDojo

    Administrator
    April 4, 2024 at 1:35 pm

    Hello juano1985,

    Your understanding seems to be correct. The question is about reducing costs and minimizing the probability of service interruptions, not eliminating them entirely.

    Amazon ECS on Spot Instances can indeed provide a cost-effective solution. However, AWS can reclaim these instances with a two-minute interruption notice when it needs the capacity back.

    To handle these potential interruptions, you can configure Spot Instance Draining. If a Spot Instance is marked for termination, the Amazon ECS container agent automatically sets the container instance state to DRAINING, which prevents new tasks from being scheduled for placement on the container instance.
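    As a sketch, Spot Instance Draining is turned on through the ECS container agent configuration, typically via EC2 user data on an ECS-optimized AMI. The cluster name below is a placeholder; the user data is held in a Python string here only so the fragment is self-contained:

    ```python
    # Sketch: EC2 user data that enables Spot Instance Draining on an
    # ECS container instance. ECS_ENABLE_SPOT_INSTANCE_DRAINING is a
    # documented ECS container agent setting; when true, the agent sets
    # the instance to DRAINING on receiving a Spot interruption notice.
    USER_DATA = """#!/bin/bash
    cat <<'EOF' >> /etc/ecs/ecs.config
    ECS_CLUSTER=my-spot-cluster
    ECS_ENABLE_SPOT_INSTANCE_DRAINING=true
    EOF
    """

    print(USER_DATA)
    ```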

    So, in the context of the given question, using Amazon ECS with Spot Instances and configuring Spot Instance Draining would be a suitable solution. It allows for cost reduction (due to the use of Spot Instances) and reduces the probability of service interruptions (through the use of Spot Instance Draining).


    I hope this helps clarify the question!

    Regards,
    JR @ Tutorials Dojo

  • JR-TutorialsDojo

    Administrator
    March 12, 2024 at 12:08 pm

    Hello Juan,

    Thank you so much for your kind words! It’s always a pleasure to hear from our users, and I’m thrilled to hear that our site has been helpful in your journey. Congratulations on passing your AWS Solutions Architect Associate exam. That’s a fantastic achievement!

    I’m glad to hear that you’re finding our practice tests useful as you prepare for your professional certification. It’s definitely a challenging step up, but it sounds like you’re approaching it with the right attitude and dedication.

    Remember, the key to success is consistency and understanding the concepts thoroughly. Don’t hesitate to reach out if you have any questions or need further clarification on any topics. We’re here to help.

    Thank you again for your feedback, and best of luck with your studies and upcoming exam.

    Cheers,
    JR @ Tutorials Dojo

  • JR-TutorialsDojo

    Administrator
    March 11, 2024 at 12:49 pm

    Hi juano1985,

    You’re correct that managing IAM policies in each AWS account can be a bit of an overhead. However, the purpose of having both SCPs and IAM policies is to provide layered security.

    To answer your question, yes, an SCP alone can restrict people from running EC2 instances without the required tags. If the SCP is in place and properly configured, it will enforce the restrictions as defined.

    However, the reason for also having IAM policies is to provide an additional layer of security at the account level. This is particularly useful in scenarios where the SCP might be modified or removed. With an IAM policy in place, the tagging requirement would still be enforced even if the SCP were removed.

    I hope this clarifies your question.

    Best Regards,
    JR @ Tutorials Dojo

  • JR-TutorialsDojo

    Administrator
    February 26, 2024 at 12:33 pm

    Hi VitalyKr,

    The flow you described does not necessarily mean that an on-premises user is directly accessing a VPC resource. The web application in the VPC is the one making the call to the on-premises custom identity broker, not the user directly. The user’s request is being forwarded through the web application, which acts as a proxy. This doesn’t violate the principle of separation of concerns as the web application isn’t handling the authorization itself, but rather coordinating with the on-premises broker.

    As for the STS calls, they are made by the on-premises broker, not the web application. The broker is responsible for authenticating the user against the LDAP server, calling STS to get temporary credentials, and then providing these credentials to the web application.

    If you have multiple applications in your VPC, you wouldn’t need to include the authorization logic in each app. Instead, the custom identity broker would handle the authorization for all applications, ensuring a separation of concerns. AWS does recommend using STS directly if you’re not using Cognito, but this doesn’t mean that the call to STS has to originate from an on-premises-hosted authorization orchestrator. It’s possible to securely call STS from a VPC-hosted application as long as you’re following best practices for securing the credentials used to make the STS call.

    I hope this helps! Let me know if you have any more questions.

    Best Regards.
