
Forum Replies Created

Viewing 16 - 30 of 301 posts
  • Carlo-TutorialsDojo

    Member
    January 18, 2024 at 6:01 pm

    Hello Lifelong1250,
    Thank you for posting this question.
    Just to give our readers full context on what we’re discussing, here are the given options for this question:

    Option 1 – Retrieve the data using Amazon Glacier Select
    Option 2 – Use Expedited Retrieval to access the financial data. (correct)
    Option 3 – Use Bulk Retrieval to access the financial data.
    Option 4 – Specify a range, or portion, of the financial data archive to retrieve.
    Option 5 – Purchase provisioned retrieval capacity. (correct)

    In this scenario, the key words are:

    • retrieve the required data in under 15 minutes
    • handle up to 150 MB/s of retrieval throughput

    When differentiating between retrieval options, the key metric is usually the access time, not the archive size. I understand that there are edge cases, like the one you mentioned, where archives over 250 MB may not be retrieved within 5 minutes using Expedited retrieval. In actual AWS exams, however, these minor details and edge cases typically aren’t the focus when distinguishing between options. When we write questions, we do our best to stick to the same style, wording, and format that we see in actual exams so that our users get accustomed to what the real test will be like. With this said, I believe the scenario we provided contains enough detail to pick the correct answers.
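    For reference, here’s a minimal boto3 sketch of what the two correct options look like in practice (the vault name and archive ID below are placeholders, not details from the question):

    import boto3

    glacier = boto3.client('glacier')

    # Purchase a provisioned capacity unit so Expedited retrievals have
    # guaranteed throughput (up to 150 MB/s per unit).
    glacier.purchase_provisioned_capacity(accountId='-')

    # Initiate an Expedited archive retrieval (typically 1-5 minutes).
    glacier.initiate_job(
        accountId='-',
        vaultName='financial-archives',         # placeholder vault name
        jobParameters={
            'Type': 'archive-retrieval',
            'ArchiveId': 'EXAMPLE-ARCHIVE-ID',  # placeholder archive ID
            'Tier': 'Expedited',
        },
    )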

    Let me know if this answers your question.

    Regards,

    Carlo @ Tutorials Dojo

  • Carlo-TutorialsDojo

    Member
    January 18, 2024 at 4:31 pm

    Hello Lifelong1250,

    Let me answer your question.

    Just for context, here’s the rationale that’s given for the option: Use EC2 Dedicated Instances with elastic inference accelerator

    “..is incorrect because these are EC2 instances that run in a VPC on hardware that is dedicated to a single customer and are physically isolated at the host hardware level from instances that belong to other AWS accounts. It is not used for reducing latency. In addition, elastic inference accelerators only enable customers to attach low-cost GPU-powered acceleration to Amazon EC2, Amazon SageMaker instances and other resources”

    EC2 Dedicated Instances do isolate your instances at the host hardware level. However, they don’t guarantee that your instances will always launch on the same physical server. What they do ensure is that the underlying hardware your instances run on is exclusively yours and not shared with other customers. There may therefore be situations where your instances are not colocated and are hosted on separate hardware. For that reason, the use of EC2 Dedicated Instances is incorrect.
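    To illustrate: dedicated tenancy is just a launch setting, and nothing about it pins instances to one physical server. Here’s a minimal boto3 sketch (the AMI ID and instance type are placeholders):

    import boto3

    ec2 = boto3.client('ec2')

    # 'dedicated' tenancy isolates the host hardware from other AWS
    # accounts, but the two instances launched below may still land on
    # different physical servers.
    ec2.run_instances(
        ImageId='ami-0123456789abcdef0',  # placeholder AMI ID
        InstanceType='c5.xlarge',         # placeholder instance type
        MinCount=2,
        MaxCount=2,
        Placement={'Tenancy': 'dedicated'},
    )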

    Let me know if this helps.

    Regards,

    Carlo @ Tutorials Dojo

  • Carlo-TutorialsDojo

    Member
    January 5, 2024 at 6:12 pm

    Hello fancypants and dotcloud,

    Thank you for your feedback.

    We apologize for the inaccurate information in this particular item. First, Global Accelerator runs on the same internal AWS network that CloudFront uses to deliver content, so it also has a global edge presence. Hence, it doesn’t make much sense to put Global Accelerator behind a CloudFront distribution in hopes of further ‘optimizing’ requests. Second, the two services are designed to optimize different aspects of the network path, so layering them isn’t necessary and could add complexity and cost.

    We’ll work on improving this item.

    Regards,

    Carlo @ Tutorials Dojo

  • Carlo-TutorialsDojo

    Member
    January 3, 2024 at 12:59 pm

    Hello fernando-6,

    Thank you so much for your continued support and for using our practice materials. We appreciate customers like you who are willing to speak up, because it helps us understand our products and improve them.

    I understand your points, and I’d like to clarify our stance. I believe that how ‘bad’ a throughput figure is should be judged relative to the stated requirements rather than taken at face value. In context, while Cold HDD might not be ideal for high-performance scenarios, it may suffice for low-throughput needs. Regarding your point about the term ‘throughput-oriented’: in discussing EBS volumes, we primarily focus on two performance-defining metrics, IOPS and throughput, each suited to specific use cases. Our intention in using the term ‘throughput-oriented’ is to clearly indicate that we’re looking for a solution that prioritizes transferring large volumes of data. If someone asked me to pick a volume for reading data in large, sequential blocks of hundreds of MBs, wouldn’t a throughput-oriented type be a better fit than an IOPS-oriented one?

    For this reason, I’m puzzled as to how Cold HDD would contradict the requirements of this scenario in any manner. As you’ve already said, Provisioned IOPS wouldn’t be ideal; it would be overkill for the job and more expensive. But what if there’s an alternative like Cold HDD that matches our workload’s demands and comes at a lower cost? Wouldn’t that be a more appropriate solution?
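    As a concrete sketch (the Availability Zone and volume size below are placeholders), provisioning a throughput-oriented Cold HDD volume with boto3 looks like this:

    import boto3

    ec2 = boto3.client('ec2')

    # Cold HDD (sc1) is the throughput-oriented, lowest-cost EBS volume
    # type, suited to large, sequential, infrequently accessed workloads.
    ec2.create_volume(
        AvailabilityZone='us-east-1a',  # placeholder AZ
        Size=500,                       # size in GiB (placeholder)
        VolumeType='sc1',
    )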
