
  • Clarification on a Cost Control Question

  • jithin

    October 27, 2020 at 11:38 am

    I found the question below in one of the Tutorials Dojo practice exams, and I have a doubt about whether the given option is the most suitable one.


    A leading news company produces new high-definition video files every day on its on-premises network, which are then compressed into a single file with a total size of around 200 GB. The generated file needs to be uploaded to an Amazon S3 bucket between 12 AM and 2 AM every night for storage. Your current network bandwidth is 30 MB/s, and you want to maximize its throughput.

    Which of the following is the best and most cost-effective option which will ensure that the file uploads are completed in the allotted time window?


    The answer given by Tutorials Dojo is S3 multipart upload. But I notice that the file size is 200 GB, the network speed is 30 MB/s, and we need to complete the upload within the 12–2 AM window.

    If we calculate the maximum data the link can carry in that window: (30 MB/s × 60 s × 120 min) / 1024 ≈ 210 GB. That is the maximum upload possible, assuming the network performs at its optimum.

    So would S3 multipart upload be a correct choice here?
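    The back-of-the-envelope arithmetic above can be sketched out like this (an idealized calculation that ignores protocol overhead and real-world throughput loss):

```python
# Maximum data (GB) that can pass through a 30 MB/s link in a 2-hour window,
# assuming the link runs at full speed the entire time (ideal conditions).
bandwidth_mb_per_s = 30            # stated network bandwidth, MB/s
window_s = 120 * 60                # 12 AM to 2 AM = 120 minutes

max_upload_gb = bandwidth_mb_per_s * window_s / 1024
print(round(max_upload_gb, 1))     # ~210.9 GB, just above the 200 GB file
```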

  • Carlo-TutorialsDojo

    October 28, 2020 at 12:16 pm

    Hello jithin,

    First of all, bandwidth should not be confused with throughput. There are many factors that affect throughput, such as latency, ISP throttling, packet loss, the communication medium, and so on. It is practically impossible to determine what your throughput will be, as it varies on a case-by-case basis. But for the sake of the question, let's pretend that we are in an ideal environment.

    Even with a great network connection, a single stream can't fully utilize its bandwidth. The limitations of the TCP/IP protocol make it very difficult for a single application to saturate a network connection.

    So, in order to maximize the 30 MB/s bandwidth, we have to use multithreading, which S3 multipart upload does. We can break the 200 GB file into 10,000 parts of 20 MB each. Let's say we use the default concurrency configuration of multipart upload, which is 10 concurrent requests. In an ideal environment, each request then gets a throughput of about 24 Mbit/s, or 3 MB/s, so a single 20 MB part takes roughly 6.67 seconds to upload. But since we are uploading concurrently, ten of these 20 MB parts are in flight at any one time.

    10,000 parts / 10 concurrent requests = 1,000 batches.

    Each batch takes about 6.67 seconds to complete, and 1,000 × 6.67 ≈ 6,667 seconds, or roughly 1.85 hours.

    1.85 hours < 2 hours. So, yes, S3 multipart upload is the correct answer.
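    The derivation above can be written out as a small, idealized timing model. The 20 MB part size, the 10-request concurrency, and the perfectly even bandwidth split are the assumptions stated above (and the post treats 200 GB as 200,000 MB); real throughput will vary:

```python
# Idealized timing model for uploading a 200 GB file over a 30 MB/s link
# using S3 multipart upload: 10,000 parts of 20 MB, 10 parts at a time.
file_mb = 200 * 1000          # 200 GB, counted in decimal MB as in the post
part_mb = 20                  # size of each multipart chunk
link_mb_per_s = 30            # total link bandwidth
concurrency = 10              # concurrent part uploads

parts = file_mb // part_mb                        # 10,000 parts
per_part_mb_per_s = link_mb_per_s / concurrency   # 3 MB/s per concurrent request
part_time_s = part_mb / per_part_mb_per_s         # ~6.67 s per 20 MB part
batches = parts // concurrency                    # 1,000 batches of 10 parts
total_hours = batches * part_time_s / 3600

print(round(total_hours, 2))                      # ~1.85 hours < 2-hour window
```

    (In practice, tools like the AWS CLI and boto3 expose these same knobs, e.g. multipart chunk size and max concurrency, in their transfer configuration.)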



  • jithin

    November 4, 2020 at 3:43 pm

    • Carlo-TutorialsDojo

      November 4, 2020 at 6:39 pm

      Hello jithin,

      It seems that you posted an external question that is not part of our practice tests. We highly suggest that you contact the author of the practice question you posted, as they are the ones with the correct answer.

      We provide full support for our own content, but not for external material. More importantly, please make sure that the questions you post comply with Section 2.3 (Confidentiality) of the AWS Certification Program Agreement.

      Let us know if you need further assistance. The Tutorials Dojo team is dedicated to helping you pass your AWS exam on your first try!


      Carlo @ TutorialsDojo
