
  • Category: CSAP – Continuous Improvement for Existing Solutions – Question

  • Bharath Ram Chandrasekar

    Member
    November 25, 2024 at 10:39 am

    A company wants to release a weather forecasting app for mobile users. The application servers generate a weather forecast every 15 minutes, and each forecast update overwrites the older forecast data. Each weather forecast outputs approximately 1 billion unique data points, where each point is about 20 bytes in size, which results in about 20 GB of data per forecast. Approximately 1,500 global users access the forecast data concurrently every second, and this traffic can spike up to 10 times higher during weather events. The company wants users to have a good experience with the weather forecast application, so it requires each user query to be processed in less than two seconds.

    Which of the following solutions will meet the required application request rate and response time?
    I am not convinced by the option "Use an Amazon EFS volume to store the weather forecast data points. Mount this EFS volume on a fleet of Auto Scaling Amazon EC2 instances behind an Elastic Load Balancer. Create an Amazon CloudFront distribution and point the origin to the ELB. Configure a 15-minute cache-control timeout for the CloudFront distribution."
    I feel the option "Create an Amazon OpenSearch cluster to store the weather forecast data points. Write AWS Lambda functions to query the ES cluster. Create an Amazon CloudFront distribution and point the origin to an Amazon API Gateway endpoint that invokes the Lambda functions. Write an Amazon Lambda@Edge function to cache the data points on edge locations for a 15-minute duration." is the more viable solution.
    Could you please explain?
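    For reference, here is a quick back-of-the-envelope check of the figures quoted in the scenario, written as a minimal Python sketch (all inputs are taken directly from the question above):

        # Rough sizing check based on the numbers in the scenario.
        points_per_forecast = 1_000_000_000   # ~1 billion data points per forecast
        bytes_per_point = 20                  # ~20 bytes per data point
        forecast_size_gb = points_per_forecast * bytes_per_point / 1e9
        print(forecast_size_gb)               # ~20 GB per forecast, regenerated every 15 minutes

        baseline_rps = 1_500                  # concurrent user queries per second
        peak_rps = baseline_rps * 10          # traffic can spike up to 10x during weather events
        print(peak_rps)                       # ~15,000 requests per second at peak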

  • JR-TutorialsDojo

    Administrator
    November 26, 2024 at 2:06 pm

    Hello Bharath Ram Chandrasekar,

    Thank you for reaching out to us.

    You preferred the solution that involves creating an Amazon OpenSearch cluster, writing AWS Lambda functions to query the OpenSearch cluster, creating an Amazon CloudFront distribution, and using Lambda@Edge to cache data points at edge locations for 15 minutes.

    Based on the given explanation, this solution is workable, as it offers several advantages such as efficient querying with OpenSearch and global low-latency access with CloudFront and Lambda@Edge. However, it's important to consider the default burst concurrency limits of AWS Lambda, which range from 500 to 3,000 concurrent executions depending on the Region. During peak traffic, these limits might prevent the Lambda functions from scaling quickly enough to meet demand, which could push response times past the two-second requirement.
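    To illustrate what the Lambda@Edge part of that option might look like, here is a minimal Python sketch of an origin-response handler that sets a 15-minute Cache-Control header so CloudFront keeps the forecast responses at edge locations. The handler name and header values are illustrative assumptions, not code from the course material:

        # Hypothetical Lambda@Edge function attached to the CloudFront
        # "origin response" event. It overrides Cache-Control so that edge
        # locations cache each forecast response for 900 seconds (15 minutes),
        # matching the forecast refresh interval.
        def lambda_handler(event, context):
            response = event["Records"][0]["cf"]["response"]
            headers = response.setdefault("headers", {})

            # CloudFront honors this header when deciding how long to keep
            # the object cached at the edge.
            headers["cache-control"] = [
                {"key": "Cache-Control", "value": "public, max-age=900"}
            ]
            return response

    Even with this caching in place, cache misses still invoke the Lambda functions behind API Gateway, which is where the burst concurrency limits mentioned above come into play.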

    Hence, the correct answer is: Use an Amazon EFS volume to store the weather forecast data points, mounted on a fleet of Auto Scaling Amazon EC2 instances behind an Elastic Load Balancer (ELB), with an Amazon CloudFront distribution (configured with a 15-minute cache-control timeout) in front of the ELB. This setup handles high concurrency and significant traffic spikes, and CloudFront caching reduces latency and offloads the origin.
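    As a rough sketch of the recommended setup, the application tier on the Auto Scaling EC2 instances could read the latest forecast from the shared EFS mount and return a 15-minute Cache-Control header so CloudFront serves most requests from its edge caches. The framework, route, and mount path below are assumptions for illustration only:

        # Minimal Flask app running on each EC2 instance behind the ELB.
        # It serves forecast data from an EFS mount (path is an assumption)
        # and sets Cache-Control so CloudFront caches each object for
        # 15 minutes, matching the forecast refresh cycle.
        from flask import Flask, Response, abort

        app = Flask(__name__)
        FORECAST_DIR = "/mnt/efs/forecasts"   # hypothetical EFS mount point

        @app.route("/forecast/<region>")
        def forecast(region):
            # Illustrative only; real code would validate the region value.
            try:
                with open(f"{FORECAST_DIR}/{region}.bin", "rb") as fh:
                    data = fh.read()
            except FileNotFoundError:
                abort(404)

            # max-age=900 lets CloudFront absorb most of the 1,500+ requests
            # per second (and the 10x spikes) without hitting EC2 or EFS.
            return Response(
                data,
                mimetype="application/octet-stream",
                headers={"Cache-Control": "public, max-age=900"},
            )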

    We would appreciate your perspective on this matter. Could you please explain why you believe the Amazon OpenSearch and Lambda@Edge solution is more viable, given the potential scaling issues with Lambda functions during peak traffic?

    Thank you once again.

    Regards,
    JR @ Tutorials Dojo
