Out dated question on Amazon Elastic Inference – AEI
Irene-TutorialsDojo updated 4 months, 4 weeks ago · 2 Members · 2 Posts
Amazon Elastic Inference (AEI) is no longer offered as a standalone service; its functionality has been superseded by SageMaker's own inference options.
Category: MLS – Machine Learning Implementation and Operations
A Machine Learning Specialist has trained an Apache MXNet model using Amazon SageMaker. The Specialist wants to accelerate his inference workloads without having to pay for expensive GPU-based instances.
Which is the MOST cost-effective solution for this problem?
Use a P2 or P3 instance type for inference.
* Use Amazon Elastic Inference.
Use Amazon Kinesis Data Streams to get a real-time inference.
Use Inference Pipeline.
—
Reference: https://aws.amazon.com/blogs/machine-learning/model-serving-with-amazon-elastic-inference/
Hello PKX01,
Thank you for bringing this to our attention.
You are correct that Amazon Elastic Inference (AEI) has been deprecated and is no longer available as a standalone option. The original intent of the question was to highlight a cost-effective way to accelerate inference without relying on GPU-based instances, but AWS now recommends using newer solutions such as SageMaker with Inferentia (Inf1/Inf2) instances, serverless inference, or multi-model endpoints for that purpose. We’ll update this item to reflect the latest AWS guidance so that it remains accurate and aligned with current best practices.
If you have further questions or need additional clarification, please don’t hesitate to contact us.
Best,
Irene @ Tutorials Dojo