Data grows larger every day, and it will grow faster still as IoT adoption progresses. The natural choice for storing and processing data at scale is a cloud service, with AWS being the most popular. AWS provides services for every step of the data analytics pipeline: collecting, storing, processing, and analyzing data to obtain meaningful insights.
The Amazon Transcribe service recognizes speech in audio files and converts it to text. It can also identify the individual speakers in an audio clip. We can use it to convert audio to text and to build applications that work with the content of audio files.
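As a rough sketch of how this looks in practice, the boto3 call below starts a transcription job with speaker identification enabled. The job name, bucket, and file are placeholders invented for illustration, not values from the article:

```python
def build_transcribe_request(job_name, media_uri, max_speakers=2):
    """Assemble parameters for Transcribe's StartTranscriptionJob,
    with speaker labels enabled so individual speakers are identified."""
    return {
        "TranscriptionJobName": job_name,
        "Media": {"MediaFileUri": media_uri},
        "MediaFormat": "mp3",
        "LanguageCode": "en-US",
        "Settings": {
            "ShowSpeakerLabels": True,        # tag each segment with a speaker
            "MaxSpeakerLabels": max_speakers,
        },
    }

def start_transcription(job_name, media_uri):
    # Requires boto3 and AWS credentials configured in the environment.
    import boto3
    transcribe = boto3.client("transcribe")
    transcribe.start_transcription_job(
        **build_transcribe_request(job_name, media_uri)
    )
```

The job runs asynchronously; once it finishes, the transcript (a JSON document with per-speaker segments) can be retrieved with `get_transcription_job`.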
Amazon Rekognition can be used to add image and video analysis to applications. For any given image or video, the Rekognition API can identify objects, people, text, scenes, and activities.
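A minimal sketch of object detection with Rekognition's DetectLabels API is shown below. The bucket and key are hypothetical placeholders, and the confidence threshold is an assumed default:

```python
def build_detect_labels_request(bucket, key, max_labels=10, min_confidence=80.0):
    """Assemble parameters for Rekognition's DetectLabels on an image in S3."""
    return {
        "Image": {"S3Object": {"Bucket": bucket, "Name": key}},
        "MaxLabels": max_labels,
        "MinConfidence": min_confidence,  # drop low-confidence labels
    }

def detect_labels(bucket, key):
    # Requires boto3 and AWS credentials configured in the environment.
    import boto3
    rekognition = boto3.client("rekognition")
    response = rekognition.detect_labels(**build_detect_labels_request(bucket, key))
    # Each detected label carries a name and a confidence score.
    return [(label["Name"], label["Confidence"]) for label in response["Labels"]]
```

Related operations follow the same pattern: `detect_text` for text in images, and `detect_faces` for facial analysis.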
SageMaker is one of the fundamental AWS offerings for machine learning, supporting every stage of the ML pipeline: build, train, tune, and deploy. It provides a simple Jupyter Notebook UI in which we can write Python code.
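To make the "train" stage concrete, here is a sketch of the parameters for SageMaker's CreateTrainingJob API via boto3. The instance type, volume size, and timeout are illustrative assumptions, and the job name, container image, role, and S3 paths are placeholders to be supplied by the caller:

```python
def build_training_job_request(job_name, image_uri, role_arn,
                               train_s3_uri, output_s3_uri):
    """Assemble parameters for SageMaker's CreateTrainingJob API."""
    return {
        "TrainingJobName": job_name,
        "AlgorithmSpecification": {
            "TrainingImage": image_uri,   # container holding the training code
            "TrainingInputMode": "File",
        },
        "RoleArn": role_arn,              # IAM role SageMaker assumes
        "InputDataConfig": [{
            "ChannelName": "train",
            "DataSource": {
                "S3DataSource": {
                    "S3DataType": "S3Prefix",
                    "S3Uri": train_s3_uri,
                }
            },
        }],
        "OutputDataConfig": {"S3OutputPath": output_s3_uri},
        "ResourceConfig": {
            "InstanceType": "ml.m5.large",
            "InstanceCount": 1,
            "VolumeSizeInGB": 10,
        },
        "StoppingCondition": {"MaxRuntimeInSeconds": 3600},
    }

def start_training(job_name, image_uri, role_arn, train_s3_uri, output_s3_uri):
    # Requires boto3 and AWS credentials configured in the environment.
    import boto3
    sagemaker = boto3.client("sagemaker")
    sagemaker.create_training_job(**build_training_job_request(
        job_name, image_uri, role_arn, train_s3_uri, output_s3_uri))
```

In a notebook, the higher-level SageMaker Python SDK wraps this same API behind an `Estimator` object with `fit` and `deploy` methods, which is usually more convenient than calling boto3 directly.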
AWS provides several services for solving machine learning problems at different levels of abstraction, from high-performance EC2 instances and the scalable SageMaker platform to specialized services such as Textract, Comprehend, DeepRacer, and many more.