a gym with exercise equipment to symbolize Machine Learning services from AWS

Machine Learning services and features

Using the AWS AI and SageMaker services is like visiting a well-equipped gym: you just have to choose the right equipment for your goals. AWS has a wide range of Machine Learning services and capabilities, and each one has its own advantages and disadvantages. Understanding your use case is key to selecting the most appropriate service.

Questions

To confirm your understanding, scroll to the bottom of the page for questions and answers.

These revision notes cover sub-domain 4.2, Recommend and implement the appropriate machine learning services and features for a given problem, of the AWS Machine Learning Specialty exam. A description of all the knowledge domains in the exam is in these revision notes: AWS Machine Learning exam syllabus.

The three layers of ML technologies and services

AWS describes its Machine Learning technologies and services in terms of three layers. Each layer builds on top of the preceding layer, incorporating its features and abstracting them so that users do not have to develop expertise in the underlying technologies.

  1. AI services
  2. Machine Learning Services
  3. Frameworks and Infrastructure
Infographic to show Amazon Machine Learning services in three layers

These revision notes contain four videos:

  1. An Overview of AI and Machine Learning Services From AWS
  2. Build Intelligent Apps Using AI Services
  3. Why TensorFlow?
  4. Deep learning with Apache MXNet

Video: An Overview of AI and Machine Learning Services From AWS

A 1.39 minute video from AWS.

AI Services

AI services are AWS’s premier value-add services. They are very easy to use, and you can incorporate them to enhance existing systems or to build completely new ones. There are nine services that can be placed into four groups depending on the medium they process:

  1. Vision: Rekognition, Textract
  2. Language: Translate, Comprehend
  3. Speech: Polly, Transcribe, Lex
  4. Data: Personalize, Forecast

AI services are discussed in the AWS white paper AWS Well-Architected Machine Learning Lens on pages 17 to 21.

Video: Build Intelligent Apps Using AI Services

This video from AWS is 41.30 minutes long. The first 29.39 minutes are directly relevant to the exam content for sub-domain 4.2, although I recommend you watch the entire video. Here are the timestamps for the contents:

  • 0 Introduction to the three layer model
  • 3.57 Rekognition for images and video, human trafficking
  • 7.04 Textract, Natural Language Processing reference architecture
  • 9.04 Transcribe, Transcribe integration
  • 11.38 Translate, high volume and time sensitive content
  • 13.55 Lex, Book a hotel speech recognition
  • 16.31 Polly, text to speech, Connect, contact center in the cloud
  • 21.46 Comprehend, sentiment analysis, comprehend medical
  • 24.43 Personalize
  • 28.05 Forecast
  • 29.41 Demo by HSBC – Building a Virtual Assistant by Gareth Butler
  • 40.55 Summary
  • 41.30 End

Amazon Rekognition

Overview

Rekognition is an AWS service that identifies objects, people, text, scenes and activities in images and video. Rekognition has both built-in recognition capabilities and custom labels, which allow you to tag objects and people important to your business. When a tag, or label, is identified, Rekognition returns a percentage certainty so you can decide on the value of the inference.
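As a quick illustration, here is a minimal sketch of calling Rekognition through boto3 to detect labels in an image stored in S3. The bucket and object names are hypothetical; each returned label carries the confidence score mentioned above.

```python
import boto3

rekognition = boto3.client("rekognition")

# Detect up to 10 labels in an image stored in S3 (hypothetical bucket/key)
response = rekognition.detect_labels(
    Image={"S3Object": {"Bucket": "my-image-bucket", "Name": "photos/street.jpg"}},
    MaxLabels=10,
    MinConfidence=80,  # only return labels Rekognition is at least 80% sure about
)

for label in response["Labels"]:
    print(f'{label["Name"]}: {label["Confidence"]:.1f}%')
```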

Key features

The Key features of Rekognition are:

  • object and scene detection
  • facial recognition
  • facial analysis
  • face comparison
  • unsafe image detection
  • celebrity recognition
  • text in image
  • personal protective equipment (PPE) detection

Use cases

The most common use-cases for Rekognition Image include:

  • Searchable Image Library
  • Face-Based User Verification
  • Sentiment Analysis of images
  • Facial Recognition
  • Image Moderation

Image Moderation is also known as Content Moderation. It involves the detection of gore, drugs, explicit nudity, or suggestive nudity in an image, so the content can be classified and then removed or restricted to certain users, for example adults only.

The most common use-cases for Rekognition Video include:

  • Search Index for video archives
  • Easy filtering of video for explicit and suggestive content

Amazon Textract

AWS icon for Amazon Textract
  • Function: Convert scanned documents to text
  • AWS docs: https://aws.amazon.com/textract/
  • AWS FAQs: https://aws.amazon.com/textract/faqs/

Overview

Textract can convert scanned documents to text, including text in tables and handwritten forms. Extracted text is returned with coordinates that identify a box-shaped area on the document. This allows for auditing later, since the text can be traced back to a specific area in a specific document. The extracted text is also returned with a score to indicate how confident Textract is in the result. This gives you the option to reject the automatic processing of text extracted with a low level of confidence.
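A minimal sketch of that flow using boto3 for a single-page document (the bucket and key are hypothetical); each line of text comes back with its confidence score and bounding box coordinates.

```python
import boto3

textract = boto3.client("textract")

# Run OCR on a single-page document stored in S3 (hypothetical bucket/key)
response = textract.detect_document_text(
    Document={"S3Object": {"Bucket": "my-docs-bucket", "Name": "scans/invoice-001.png"}}
)

for block in response["Blocks"]:
    if block["BlockType"] == "LINE":
        box = block["Geometry"]["BoundingBox"]  # coordinates for auditing
        print(f'{block["Text"]} (confidence {block["Confidence"]:.1f}%, '
              f'left={box["Left"]:.2f}, top={box["Top"]:.2f})')
```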

Key features

  • Optical Character Recognition (OCR)
  • Form Extraction
  • Table Extraction
  • Handwriting Recognition
  • Built-in Human Review Workflow

Use cases

The most common use cases for Amazon Textract include:

  • Import Documents and Forms into Business Applications
  • Create Smart Search Indexes 
  • Build Automated Document Processing Workflows
  • Maintain Compliance in Document Archives
  • Extract Text for Natural Language Processing (NLP)
  • Text Extraction for Document Classification

Amazon Translate

AWS icon for Amazon Translate
  • Function: Translating text from one language to another
  • AWS docs: https://aws.amazon.com/translate/
  • AWS FAQs: https://aws.amazon.com/translate/faqs/

Overview

This service translates text from one language to another. You can translate individual words, phrases, or entire documents. An API is provided, enabling either real-time or batch translation of text from the source language to the target language.
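For example, a minimal real-time translation sketch using boto3; the text is illustrative and the source language is auto-detected.

```python
import boto3

translate = boto3.client("translate")

# Translate a short piece of text to Spanish, letting Translate detect the source language
response = translate.translate_text(
    Text="The order has been shipped and should arrive on Tuesday.",
    SourceLanguageCode="auto",
    TargetLanguageCode="es",
)

print(response["SourceLanguageCode"], "->", response["TranslatedText"])
```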

Key features

  • A large list of languages (71+) that can be translated
  • Custom terminology: you can provide your own list of words and phrases for specialist technical niches or business areas
  • Language Identification
  • Batch and Real-Time Translations

Use cases

Translate use cases have three features in common:

  1. High volume
  2. High speed
  3. Minor translation errors are acceptable

Use cases fall into three main groups:

  1. Integrating Amazon Translate into applications to provide multilingual features.
  2. Processing and managing an organization's incoming data.
  3. Integrating Amazon Translate with other AWS services.

Example use cases:

  • Translate meeting minutes
  • Translate technical reports
  • Translate knowledge-base articles
  • Translate emails
  • Translate customer service chat
  • Analyze text in social media
  • Analyze text in news feeds
  • Search for information
  • eDiscovery cases (the process of identifying and delivering electronic information that can be used as evidence in legal cases)
  • Combine with Amazon Comprehend to analyze unstructured text in any language. For example, social media streams to extract named entities, sentiment, and key phrases.
  • Combine with Amazon Transcribe for subtitles and live captioning available in any language.
  • Combine with Amazon Polly to convert text to speech in any language.
  • Translate a corpus of documents stored in Amazon S3.
  • Translate text stored in databases.
  • Provide translation features for processing in AWS Lambda or AWS Glue systems

Amazon Comprehend

AWS icon for Amazon Comprehend
  • Function: Extract meaning from text
  • AWS docs: https://aws.amazon.com/comprehend/
  • AWS FAQs: https://aws.amazon.com/comprehend/faqs/

Overview

Comprehend is used to analyse text to reveal insights and relationships in unstructured data. The data can be any type of free-form text, such as emails or text messages. For sentiment analysis, Comprehend can tell you the overall sentiment of the text, i.e. was it favourable to the subject or did it contain negative sentiments, and how positive or negative the text was.
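A minimal sketch of sentiment and entity detection with boto3; the sample text is made up.

```python
import boto3

comprehend = boto3.client("comprehend")

text = "The delivery was late and the packaging was damaged, but support sorted it out quickly."

# Overall sentiment plus a score for each sentiment class
sentiment = comprehend.detect_sentiment(Text=text, LanguageCode="en")
print(sentiment["Sentiment"], sentiment["SentimentScore"])

# Named entities (people, places, brands, dates, and so on)
entities = comprehend.detect_entities(Text=text, LanguageCode="en")
for entity in entities["Entities"]:
    print(entity["Type"], entity["Text"], round(entity["Score"], 2))
```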

Key features

  • identifies the language of the text
  • extracts key phrases, places, people, brands, or events
  • understands how positive or negative the text is
  • analyzes text using tokenization and parts of speech
  • automatically organizes a collection of text files by topic
  • You can build a custom set of entities or text classification models that are tailored uniquely to your organization’s needs.
  • Comprehend Medical

Use cases

  • Voice of customer analytics, sentiment analysis to find out what your customers think
  • More accurate search, based on key words and phrases
  • Knowledge management and discovery leading to recommendations of related articles
  • Classify support tickets for better issue handling
  • Perform Medical Cohort Analysis

Amazon Polly

AWS icon for Amazon Polly
  • Function: Convert text to speech
  • AWS docs: https://aws.amazon.com/polly/
  • AWS FAQs: https://aws.amazon.com/polly/faqs

Overview

Polly allows you to build speech-enabled services. Polly converts text to speech (TTS) to produce realistic voice messages to which a user can take action, or respond as part of a conversation.
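For example, a minimal sketch that synthesizes one sentence to an MP3 file using boto3; the voice and output filename are just illustrative choices.

```python
import boto3

polly = boto3.client("polly")

# Convert a short piece of text to an MP3 file using a neural voice
response = polly.synthesize_speech(
    Text="Your appointment is confirmed for ten thirty tomorrow morning.",
    OutputFormat="mp3",
    VoiceId="Joanna",   # one of the built-in voices
    Engine="neural",
)

with open("confirmation.mp3", "wb") as audio_file:
    audio_file.write(response["AudioStream"].read())
```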

Key features

  • Two speaking styles are available: newscaster and conversational.
  • A custom speaking style or brand voice can be requested
  • Male, or female voice
  • Choice of languages
  • Control over speed, pitch and other characteristics of the voice
  • Custom pronunciation lists

Use cases

  • Content Creation, convert text content into speech
  • E-learning
  • Telephony

Amazon Transcribe

AWS icon for Amazon Transcribe
  • Function: Speech to text
  • AWS docs: https://aws.amazon.com/transcribe/
  • AWS FAQs: https://aws.amazon.com/transcribe/faqs

Overview

This service converts speech to text using Automatic Speech Recognition (ASR) technology.
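Transcription jobs run asynchronously. A minimal sketch with boto3, using a hypothetical job name and S3 audio file; the completed job points at a JSON transcript.

```python
import boto3

transcribe = boto3.client("transcribe")

# Start an asynchronous transcription job for an audio file in S3 (hypothetical URI)
transcribe.start_transcription_job(
    TranscriptionJobName="support-call-0042",
    Media={"MediaFileUri": "s3://my-audio-bucket/calls/support-call-0042.mp3"},
    MediaFormat="mp3",
    LanguageCode="en-US",
)

# Poll for the result; when COMPLETED, the job contains a link to the transcript file
job = transcribe.get_transcription_job(TranscriptionJobName="support-call-0042")
print(job["TranscriptionJob"]["TranscriptionJobStatus"])
```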

Key features

  • Punctuation and number normalization, which provides correctly formatted text
  • Streaming transcription for real-time transcription
  • Timestamp generation, which enables the precise part of the audio or video that corresponds to the transcribed text to be identified
  • Custom vocabulary, for business-specific and niche words and phrases
  • Vocabulary filtering and automatic content redaction to remove profane words and phrases and sensitive information
  • Recognize multiple speakers
  • Automatic language identification
  • Amazon Transcribe Medical

Use cases

  • Live-call analytics and agent assist
  • Post-call analytics
  • Clinical Documentation
  • Media content subtitling, automatically add subtitles to videos
  • Media intelligence, convert audio content to text for searching and categorizing
  • Digital scribes and court reporters

Amazon Lex

AWS icon for Amazon Lex
  • Function: Chatbot – conversational interface using voice and text
  • AWS docs: https://aws.amazon.com/lex/
  • AWS FAQs: https://aws.amazon.com/lex/faqs

Overview

Lex provides natural language chatbot capability and is based on the same technology as Amazon Alexa. With Lex, a user can communicate by voice or text as part of a conversation to achieve their desired goal.
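A minimal sketch of sending one text turn to a Lex V2 bot with boto3; the bot ID, alias ID and session ID below are hypothetical placeholders for a bot you have already built.

```python
import boto3

lex = boto3.client("lexv2-runtime")

# Send one turn of a text conversation to a Lex V2 bot (hypothetical IDs)
response = lex.recognize_text(
    botId="ABCDEFGHIJ",
    botAliasId="TSTALIASID",
    localeId="en_US",
    sessionId="user-1234",   # keeps multi-turn context for this user
    text="I'd like to book a hotel in Chicago for two nights",
)

for message in response.get("messages", []):
    print(message["content"])
```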

Key features

  • High quality speech recognition and natural language understanding
  • Multi-turn conversations
  • Context Management
  • Utility Prompts
  • Integration with AWS Lambda
  • Connect to Enterprise Systems
  • One-click Deployment to Multiple Platforms
  • Powerful Lifecycle Management Capabilities
  • Intent Chaining
  • 8 kHz Telephony Audio Support

Use cases

  • Call Center Chatbots and Voice Assistants
  • Application Bots
  • QnA Bots and Informational Bots. These bots help customers with their purchases or other interactions by providing information and answering questions
  • Enterprise Productivity Bots

Amazon Personalize

AWS icon for Amazon Personalize
  • Function: Real-time personalized recommendations
  • AWS docs: https://aws.amazon.com/personalize/
  • AWS FAQs: https://aws.amazon.com/personalize/faqs/

Overview

This service draws on the personalization features that Amazon has built into its own retail website, including specific product recommendations, personalized product re-ranking, and customized direct marketing.
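A minimal sketch of requesting real-time recommendations from a deployed Personalize campaign via boto3; the campaign ARN and user ID are hypothetical.

```python
import boto3

personalize_runtime = boto3.client("personalize-runtime")

# Get real-time recommendations for a user from a deployed campaign (hypothetical ARN)
response = personalize_runtime.get_recommendations(
    campaignArn="arn:aws:personalize:us-east-1:123456789012:campaign/my-campaign",
    userId="user-1234",
    numResults=10,
)

for item in response["itemList"]:
    print(item["itemId"], item.get("score"))
```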

Key features

  • Real-time or batch recommendations
  • New user and new item recommendations
  • Contextual recommendations
  • Similar item recommendations

Use cases

  • Personalized recommendations
  • Similar items
  • Personalized reranking i.e. rerank a list of items for a user
  • Personalized promotions/notifications

Amazon Forecast

AWS icon for Amazon Forecast
  • Function: Time series based forecasts
  • AWS docs: https://aws.amazon.com/forecast
  • AWS FAQs: https://aws.amazon.com/forecast/faqs/

Overview

This service uses historical time series data, combined with user-provided related data, to generate forecasts.
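Once a predictor and forecast have been generated, querying the result is a simple API call. A minimal sketch with boto3, where the forecast ARN and item ID are hypothetical; the probabilistic quantiles (p10, p50, p90) mentioned below come back by default.

```python
import boto3

forecast_query = boto3.client("forecastquery")

# Query a generated forecast for a single item (hypothetical ARN and item_id)
response = forecast_query.query_forecast(
    ForecastArn="arn:aws:forecast:us-east-1:123456789012:forecast/demand_forecast",
    Filters={"item_id": "SKU-1234"},
)

# Probabilistic forecast: one series of points per quantile
for quantile, points in response["Forecast"]["Predictions"].items():
    print(quantile, points[0]["Timestamp"], points[0]["Value"])
```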

Key features

  • Works with any historical time series data to create accurate forecasts
  • Automated machine learning
  • Based on the same technology used at Amazon.com
  • Easily evaluate the accuracy of your forecasting models
  • Visualize forecasts
  • Integrate with your existing tools
  • Generate probabilistic forecasts

Use cases

  • Product Demand Planning
  • Financial planning
  • Resource planning
Infographic showing nine Amazon AI Services

Machine Learning Services

The Machine Learning services layer is focused on Amazon SageMaker and its associated services. If you want AWS to do all the heavy lifting, but do not have a use case that is satisfied by any of the AI services in the top layer, then SageMaker is your new friend.

Amazon SageMaker

SageMaker is Amazon’s managed service for Machine Learning. SageMaker has services and features to support all stages of Model development and production.

Training

Model training can be performed using API calls that set up, run and tear down a high-performance compute cluster managed by SageMaker. You control the configuration of the cluster by selecting the EC2 instance type, size and number of instances. To help with analyzing the results of training, SageMaker Debugger provides real-time insight into the training process by automating the capture and analysis of training data.
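A minimal training sketch using the SageMaker Python SDK with the built-in XGBoost algorithm; the role ARN, bucket names and hyperparameters below are hypothetical.

```python
import sagemaker
from sagemaker.estimator import Estimator

session = sagemaker.Session()
role = "arn:aws:iam::123456789012:role/MySageMakerRole"   # hypothetical execution role

# Built-in XGBoost container image for the current region
image_uri = sagemaker.image_uris.retrieve("xgboost", session.boto_region_name, version="1.5-1")

estimator = Estimator(
    image_uri=image_uri,
    role=role,
    instance_count=1,                            # size of the managed training cluster
    instance_type="ml.m5.xlarge",                # EC2 instance type for the cluster
    output_path="s3://my-ml-bucket/output/",     # hypothetical bucket for model artifacts
    sagemaker_session=session,
)
estimator.set_hyperparameters(objective="reg:squarederror", num_round=100)

# SageMaker spins up the cluster, trains, saves the model to S3, then tears the cluster down
estimator.fit({"train": "s3://my-ml-bucket/train/"})
```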

Optimization

A model is optimized by adjusting its hyperparameters.

“A model hyperparameter is a configuration that is external to the model and whose value cannot be estimated from data.” (Machine Learning Mastery).

Amazon SageMaker has an automatic model optimization feature known as hyperparameter tuning. You choose a metric and SageMaker runs the model multiple times with different combinations of hyperparameters until the optimal set of hyperparameters is found. For successful hyperparameter tuning you need:

  • A prepared dataset
  • A training job (model) that has run successfully before with the data
  • An understanding of the Machine Learning algorithm you have selected
  • A clear understanding of what a successful training run looks like

There is a choice of strategies for finding the optimized hyperparameters:

  1. Random search
  2. Bayesian search
  3. Custom search

Note that it is possible for hyperparameter tuning to be unsuccessful and not return an optimized set of hyperparameters.
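As an illustration, here is a minimal tuning sketch using the SageMaker Python SDK, continuing the hypothetical XGBoost estimator from the training sketch above; the objective metric and ranges shown are assumptions for that algorithm.

```python
from sagemaker.tuner import (
    HyperparameterTuner,
    ContinuousParameter,
    IntegerParameter,
)

# `estimator` is the XGBoost Estimator defined in the training sketch above
tuner = HyperparameterTuner(
    estimator=estimator,
    objective_metric_name="validation:rmse",   # the metric you have chosen to optimize
    objective_type="Minimize",
    hyperparameter_ranges={
        "eta": ContinuousParameter(0.01, 0.3),
        "max_depth": IntegerParameter(3, 10),
    },
    strategy="Bayesian",      # or "Random"
    max_jobs=20,              # total training jobs to run
    max_parallel_jobs=2,      # jobs to run at the same time
)

tuner.fit({
    "train": "s3://my-ml-bucket/train/",
    "validation": "s3://my-ml-bucket/validation/",
})
print(tuner.best_training_job())
```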

Deployment

Using SageMaker endpoints, multiple production model variants can be deployed at the same time and traffic can be rapidly switched between them. This feature enables risk-limiting deployment options such as the following (a sketch using two production variants follows the list):

  • Blue/green deployment
  • Canary release
  • A/B testing
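A minimal sketch of that idea using the low-level boto3 SageMaker client: two variants behind one endpoint with a weighted traffic split, then a traffic shift. The model, config and endpoint names are hypothetical and the models are assumed to already exist in SageMaker.

```python
import boto3

sm = boto3.client("sagemaker")

# Two model variants behind one endpoint with a 90% / 10% traffic split (canary style)
sm.create_endpoint_config(
    EndpointConfigName="churn-config-v2",
    ProductionVariants=[
        {"VariantName": "blue", "ModelName": "churn-model-v1",
         "InstanceType": "ml.m5.large", "InitialInstanceCount": 1,
         "InitialVariantWeight": 0.9},
        {"VariantName": "green", "ModelName": "churn-model-v2",
         "InstanceType": "ml.m5.large", "InitialInstanceCount": 1,
         "InitialVariantWeight": 0.1},
    ],
)
sm.create_endpoint(EndpointName="churn-endpoint", EndpointConfigName="churn-config-v2")

# Later, shift all traffic to the new variant without redeploying
sm.update_endpoint_weights_and_capacities(
    EndpointName="churn-endpoint",
    DesiredWeightsAndCapacities=[
        {"VariantName": "blue", "DesiredWeight": 0.0},
        {"VariantName": "green", "DesiredWeight": 1.0},
    ],
)
```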

Hosting

Within SageMaker there are two ways to host your model for production (a batch transform sketch follows the list):

  1. SageMaker endpoints
  2. SageMaker batch transform
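SageMaker endpoints are illustrated in the deployment sketch above. For offline scoring, here is a minimal batch transform sketch using the SageMaker Python SDK; the model name and bucket names are hypothetical, and the model is assumed to already be registered in SageMaker.

```python
from sagemaker.transformer import Transformer

# Batch transform: score a whole S3 dataset offline, no persistent endpoint required
transformer = Transformer(
    model_name="churn-model-v2",                     # a model already registered in SageMaker
    instance_count=1,
    instance_type="ml.m5.xlarge",
    output_path="s3://my-ml-bucket/batch-output/",   # hypothetical bucket
)

transformer.transform(
    data="s3://my-ml-bucket/batch-input/",
    content_type="text/csv",
    split_type="Line",          # send one CSV row per inference request
)
transformer.wait()
```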

SageMaker Ground Truth

Amazon SageMaker Ground Truth is a service you can use to label data manually. You hand your training data over to AWS and they take care of the rest, returning your data with labels attached by human workers. You can also create private Ground Truth jobs where you provide the workforce and use Ground Truth to manage the workflow.

SageMaker Notebooks

AWS icon for Amazon SageMaker Notebook

SageMaker Notebooks are Jupyter Notebooks hosted on managed SageMaker EC2 instances. A Jupyter Notebook is a web application that allows you to create documents that contain code, equations, visualizations and narrative text. It can be described as an Integrated Development Environment (IDE) in a web browser. This feature enables the text of a tutorial to be interlaced with executable code, so you can read the narrative and then execute the code to study the output. Amazon provides example notebooks that use this feature.

Typically each user has their own notebook instance. Unlike Glue notebooks, SageMaker notebooks do not have a permanent cluster spun up to support them, so they are much more cost-effective to use.

Algorithms and Marketplace

AWS Marketplace is a curated catalog of algorithms and model packages that have been built by third party suppliers. Users can purchase the algorithms and model packages to use in their systems.

  • SageMaker Algorithms are untrained algorithm packages that you train with your own data to produce production models.
  • SageMaker Model Packages are complete, trained models ready to be used in production.

Amazon SageMaker RL

dog holding flower in mouth to symbolize reinforcement learning

Reinforcement Learning (RL) is something we do with pets and children. When they do something right we praise them, or give them a treat. When they screw up we scold them, or withdraw a privilege and blame an innocent third party or society in general. For Machine Learning models we use Markov Decision Processes (MDPs), which consist of a number of Episodes, each made up of a series of Time Steps. Each Time Step has the following:

  1. Environment
  2. State
  3. Action
  4. Reward
  5. Observation

The model attempts to find a strategy that optimizes the cumulative reward over the long term. This strategy is called a Policy.

Amazon SageMaker RL key features are:

  • A deep learning framework: TensorFlow, Apache MXNet.
  • RL tool kit: Intel Coach, Ray RLlib.
  • RL environment: OpenAI Gym, EnergyPlus and RoboSchool.

Frameworks and Infrastructure

This is the lowest Machine Learning layer. Here you are doing most, if not all, of the heavy lifting yourself. You may not even be deploying in the cloud! Using a managed service such as SageMaker simplifies Machine Learning by abstracting away the complexities of the underlying infrastructure. However, this abstraction comes at a cost, since the details of the implementation are hidden from you, and maybe SageMaker just does not allow you to do what you want. Working with the lower-level frameworks and interfaces gives you the freedom to interact more directly with the Machine Learning algorithms. The full Machine Learning pipeline is now exposed to you.

Working at this level also opens up the world of Deep Learning. Deep Learning is a subset of Machine Learning that focuses on Neural Networks. The performance of most Machine Learning algorithms plateaus as more training data is supplied; Neural Networks, however, tend to keep improving as more data is processed.

Frameworks

TensorFlow

TensorFlow is an open source project released under the Apache 2.0 license. It provides an end-to-end framework with tools and libraries that make it easy to build and deploy Machine Learning applications. TensorFlow was originally developed by Google and was open sourced in 2015. Google continues to support it and contribute to the underlying code. TensorFlow uses Python as the orchestration language, but the processing libraries are written in high-performance C++.

Using TensorFlow, users can create processing pipelines called dataflow graphs that contain processing nodes. Each node is a mathematical operation, and each connection between nodes is a multidimensional data array, or tensor, which is where the framework gets its name.
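A tiny sketch of those ideas in TensorFlow's Python API: tensors flowing through operations, with tf.function tracing the Python code into a dataflow graph. The shapes and values are arbitrary.

```python
import tensorflow as tf

x = tf.constant([[1.0, 2.0], [3.0, 4.0]])   # a tensor: a typed multi-dimensional array
w = tf.Variable(tf.random.normal((2, 1)))

@tf.function   # traces this Python function into a reusable dataflow graph
def forward(inputs):
    return tf.matmul(inputs, w)              # a node: one mathematical operation

print(forward(x).numpy())
```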

Why TensorFlow?

This is a 2.22 minute video from TensorFlow.org. It gives a brief overview of TensorFlow.


MXNet

MXNet is an open source Apache project backed by AWS and supported by Microsoft. It uses C++ on the backend for high performance and offers a range of language bindings, including Python, to interface with users. MXNet models are portable and able to fit into small amounts of memory. They are also scalable across multiple GPU instances. MXNet has been Amazon's Deep Learning framework of choice since November 2016.

Video: Deep learning with Apache MXNet

This is a 24.13 minute video from AWS introducing MXNet, presented by Nathalie Rauschmayr. The timestamps are:

  • 0 Introduction to Deep Learning
  • 3.20 Introduction to MxNet
  • 4.05 History of MxNet
  • 6.53 MxNet Ecosystem
  • 7.08 Multiple Language support
  • 7.49 Ecosystem – Gluon toolkits, Model zoo, MxBoard, Spark, TensorRT, TVM, ONNX, Keras (fork)
  • 13.36 Gluon API
  • 14.09 imperative vs symbolic
  • 17.35 Hybrid programming
  • 18.44 Distributed training
  • 20.12 Deep learning acceleration
  • 21.05 ML Perf benchmark
  • 21.44 MxNet community
  • 24.13 End

PyTorch

PyTorch is an open source deep learning framework developed by Facebook and supported on AWS. It has Python and C++ interfaces and is optimised for high-performance GPU processing. PyTorch has a versatile collection of tools including:

  • torchtext – NLP
  • torchvision – Computer vision
  • torchaudio – Speech processing

PyTorch has tensor structures that are GPU-compatible for performance. Its imperative paradigm adds a little to the processing graph with each line of code, which aids debugging.
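A small sketch of those two points: tensors placed on a GPU when one is available, and the imperative, line-by-line construction of the computation graph. The shapes are arbitrary.

```python
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"

x = torch.randn(4, 3, device=device)                      # GPU-compatible tensor
w = torch.randn(3, 1, device=device, requires_grad=True)

loss = (x @ w).sum()   # each line executes immediately and extends the autograd graph
loss.backward()        # gradients are available straight away, which helps debugging
print(w.grad)
```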

Interfaces

Gluon

Gluon is an open source deep learning library jointly created by AWS and Microsoft. Gluon acts as an interface between the user, coding in Python, and the Apache MXNet framework. This interface greatly simplifies the process of creating deep learning models without sacrificing training speed.
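A minimal Gluon sketch, assuming the mxnet package is installed: define and initialize a small network, then take one training step on a random placeholder batch.

```python
from mxnet import autograd, nd
from mxnet.gluon import Trainer, loss, nn

# Define and initialize a small fully connected network
net = nn.Sequential()
net.add(nn.Dense(16, activation="relu"), nn.Dense(1))
net.initialize()

# One training step on a random batch (placeholder data)
x = nd.random.normal(shape=(32, 8))
y = nd.random.normal(shape=(32, 1))
l2 = loss.L2Loss()
trainer = Trainer(net.collect_params(), "sgd", {"learning_rate": 0.01})

with autograd.record():
    batch_loss = l2(net(x), y)
batch_loss.backward()
trainer.step(batch_size=32)
print(batch_loss.mean().asscalar())
```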

Keras

Keras is an open source interface that is independent of the major IT vendors, although the main contributor is a Google engineer. The Keras APIs make TensorFlow easier to work with. They are simple and consistent, minimising the number of user actions needed for common use cases. This makes development iterations faster, leading to final solutions sooner.
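For example, a minimal sketch using the Keras API bundled with TensorFlow (tf.keras), defining and compiling a small classifier in a few consistent calls; the layer sizes and input shape are arbitrary.

```python
from tensorflow import keras

model = keras.Sequential([
    keras.layers.Dense(64, activation="relu", input_shape=(20,)),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
# model.fit(x_train, y_train, epochs=5)   # train with your own data
```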

Infrastructure

EC2 types

All EC2 choices are a trade-off between computational power and cost. More expensive EC2 types may work out cheaper overall, because the higher hourly rates are offset by the instance being provisioned for a shorter length of time. Spot instances provide the opportunity of having the power you want with up to 70% cost savings. However, the downside is that AWS can withdraw the instances at any time, even in the middle of your processing. This means spot instances are only useful for situations where your work can be interrupted, for example training cycles that can be delayed for a while.
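SageMaker makes interruptible training easy with managed Spot training. A minimal sketch with the SageMaker Python SDK, reusing the hypothetical XGBoost image, role and bucket from earlier; checkpointing lets an interrupted job resume.

```python
import sagemaker
from sagemaker.estimator import Estimator

session = sagemaker.Session()
image_uri = sagemaker.image_uris.retrieve("xgboost", session.boto_region_name, version="1.5-1")

estimator = Estimator(
    image_uri=image_uri,
    role="arn:aws:iam::123456789012:role/MySageMakerRole",   # hypothetical role
    instance_count=1,
    instance_type="ml.m5.xlarge",
    use_spot_instances=True,    # request managed Spot capacity for training
    max_run=3600,               # maximum training time in seconds
    max_wait=7200,              # training time plus time spent waiting for Spot capacity
    checkpoint_s3_uri="s3://my-ml-bucket/checkpoints/",      # resume from here if interrupted
    sagemaker_session=session,
)
estimator.fit({"train": "s3://my-ml-bucket/train/"})
```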

The main choices for EC2 types are:

  1. CPU
  2. GPU
  3. SageMaker ML instance types
  4. AWS Inferentia

CPUs are regular virtual compute processors. For Machine Learning you should choose instances that are compute optimized; the instances in this family are prefixed by the letter "C".

GPUs get their name from Graphics Processing Units. Originally this type of processor was designed for the high levels of processing required by graphics-heavy applications such as computer games. Compared to a CPU, a GPU has many more, smaller logical cores. A core comprises arithmetic logic units (ALUs), control units and memory cache. This architecture is suited to processing a set of similar, simpler computations in parallel, which is a typical workload for Machine Learning applications. GPUs cost more per hour but complete processing quicker, and so can work out more cost-effective.

SageMaker managed EC2 instances are prefixed ml.m for standard instances, ml.c for compute optimised and ml.p for accelerated computing.

AWS Inferentia

AWS Inferentia is a custom-designed chip optimised for inference in the cloud. This optimisation can drive down the cost of cloud-based Machine Learning by as much as 45% per inference. Up to 16 Inferentia chips can be configured in a single Inf1 EC2 instance for maximum power and throughput. Enhanced 100 Gbps networking improves throughput further by preventing network bottlenecks.

AWS Neuron

AWS provides an SDK, called AWS Neuron, to make the best use of Inferentia instances. With Inferentia and Neuron, Machine Learning frameworks such as TensorFlow, PyTorch and MXNet can use high-performance, low-latency EC2 instances to power neural network inference.

Deep Learning (DL) containers and AMI

AWS Deep Learning containers are Docker containers pre-loaded with Machine Learning frameworks and libraries needed to start Machine Learning straight away. AWS DL container images can be obtained from the Elastic Container Registry and AWS Marketplace at no additional cost.

AWS Deep Learning AMIs (Amazon Machine Images) come pre-loaded with popular Machine Learning frameworks. There are Base AMIs ready for you to configure and load with your own tools, and Conda AMIs with the frameworks pre-installed in Conda environments.

Amazon Elastic Kubernetes Service (EKS)

Kubernetes is an open source orchestration system for Docker containers. EKS is Kubernetes with all the heavy lifting done by AWS. You can use EKS to run SageMaker Deep Learning containers, or your own non-SageMaker containers.

AWS IoT Greengrass

Greengrass helps you to build, maintain and deploy software on devices as part of an Internet of Things (IoT) system. With Greengrass you can program devices to act locally on the data they generate and to execute Machine Learning model inference. Only information that has to be returned is transmitted back home. Greengrass also helps to maintain the software versions on the devices to keep them up to date.

SageMaker Spark containers

SageMaker Spark containers are used for data processing or feature engineering workloads. This brings the tremendous power and scalability of Apache Spark to bear on these resource intensive tasks. It makes sense to use Spark containers when these pre-processing tasks are intermittent and would not use a dedicated Spark cluster enough to make the administration of the cluster worthwhile.

SageMaker build your own containers

With SageMaker you can bring your own container. Because SageMaker uses containers for its own processing, you can take a container with a model developed outside SageMaker and adapt it to work inside the SageMaker environment. There are two toolkits that enable you to adapt your existing containerised model to work in SageMaker. If you are developing a new model, there are toolkits for each of the major frameworks that you can download from GitHub.

Summary

The ML Pipeline course does not mention this sub-domain at all. The AWS White Paper The Machine Learning Lens describes some services and provides a couple of reference architectures. The AWS Exam Readiness course describes this sub-domain in terms of three tiers and then lists loads of AWS services. From this sparse guidance it appears AWS wants us to have an overview of their services for Machine Learning.

These revision notes cover sub-domain 4.2 of the Machine Learning Implementation and Operations knowledge domain (domain 4). The four sub-domains are:

  • 4.1 Build machine learning solutions for performance, availability, scalability, resiliency, and fault tolerance
  • 4.2 Recommend and implement the appropriate machine learning services and features for a given problem
  • 4.3 Apply basic AWS security practices to machine learning solutions
  • 4.4 Deploy and operationalize machine learning solutions

If you are progressing through the exam structure in order, the next revision notes to study are those for sub-domain 4.3 which is about security in AWS.

Credits
  • Photo by CHUTTERSNAP on Unsplash
  • Dog with flower photo by Richard Brutyo on Unsplash
  • TensorFlow, the TensorFlow logo and any related marks are trademarks of Google Inc.
  • The Apache Software Foundation Apache MXNet, MXNet, Apache, the Apache feather, and the Apache MXNet project logo are either registered trademarks or trademarks of the Apache Software Foundation.
  • PyTorch, the PyTorch logo and any related marks are trademarks of Facebook, Inc.

AWS Certified Machine Learning Study Guide: Specialty (MLS-C01) Exam

This study guide provides the domain-by-domain specific knowledge you need to build, train, tune, and deploy machine learning models with the AWS Cloud. The online resources that accompany this Study Guide include practice exams and assessments, electronic flashcards, and supplementary online resources. It is available in both paper and Kindle versions for immediate access. (Visit Amazon books)


10 questions and answers

Created by Michael Stainsbury

4.2 Machine Learning services and features (full)

10 test questions that cover subdomain 4.2, Recommend and implement the appropriate machine learning services and features for a given problem

1 / 10

Which AI service can you use to build speech-enabled services?

2 / 10

Which AI service can be used to perform Handwriting Recognition?

3 / 10

SageMaker RL uses <–?–> to describe the Reinforcement Learning process.


4 / 10

<–?–> is an open source Interface. It is independent of the major IT vendors, but the main contributor is a Google engineer. The APIs make TensorFlow easier to work with. They are simple and consistent to minimise the number of user actions for common use cases.


5 / 10

What risk limiting deployment options do SageMaker Endpoints enable?

6 / 10

What are the main choices for EC2 types for Machine Learning?

7 / 10

What frameworks does SageMaker support?

8 / 10

Which AI service identifies objects, people, text, scenes and activities in images and video?

9 / 10

SageMaker <–?–> is a service that can be used to manually label data. The training data is handed over to AWS for processing and the data is returned with attached labels processed by humans.

10 / 10

The AI service <–?–> can be used for sentiment analysis.




