Semantic Segmentation Algorithm
The Semantic Segmentation algorithm processes an image by tagging every pixel in it with a class label from a predefined set of classes. Because it classifies pixels individually, this fine-grained approach also captures information about the shapes of objects and their edges, which is why it is a common technique in computer vision. The output of inference is a segmentation mask, an RGB or grayscale PNG image with the same shape as the input image, in which each pixel value encodes a predicted class.
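For intuition, the short sketch below shows one way such a mask could be inspected once it has been produced; the file name and the class index used are illustrative assumptions, not part of the algorithm.

```python
import numpy as np
from PIL import Image

# Hypothetical segmentation mask: a grayscale PNG with the same height and
# width as the input image, where each pixel value is a class index.
mask = np.array(Image.open("segmentation_mask.png"))

print(mask.shape)       # (height, width), matching the input image
print(np.unique(mask))  # the class indices present in the prediction

# Example: count the pixels assigned to one class (index 1 is illustrative).
print("Pixels tagged with class 1:", int((mask == 1).sum()))
```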
Attributes
| Problem attribute | Description |
| --- | --- |
| Data types and format | Image |
| Learning paradigm or domain | Image Processing, Supervised Learning |
| Problem type | Computer vision |
| Use case examples | Tag every pixel of an image individually with a category |
Training
The Semantic Segmentation algorithm takes two sets of input files, one for training and one for validation. Each set consists of:
- Images
- Annotations: single-channel PNG images in which each pixel value identifies a class
- label_map.json
The label_map.json file describes how the pixel values (colors) in the annotation images map to class indices; a separate label map can be supplied for the training and validation annotations. The sketch after this paragraph illustrates the idea.
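The snippet below is a conceptual sketch of what a label map does, using a plain Python dictionary rather than the algorithm's actual JSON schema; the pixel values, class indices, and file path are made up for illustration.

```python
import numpy as np
from PIL import Image

# Hypothetical label map: raw annotation pixel value -> class index.
# The real label_map.json schema may differ; this only illustrates the idea.
label_map = {0: 0, 85: 1, 170: 2, 255: 3}

# Load a single-channel annotation PNG (the path is illustrative).
annotation = np.array(Image.open("train_annotation/image_0001.png"))

# Remap raw pixel values to contiguous class indices.
class_indices = np.zeros_like(annotation)
for pixel_value, class_index in label_map.items():
    class_indices[annotation == pixel_value] = class_index
```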
The Semantic Segmentation training system has two components: a backbone (encoder) and a decoder. The backbone extracts features from the input image and passes an encoded activation map to the decoder, which constructs the segmentation mask from it. The backbone carries the learned feature representation, and incremental training can be used to start from a pre-trained backbone.
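The following is a minimal sketch of that encoder-decoder pattern in PyTorch; it is not the algorithm's actual network, and the layer sizes, class count, and input resolution are arbitrary choices for illustration.

```python
import torch
import torch.nn as nn

class TinySegmenter(nn.Module):
    """Minimal encoder-decoder segmentation network (illustrative only)."""

    def __init__(self, num_classes: int = 4):
        super().__init__()
        # Backbone: downsamples the image and encodes it into feature maps.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        )
        # Decoder: upsamples the encoded activation map back to the input
        # resolution and predicts one score per class for every pixel.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, kernel_size=2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(16, num_classes, kernel_size=2, stride=2),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        features = self.backbone(x)     # encoded activation map
        return self.decoder(features)   # per-pixel class scores

# Per-pixel prediction for one 256x256 RGB image.
model = TinySegmenter(num_classes=4)
scores = model(torch.randn(1, 3, 256, 256))  # shape: (1, 4, 256, 256)
mask = scores.argmax(dim=1)                  # shape: (1, 256, 256)
```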
Model artifacts and inference
| Attribute | Description |
| --- | --- |
| Learning paradigm | Supervised Learning |
| Request format | JPEG image (image/jpeg) |
| Result | Stored in Amazon S3 |
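As a rough sketch of how a single inference request might be issued with boto3 against a deployed endpoint, assuming a hypothetical endpoint name and that the endpoint is asked to return the mask as a PNG:

```python
import boto3

runtime = boto3.client("sagemaker-runtime")

# Read a JPEG image as the request body (file name is illustrative).
with open("street_scene.jpg", "rb") as f:
    payload = f.read()

# The endpoint name and Accept type are assumptions for this sketch.
response = runtime.invoke_endpoint(
    EndpointName="semantic-segmentation-endpoint",
    ContentType="image/jpeg",
    Accept="image/png",
    Body=payload,
)

# The response body holds the segmentation mask; save it to disk.
with open("segmentation_mask.png", "wb") as f:
    f.write(response["Body"].read())
```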