Semantic Segmentation Algorithm

The Semantic Segmentation algorithm processes an image by tagging every pixel in it with a class label. This fine-grained approach captures information about the shapes and edges of objects, which makes it a core computer vision task. The trained model produces a segmentation mask, an RGB or grayscale PNG image with the same dimensions as the input image.
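As a quick illustration of what a mask contains, the sketch below loads a hypothetical grayscale segmentation mask with Pillow and NumPy and counts the pixels assigned to each class. The file name mask.png is a placeholder, not something produced by this article's examples.

```python
# A minimal sketch: inspect a (hypothetical) grayscale segmentation mask.
# Assumes Pillow and NumPy are installed; "mask.png" is a placeholder name.
import numpy as np
from PIL import Image

mask = np.array(Image.open("mask.png"))  # shape: (height, width), one class label per pixel

# Count how many pixels were assigned to each class label.
labels, counts = np.unique(mask, return_counts=True)
for label, count in zip(labels, counts):
    print(f"class {label}: {count} pixels")
```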

Attributes

Problem attribute             Description
Data types and format         Image
Learning paradigm or domain   Image Processing, Supervised Learning
Problem type                  Computer vision
Use case examples             Tag every pixel of an image individually with a category

Training

The Semantic Segmentation algorithm takes two sets of input files, one for training and one for validation, each made up of:

  • Images
  • Annotations: single-channel PNG images
  • label_map.json

The label_map.json file describes how the values in a single color channel of the annotation images map to class labels.
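To make the annotation format concrete, here is a minimal sketch that writes a single-channel PNG whose pixel values are class indices, together with a hedged example label map. The toy 4x4 layout, the file names, and the {"scale": 1} schema are illustrative assumptions to verify against the current algorithm documentation.

```python
# A minimal sketch: write a single-channel annotation PNG in which each pixel
# value is a class index. The toy 4x4 layout (0 = background, 1 = object)
# and the file names are assumptions for illustration only.
import json

import numpy as np
from PIL import Image

annotation = np.zeros((4, 4), dtype=np.uint8)  # class 0 everywhere
annotation[1:3, 1:3] = 1                       # a 2x2 block of class 1
Image.fromarray(annotation, mode="L").save("example_annotation.png")

# A hedged example of a label map; verify the exact schema (here the
# {"scale": ...} convention) against the current algorithm documentation.
with open("label_map.json", "w") as f:
    json.dump({"scale": 1}, f)
```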

The Semantic Segmentation training system has two components: the backbone and the decoder. The backbone is the component that is trained, and incremental training can be used to start from a pre-trained backbone. The backbone passes an encoded activation map of features to the decoder, which builds the segmentation mask.
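For readers using the SageMaker Python SDK, the following is a hedged sketch of how a training job for this algorithm might be configured. The role ARN, bucket paths, instance type, and the specific hyperparameter names and values (backbone, algorithm, use_pretrained_model, and so on) are assumptions to check against the current documentation rather than a verified configuration.

```python
# A hedged sketch of launching a Semantic Segmentation training job with the
# SageMaker Python SDK. Role ARN, bucket names, instance type, and the
# hyperparameter names/values are illustrative assumptions.
import sagemaker
from sagemaker import image_uris
from sagemaker.estimator import Estimator

session = sagemaker.Session()
role = "arn:aws:iam::123456789012:role/ExampleSageMakerRole"  # placeholder

# Look up the built-in algorithm container for the current region.
image_uri = image_uris.retrieve("semantic-segmentation", session.boto_region_name)

estimator = Estimator(
    image_uri=image_uri,
    role=role,
    instance_count=1,
    instance_type="ml.p3.2xlarge",
    output_path="s3://example-bucket/semantic-segmentation/output",  # placeholder
    sagemaker_session=session,
)

# Backbone/decoder choices and pre-training are controlled via hyperparameters.
estimator.set_hyperparameters(
    backbone="resnet-50",   # encoder that is trained / can start pre-trained
    algorithm="fcn",        # decoder that builds the segmentation mask
    use_pretrained_model=1,
    num_classes=2,
    epochs=10,
    num_training_samples=1000,
)

estimator.fit(
    {
        "train": "s3://example-bucket/train",
        "train_annotation": "s3://example-bucket/train_annotation",
        "validation": "s3://example-bucket/validation",
        "validation_annotation": "s3://example-bucket/validation_annotation",
        # "label_map": "s3://example-bucket/label_map",  # optional channel
    }
)
```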

Model artifacts and inference

Description          Artifacts
Learning paradigm    Supervised Learning
Request format       jpeg
Result               S3
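As a hedged sketch of inference against a deployed endpoint, the boto3 call below sends a JPEG image and saves the returned segmentation mask. The endpoint name and local file paths are placeholders, and the accepted request and response types should be confirmed against the documentation.

```python
# A hedged sketch of invoking a deployed endpoint with a JPEG image via boto3.
# The endpoint name and local file paths are placeholders.
import boto3

runtime = boto3.client("sagemaker-runtime")

with open("example.jpg", "rb") as f:
    payload = f.read()

response = runtime.invoke_endpoint(
    EndpointName="semantic-segmentation-endpoint",  # placeholder
    ContentType="image/jpeg",   # request format: jpeg
    Accept="image/png",         # ask for the segmentation mask as a PNG
    Body=payload,
)

# Save the returned segmentation mask to disk.
with open("mask.png", "wb") as f:
    f.write(response["Body"].read())
```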

Processing environment
