Semantic image segmentation

About the demo

In this demo you can try models for general semantic segmentation of indoor and outdoor scenes. Three SegFormer models are available: B0 (3.8M parameters), B1 (13.7M parameters), and B4 (64.1M parameters). The pre-trained models from the Hugging Face hub (B0, B1, and B4) were exported to ONNX format and quantized using standard PyTorch functionality. All models were originally trained on the ADE20K dataset.
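
The sketch below shows one way to reproduce a similar export-and-quantize pipeline. It is an illustration only: the checkpoint name, file paths, and export settings are assumptions, and it uses ONNX Runtime's quantize_dynamic in place of whatever PyTorch-based quantization step the demo actually used.

import torch
from transformers import SegformerForSemanticSegmentation
from onnxruntime.quantization import quantize_dynamic, QuantType

# Illustrative checkpoint name; B1/B4 variants are exported analogously.
checkpoint = "nvidia/segformer-b0-finetuned-ade-512-512"
model = SegformerForSemanticSegmentation.from_pretrained(checkpoint)
model.eval()

# Trace the model with a dummy 512x512 RGB input and export it to ONNX.
dummy = torch.randn(1, 3, 512, 512)
torch.onnx.export(
    model,
    dummy,
    "segformer_b0.onnx",
    input_names=["pixel_values"],
    output_names=["logits"],
    dynamic_axes={"pixel_values": {0: "batch", 2: "height", 3: "width"}},
    opset_version=13,
)

# Weight-only dynamic quantization of the exported graph to shrink the model file.
quantize_dynamic("segformer_b0.onnx", "segformer_b0_quant.onnx", weight_type=QuantType.QUInt8)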

How to use the demo:
  1. Select a model and load it.
  2. Load an image from your device, or select one of the example images.
  3. Generate the segments (a sketch of what this step does is shown after this list).
  4. Click on the segmented image to see the class of the object under the cursor.
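
The following sketch illustrates roughly what segment generation and class lookup involve: run the quantized ONNX model, take a per-pixel argmax over the class logits, and read the class id at the clicked pixel. It assumes the onnxruntime, numpy, and Pillow packages; the file names and preprocessing constants are illustrative, not the demo's exact code.

import numpy as np
import onnxruntime as ort
from PIL import Image

session = ort.InferenceSession("segformer_b0_quant.onnx")

# Preprocess: resize to 512x512, normalize with ImageNet statistics, NCHW layout.
image = Image.open("example.jpg").convert("RGB").resize((512, 512))
x = np.asarray(image, dtype=np.float32) / 255.0
x = (x - [0.485, 0.456, 0.406]) / [0.229, 0.224, 0.225]
x = x.transpose(2, 0, 1)[np.newaxis].astype(np.float32)

# The model outputs per-pixel logits over the 150 ADE20K classes.
logits = session.run(None, {"pixel_values": x})[0]   # shape (1, 150, H/4, W/4)
segmentation = logits[0].argmax(axis=0)               # per-pixel class ids

# Clicking a pixel amounts to looking up its class id in the ADE20K label list.
print("class id at (100, 100):", segmentation[100, 100])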