Object detection in vision systems

Pixel-level Segmentation for navigation and perception with 40x lower latency

Detect fine-grained obstacles and pathways from imagery in real time using C7™ NPU-equipped processors


Application overview

The real world contains complex scenery that robots, drones and vehicles must navigate safely and efficiently. Understanding this scenery requires a detailed view of the pathways and obstacles.

Semantic segmentation neural networks let a system identify its surroundings at the pixel level. Other applications such as defect detection, medical imaging and agriculture similarly benefit from the precise contours that segmentation models draw around meaningful objects and anomalies.

Substantial processing power is needed to run these complex AI models at the edge, generally requiring accelerators like the C7™ NPU to enable real-time decision making. 

Starting evaluation

Data collection

Data samples are images collected with cameras similar to the camera(s) used in the production application. Images may be collected manually or through tools like Edge AI Studio. Each image is labeled with either pixel masks or polygons that trace the contours of objects; these outlines may form complex shapes.
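As a concrete illustration, a polygon label can be rasterized into a pixel mask with a few lines of Python. The polygon coordinates below are hypothetical; any labeling tool that exports [x, y] vertex lists works the same way.

```python
# Rasterize a polygon label into a binary pixel mask.
from PIL import Image, ImageDraw
import numpy as np

def polygon_to_mask(polygon, height, width):
    """Convert a list of (x, y) vertices into a 0/1 uint8 mask."""
    mask_img = Image.new("L", (width, height), 0)
    ImageDraw.Draw(mask_img).polygon(polygon, outline=1, fill=1)
    return np.array(mask_img, dtype=np.uint8)

poly = [(10, 10), (60, 12), (55, 40), (12, 38)]  # hypothetical obstacle outline
mask = polygon_to_mask(poly, height=64, width=64)
print(mask.shape, int(mask.max()))  # (64, 64) 1
```

Training pipelines typically perform this conversion internally, but doing it by hand is useful when inspecting labels.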

A good dataset will contain a variety of realistic scenes and combinations of the objects to recognize. It is common to have a generic 'background' class that encompasses anything that is not important to track.  For most real-world scenes, there should be many instances of objects overlapping or partially occluding each other, but applications like medical imaging or defect detection may not encounter these situations. To train a robust model, there should be plenty of variations in the positions and orientations of objects, as well as different lighting or weather conditions.

Data quality assessment

The data and labels should align closely such that the objects to recognize are well covered by the masks or polygons. It is helpful to visualize these labels onto the image to look for areas that are not covered or where the label goes far beyond the objects. Noisy labels will make it harder for the neural network to learn the right visual patterns.


Dataset augmentation is a good way to increase the size and variations captured by the dataset. Artificial 'augmentations' modify the image to make multiple copies and expand the dataset. However, some augmentations, like rotating and scaling the image, will require the label itself be modified in the same way.
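The simplest paired transform is a horizontal flip, which must move the label with the pixels. A minimal sketch using toy NumPy arrays:

```python
# Geometric augmentations must be applied to image AND mask together.
import numpy as np

def hflip_pair(image, mask):
    """Flip an image (HxWxC) and its mask (HxW) left-right in lockstep."""
    return image[:, ::-1].copy(), mask[:, ::-1].copy()

img = np.arange(12, dtype=np.uint8).reshape(2, 2, 3)
msk = np.array([[1, 0], [0, 0]], dtype=np.uint8)
f_img, f_msk = hflip_pair(img, msk)
print(f_msk.tolist())  # [[0, 1], [0, 0]] - the label moved with the pixels
```

Rotation and scaling follow the same rule; libraries like albumentations apply the identical geometric transform to both image and mask automatically.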


The image below shows Edge AI Studio with a segmentation model on data from the tiscapes2017 segmentation dataset. This dataset includes both object detection bounding boxes and segmentation masks. The tool displays both annotation types, but the masks outlining the people, signs and vehicles in the image are what segmentation model training uses.

Build and train your model

CCStudio™ Edge AI Studio and edgeai-modelmaker contain several segmentation models that are ready to train on your custom dataset. If edgeai-modelmaker is used, the custom dataset must follow the COCO format with segmentation labels.

Otherwise, frameworks like PyTorch and TensorFlow can be used to train well-established models or implement entirely custom ones.

Find the right model for your needs

Choosing the right model is a tradeoff between accuracy and latency. Models like DeepLabv3 run efficiently on the C7 NPU, and representative benchmarks are available in the model selection tool.

Deploying your model

Model deployment requires the model to be compiled beforehand for the target hardware accelerator. With tools like Edge AI Studio and edgeai-modelmaker, compilation is automatic. Otherwise, compiling models will require a separate step through software packages like edgeai-tidl-tools on the TI GitHub using a Bring Your Own Model flow.

Model artifacts are deployed through runtimes like ONNX Runtime, LiteRT and TVM using TI Deep Learning (TIDL) as the hardware backend for acceleration.

To deploy the model into an end-to-end vision application, start with edgeai-gst-apps, which composes the pipeline with multiple stages of hardware acceleration for pre-processing and post-processing the image, in addition to accelerating the AI model itself.

Choosing the right device for you

Device selection depends on the level of AI performance required and the camera throughput (resolution and framerate). Refer to the table below for a performance comparison across devices. Note: for comprehensive benchmarks of these devices, use the model selection tool available on Edge AI Studio.

The benchmarks in the table below were produced using SDK version 10.1 and demonstrate that the AM62A at 2 TOPS outperforms CPU-only solutions by a factor greater than 40x.

Semantic segmentation benchmarks (latency per inference, with throughput in FPS):

Product number | Processing core                | NPU    | DeepLabv3 (512x512) | FPN Lite, regnetx-800 backbone (512x512)
AM62P          | 4x Arm® Cortex®-A53            | None   | 1061 ms (0.94 FPS)  | 1560 ms (0.64 FPS)
AM62A7         | 4x Arm® Cortex®-A53 + C7™ NPU  | 2 TOPS | 25.3 ms (39 FPS)    | 48.72 ms (21 FPS)
TDA4VE-Q1      | 4x Arm® Cortex®-A53 + C7™ NPU  | 8 TOPS | 7.66 ms (130 FPS)   | 25.5 ms (39 FPS)
FPS (Frames per second)
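The ">40x" claim follows directly from the table's DeepLabv3 (512x512) column:

```python
# Cross-check the speedup claim: CPU-only AM62P vs. AM62A7 with the
# 2-TOPS C7 NPU, using the DeepLabv3 (512x512) latencies from the table.
cpu_ms = 1061.0   # AM62P, no NPU
npu_ms = 25.3     # AM62A7, C7 NPU
speedup = cpu_ms / npu_ms
print(round(speedup, 1))  # 41.9, i.e. greater than 40x
```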

All the hardware, software and resources you’ll need to get started

Hardware

SK-AM62A-LP
The AM62A is the lowest-cost AI-accelerated device in the AM6xA family, and is best suited for evaluation. A generic USB camera or webcam can be used for image capture and model evaluation on live data.

Software & development tools

PROCESSOR-SDK-LINUX-AM62A
The Edge AI processor SDK is Linux-based and includes the necessary software components to run a compiled model with hardware acceleration. Other Edge AI accelerated processors may be substituted for AM62A.

CCStudio™ Edge AI Studio
This suite contains tools for training, compiling and deploying models to TI edge AI processors. A model selection tool is available to view pre-generated benchmarks of popular models.

Command-Line tools
Tools for microprocessor devices with Linux and TIDL support. TI's edge AI solution simplifies the whole product life cycle of DNN development and deployment by providing a rich set of tools and optimized libraries.
