The MSPM0G5187 series MCUs feature an on-chip Edge AI NPU (Neural-network Processing Unit) to enable Artificial Intelligence (AI) and Machine Learning (ML) applications.
The NPU is a highly optimized core for deep convolutional neural networks (CNNs), supporting machine learning inference using pre-trained models. It works in conjunction with the on-chip CPU to provide higher performance and lower power consumption for CNN inference. The NPU runs at 80MHz and operates autonomously from the main CPU in the system.
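As an illustration of this division of work, the following C sketch shows one common offload pattern: the CPU starts an inference on the accelerator and is notified by interrupt when the results are ready, idling or doing other work in the meantime. All names used here (npu_start, NPU_IRQHandler, and so on) are hypothetical placeholders, not the MSPM0 SDK API.

#include <stdint.h>
#include <stdbool.h>

/* Hypothetical NPU driver interface -- placeholder names only,
 * not the actual MSPM0 SDK API. */
extern void npu_start(const int8_t *input, int8_t *output);

static volatile bool inference_done = false;

/* Completion interrupt: the NPU signals the CPU when the output
 * buffer is valid, so the CPU can sleep or run other tasks meanwhile. */
void NPU_IRQHandler(void)
{
    inference_done = true;
}

void run_inference(const int8_t *frame, int8_t *scores)
{
    inference_done = false;
    npu_start(frame, scores);   /* kick off the NPU, then return */

    while (!inference_done) {
        /* In a real application the CPU would enter a low-power
         * wait-for-interrupt state here instead of busy-waiting. */
    }
}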
The NPU is a fully programmable hardware accelerator that can support arbitrary deep neural networks. Input activations can be 8-bit or 4-bit, while weight parameters can be 8-bit, 4-bit, or 2-bit. Supported layer types include the generic convolutional layer, pointwise layer, depthwise layer, pooling layers (max/average), and residual layers. Convolution kernel sizes can be configured, and layers can include padding and/or strides. ReLU activation is supported.
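The configurable layer parameters listed above could be captured in a per-layer descriptor such as the hypothetical C structure below; the field names and enumerations are illustrative only and do not reflect the actual NPU register map or SDK types.

#include <stdint.h>

/* Hypothetical per-layer descriptor illustrating the NPU's configurable
 * parameters; names are illustrative, not MSPM0 SDK definitions. */
typedef enum {
    LAYER_CONV,        /* generic convolution                    */
    LAYER_POINTWISE,   /* 1x1 convolution                        */
    LAYER_DEPTHWISE,   /* per-channel convolution                */
    LAYER_MAXPOOL,
    LAYER_AVGPOOL,
    LAYER_RESIDUAL     /* element-wise add of a skip connection  */
} layer_type_t;

typedef struct {
    layer_type_t type;
    uint8_t kernel_w, kernel_h;   /* configurable kernel size       */
    uint8_t stride, padding;      /* optional stride and padding    */
    uint8_t act_bits;             /* input activations: 8 or 4      */
    uint8_t weight_bits;          /* weights: 8, 4, or 2            */
    uint8_t relu;                 /* 1 = apply ReLU after the layer */
} npu_layer_desc_t;

/* Example: a 3x3 depthwise layer with 8-bit activations, 4-bit weights */
static const npu_layer_desc_t dw3x3 = {
    .type = LAYER_DEPTHWISE,
    .kernel_w = 3, .kernel_h = 3,
    .stride = 1, .padding = 1,
    .act_bits = 8, .weight_bits = 4,
    .relu = 1
};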
With a capability of 640–2560 MOPS (Mega Operations Per Second), the NPU provides up to a 10x improvement in NN inference cycles compared to a purely software-based implementation.
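For rough planning, the quoted throughput range can be related to the 80MHz NPU clock as in the sketch below: 640 MOPS corresponds to 8 operations per cycle and 2560 MOPS to 32 operations per cycle. The assumption that sustained throughput depends on layer types and operand precision is illustrative, not a datasheet specification, and the helper function is hypothetical.

#include <stdint.h>

/* Back-of-the-envelope figures from the quoted throughput range:
 *   640 MOPS  / 80 MHz =  8 operations per cycle
 *   2560 MOPS / 80 MHz = 32 operations per cycle
 * Whether a given model approaches the upper figure is assumed to depend
 * on its layer mix and operand precision. */
#define NPU_CLOCK_HZ   80000000u
#define NPU_MOPS_MIN   640u
#define NPU_MOPS_MAX   2560u

/* Estimated inference time in microseconds for a model requiring
 * 'model_mops' mega-operations, assuming a sustained rate of
 * 'mops_per_s' MOPS for the whole network. */
static inline uint32_t npu_est_inference_us(uint32_t model_mops,
                                            uint32_t mops_per_s)
{
    /* Mops / (Mops/s) = s; scale to microseconds with 64-bit math. */
    return (uint32_t)(((uint64_t)model_mops * 1000000u) / mops_per_s);
}

/* Example: a 10 Mop network at the peak rate -> about 3.9 ms. */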
For a seamless experience in data collection and model training, get started with TI Edge AI Studio: use the collected data either through the Model Composer GUI or, for advanced features, through the command-line tool, ModelMaker. Both options automatically generate source code for the MSPM0, eliminating the need to manually write code.