The MSPM0G5187 series MCUs feature an on-chip Edge AI NPU (Neural-network Processing Unit) that enables Artificial Intelligence (AI) and Machine Learning (ML) applications.
The NPU is a highly optimized core for deep convolutional neural networks (CNNs), supporting machine learning inference using pre-trained models. It works in conjunction with the on-chip CPU to provide higher performance and lower power consumption for CNN inference.
The NPU runs at 80MHz and operates autonomously from the main CPU in the system. The NPU is a programmable hardware accelerator that supports various inference kernels and is designed to efficiently execute a subset of Machine Learning (ML) algorithms (Arc Fault, Vibration Analysis, Motor Faults, Acoustic Anomalies, Voice Processing, and so on). With a throughput of 640 to 2560 MOPS (Mega Operations Per Second), the NPU provides up to a 10x improvement in neural-network inference cycles when compared to a purely software-based implementation.
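As an illustration of this offload pattern, the following minimal C sketch shows the CPU handing an input window to the accelerator and waiting for completion while the NPU runs autonomously. All function names, types, and buffer sizes below are hypothetical placeholders, not the actual MSPM0 SDK or NPU driver API.

#include <stdint.h>
#include <stddef.h>
#include <stdbool.h>
#include <stdio.h>

#define INPUT_LEN  256   /* example input window length (assumption) */
#define OUTPUT_LEN 4     /* example number of output classes (assumption) */

/* Hypothetical stand-ins for NPU driver calls; real names and signatures will differ. */
static volatile bool npu_done = false;

static void npu_load_model(const uint8_t *model_blob) { (void)model_blob; }
static void npu_start_inference(const int8_t *in, size_t n) { (void)in; (void)n; npu_done = true; }
static void npu_read_output(int8_t *out, size_t n) { for (size_t i = 0; i < n; i++) { out[i] = 0; } }
static void cpu_enter_low_power_mode(void) { /* the CPU would sleep here while the NPU runs */ }

int main(void)
{
    static const uint8_t model_blob[1] = {0};  /* placeholder for a pre-trained CNN model image */
    int8_t input[INPUT_LEN] = {0};             /* e.g. one window of pre-processed sensor samples */
    int8_t output[OUTPUT_LEN];

    npu_load_model(model_blob);                /* one-time model setup */
    npu_start_inference(input, INPUT_LEN);     /* hand the input window to the accelerator */

    while (!npu_done) {                        /* the NPU runs autonomously; the CPU can sleep */
        cpu_enter_low_power_mode();
    }

    npu_read_output(output, OUTPUT_LEN);       /* fetch the class scores on completion */
    printf("class 0 score: %d\n", (int)output[0]);
    return 0;
}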
Load and train models with TI Edge AI Studio, and get started with the Model Composer GUI or TI's command-line Modelmaker tool for a more advanced set of capabilities. Both options automatically generate source code for the MSPM0, eliminating the need to write this code manually.
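As a rough sketch of how application code might call the generated inference routine, consider the following; the actual entry-point name, argument types, and header produced by Model Composer or Modelmaker are assumptions here and will differ.

#include <stdint.h>
#include <stddef.h>

/* Assumed shape of a generated inference entry point (a placeholder, not the actual generated API). */
static int model_run(const int8_t *input, size_t in_len, int8_t *output, size_t out_len)
{
    /* Placeholder body standing in for the generated NPU inference code. */
    (void)input; (void)in_len;
    for (size_t i = 0; i < out_len; i++) { output[i] = 0; }
    return 0;
}

int main(void)
{
    int8_t window[256] = {0};   /* one window of input features (size is an assumption) */
    int8_t scores[4];           /* per-class scores (size is an assumption) */

    /* The application only feeds data in and reads scores out;
       the generated code is expected to handle NPU setup and execution. */
    return model_run(window, sizeof window, scores, sizeof scores);
}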