
Abstract
1 Introduction
  1.1 Vision Analytics
  1.2 End Equipments
  1.3 Deep learning: State-of-the-art
2 Embedded edge AI system: Design considerations
  2.1 Processors for edge AI: Technology landscape
  2.2 Edge AI with TI: Energy-efficient and Practical AI
    2.2.1 TDA4VM processor architecture
      2.2.1.1 Development platform
  2.3 Software programming
3 Industry standard performance and power benchmarking
  3.1 MLPerf models
  3.2 Performance and efficiency benchmarking
  3.3 Comparison against other SoC Architectures
    3.3.1 Benchmarking against GPU-based architectures
    3.3.2 Benchmarking against FPGA based SoCs
    3.3.3 Summary of competitive benchmarking
4 Conclusion
Revision History
5 References

1.1 Vision Analytics

Edge devices produce three types of data: video, audio, and other sensor data. Video-based analytics tends to be the most complex, because a video is a stream of many images per second and each image itself contains multiple color channels: red, green, and blue. With advances in cameras, vision-based analytics is gaining momentum across many applications, including smart video doorbells, video surveillance, drones, robots, autonomous vehicles, and last-mile delivery. Fundamentally, there are three functions that can be implemented with vision-based analytics, as shown in Figure 1-1: Classification, Detection, and Segmentation. Applied to the same image, these three functions in an edge AI system range from classifying the image as a whole, to locating individual objects, to pixel-level analysis of the entire scene.

Figure 1-1 Top three vision AI functions
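
To make the three functions concrete, the following is a minimal sketch (not specific to TDA4VM) that runs Classification, Detection, and Segmentation on a single image using off-the-shelf pretrained torchvision models. The particular model choices and the file name street_scene.jpg are illustrative assumptions only, and a production pipeline would also apply each model's prescribed preprocessing.

```python
# Minimal sketch: the three vision AI functions applied to one image.
# Assumes torchvision >= 0.13 and a local image named "street_scene.jpg".
import torch
from torchvision import models, transforms
from PIL import Image

preprocess = transforms.Compose([
    transforms.Resize((512, 512)),
    transforms.ToTensor(),          # 3 (RGB) x H x W, values in [0, 1]
])
img = preprocess(Image.open("street_scene.jpg"))
batch = img.unsqueeze(0)            # 1 x 3 x H x W

cls_model = models.resnet50(weights="DEFAULT").eval()
det_model = models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()
seg_model = models.segmentation.deeplabv3_resnet50(weights="DEFAULT").eval()

with torch.no_grad():
    # Classification: one label for the whole image.
    print("class id:", cls_model(batch).argmax(dim=1).item())

    # Detection: a bounding box and a label for each object in the image.
    det = det_model([img])[0]
    print("boxes:", det["boxes"].shape, "labels:", det["labels"].shape)

    # Segmentation: a class prediction for every pixel in the scene.
    seg = seg_model(batch)["out"]
    print("per-pixel class map:", seg.argmax(dim=1).shape)  # 1 x H x W
```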