Inference Plugins¶
Overview¶
Inference plugins execute deep learning models on a variety of hardware accelerators. They handle model loading, optimization, and efficient inference across multiple AI frameworks and chipsets.
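As a mental model for what such a plugin exposes, here is a minimal sketch of a load-then-infer interface. It is illustrative only: `InferenceEngine`, `Tensor`, and the method signatures below are assumptions for this page, not the actual CVEDIA-RT plugin API.

```cpp
// Illustrative sketch only: InferenceEngine, Tensor, and these method
// signatures are hypothetical, not the actual CVEDIA-RT plugin API.
#include <cstdint>
#include <string>
#include <vector>

// A tensor reduced to a flat float buffer plus a shape, for brevity.
struct Tensor {
    std::vector<float> data;
    std::vector<int64_t> shape;
};

// Minimal contract a backend satisfies: load (and optimize) a model
// once, then run forward passes on demand.
class InferenceEngine {
public:
    virtual ~InferenceEngine() = default;

    // Load the model file and apply backend-specific optimization.
    virtual bool loadModel(const std::string& path) = 0;

    // Run one forward pass; returns one Tensor per model output.
    virtual std::vector<Tensor> infer(const std::vector<Tensor>& inputs) = 0;
};
```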
Available Plugins¶
| Plugin | Description |
|---|---|
| Engines | The Engines plugin collection provides unified AI inference acceleration across a wide range of hardware platforms and AI frameworks. It offers a common interface for neural network inference while leveraging platform-specific optimizations. |
| Inference | Inference is the core AI inference engine plugin for CVEDIA-RT. It provides a universal interface for running machine learning models on various hardware platforms, managing model loading, execution, and result processing across different inference backends and chipsets (see the sketch below). |
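Both rows describe the same pattern: one interface, many hardware-specific backends. A common way to realize that pattern is a factory keyed by backend name, sketched below. The `createEngine` function, the backend strings, and the commented-out engine classes are invented for illustration, and the sketch reuses the hypothetical `InferenceEngine` from the Overview.

```cpp
// Hypothetical backend factory: createEngine(), the backend names, and
// the commented-out engine classes are illustrative, not the registry
// CVEDIA-RT actually uses.
#include <functional>
#include <map>
#include <memory>
#include <stdexcept>
#include <string>

class InferenceEngine; // the hypothetical interface sketched in the Overview

std::unique_ptr<InferenceEngine> createEngine(const std::string& backend) {
    // Each entry would construct a platform-specific implementation
    // (TensorRT, OpenVINO, Hailo, RKNN, ...) behind the common interface.
    static const std::map<std::string,
        std::function<std::unique_ptr<InferenceEngine>()>> registry = {
        // {"tensorrt", [] { return std::make_unique<TensorRtEngine>(); }},
        // {"openvino", [] { return std::make_unique<OpenVinoEngine>(); }},
    };
    const auto it = registry.find(backend);
    if (it == registry.end()) {
        throw std::runtime_error("unknown inference backend: " + backend);
    }
    return it->second();
}
```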
Common Use Cases¶
- Running AI models on GPUs (TensorRT, OpenVINO)
- Edge device inference (Hailo, RKNN, Jetson)
- Multi-model pipelines (see the sketch after this list)
- Real-time object detection and classification
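To make the multi-model use case concrete, the sketch below chains two engines, a detector followed by a classifier, through the hypothetical interface from the Overview. The `runPipeline` name, the detector/classifier split, and the simplification of treating each detector output tensor as one detected region are all assumptions for illustration.

```cpp
// Hypothetical two-stage pipeline: detect objects in a frame, then
// classify each detection. Assumes the Tensor and InferenceEngine
// sketches from the Overview; all names here are invented.
#include <utility>
#include <vector>

std::vector<Tensor> runPipeline(InferenceEngine& detector,
                                InferenceEngine& classifier,
                                const Tensor& frame) {
    std::vector<Tensor> labels;
    // Stage 1: object detection on the full frame. For brevity, each
    // output tensor is treated as one detected region; a real pipeline
    // would decode boxes and crop the frame before stage 2.
    for (const Tensor& detection : detector.infer({frame})) {
        // Stage 2: classify the detected region.
        for (Tensor& label : classifier.infer({detection})) {
            labels.push_back(std::move(label));
        }
    }
    return labels;
}
```

Because both stages share the same interface, the detector and classifier could run on different backends (for example, a GPU detector feeding an NPU classifier) without changing the pipeline code.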