This matrix shows which plugins are available on different platforms and hardware configurations.
- ✅ Fully Supported - Available and tested
- ⚠️ Limited Support - Available with restrictions
- ❌ Not Available - Not supported on this platform
- 🧪 Experimental - Beta/testing phase
| Engine | Windows x64 | Linux x64 | Linux ARM64 | NVIDIA Jetson | Requirements |
|--------|-------------|-----------|-------------|---------------|--------------|
| TensorRT | ✅ | ✅ | ❌ | ✅ | NVIDIA GPU, CUDA 11.8+ |
| OpenVINO | ✅ | ✅ | ❌ | ❌ | Intel CPU/GPU/VPU |
| ONNX Runtime | ✅ | ✅ | ✅ | ✅ | CPU inference |
| Hailo | ✅ | ✅ | ✅ | ❌ | Hailo AI processor |
| RKNN | ❌ | ❌ | ✅ | ❌ | Rockchip NPU |
| Qualcomm SNPE | ❌ | ❌ | ✅ | ❌ | Qualcomm DSP/GPU |
| Blaize | ❌ | ✅ | ✅ | ❌ | Blaize GSP |
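As a quick sanity check against the TensorRT row's CUDA 11.8+ requirement, the driver-reported CUDA version can be read with `nvidia-smi` (a minimal sketch; it assumes the NVIDIA driver is installed, and the `nvcc` line only applies if the CUDA toolkit is also present):

```bash
# Driver version and GPU model
nvidia-smi --query-gpu=name,driver_version --format=csv,noheader

# Highest CUDA version the installed driver supports (must be 11.8+ for TensorRT)
nvidia-smi | grep "CUDA Version"

# CUDA toolkit version, if the toolkit is installed
nvcc --version | grep release
```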
| Plugin | Description | Requirements |
|--------|-------------|--------------|
| Input/Screencap | Desktop screen capture | Windows 10+ |
| Plugin | Description | Requirements |
|--------|-------------|--------------|
| Platform/Ambarella | Ambarella SoC integration | Ambarella SoCs (CV2x, CV3x series) |
| Decoder | Windows | Linux | Jetson | Notes |
|---------|---------|-------|--------|-------|
| Software | ✅ | ✅ | ✅ | CPU-based, always available |
| NVIDIA NVDEC | ✅ | ✅ | ✅ | GeForce GTX 900+ series |
| Intel Quick Sync | ✅ | ✅ | ❌ | Intel iGPU required |
| Intel VPL/MFX | ✅ | ✅ | ❌ | Intel Arc/Xe GPUs |
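To see which of these decode paths a given machine actually exposes, generic tools can be queried before enabling a decoder (a sketch assuming FFmpeg and, for Intel GPUs on Linux, the `vainfo` utility from libva-utils are installed; the plugins themselves may probe hardware differently):

```bash
# Hardware acceleration methods FFmpeg was built with (look for cuda, qsv, vaapi)
ffmpeg -hwaccels

# NVDEC-backed decoders exposed through FFmpeg, if any
ffmpeg -decoders 2>/dev/null | grep -i cuvid

# Codecs the Intel VA-API driver can handle (Linux, Intel iGPU/Arc)
vainfo
```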
| GPU Generation | Compute Capability | TensorRT | NVDEC | Notes |
|----------------|--------------------|----------|-------|-------|
| RTX 40 Series | 8.9 | ✅ | ✅ | Latest features |
| RTX 30 Series | 8.6 | ✅ | ✅ | Full support |
| RTX 20 Series | 7.5 | ✅ | ✅ | Full support |
| GTX 16 Series | 7.5 | ✅ | ✅ | No RT cores |
| GTX 10 Series | 6.1 | ✅ | ⚠️ | Limited NVDEC |
| GTX 900 Series | 5.2 | ✅ | ❌ | Minimum for TensorRT |
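If you are unsure which row applies to your GPU, reasonably recent NVIDIA drivers can report the compute capability directly (the `compute_cap` query field is not available on older drivers, in which case NVIDIA's published compute-capability list is the fallback):

```bash
# Report GPU name and compute capability; compare against the table above
nvidia-smi --query-gpu=name,compute_cap --format=csv
```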
| Jetson Model | JetPack | TensorRT | DLA | Max Power |
|--------------|---------|----------|-----|-----------|
| Orin AGX | 5.1+ | 8.5+ | 2x | 60W |
| Orin NX | 5.1+ | 8.5+ | 1x | 25W |
| Orin Nano | 5.1+ | 8.5+ | 1x | 15W |
| Xavier NX | 4.6+ | 8.0+ | 2x | 20W |
| Xavier AGX | 4.6+ | 8.0+ | 2x | 30W |
| Nano | 4.6+ | 7.1+ | ❌ | 10W |
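The Max Power column is only reached in the corresponding power mode, so it is worth checking which mode a Jetson is actually running in (assumes a stock JetPack install, where `nvpmodel` and `jetson_clocks` ship by default):

```bash
# Show the currently selected power mode
sudo nvpmodel -q

# Optionally lock clocks at their maximum for benchmarking
sudo jetson_clocks
```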
| Plugin Category | Minimum RAM | Recommended RAM | GPU Memory | Notes |
|-----------------|-------------|-----------------|------------|-------|
| Input Only | 2GB | 4GB | N/A | Basic video processing |
| Basic Inference | 4GB | 8GB | 2GB | Single model |
| Multi-Model | 8GB | 16GB | 4GB+ | Multiple AI models |
| High Throughput | 16GB | 32GB | 8GB+ | Multi-stream processing |
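To compare a target system against these figures, standard tools are usually enough (a Linux-oriented sketch; `free` comes with procps and the `nvidia-smi` query applies to NVIDIA GPUs only):

```bash
# System RAM
free -h

# GPU memory, total and currently used (NVIDIA GPUs)
nvidia-smi --query-gpu=memory.total,memory.used --format=csv
```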
| Protocol | Plugin | Platform Support | Use Case |
|----------|--------|------------------|----------|
| RTSP | GStreamerReader | All platforms | IP cameras |
| RTMP | GStreamerReader | All platforms | Live streaming |
| HTTP/HTTPS | FFmpegReader | All platforms | Web streams |
| UDP | GStreamerReader | All platforms | Multicast streams |
| WebRTC | 🧪 GStreamerReader | Limited | Web browsers |
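Before pointing GStreamerReader at a source, a plain `gst-launch-1.0` pipeline is a convenient way to confirm the stream is reachable and decodable (a sketch; the RTSP URL and credentials below are placeholders, and the plugin's internal pipeline may differ):

```bash
# Pull the stream, decode it, and discard the frames; -v prints the negotiated caps
gst-launch-1.0 -v rtspsrc location="rtsp://user:pass@192.168.1.10:554/stream" \
    ! decodebin ! fakesink sync=false
```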
Use these commands to determine your platform capabilities:
**Windows:**

```powershell
# GPU information
nvidia-smi

# System information
systeminfo
```

**Linux:**

```bash
# GPU information
nvidia-smi

# CPU information
lscpu

# Platform detection
uname -m
```

**NVIDIA Jetson:**

```bash
# Jetson model information
cat /proc/device-tree/model

# JetPack version
apt show nvidia-jetpack
```
When choosing plugins for a deployment, consider the following (a brief selection sketch follows this list):

- Hardware Available
  - GPU type and memory
  - CPU architecture
  - Available accelerators
- Performance Requirements
  - Number of streams
  - Required frame rate
  - Latency constraints
- Platform Constraints
  - Power consumption
  - Network connectivity
  - Storage capacity
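As a rough illustration only, the sketch below maps these criteria onto an engine choice on Linux; the device paths (such as `/dev/hailo0`) and the plugin mapping are assumptions drawn from the matrix above, not an official selection algorithm:

```bash
#!/usr/bin/env bash
# Hypothetical helper: suggest an inference engine based on detected hardware.
# Adapt the checks and the suggested plugins to your own deployment.

arch="$(uname -m)"

if command -v nvidia-smi >/dev/null 2>&1; then
    echo "NVIDIA GPU detected -> consider the TensorRT engine"
elif [ -e /dev/hailo0 ]; then   # assumed device node for a Hailo accelerator
    echo "Hailo accelerator detected -> consider the Hailo engine"
elif [ "$arch" = "x86_64" ] && grep -qi "GenuineIntel" /proc/cpuinfo; then
    echo "Intel CPU detected -> consider the OpenVINO engine"
else
    echo "No dedicated accelerator found -> ONNX Runtime (CPU) is the portable fallback"
fi
```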
If you're unsure about plugin compatibility:
- Check the specific plugin documentation
- Review hardware requirements
- Test in a development environment
- Contact support for enterprise deployments