Overview
Version Notice
This documentation applies to AI Appliance systems compatible with CVEDIA-RT version 2025.2.0 and later.
Introduction
The CVEDIA AI Appliance is a dedicated, preconfigured compute node engineered to deliver out-of-the-box AI video analytics using the CVEDIA-RT engine. Equipped with an onboard GPU or VPU, it operates as a standalone inference server that plugs directly into existing VMS architectures, such as Milestone XProtect and Nx Witness, as well as custom integrations and third-party systems via REST APIs, without requiring changes to cameras, NVRs, or server infrastructure.
Designed for large-scale and distributed deployments, the appliance provides predictable performance by isolating AI processing from production servers while offering the flexibility to scale analytics across multiple edge nodes. The appliance performs real-time AI inference on incoming video streams: external systems send frames for local processing and receive structured detection metadata in return.
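As a rough sketch of this flow, an external system might consume the returned detection metadata as follows. The JSON field names here are assumptions for illustration only, not the actual CVEDIA-RT response schema:

```python
import json

# Hypothetical example of structured detection metadata returned to an
# external system (field names are illustrative, not the real schema).
response = json.loads("""
{
  "stream": "entrance-cam",
  "timestamp": 1718000000.5,
  "detections": [
    {"label": "person", "confidence": 0.93, "bbox": [120, 40, 210, 310]}
  ]
}
""")

# Keep only confident detections for downstream alerting logic.
confident = [d for d in response["detections"] if d["confidence"] >= 0.5]
for d in confident:
    x1, y1, x2, y2 = d["bbox"]
    print(f'{d["label"]} at ({x1},{y1})-({x2},{y2})')
```

The key point is architectural: the appliance does the inference locally, and the connected system only has to parse lightweight metadata rather than run models itself.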
Key Features
Plug-and-Play Deployment
- Ships fully configured with CVEDIA-RT, accelerator drivers, and preloaded licenses.
- Requires only basic network configuration and VMS integration to connect with existing infrastructure.
- Compatible with VMS platforms running on either Linux or Windows operating systems.
- Can operate standalone without a VMS using the built-in Cockpit web interface for direct management and monitoring.
- Supports REST API integration for custom applications and third-party system connectivity beyond traditional VMS deployments.
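As a minimal sketch of such a custom integration, a client could submit a video frame to the appliance over HTTP. The host address, endpoint path, and payload layout below are assumptions for illustration; consult the appliance's REST API reference for the actual ones:

```python
import urllib.request

# Hypothetical appliance address -- replace with your unit's IP and port.
APPLIANCE = "http://192.168.1.50:8080"

def build_frame_request(stream_id: str, jpeg_bytes: bytes) -> urllib.request.Request:
    """Build a POST request submitting one JPEG frame for inference.
    The /v1/streams/.../frames path is an assumed placeholder."""
    return urllib.request.Request(
        url=f"{APPLIANCE}/v1/streams/{stream_id}/frames",
        data=jpeg_bytes,
        headers={"Content-Type": "image/jpeg"},
        method="POST",
    )

req = build_frame_request("lobby-cam", b"\xff\xd8...jpeg bytes...")
# response = urllib.request.urlopen(req)   # would return detection metadata
```

Because the exchange is plain HTTP, any language or platform with an HTTP client can integrate without a VMS plugin.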
Decoupled Inference Node
- Offloads AI workloads from host systems, isolating compute-intensive inference from production infrastructure.
- Enables smooth analytics performance without requiring GPUs on connected systems, freeing resources for core tasks.
- Integrates with VMS platforms (Nx Witness Remote Mode, Milestone Connector), custom applications via REST API, or operates standalone using the Cockpit web interface.
Scalable Multi-Appliance Operation
- Multiple appliances can operate on the same network, each with different licensed features enabled.
- Appliance names serve as namespaces for load balancing and failover logic, allowing grouping by analytic role (e.g., "Weapon Detection", "Intrusion Detection").
- For VMS integrations, stream load is automatically distributed across appliances within the same namespace, with built-in failover support. REST API and standalone deployments require manual stream assignment.
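For REST API and standalone deployments, where stream assignment is manual, a simple client-side round-robin over the appliances in a namespace can approximate the automatic distribution. The role names and addresses below are hypothetical:

```python
from itertools import cycle

# Hypothetical inventory: appliance names act as namespaces grouping
# units by analytic role (names and addresses are illustrative).
appliances = {
    "Weapon Detection": ["10.0.0.11", "10.0.0.12"],
    "Intrusion Detection": ["10.0.0.21"],
}

def make_assigner(role: str):
    """Return a function that assigns each new stream to the next
    appliance in the role's namespace, round-robin."""
    nodes = cycle(appliances[role])
    return lambda stream_id: (stream_id, next(nodes))

assign = make_assigner("Weapon Detection")
pairs = [assign(s) for s in ("cam-1", "cam-2", "cam-3")]
```

A production client would also want health checks so that streams fail over to the remaining appliances in the namespace, mirroring the built-in VMS behavior.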
Flexible Hardware Options
- Appliances are available in configurations optimized for 4, 16, or 50+ camera streams.
- Supports multiple accelerator types, including NVIDIA GPUs, Hailo VPUs, Blaize accelerators, and Intel NPUs.
- Can be deployed as a pre-configured appliance or installed on compatible custom hardware via ISO image.
For detailed hardware requirements and supported accelerators, see the Quickstart Guide.
Licensing Model
- Pre-configured appliances ship with licenses already activated — no additional setup required.
- Additional licenses can be activated via the Cockpit web interface or VMS plugin.
- ISO installations require manual license activation after deployment.
For deployment instructions and license activation steps, see the Quickstart Guide.
Next Steps
To get started with your AI Appliance, see the Quickstart Guide for deployment options and setup instructions.