Release notes - CVEDIA-RT - 2023.5.7


This release is not yet available to the public. If you require access to these updates, please contact us. We will provide download links and a complimentary activation license key.


  • Added support for running inference on multiple accelerators. For this to happen, each instance must be configured to use a specific accelerator by setting a different protocol in the URI of the AI model (e.g. tensorrt.1, tensorrt.2). There is no load balancing in place yet to distribute the computation load between multiple accelerators.
  • Added back the ONNX plugin.
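As a rough illustration of the accelerator-selection scheme described above, the sketch below parses the protocol part of a model URI into a backend name and device index. The URI format (e.g. `tensorrt.1://...`) follows the release note; the function name and the default of device 0 are assumptions for illustration, not part of the CVEDIA-RT API.

```python
from urllib.parse import urlparse

def parse_accelerator(model_uri: str):
    """Split a model URI's protocol into (backend, device_index).

    A protocol such as "tensorrt.1" selects the TensorRT backend on
    accelerator 1; a bare "tensorrt" is assumed to default to device 0.
    """
    scheme = urlparse(model_uri).scheme        # e.g. "tensorrt.1"
    backend, _, index = scheme.partition(".")  # split off the device suffix
    return backend, int(index) if index else 0

# Each instance points its model URI at a different accelerator:
print(parse_accelerator("tensorrt.1://models/detector.net"))  # ('tensorrt', 1)
print(parse_accelerator("tensorrt://models/detector.net"))    # ('tensorrt', 0)
```

Note that because there is no load balancing yet, the distribution of work across accelerators is entirely determined by how instances are assigned such protocols.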