RT Command Line Tool

rtcmd is CVEDIA-RT's command-line interface for system administration, model management, inference operations, and hardware diagnostics. It provides programmatic access to core CVEDIA-RT functionality without requiring the GUI.

Overview

rtcmd is organized into subcommands, each focused on specific functionality:

  • config - View and modify server configuration
  • modelforge - Download and manage AI models
  • licensing - License activation and management
  • inference - Run, benchmark, and evaluate AI models
  • onvif - Discover and configure ONVIF cameras
  • vpu - List and inspect AI accelerator capabilities
  • videodec - List available video decoders
  • systeminfo - Generate comprehensive system reports
  • streamgauge - Stream testing and performance analysis

Global Options

rtcmd [GLOBAL_OPTIONS] SUBCOMMAND [OPTIONS]

Global Options:
  -h, --help          Show help message
  -v, --version       Show CVEDIA-RT version
  --verbose           Enable verbose output
  --config FILE       Use custom configuration file
  --profile FILE      Record performance profile (Tracy format)
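
Global options precede the subcommand. A couple of illustrative combinations of the options listed above:

# Print the CVEDIA-RT version
rtcmd --version

# Run a subcommand with verbose output and a custom configuration file
rtcmd --verbose --config /path/to/custom.json systeminfo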

Available Commands

Server Configuration

config - Configuration Management

View and modify CVEDIA-RT server configuration settings.

rtcmd config [OPTIONS]

Options:
  -h, --help                  Print this help message and exit
  --help-all                  Print help for all subcommands
  -c, --config-file PATH      Path to rtconfig.json (default: $RT_HOME/rtconfig.json)
  -l, --list                  List all configurable settings and their current values
  -g, --get KEY               Get the value of a config key (e.g., webserver.host)
  -s, --set KEY=VALUE         Set a config value (e.g., webserver.port=8080)

Available Config Keys:

Key                  Description
webserver.host       Web server bind address
webserver.port       Web server port
webserver.enabled    Enable web server (true/false)
webserver.name       Web server name

Exit Codes:

Code  Description
0     Success
1     Generic error
2     Unknown config key
3     Invalid value
4     Config file not found
5     Failed to save config
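
These codes can drive scripting decisions. A minimal sketch using the codes from the table above (the key and messages are illustrative):

# React to config exit codes in a shell script
rtcmd config --get webserver.port
case $? in
    0) echo "OK" ;;
    2) echo "Unknown config key" ;;
    4) echo "Config file not found" ;;
    *) echo "Config operation failed" ;;
esac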

Usage Examples:

# List all configurable settings
rtcmd config --list

# Get current web server host
rtcmd config --get webserver.host

# Set web server port
rtcmd config --set webserver.port=8080

# Set web server bind address
rtcmd config --set webserver.host=0.0.0.0

Server Binding IP Configuration

The server binding IP determines which network interfaces can reach the web server. The available options are:

127.0.0.1 (Default)

Restricts access to local connections only. This is ideal for all-in-one deployments where external access is not required.

rtcmd config --set webserver.host=127.0.0.1

0.0.0.0 (All Interfaces)

Allows the server to be accessible from any network interface. This is suitable for distributed deployments where the server needs to be accessed by other machines on the network.

rtcmd config --set webserver.host=0.0.0.0

Specific Network IP

Binds the server to a single network interface. Use this when access should be restricted to a specific network.

rtcmd config --set webserver.host=192.168.1.100

Important

An incorrect IP can prevent access to the server. Always verify network connectivity after changing the binding address.
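
After changing the bind address, confirm the setting and check that the web server still responds. A minimal sketch (the host, port, and HTTP probe are illustrative; adjust them to your configuration):

# Confirm the configured bind address and port
rtcmd config --get webserver.host
rtcmd config --get webserver.port

# Probe the web server from a machine on the target network
curl -I http://192.168.1.100:8080/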

Model Management

modelforge - AI Model Downloads

Download and manage AI models from the CVEDIA Model Repository.

# Search for available models
rtcmd modelforge --search "detection"

# Download a specific model
rtcmd modelforge --download "model-uri"

# Download with overwrite
rtcmd modelforge --download "model-uri" --overwrite

# List downloaded models
rtcmd modelforge --list

Licensing

licensing - License Management

Activate and manage CVEDIA-RT licenses.

# Activate a license
rtcmd licensing --activate "LICENSE-KEY"

# Check license status  
rtcmd licensing --status

# List active licenses
rtcmd licensing --list

# Deactivate a license
rtcmd licensing --deactivate "LICENSE-KEY"
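
In scripts, the status check can gate activation. A minimal sketch, assuming --status exits with a non-zero code when no valid license is present:

# Activate only if no valid license is currently active
# (assumes --status returns a non-zero exit code when activation is needed)
if ! rtcmd licensing --status; then
    rtcmd licensing --activate "LICENSE-KEY"
fi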

AI Inference Operations

inference - Model Testing

Run, benchmark, and evaluate AI models.

# Run inference on an image
rtcmd inference run --model "model-path" --input "image.jpg"

# Benchmark model performance
rtcmd inference benchmark --model "model-path" --iterations 100

# Evaluate model accuracy
rtcmd inference evaluate --model "model-path" --dataset "test-data/"

Subcommands:

  • run - Execute inference on input data
  • benchmark - Measure model performance metrics
  • evaluate - Test model accuracy against datasets
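
Benchmarks can also target a specific accelerator; device IDs come from rtcmd vpu --list (the model name below is illustrative):

# Benchmark a model on a specific accelerator
rtcmd inference benchmark --model "securt/person-detection-v1" --device "device-id" --iterations 100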

Camera Management

onvif - ONVIF Camera Discovery

Discover and configure ONVIF-compatible IP cameras on the network.

# Discover cameras on network
rtcmd onvif --discover

# List previously discovered cameras
rtcmd onvif --list

# Get camera details by UUID
rtcmd onvif --id "camera-uuid"

# Set camera credentials
rtcmd onvif --id "camera-uuid" --username "admin" --password "pass"
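
For scripting, discovery results can be exported as JSON, assuming onvif supports the common --json flag described under Output Formats:

# Discover cameras, then save the list as JSON for later processing
rtcmd onvif --discover
rtcmd onvif --list --json > cameras.json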

Hardware Diagnostics

vpu - AI Accelerator Information

List and inspect AI accelerator capabilities (VPUs, GPUs, NPUs).

# List all available AI accelerators
rtcmd vpu --list

# Get capabilities for specific device
rtcmd vpu --deviceid "device-id"

# Output in bare format (scripting)
rtcmd vpu --list --bare

videodec - Video Decoder Information

List available hardware and software video decoders.

# List all video decoders
rtcmd videodec --list

# Show decoder capabilities
rtcmd videodec --capabilities

System Information

systeminfo - System Reports

Generate comprehensive system information reports.

# Generate system report
rtcmd systeminfo

# Output to JSON file
rtcmd systeminfo --output system-report.json

# Include detailed hardware info
rtcmd systeminfo --detailed

# Export specific information
rtcmd systeminfo --hardware --inference-engines

Includes:

  • Hardware specifications (CPU, GPU, memory)
  • Available inference engines and capabilities
  • System libraries and versions
  • Network configuration
  • Storage information
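
The JSON report can be inspected with standard tools; the exact keys depend on your system, so start by pretty-printing the full report:

# Generate the report and pretty-print it for inspection
rtcmd systeminfo --output system-report.json
jq '.' system-report.json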

streamgauge - Stream Testing

Test and analyze video stream performance.

# Test stream performance
rtcmd streamgauge --input "rtsp://camera/stream"

# Benchmark multiple streams
rtcmd streamgauge --input "stream1.mp4" --input "stream2.mp4"

# Generate performance report
rtcmd streamgauge --input "source" --report output.json

Usage Examples

Development Workflow

# 1. Check system capabilities
rtcmd systeminfo --inference-engines

# 2. Download required models
rtcmd modelforge --download "securt/person-detection-v1"

# 3. Test model performance
rtcmd inference benchmark --model "securt/person-detection-v1"

# 4. Discover available cameras
rtcmd onvif --discover

# 5. Test camera stream
rtcmd streamgauge --input "rtsp://192.168.1.100/stream"

Hardware Diagnostics

# Complete hardware assessment
rtcmd systeminfo --detailed --output system-full.json
rtcmd vpu --list > accelerators.txt
rtcmd videodec --list > decoders.txt

# Test inference capabilities
for device in $(rtcmd vpu --list --json | jq -r '.devices[].id'); do
    echo "Testing device: $device"
    rtcmd inference benchmark --device "$device" --model "test-model"
done

Configuration

rtcmd uses the same configuration system as other CVEDIA-RT components:

  • Global Config: rtconfig.json
  • Custom Config: Use the --config flag
  • Environment Variables: Standard RT environment variables

Custom Configuration

# Use custom configuration
rtcmd --config /path/to/custom.json modelforge --list

# Override specific settings
export RT_LOG_LEVEL=debug
rtcmd --verbose inference benchmark --model "test"

Output Formats

Most commands support multiple output formats:

  • Human-readable (default): Formatted for terminal display
  • JSON: Machine-readable structured data (--json)
  • Bare/Raw: Minimal output for scripting (--bare)

# Human-readable output
rtcmd vpu --list

# JSON output for parsing  
rtcmd vpu --list --json

# Bare output for scripting
rtcmd vpu --list --bare

Error Handling

rtcmd provides detailed error messages and exit codes; individual subcommands (such as config) document additional, more specific codes:

  • Exit Code 0: Success
  • Exit Code 1: General error
  • Exit Code 2: Invalid arguments
  • Exit Code 3: Runtime error

rtcmd modelforge --download "invalid-uri"
if [ $? -ne 0 ]; then
    echo "Download failed"
fi

Performance Profiling

Use the --profile option to record detailed performance metrics:

# Record performance profile
rtcmd --profile trace.tracy inference benchmark --model "test"

# View with Tracy profiler
# https://github.com/wolfpld/tracy

Integration with RT Ecosystem

rtcmd integrates seamlessly with other CVEDIA-RT components:

  • RT Server: Manage models and check system status
  • RT Studio: Validate models before GUI use
  • VMS Plugins: System diagnostics and model verification

Next Steps