
Instance Plugin

Description

Instance is the central orchestration hub of CVEDIA-RT, providing runtime control and management of AI processing instances. It is the core coordination point for those instances, handling their complete lifecycle from creation to destruction, along with solution loading, configuration management, and coordination between system components.

This plugin implements instance management with multi-threading support, extensive Lua scripting integration, UI callbacks, and plugin coordination. It enables dynamic creation, configuration, and control of AI processing pipelines while providing robust state management and performance monitoring.

Key Features

  • Complete Lifecycle Management: Create, configure, start, stop, pause, and reset processing instances
  • Solution Integration: Dynamic loading and management of CVEDIA-RT solutions
  • Multi-Threading Support: Thread-safe instance execution with worker thread pools
  • Extensive Lua Bindings: Comprehensive scripting API for automation and customization
  • Configuration Management: Dynamic configuration loading, validation, and hot-reloading
  • State Management: Persistent state tracking and context buffer management
  • Performance Monitoring: Built-in FPS tracking and performance metrics
  • UI Integration: Custom UI callbacks and real-time status updates
  • Plugin Coordination: Central registry and communication hub for all plugins
  • REST API: HTTP endpoints for programmatic instance management

Use Cases

  • AI Pipeline Orchestration: Manage complex multi-stage AI processing pipelines
  • Solution Deployment: Deploy and manage pre-built AI solutions
  • Dynamic Configuration: Runtime configuration changes without system restart
  • Multi-Instance Management: Control multiple concurrent processing instances
  • Workflow Automation: Automated instance lifecycle management through scripts
  • Development and Testing: Rapid prototyping and testing of AI workflows
  • Production Deployment: Robust production instance management with monitoring
  • Custom Solution Development: Framework for building custom AI solutions

Requirements

Hardware Requirements

  • CPU: Multi-core processor for concurrent instance management
  • Memory: Minimum 2GB RAM (8GB+ recommended for multiple instances)
  • Storage: Sufficient space for solutions, configurations, and temporary data

Software Dependencies

  • RTCORE: CVEDIA-RT core library and plugin infrastructure
  • Sol2: Modern C++ Lua binding library for scripting integration
  • Threading Libraries: Multi-threading support (std::thread, std::mutex)
  • JSON Library: Configuration parsing and serialization
  • File System Libraries: Configuration file management and hot-reloading

Platform Requirements

  • Windows: Fully supported with the complete feature set
  • Linux: Fully supported
  • UI Framework: Optional UI integration for desktop applications

Configuration

Basic Instance Configuration

{
  "InstanceId": "550e8400-e29b-41d4-a716-446655440000",
  "DisplayName": "Camera 1 Processing",
  "Solution": "object_detection",
  "AutoStart": true,
  "AutoRestart": false,
  "ReadOnly": false,
  "SystemInstance": false,
  "Persistent": true
}

Advanced Instance Configuration

{
  "InstanceId": "550e8400-e29b-41d4-a716-446655440001",
  "DisplayName": "Security Camera System",
  "Solution": "security_monitoring",
  "AutoStart": true,
  "AutoRestart": true,
  "ReadOnly": false,
  "SystemInstance": false,
  "Persistent": true,
  "Global": {
    "Detection": {
      "enabled": true,
      "inference_strategy": "motion_guided",
      "max_object_size": "large",
      "confidence_threshold": 0.7
    },
    "exit_on_end": false,
    "performance_monitoring": {
      "enabled": true,
      "fps_reporting_interval": 5000
    }
  },
  "Input": {
    "VideoReader": {
      "real_time": true,
      "buffer_size": 10,
      "drop_frames_on_delay": true
    },
    "uri": "rtsp://192.168.1.100:554/stream1",
    "media_format": {
      "color_format": 0,
      "height": 1080,
      "width": 1920,
      "is_software": false
    },
    "privacy_masks": [
      {
        "name": "building_entrance",
        "points": [[0.1, 0.1], [0.3, 0.1], [0.3, 0.4], [0.1, 0.4]]
      }
    ],
    "privacy_mode": "BLUR",
    "start_frame": 0
  },
  "output": {
    "handlers": [
      {
        "type": "rest",
        "endpoint": "https://api.example.com/events",
        "enabled": true
      },
      {
        "type": "file",
        "path": "./output/detections.json",
        "enabled": true
      }
    ]
  },
  "workers": {
    "max_lua_workers": 4,
    "worker_timeout_ms": 5000,
    "enable_performance_tracking": true
  }
}
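The privacy_masks points in the configuration above appear to be normalized [0, 1] frame coordinates. Assuming that convention holds, a mask can be converted to pixel coordinates for the configured 1920x1080 stream as follows (an illustration, not part of the plugin):

```cpp
#include <cassert>
#include <cmath>
#include <utility>
#include <vector>

// Assumes privacy-mask points are normalized [0, 1] frame coordinates,
// as the example configuration suggests; verify against your deployment.
using Point = std::pair<int, int>;

inline std::vector<Point> toPixels(
        const std::vector<std::pair<double, double>>& normalized,
        int width, int height) {
    std::vector<Point> out;
    out.reserve(normalized.size());
    for (const auto& [nx, ny] : normalized) {
        // Round to the nearest pixel rather than truncating
        out.push_back({static_cast<int>(std::lround(nx * width)),
                       static_cast<int>(std::lround(ny * height))});
    }
    return out;
}
```

With the "building_entrance" mask above, point [0.1, 0.1] maps to pixel (192, 108) on a 1920x1080 frame.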

Configuration Schema

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| InstanceId | string | auto-generated | Unique identifier for the instance |
| DisplayName | string | "" | Human-readable instance name |
| Solution | string | "" | Solution ID to load for this instance |
| AutoStart | bool | false | Automatically start instance on creation |
| AutoRestart | bool | false | Automatically restart instance on failure |
| ReadOnly | bool | false | Prevent configuration modifications |
| SystemInstance | bool | false | Mark as system-critical instance |
| Persistent | bool | true | Save instance configuration persistently |
| Global.Detection.enabled | bool | true | Enable detection processing |
| Global.Detection.inference_strategy | string | "continuous" | Detection strategy ("continuous", "motion_guided") |
| Global.Detection.max_object_size | string | "medium" | Maximum object size ("small", "medium", "large") |
| Input.VideoReader.real_time | bool | false | Enable real-time processing mode |
| Input.uri | string | "" | Input video source URI |
| Input.privacy_mode | string | "FILL" | Privacy mask mode ("FILL", "BLUR", "PIXELATE") |
| output.handlers | array | [] | Output handler configurations |
| workers.max_lua_workers | int | 2 | Maximum concurrent Lua worker threads |
| workers.worker_timeout_ms | int | 10000 | Worker thread timeout in milliseconds |
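The dotted parameter paths in the schema (for example, Global.Detection.enabled) address nested keys in the JSON configuration. The sketch below illustrates that resolution rule over a simple nested structure; it is a documentation aid, not the plugin's implementation:

```cpp
#include <cassert>
#include <map>
#include <sstream>
#include <string>

// Illustrative only: resolves a dotted path such as "Global.Detection.enabled"
// against a nested key tree, mirroring how the schema parameters address
// nested keys in the JSON configuration.
struct Node {
    std::string value;                 // leaf value (empty for objects)
    std::map<std::string, Node> kids;  // child keys
};

inline const Node* resolve(const Node& root, const std::string& path) {
    const Node* cur = &root;
    std::istringstream ss(path);
    std::string part;
    while (std::getline(ss, part, '.')) {   // split on '.'
        auto it = cur->kids.find(part);
        if (it == cur->kids.end()) return nullptr;  // path not present
        cur = &it->second;
    }
    return cur;
}
```

So Global.Detection.enabled walks Global, then Detection, then enabled; a missing segment at any level means the parameter is unset and its default applies.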

API Reference

C++ API (InstanceImpl)

Core Instance Lifecycle

class InstanceImpl : public iface::Instance {
public:
    // Construction and initialization
    InstanceImpl(Uuid const& instanceId, std::string const& displayName);
    expected<void> initialize() override;
    expected<void> loadConfig(std::string const& configPath) override;

    // Lifecycle management
    expected<void> start() override;
    bool stop() override;
    expected<void> reset() override;
    bool setPause(bool pause_state) override;

    // State queries
    InstanceState getState() const override;
    bool isRunning() const override;
    bool isPaused() const override;
    bool isConfigured() const override;

    // Configuration access
    uCVDictRoot config() override;                    // Full config
    uCVDictRoot config(std::string const& path) override; // Scoped config
    uCVDictRoot state() override;                     // Runtime state
    uCVDictRoot state(std::string const& path) override;  // Scoped state

    // Context management
    expected<cbuffer> getContextBuffer(std::string const& name) override;
    void setContextBuffer(std::string const& name, cbuffer const& buffer) override;

    // Plugin integration
    ObjectRegistry<iface::Module>& modules() override;
    Sink& inputSink() override;
    Sink& outputSink() override;

    // Performance monitoring
    internal::PerformanceBase& performance() override;
    void setPerformanceCounterState(bool enabled) override;
    float getFPS() const override;

    // Solution management
    expected<void> setSolution(std::weak_ptr<iface::Solution> solution) override;
};

Instance State Enumeration

enum class InstanceState {
    INSTANCE_NONE = 0,      // Uninitialized
    INSTANCE_STOPPED = 1,   // Stopped/Ready
    INSTANCE_STARTING = 2,  // Starting up
    INSTANCE_PAUSED = 3,    // Paused
    INSTANCE_RUNNING = 4,   // Running
    INSTANCE_FAIL = 5       // Failed state
};
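The examples in this document log states as raw integers via static_cast<int>. A small helper such as the following (not part of the CVEDIA-RT API; the enum is reproduced from the definition above) makes those logs human-readable:

```cpp
#include <cassert>
#include <string>

// Mirrors the InstanceState enumeration above; toString() is a
// documentation helper, not part of the CVEDIA-RT API.
enum class InstanceState {
    INSTANCE_NONE = 0,      // Uninitialized
    INSTANCE_STOPPED = 1,   // Stopped/Ready
    INSTANCE_STARTING = 2,  // Starting up
    INSTANCE_PAUSED = 3,    // Paused
    INSTANCE_RUNNING = 4,   // Running
    INSTANCE_FAIL = 5       // Failed state
};

inline std::string toString(InstanceState s) {
    switch (s) {
        case InstanceState::INSTANCE_NONE:     return "NONE";
        case InstanceState::INSTANCE_STOPPED:  return "STOPPED";
        case InstanceState::INSTANCE_STARTING: return "STARTING";
        case InstanceState::INSTANCE_PAUSED:   return "PAUSED";
        case InstanceState::INSTANCE_RUNNING:  return "RUNNING";
        case InstanceState::INSTANCE_FAIL:     return "FAIL";
    }
    return "UNKNOWN";
}
```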

Factory Creation

// Create new instance
auto instance = api::factory::Instance::create(
    Uuid::generate(),           // Instance ID
    "My Processing Instance"    // Display name
);

// Initialize and configure
instance->loadConfig("configs/my_instance.json");
instance->initialize();

// Start processing
instance->start();

Lua API (rt_instance)

Instance Control

-- Lifecycle operations
rt_instance.start(instance)                 -- Start instance
rt_instance.stop(instance)                  -- Stop instance
rt_instance.reset(instance)                 -- Reset instance
rt_instance.setPause(instance, true)        -- Pause instance
rt_instance.setPause(instance, false)       -- Resume instance

-- State queries
local running = rt_instance.isRunning(instance)
local paused = rt_instance.isPaused(instance)
local name = rt_instance.getName(instance)
local id = rt_instance.getId(instance)

Configuration Management

-- Configuration access
local value = rt_instance.getConfigValue(instance, "Global.Detection.enabled")
rt_instance.setConfigValue(instance, "Global.Detection.confidence_threshold", 0.8)

-- With defaults
local threshold = rt_instance.getConfigValueOr(instance, "detection_threshold", 0.5)

-- Create if not exists
local buffer_size = rt_instance.getConfigValueOrCreate(instance, "Input.buffer_size", 10)

-- State management
rt_instance.setStateValue(instance, "processing.frame_count", 1000)
local frame_count = rt_instance.getStateValue(instance, "processing.frame_count")

Data Pipeline Operations

-- Input/Output sink operations
local input_data = rt_instance.readInputSink(instance)
local output_data = rt_instance.readOutputSink(instance)

-- Write to output sink
rt_instance.writeOutputSink(instance, "detections", "json", detection_data, "Object Detections")
rt_instance.flushOutputSink(instance)

-- Context buffers for inter-plugin communication
local buffer = rt_instance.getContextBuffer(instance, "preprocessing_result")
rt_instance.setContextBuffer(instance, "custom_data", processed_buffer)

Worker Thread Management

-- Create background worker for concurrent processing
rt_instance.createWorker(instance, 
    "workerInit",    -- Initialization function name
    "workerRun"      -- Main processing function name
)

-- Create event-driven worker
rt_instance.createEventWorker(instance,
    "eventInit",     -- Initialization function
    "eventCallback", -- Event handler function
    "detection"      -- Event domain to subscribe to
)

-- Single-step execution (for debugging)
rt_instance.runStep(instance)

Examples

Basic Instance Creation and Management

#include "instance.h"
#include "rtcore.h"

// Basic instance management system
class InstanceManager {
public:
    void createAndRunInstance() {
        // Create instance
        instance_ = api::factory::Instance::create(
            Uuid::generate(),
            "Camera Processing Instance"
        );

        // Configure instance
        setupConfiguration();

        // Initialize and start
        auto initResult = instance_->initialize();
        if (!initResult) {
            LOGE << "Failed to initialize instance: " << initResult.error().message();
            return;
        }

        auto startResult = instance_->start();
        if (!startResult) {
            LOGE << "Failed to start instance: " << startResult.error().message();
            return;
        }

        LOGI << "Instance started successfully";

        // Monitor instance
        startMonitoring();
    }

private:
    std::shared_ptr<iface::Instance> instance_;

    void setupConfiguration() {
        // Access configuration
        auto config = instance_->config();

        // Set input configuration
        config->set("Input.uri", "rtsp://192.168.1.100:554/stream");
        config->set("Input.VideoReader.real_time", true);

        // Set detection parameters
        config->set("Global.Detection.enabled", true);
        config->set("Global.Detection.confidence_threshold", 0.7);

        // Configure output
        auto outputHandlers = CValue::createArray();
        auto restHandler = CValue::create();
        restHandler->set("type", "rest");
        restHandler->set("endpoint", "https://api.example.com/events");
        restHandler->set("enabled", true);
        outputHandlers->push_back(restHandler);

        config->set("output.handlers", outputHandlers);

        LOGI << "Instance configuration complete";
    }

    void startMonitoring() {
        // Monitor instance performance
        std::thread monitorThread([this]() {
            while (instance_->isRunning()) {
                float fps = instance_->getFPS();
                auto state = instance_->getState();

                LOGI << "Instance FPS: " << fps << ", State: " << static_cast<int>(state);

                std::this_thread::sleep_for(std::chrono::seconds(5));
            }
        });

        // Caution: a detached thread must not outlive this object; in
        // production, prefer a joinable member thread stopped at shutdown.
        monitorThread.detach();
    }
};

Advanced Lua Script Integration

-- Advanced instance management with Lua automation
local logger = api.logger
local factory = api.factory

-- Instance management state
local instanceManager = {
    instances = {},
    monitoring = {},
    config = {
        max_instances = 10,
        auto_restart = true,
        health_check_interval = 30, -- seconds
        performance_threshold = {
            min_fps = 5.0,
            max_cpu_usage = 80.0
        }
    }
}

-- Initialize instance manager
function initializeInstanceManager()
    logger.info("Initializing advanced instance manager")

    -- Start health monitoring
    api.system.createTimer("instance_health_monitor", 
                          instanceManager.config.health_check_interval * 1000, 
                          function()
                              performHealthCheck()
                          end)

    -- Load existing instances
    loadExistingInstances()

    -- instances is keyed by ID strings, so count with pairs() rather than #
    local count = 0
    for _ in pairs(instanceManager.instances) do count = count + 1 end
    logger.info("Instance manager initialized with", count, "instances")
end

-- Create and configure new instance
function createInstance(config)
    local instanceId = config.id or generateInstanceId()
    local displayName = config.name or ("Instance " .. instanceId)

    logger.info("Creating instance:", displayName)

    -- Create instance through factory
    local instance = factory.instance.create(instanceId, displayName)
    if not instance then
        logger.error("Failed to create instance:", displayName)
        return nil
    end

    -- Configure instance
    configureInstance(instance, config)

    -- Initialize instance
    local success = rt_instance.initialize(instance)
    if not success then
        logger.error("Failed to initialize instance:", displayName)
        return nil
    end

    -- Register instance
    instanceManager.instances[instanceId] = {
        instance = instance,
        config = config,
        created_at = api.system.getCurrentTime(),
        last_health_check = 0,
        restart_count = 0,
        performance_history = {}
    }

    -- Start if configured
    if config.auto_start then
        startInstance(instanceId)
    end

    logger.info("Instance created successfully:", displayName)
    return instanceId
end

-- Configure instance with provided settings
function configureInstance(instance, config)
    -- Set basic configuration
    rt_instance.setConfigValue(instance, "DisplayName", config.name or "")
    rt_instance.setConfigValue(instance, "Solution", config.solution or "")
    rt_instance.setConfigValue(instance, "AutoRestart", config.auto_restart or false)

    -- Input configuration
    if config.input then
        rt_instance.setConfigValue(instance, "Input.uri", config.input.uri or "")
        rt_instance.setConfigValue(instance, "Input.VideoReader.real_time", 
                                  config.input.real_time or false)

        if config.input.media_format then
            rt_instance.setConfigValue(instance, "Input.media_format.width", 
                                      config.input.media_format.width or 0)
            rt_instance.setConfigValue(instance, "Input.media_format.height", 
                                      config.input.media_format.height or 0)
        end
    end

    -- Detection configuration
    if config.detection then
        -- "enabled or true" would coerce an explicit false back to true,
        -- so test for nil instead of using "or"
        local det_enabled = config.detection.enabled
        if det_enabled == nil then det_enabled = true end
        rt_instance.setConfigValue(instance, "Global.Detection.enabled", det_enabled)
        rt_instance.setConfigValue(instance, "Global.Detection.confidence_threshold", 
                                  config.detection.confidence_threshold or 0.5)
        rt_instance.setConfigValue(instance, "Global.Detection.inference_strategy", 
                                  config.detection.strategy or "continuous")
    end

    -- Output configuration
    if config.output and config.output.handlers then
        for i, handler in ipairs(config.output.handlers) do
            local handlerPath = string.format("output.handlers[%d]", i - 1)
            rt_instance.setConfigValue(instance, handlerPath .. ".type", handler.type)
            rt_instance.setConfigValue(instance, handlerPath .. ".enabled", handler.enabled)

            -- Handler-specific configuration
            for key, value in pairs(handler) do
                if key ~= "type" and key ~= "enabled" then
                    rt_instance.setConfigValue(instance, handlerPath .. "." .. key, value)
                end
            end
        end
    end
end

-- Start instance with monitoring
function startInstance(instanceId)
    local instanceData = instanceManager.instances[instanceId]
    if not instanceData then
        logger.error("Instance not found:", instanceId)
        return false
    end

    logger.info("Starting instance:", instanceId)

    local success = rt_instance.start(instanceData.instance)
    if success then
        instanceData.started_at = api.system.getCurrentTime()
        instanceData.restart_count = 0

        -- Setup monitoring
        setupInstanceMonitoring(instanceId)

        logger.info("Instance started successfully:", instanceId)
    else
        logger.error("Failed to start instance:", instanceId)
    end

    return success
end

-- Stop instance gracefully
function stopInstance(instanceId)
    local instanceData = instanceManager.instances[instanceId]
    if not instanceData then
        logger.error("Instance not found:", instanceId)
        return false
    end

    logger.info("Stopping instance:", instanceId)

    local success = rt_instance.stop(instanceData.instance)
    if success then
        instanceData.stopped_at = api.system.getCurrentTime()
        logger.info("Instance stopped successfully:", instanceId)
    else
        logger.error("Failed to stop instance:", instanceId)
    end

    return success
end

-- Setup performance monitoring for instance
function setupInstanceMonitoring(instanceId)
    local instanceData = instanceManager.instances[instanceId]

    -- Create performance monitoring worker
    rt_instance.createWorker(instanceData.instance, 
        "monitoringInit", 
        "monitoringRun"
    )

    -- Initialize monitoring state
    rt_instance.setStateValue(instanceData.instance, "monitoring.enabled", true)
    rt_instance.setStateValue(instanceData.instance, "monitoring.instance_id", instanceId)
end

-- Worker initialization for monitoring
function monitoringInit()
    logger.info("Initializing performance monitoring worker")
end

-- Worker main loop for monitoring (`instance` is assumed to be provided in
-- the worker's Lua environment by the runtime)
function monitoringRun()
    local instance_id = rt_instance.getStateValue(instance, "monitoring.instance_id")
    if not instance_id then
        return
    end

    local instanceData = instanceManager.instances[instance_id]
    if not instanceData then
        return
    end

    -- Collect performance metrics
    local fps = getFPS(instanceData.instance)  -- Would use actual FPS API
    local cpu_usage = getCPUUsage()           -- Would use system API
    local memory_usage = getMemoryUsage()     -- Would use system API

    -- Store metrics
    local metrics = {
        timestamp = api.system.getCurrentTime(),
        fps = fps,
        cpu_usage = cpu_usage,
        memory_usage = memory_usage
    }

    table.insert(instanceData.performance_history, metrics)

    -- Keep only last 100 measurements
    if #instanceData.performance_history > 100 then
        table.remove(instanceData.performance_history, 1)
    end

    -- Check performance thresholds
    if fps < instanceManager.config.performance_threshold.min_fps then
        logger.warn(string.format("Low FPS detected for instance %s: %.2f", instance_id, fps))
        handlePerformanceIssue(instance_id, "low_fps", fps)
    end

    if cpu_usage > instanceManager.config.performance_threshold.max_cpu_usage then
        logger.warn(string.format("High CPU usage for instance %s: %.2f%%", instance_id, cpu_usage))
        handlePerformanceIssue(instance_id, "high_cpu", cpu_usage)
    end
end

-- Handle performance issues
function handlePerformanceIssue(instanceId, issue_type, value)
    local instanceData = instanceManager.instances[instanceId]

    -- Log issue
    logger.warn(string.format("Performance issue detected: %s for instance %s (value: %s)", 
                issue_type, instanceId, tostring(value)))

    -- Take corrective action based on configuration
    if instanceManager.config.auto_restart then
        -- Restart instance if performance is critical
        if (issue_type == "low_fps" and value < 1.0) or 
           (issue_type == "high_cpu" and value > 95.0) then
            logger.info("Attempting automatic restart for instance:", instanceId)
            restartInstance(instanceId)
        end
    end

    -- Send alert (would integrate with alerting system)
    sendPerformanceAlert(instanceId, issue_type, value)
end

-- Restart instance
function restartInstance(instanceId)
    local instanceData = instanceManager.instances[instanceId]
    if not instanceData then
        return false
    end

    instanceData.restart_count = instanceData.restart_count + 1

    logger.info(string.format("Restarting instance %s (attempt #%d)", 
                instanceId, instanceData.restart_count))

    -- Stop and start instance
    stopInstance(instanceId)
    api.system.sleep(2000) -- Wait 2 seconds
    return startInstance(instanceId)
end

-- Perform health check on all instances
function performHealthCheck()
    logger.debug("Performing health check on all instances")

    for instanceId, instanceData in pairs(instanceManager.instances) do
        checkInstanceHealth(instanceId, instanceData)
    end
end

-- Check health of specific instance
function checkInstanceHealth(instanceId, instanceData)
    local currentTime = api.system.getCurrentTime()
    instanceData.last_health_check = currentTime

    local isRunning = rt_instance.isRunning(instanceData.instance)
    local isPaused = rt_instance.isPaused(instanceData.instance)

    logger.debug(string.format("Health check - Instance %s: Running=%s, Paused=%s", 
                 instanceId, tostring(isRunning), tostring(isPaused)))

    -- Check for unexpected failures
    if not isRunning and not isPaused and instanceData.config.auto_restart then
        logger.warn("Instance unexpectedly stopped:", instanceId)

        -- Attempt restart if within limits
        if instanceData.restart_count < 3 then
            logger.info("Attempting automatic restart")
            restartInstance(instanceId)
        else
            logger.error("Max restart attempts reached for instance:", instanceId)
            sendCriticalAlert(instanceId, "max_restarts_exceeded")
        end
    end
end

-- Send performance alert
function sendPerformanceAlert(instanceId, issue_type, value)
    -- Would integrate with actual alerting system
    logger.warn(string.format("ALERT: Performance issue - %s: %s (value: %s)", 
                instanceId, issue_type, tostring(value)))
end

-- Send critical alert
function sendCriticalAlert(instanceId, alert_type)
    -- Would integrate with actual alerting system
    logger.error(string.format("CRITICAL ALERT: %s - Instance: %s", alert_type, instanceId))
end

-- Load existing instances on startup
function loadExistingInstances()
    -- Would load from persistent storage
    logger.info("Loading existing instances from storage")

    -- Example instances
    local examples = {
        {
            id = "camera_1",
            name = "Main Entrance Camera",
            solution = "security_monitoring",
            auto_start = true,
            auto_restart = true,
            input = {
                uri = "rtsp://192.168.1.101:554/stream",
                real_time = true
            },
            detection = {
                enabled = true,
                confidence_threshold = 0.7,
                strategy = "motion_guided"
            },
            output = {
                handlers = {
                    {
                        type = "rest",
                        endpoint = "https://security.example.com/api/events",
                        enabled = true
                    }
                }
            }
        }
    }

    for _, config in ipairs(examples) do
        createInstance(config)
    end
end

-- Generate unique instance ID
function generateInstanceId()
    return string.format("instance_%d_%d", 
                        api.system.getCurrentTime(), 
                        math.random(1000, 9999))
end

-- Initialize the instance manager
initializeInstanceManager()

logger.info("Advanced instance management system is active")

REST API Integration Example

// REST API endpoints for instance management
class InstanceRESTController {
public:
    void registerEndpoints(RESTServer& server) {
        // Create instance
        server.POST("/v1/core/instance", [this](const Request& req) {
            return createInstanceEndpoint(req);
        });

        // Update instance
        server.PUT("/v1/core/instance/{id}", [this](const Request& req) {
            return updateInstanceEndpoint(req);
        });

        // Get instance status
        server.GET("/v1/core/instance/{id}", [this](const Request& req) {
            return getInstanceEndpoint(req);
        });

        // Control instance
        server.POST("/v1/core/instance/{id}/start", [this](const Request& req) {
            return startInstanceEndpoint(req);
        });

        server.POST("/v1/core/instance/{id}/stop", [this](const Request& req) {
            return stopInstanceEndpoint(req);
        });

        // List instances
        server.GET("/v1/core/instances", [this](const Request& req) {
            return listInstancesEndpoint(req);
        });
    }

private:
    Response createInstanceEndpoint(const Request& req) {
        try {
            auto json = nlohmann::json::parse(req.body);

            std::string displayName = json.value("DisplayName", "");
            std::string solution = json.value("Solution", "");

            auto instance = api::factory::Instance::create(
                Uuid::generate(),
                displayName
            );

            // Apply configuration
            applyConfigurationFromJSON(instance, json);

            // Initialize instance
            auto initResult = instance->initialize();
            if (!initResult) {
                return Response{400, "Failed to initialize instance"};
            }

            // Store in registry
            instanceRegistry_[instance->getId()] = instance;

            return Response{201, createInstanceResponse(instance)};
        }
        catch (const std::exception& e) {
            return Response{400, std::string("Invalid request: ") + e.what()};
        }
    }

    Response startInstanceEndpoint(const Request& req) {
        auto instanceId = Uuid::fromString(req.path_params.at("id"));
        auto instance = findInstance(instanceId);

        if (!instance) {
            return Response{404, "Instance not found"};
        }

        auto result = instance->start();
        if (!result) {
            return Response{500, "Failed to start instance: " + result.error().message()};
        }

        return Response{200, "{\"status\": \"started\"}"};
    }

    std::string createInstanceResponse(std::shared_ptr<iface::Instance> instance) {
        nlohmann::json response;
        response["InstanceId"] = instance->getId().toString();
        response["DisplayName"] = instance->getDisplayName();
        response["State"] = static_cast<int>(instance->getState());
        response["IsRunning"] = instance->isRunning();
        response["FPS"] = instance->getFPS();

        return response.dump();
    }

    std::shared_ptr<iface::Instance> findInstance(const Uuid& id) {
        auto it = instanceRegistry_.find(id);
        return (it != instanceRegistry_.end()) ? it->second : nullptr;
    }

    // NOTE: requires a std::hash<Uuid> specialization (or a custom hasher)
    std::unordered_map<Uuid, std::shared_ptr<iface::Instance>> instanceRegistry_;
};

Best Practices

Instance Management

  • Resource Planning: Plan CPU, memory, and storage requirements for multiple concurrent instances
  • State Monitoring: Implement comprehensive monitoring for instance health and performance
  • Graceful Shutdown: Always stop instances gracefully to prevent data loss
  • Configuration Validation: Validate configurations before applying to prevent runtime errors

Performance Optimization

  • Worker Threads: Use appropriate number of worker threads based on system capabilities
  • Memory Management: Monitor memory usage and implement appropriate cleanup strategies
  • Context Buffers: Use context buffers efficiently for inter-plugin communication
  • Performance Counters: Enable performance monitoring selectively to minimize overhead
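Instantaneous FPS computed from a single frame interval is noisy. One common smoothing approach, shown here purely as an illustration rather than as the plugin's built-in counter, is an exponential moving average over per-frame timings:

```cpp
#include <cassert>
#include <cmath>

// Illustrative exponential-moving-average FPS tracker; the plugin's
// built-in performance counters may use a different scheme.
class FpsTracker {
public:
    explicit FpsTracker(double alpha = 0.1) : alpha_(alpha) {}

    // Feed the duration of one frame in milliseconds.
    void addFrame(double frame_ms) {
        if (avg_ms_ <= 0.0) avg_ms_ = frame_ms;  // seed with the first sample
        else avg_ms_ = alpha_ * frame_ms + (1.0 - alpha_) * avg_ms_;
    }

    // Smoothed frames-per-second estimate.
    double fps() const { return avg_ms_ > 0.0 ? 1000.0 / avg_ms_ : 0.0; }

private:
    double alpha_;          // weight of the newest sample
    double avg_ms_ = 0.0;   // smoothed frame time
};
```

A smaller alpha yields a steadier reading that reacts more slowly to load changes; tune it against your reporting interval.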

Solution Integration

  • Version Compatibility: Ensure solution compatibility before linking to instances
  • Configuration Templates: Use solution presets as starting points for instance configuration
  • Dynamic Loading: Implement hot-swapping of solutions for minimal downtime
  • Dependency Management: Handle solution dependencies and plugin loading order

Error Handling

  • Robust Error Handling: Implement comprehensive error handling for all operations
  • Automatic Recovery: Design instances to recover from transient failures
  • Logging Strategy: Maintain detailed logs for debugging and troubleshooting
  • Alerting Integration: Integrate with monitoring and alerting systems
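The Lua restart example earlier in this document waits a fixed 2 seconds between attempts, which can amplify crash loops. A bounded exponential backoff is a common refinement; the base delay and cap below are hypothetical values, not CVEDIA-RT defaults:

```cpp
#include <algorithm>
#include <cassert>

// Hypothetical restart-backoff policy: 2 s base delay, doubled per prior
// attempt, capped at 60 s. Parameter values are illustrative only.
inline long backoffMs(int attempt, long base_ms = 2000, long cap_ms = 60000) {
    long delay = base_ms;
    for (int i = 0; i < attempt && delay < cap_ms; ++i) delay *= 2;  // double per attempt
    return std::min(delay, cap_ms);
}
```

Sleeping backoffMs(restart_count) before each retry, instead of a constant delay, gives transient failures (for example, a camera rebooting) time to clear before the next attempt.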

Troubleshooting

Common Issues

Instance Fails to Start

// Debug instance startup issues
void debugInstanceStartup(std::shared_ptr<iface::Instance> instance) {
    LOGI << "Instance State: " << static_cast<int>(instance->getState());
    LOGI << "Is Configured: " << instance->isConfigured();

    auto config = instance->config();
    LOGI << "Configuration valid: " << (config ? "Yes" : "No");

    if (config) {
        // Check critical configuration values
        auto inputUri = config->get("Input.uri");
        LOGI << "Input URI: " << (inputUri ? inputUri->getString() : "Not set");
    }
}

Solutions:

  • Verify configuration file path and content
  • Check input source availability (cameras, files, streams)
  • Ensure required solutions and plugins are loaded
  • Review system resources (memory, CPU)

Performance Issues

-- Monitor instance performance
function monitorPerformance(instance)
    local fps = getFPS(instance)  -- placeholder; substitute the actual FPS API
    local state = rt_instance.getStateValue(instance, "performance")

    print(string.format("FPS: %.2f, State: %s", fps, tostring(state)))

    if fps < 10.0 then
        print("Warning: Low FPS detected, checking system resources")
        -- Implement performance analysis
    end
end

Solutions:

  • Reduce processing resolution or complexity
  • Optimize worker thread configuration
  • Check system resource utilization
  • Consider hardware acceleration options

Configuration Issues

// Validate instance configuration
bool validateInstanceConfig(std::shared_ptr<iface::Instance> instance) {
    auto config = instance->config();
    if (!config) {
        LOGE << "Configuration not available";
        return false;
    }

    // Check required fields
    auto inputUri = config->get("Input.uri");
    if (!inputUri || inputUri->getString().empty()) {
        LOGE << "Input URI not configured";
        return false;
    }

    // Validate solution
    auto solution = config->get("Solution");
    if (!solution || solution->getString().empty()) {
        LOGW << "No solution specified";
    }

    return true;
}

Solutions:

  • Use configuration templates and validation schemas
  • Check JSON syntax and structure
  • Verify all required parameters are set
  • Test configuration changes in a development environment

Memory Leaks

// Monitor memory usage
void monitorMemoryUsage() {
    static size_t lastMemUsage = 0;
    size_t currentMemUsage = getCurrentMemoryUsage(); // Platform-specific

    if (lastMemUsage != 0 && currentMemUsage > lastMemUsage * 1.2) { // 20% increase, skip first sample
        LOGW << "Memory usage increased significantly: " 
             << (currentMemUsage - lastMemUsage) << " bytes";
    }

    lastMemUsage = currentMemUsage;
}

Solutions:

  • Implement proper cleanup in instance destructors
  • Monitor context buffer usage and cleanup
  • Check for circular references in shared_ptr usage
  • Use memory profiling tools to identify leaks

Debug Tools

-- Instance debugging utilities
function debugInstance(instance, instanceId)
    print("=== Instance Debug Info ===")
    print("Instance ID: " .. tostring(instanceId))
    print("Is Running: " .. tostring(rt_instance.isRunning(instance)))
    print("Is Paused: " .. tostring(rt_instance.isPaused(instance)))

    -- Configuration dump
    print("Configuration:")
    local config_keys = {"Input.uri", "Solution", "AutoStart", "Global.Detection.enabled"}
    for _, key in ipairs(config_keys) do
        local value = rt_instance.getConfigValue(instance, key)
        print(string.format("  %s: %s", key, tostring(value)))
    end

    -- State dump
    print("State:")
    local state_keys = {"processing.frame_count", "performance.fps", "errors.count"}
    for _, key in ipairs(state_keys) do
        local value = rt_instance.getStateValue(instance, key)
        print(string.format("  %s: %s", key, tostring(value)))
    end
end

Integration Examples

Multi-Instance Processing Pipeline

// Multi-instance processing system
class ProcessingPipeline {
public:
    void setupPipeline() {
        // Stage 1: Input processing
        auto inputInstance = createInstance("input_processor");
        configureInputStage(inputInstance);

        // Stage 2: AI inference
        auto inferenceInstance = createInstance("ai_inference");
        configureInferenceStage(inferenceInstance);

        // Stage 3: Post-processing
        auto postProcInstance = createInstance("post_processor");
        configurePostProcessingStage(postProcInstance);

        // Link instances
        linkInstances(inputInstance, inferenceInstance);
        linkInstances(inferenceInstance, postProcInstance);

        // Start pipeline
        startPipeline();
    }

private:
    void linkInstances(std::shared_ptr<iface::Instance> source,
                      std::shared_ptr<iface::Instance> target) {
        // Setup data flow between instances using context buffers
        // Implementation would handle actual data pipeline connection
    }
};

See Also