
Ambarella Plugin

Description

The Ambarella plugin collection provides specialized support for Ambarella CV2X/S5L/S3L System-on-Chip (SoC) platforms, enabling direct hardware integration for video input and output operations on embedded vision systems.

This plugin collection delivers native hardware acceleration and direct driver integration for Ambarella-based embedded vision devices, providing optimal performance for video processing, AI inference, and display operations through specialized hardware interfaces.

Key Features

  • Hardware Integration: Direct integration with Ambarella CV2X/S5L/S3L SoC platforms
  • Zero-Copy Operations: Memory-mapped access to hardware video buffers
  • EazyAI Integration: Native support for Ambarella's EazyAI SDK and AI acceleration
  • IAV Driver Support: Direct integration with Ambarella's Image and Video driver interface
  • Cavalry Acceleration: Hardware acceleration support for specialized processing units
  • Multi-Platform Support: Optimized implementations for different Ambarella SoC variants
  • Real-Time Processing: Optimized for real-time video capture and processing workflows
  • Hardware Blur: Dedicated blur functionality for privacy protection
  • Color Space Optimization: Hardware-accelerated YUV to RGB conversion
  • Display Management: Native display output through Ambarella display subsystem

Plugin Components

AmbaReader Plugin

Purpose: Hardware-optimized video input from Ambarella platforms

Key Capabilities:

  • Direct memory-mapped access to hardware video buffers
  • Support for Y, ME0, ME1 buffer types
  • Hardware-accelerated YUV to RGB conversion
  • IAV driver integration for real-time video capture
  • Platform-specific optimizations for CV2X, S5L, S3L variants
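
The YUV to RGB step listed above runs in hardware on the SoC. As a reference for what the converter is expected to produce, the BT.601 limited-range math can be written in plain C++ (the function name is illustrative, not part of the plugin API):

```cpp
#include <algorithm>
#include <array>
#include <cmath>
#include <cstdint>
#include <cstdlib>

// Reference BT.601 limited-range YUV -> RGB conversion. The plugin offloads
// this to hardware; this scalar version only documents the expected math.
std::array<uint8_t, 3> yuv_to_rgb(uint8_t y, uint8_t u, uint8_t v) {
    const double c = y - 16.0, d = u - 128.0, e = v - 128.0;
    auto clamp8 = [](double x) {
        return static_cast<uint8_t>(std::clamp(std::lround(x), 0L, 255L));
    };
    return { clamp8(1.164 * c + 1.596 * e),                 // R
             clamp8(1.164 * c - 0.392 * d - 0.813 * e),     // G
             clamp8(1.164 * c + 2.017 * d) };               // B
}
```

Limited-range video black (Y=16) and white (Y=235) map to RGB 0 and 255 respectively, which is a quick sanity check for any converter configuration.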

AmbaOut Plugin

Purpose: Native video output and display for Ambarella platforms

Key Capabilities:

  • EazyAI SDK integration for hardware-accelerated processing
  • Metadata writing and overlay functionality
  • Hardware blur effects for privacy masking
  • Display subsystem management
  • Efficient video output through direct hardware interfaces
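
The privacy blur itself is applied by the SoC's processing units. A minimal CPU sketch of the same idea, a box blur restricted to a rectangular region of interest, looks like this (all names are illustrative, not the plugin's internals):

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// CPU reference for a privacy blur: a 3x3 box blur applied only inside a
// rectangular region (rx, ry, rw, rh) of a w x h grayscale frame.
void blur_region(std::vector<uint8_t>& img, int w, int h,
                 int rx, int ry, int rw, int rh) {
    std::vector<uint8_t> src = img;  // read from a copy, write in place
    for (int y = std::max(ry, 0); y < std::min(ry + rh, h); ++y) {
        for (int x = std::max(rx, 0); x < std::min(rx + rw, w); ++x) {
            int sum = 0, n = 0;
            for (int dy = -1; dy <= 1; ++dy)
                for (int dx = -1; dx <= 1; ++dx) {
                    int sx = x + dx, sy = y + dy;
                    if (sx >= 0 && sx < w && sy >= 0 && sy < h) {
                        sum += src[sy * w + sx];
                        ++n;
                    }
                }
            img[y * w + x] = static_cast<uint8_t>(sum / n);
        }
    }
}
```

Pixels outside the region are untouched, which is the behavior expected when blurring only detected faces or plates.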

Requirements

Hardware Requirements

  • Ambarella SoC: CV2X, S5L, or S3L System-on-Chip platforms
  • Memory: Sufficient RAM for video buffer operations (typically 1GB+ depending on resolution)
  • Storage: Access to hardware drivers and EazyAI SDK components

Software Dependencies

  • EazyAI SDK: Ambarella's AI inference and video processing library
  • IAV Driver: Ambarella Image and Video driver interface (/dev/iav)
  • Device Drivers: Appropriate kernel drivers for hardware access
  • Fast I/O Library: High-performance I/O operations (CV2X platforms)
  • OpenCV Integration: EazyAI OpenCV components
  • JPEG Library: JPEG codec support
  • NEON Support: ARM NEON SIMD optimizations

Platform-Specific Requirements

Ambarella CV2X

  • CVFlow AI acceleration support
  • Fast I/O library integration
  • Cavalry hardware acceleration drivers

Ambarella S5L

  • Smart IP camera SoC features
  • Advanced video processing unit access
  • Display subsystem drivers

Ambarella S3L

  • Low-power surveillance optimizations
  • Efficient memory management
  • Power-optimized driver interfaces

Configuration

AmbaReader Configuration

{
  "ambareader": {
    "buf_id": 0,
    "real_time": true,
    "sampling_rate": 0,
    "scale_width": 1920,
    "scale_height": 1080,
    "buffer_type": "Y",
    "color_conversion": true
  }
}

AmbaOut Configuration

{
  "ambaout": {
    "target": "display0",
    "enable_blur": true,
    "metadata_overlay": true,
    "eazyai_integration": true,
    "cavalry_acceleration": true
  }
}

Advanced Configuration

{
  "ambarella_platform": {
    "soc_type": "cv2x",
    "hardware_acceleration": true,
    "driver_path": "/dev/iav",
    "eazyai_config": {
      "sdk_path": "/opt/eazyai",
      "model_cache": "/tmp/eazyai_cache",
      "inference_threads": 4
    },
    "memory_optimization": {
      "buffer_pool_size": 8,
      "zero_copy_enabled": true,
      "dma_coherent": true
    },
    "display_config": {
      "output_format": "RGB888",
      "refresh_rate": 60,
      "resolution": "1920x1080"
    }
  }
}

Configuration Schema

| Parameter | Type | Default | Description |
|---|---|---|---|
| buf_id | int | 0 | Hardware buffer ID for video access |
| real_time | bool | true | Enable real-time processing mode |
| sampling_rate | int | 0 | Custom sampling rate (0 = auto) |
| scale_width | int | 0 | Video scaling width (0 = original) |
| scale_height | int | 0 | Video scaling height (0 = original) |
| buffer_type | string | "Y" | Buffer type ("Y", "ME0", "ME1") |
| color_conversion | bool | true | Enable YUV to RGB conversion |
| target | string | "display0" | Output target specification |
| enable_blur | bool | false | Enable hardware blur effects |
| metadata_overlay | bool | false | Enable metadata overlay |
| eazyai_integration | bool | true | Enable EazyAI SDK integration |
| cavalry_acceleration | bool | true | Enable Cavalry hardware acceleration |
| soc_type | string | "auto" | Ambarella SoC type ("cv2x", "s5l", "s3l") |
| hardware_acceleration | bool | true | Enable hardware acceleration features |
| driver_path | string | "/dev/iav" | Path to IAV driver device |
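
A host application can reject invalid settings before handing them to the plugin. The sketch below checks a few values against the schema above; the helper name is hypothetical and not part of the plugin API:

```cpp
#include <set>
#include <string>

// Illustrative validation of schema values: buffer_type and soc_type must
// come from the enumerations in the table, scaling must be non-negative
// (0 means "keep original dimensions").
bool validate_amba_config(const std::string& buffer_type,
                          const std::string& soc_type,
                          int scale_width, int scale_height) {
    static const std::set<std::string> buffers = {"Y", "ME0", "ME1"};
    static const std::set<std::string> socs = {"auto", "cv2x", "s5l", "s3l"};
    if (!buffers.count(buffer_type)) return false;
    if (!socs.count(soc_type)) return false;
    if (scale_width < 0 || scale_height < 0) return false;
    return true;
}
```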

API Reference

AmbaReader C++ API

Core Methods

class AmbaReader {
public:
    // Frame reading and control
    expected<cbuffer> readFrame(cmap frameSettings);
    void setCurrentFrame(int frameNum);
    int getCurrentFrame();
    int getFrameCount();

    // Playback control
    void setFPS(float fps);
    float getFPS(FPSType fpsType);
    bool isPaused();
    void pause(bool state);

    // Source management
    expected<void> openUri(const std::string& uri);
    expected<void> closeUri();

    // Hardware configuration
    void setBufId(int bufId);
    void setRealTime(bool realTime);
    void setSamplingRate(int rate);
};

Configuration Structure

struct AmbaReaderConfig {
    int buf_id = 0;           // Hardware buffer identifier
    bool real_time = true;    // Real-time processing mode
    int sampling_rate = 0;    // Custom sampling rate
    int scale_width = 0;      // Scaling width
    int scale_height = 0;     // Scaling height
};

AmbaOut C++ API

Core Methods

class AmbaOutCore {
public:
    // Output operations
    expected<void> writeMetadata(pCValue dict);
    expected<void> connectSink(const std::string& sink, const std::string& target);
    expected<void> refresh();

    // Display management
    expected<void> configureDisplay(const DisplayConfig& config);
    expected<void> enableBlur(bool enable);
    expected<void> setOverlay(const OverlayConfig& overlay);

    // EazyAI integration
    expected<void> initializeEazyAI();
    expected<void> processWithEazyAI(const cbuffer& frame);
};

Configuration Structure

struct AmbaOutConfig {
    std::string target;              // Output target
    bool enable_blur = false;        // Enable blur effects
    bool metadata_overlay = false;   // Enable metadata overlay
    bool eazyai_integration = true;  // Enable EazyAI SDK
};

Lua API

AmbaReader Lua Interface

-- Create AmbaReader instance
local reader = api.factory.ambareader.create(instance, "amba_input")

-- Configure hardware access
reader:setBufId(0)
reader:setRealTime(true)
reader:setSamplingRate(30)

-- Open video source
reader:openUri("amba://buffer0")

-- Read frames
local frame = reader:readFrame({
    timestamp = api.system.getCurrentTime(),
    format = "RGB"
})

-- Control playback
reader:setFPS(30.0)
reader:pause(false)

AmbaOut Lua Interface

-- Create AmbaOut instance
local output = api.factory.ambaout.create(instance, "amba_display")

-- Configure output
output:connectSink("display", "display0")
output:enableBlur(true)

-- Write metadata
local metadata = {
    timestamp = api.system.getCurrentTime(),
    objects = detections,
    overlay_text = "CVEDIA-RT on Ambarella"
}
output:writeMetadata(metadata)

-- Refresh display
output:refresh()

Examples

Basic Hardware Video Input

#include "ambareader.h"
#include "ambareaderinput.h"

// Initialize Ambarella video reader
class AmbaVideoProcessor {
public:
    void initialize() {
        // Create reader for Ambarella hardware
        reader_ = std::make_unique<AmbaReader>();

        // Configure for CV2X platform
        reader_->setBufId(0);           // Use hardware buffer 0
        reader_->setRealTime(true);     // Real-time processing
        reader_->setSamplingRate(30);   // 30 FPS capture

        // Open hardware video source
        auto result = reader_->openUri("amba://buffer0");
        if (!result) {
            LOGE << "Failed to open Ambarella video source: " << result.error().message();
            return;
        }

        LOGI << "Ambarella video reader initialized successfully";
    }

    void processVideoFrame() {
        // Read frame from hardware buffer
        cmap frameSettings;
        frameSettings["timestamp"] = CValue::create(getCurrentTimestamp());
        frameSettings["format"] = CValue::create("RGB");

        auto frame = reader_->readFrame(frameSettings);
        if (frame) {
            // Process frame with hardware acceleration
            processWithHardwareAcceleration(frame.value());

            // Display frame information
            LOGI << "Processed frame " << reader_->getCurrentFrame() 
                 << " of " << reader_->getFrameCount();
        }
    }

private:
    std::unique_ptr<AmbaReader> reader_;

    void processWithHardwareAcceleration(const cbuffer& frame) {
        // Leverage Ambarella hardware acceleration
        // Example: CVFlow AI processing, Cavalry operations
    }

    double getCurrentTimestamp() {
        return std::chrono::duration_cast<std::chrono::milliseconds>(
            std::chrono::steady_clock::now().time_since_epoch()).count() / 1000.0;
    }
};

EazyAI Integration with Display Output

#include "ambaoutcore.h"
#include "ambaoutmanaged.h"

// Ambarella output with EazyAI integration
class AmbaEazyAIProcessor {
public:
    void initialize() {
        // Create output handler
        output_ = std::make_unique<AmbaOutCore>();

        // Configure for display output
        output_->connectSink("display", "display0");

        // Initialize EazyAI SDK
        auto eazyaiResult = output_->initializeEazyAI();
        if (!eazyaiResult) {
            LOGE << "Failed to initialize EazyAI: " << eazyaiResult.error().message();
            return;
        }

        // Enable hardware blur for privacy
        output_->enableBlur(true);

        LOGI << "AmbaOut with EazyAI initialized successfully";
    }

    void processAndDisplay(const cbuffer& frame, const std::vector<Detection>& detections) {
        // Process frame with EazyAI hardware acceleration
        auto processResult = output_->processWithEazyAI(frame);
        if (!processResult) {
            LOGE << "EazyAI processing failed: " << processResult.error().message();
            return;
        }

        // Create metadata for overlay
        auto metadata = CValue::create();
        metadata->set("timestamp", getCurrentTimestamp());
        metadata->set("frame_count", frameCounter_++);

        // Add detection results
        auto detectionsArray = CValue::createArray();
        for (const auto& detection : detections) {
            auto det = CValue::create();
            det->set("x", detection.bbox.x);
            det->set("y", detection.bbox.y);
            det->set("w", detection.bbox.width);
            det->set("h", detection.bbox.height);
            det->set("confidence", detection.confidence);
            det->set("class", detection.className);
            detectionsArray->append(det);
        }
        metadata->set("detections", detectionsArray);

        // Write metadata to display
        output_->writeMetadata(metadata);

        // Refresh display output
        output_->refresh();

        LOGI << "Processed and displayed frame with " << detections.size() << " detections";
    }

private:
    std::unique_ptr<AmbaOutCore> output_;
    int frameCounter_ = 0;

    double getCurrentTimestamp() {
        return std::chrono::duration_cast<std::chrono::milliseconds>(
            std::chrono::steady_clock::now().time_since_epoch()).count() / 1000.0;
    }
};

Complete Ambarella Platform Integration

-- Complete Ambarella platform setup with CV2X optimization
local ambaReader = api.factory.ambareader.create(instance, "cv2x_reader")
local ambaOutput = api.factory.ambaout.create(instance, "cv2x_display")
local inference = api.factory.inference.create(instance, "eazyai_inference")

-- Configure reader for CV2X platform
ambaReader:configure({
    buf_id = 0,
    real_time = true,
    scale_width = 1920,
    scale_height = 1080,
    buffer_type = "Y",
    soc_optimization = "cv2x"
})

-- Configure output with EazyAI integration
ambaOutput:configure({
    target = "display0",
    eazyai_integration = true,
    cavalry_acceleration = true,
    enable_blur = true,
    metadata_overlay = true
})

-- Configure inference for Ambarella acceleration
inference:configure({
    engine = "eazyai",
    model_path = "/opt/models/detection.eazyai",
    hardware_acceleration = true,
    cavalry_optimization = true
})

-- Initialize components
ambaReader:openUri("amba://cv2x/buffer0")
ambaOutput:connectSink("display", "display0")
inference:loadModel()

-- Main processing loop
function processAmbaFrame()
    -- Read frame from hardware
    local frame = ambaReader:readFrame({
        timestamp = api.system.getCurrentTime(),
        format = "RGB",
        zero_copy = true  -- Use zero-copy for performance
    })

    if not frame then
        print("Error: Failed to read frame from Ambarella hardware")
        return
    end

    -- Run AI inference with EazyAI acceleration
    local detections = inference:runInference(frame)

    if detections then
        -- Process detections for display
        local metadata = {
            timestamp = os.time(),
            platform = "ambarella_cv2x",
            frame_info = {
                width = frame.width,
                height = frame.height,
                format = frame.format
            },
            detections = detections,
            performance = {
                inference_time = inference:getLastInferenceTime(),
                fps = ambaReader:getFPS("current")
            }
        }

        -- Apply hardware blur to sensitive areas
        if #detections > 0 then
            for _, detection in ipairs(detections) do
                if detection.class == "person" and detection.privacy_blur then
                    ambaOutput:applyBlur(detection.bbox)
                end
            end
        end

        -- Display results with metadata overlay
        ambaOutput:writeMetadata(metadata)
        ambaOutput:refresh()

        print(string.format(
            "Processed Ambarella frame: %d detections, %.2f FPS",
            #detections, metadata.performance.fps
        ))
    end
end

-- Performance monitoring
function monitorAmbaPerformance()
    local stats = {
        reader_fps = ambaReader:getFPS("average"),
        buffer_utilization = ambaReader:getBufferUtilization(),
        cavalry_usage = ambaOutput:getCavalryUtilization()
        -- Memory usage would need platform-specific implementation
    }

    print(string.format(
        "Ambarella Performance: buffer_util=%.0f%%, cavalry=%.0f%%, fps=%.2f",
        stats.buffer_utilization, stats.cavalry_usage, stats.reader_fps
    ))

    -- Adjust configuration based on performance
    if stats.buffer_utilization > 90 then
        print("Warning: High buffer utilization, reducing frame rate")
        ambaReader:setFPS(ambaReader:getFPS("current") * 0.9)
    end
end

Best Practices

Hardware Optimization

  • Zero-Copy Operations: Use memory-mapped access to avoid unnecessary data copying
  • Buffer Management: Optimize buffer IDs and types for specific use cases
  • Hardware Acceleration: Leverage Cavalry and CVFlow units for maximum performance
  • Driver Integration: Ensure proper IAV driver configuration and access permissions
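
Zero-copy here means mapping the driver's DMA buffer into the process instead of copying frame data out of it. The real mapping is obtained through the IAV driver's own interface; the sketch below maps an ordinary temp file purely to show the mmap pattern (nothing here is plugin API):

```cpp
#include <cstdlib>
#include <cstring>
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

// Map a buffer exposed by a file descriptor into the process address space.
// MAP_SHARED means producer writes are visible without any copy-out step,
// which is the essence of the zero-copy path.
void* map_buffer(int fd, size_t len) {
    void* p = mmap(nullptr, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    return p == MAP_FAILED ? nullptr : p;
}
```

With the real driver, the length and offset come from the driver's buffer-query interface rather than a file size.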

Platform-Specific Optimization

CV2X Platforms

  • Use CVFlow AI acceleration for inference workloads
  • Leverage Fast I/O library for high-performance operations
  • Configure Cavalry acceleration for specialized processing

S5L Platforms

  • Optimize for smart IP camera workflows
  • Use advanced video processing features
  • Implement efficient display pipeline management

S3L Platforms

  • Focus on power efficiency and low-resource usage
  • Optimize memory allocation and buffer management
  • Use appropriate scaling and sampling rates

Integration Guidelines

  • EazyAI SDK: Ensure proper EazyAI SDK installation and configuration
  • Driver Access: Configure proper permissions for /dev/iav device access
  • Memory Management: Implement efficient buffer pool management
  • Error Handling: Robust error handling for hardware-specific operations
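
A fixed-size pool, as suggested by buffer_pool_size in the advanced configuration, can be sketched as follows (indices stand in for mapped hardware buffers; this is not the plugin's internal implementation):

```cpp
#include <cstddef>
#include <optional>
#include <vector>

// Minimal fixed-size buffer pool. acquire() returning nullopt is the
// signal to drop a frame rather than allocate, keeping memory bounded.
class BufferPool {
public:
    explicit BufferPool(std::size_t n) {
        for (std::size_t i = 0; i < n; ++i) free_.push_back(i);
    }
    std::optional<std::size_t> acquire() {
        if (free_.empty()) return std::nullopt;  // pool exhausted
        std::size_t id = free_.back();
        free_.pop_back();
        return id;
    }
    void release(std::size_t id) { free_.push_back(id); }
    std::size_t available() const { return free_.size(); }
private:
    std::vector<std::size_t> free_;
};
```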

Troubleshooting

Common Issues

Hardware Access Problems

// Check IAV driver access (access() is declared in <unistd.h>)
if (access("/dev/iav", R_OK | W_OK) != 0) {
    LOGE << "Cannot access IAV driver. Check permissions and driver installation.";
    return;
}

// Verify EazyAI SDK availability
if (!isEazyAIAvailable()) {
    LOGE << "EazyAI SDK not available. Check installation and library paths.";
    return;
}

Buffer Management Issues

  • Buffer ID Conflicts: Ensure unique buffer IDs across applications
  • Memory Exhaustion: Monitor buffer pool usage and implement cleanup
  • DMA Issues: Verify DMA-coherent memory allocation

Performance Issues

  • Frame Drops: Adjust sampling rate and buffer sizes
  • High CPU Usage: Enable hardware acceleration features
  • Memory Leaks: Implement proper buffer cleanup and release
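
One way to make buffer cleanup robust against early returns, the usual source of the leaks listed above, is a small RAII guard. This is a generic C++ pattern, not a plugin API:

```cpp
#include <functional>
#include <utility>

// Runs the supplied release action when the guard leaves scope, so hardware
// buffers are returned even when an error path exits the function early.
class BufferGuard {
public:
    explicit BufferGuard(std::function<void()> release)
        : release_(std::move(release)) {}
    ~BufferGuard() { if (release_) release_(); }
    BufferGuard(const BufferGuard&) = delete;
    BufferGuard& operator=(const BufferGuard&) = delete;
private:
    std::function<void()> release_;
};
```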

Debugging Tools

// Hardware diagnostic functions
void diagnoseAmbaHardware() {
    // Check hardware capabilities
    auto caps = getAmbaCapabilities();
    LOGI << "Ambarella SoC Type: " << caps.socType;
    LOGI << "CVFlow Support: " << (caps.cvflowSupport ? "Yes" : "No");
    LOGI << "Cavalry Support: " << (caps.cavalrySupport ? "Yes" : "No");

    // Check driver status
    auto driverStatus = checkIAVDriver();
    LOGI << "IAV Driver Status: " << driverStatus;

    // Monitor buffer utilization
    auto bufferStats = getBufferStatistics();
    for (const auto& [bufId, stats] : bufferStats) {
        LOGI << "Buffer " << bufId << ": " << stats.utilization << "% utilized";
    }
}

Integration Examples

Complete Surveillance System

// Ambarella-optimized surveillance system
class AmbaSurveillanceSystem {
public:
    void initialize() {
        // Initialize hardware-accelerated components
        initializeAmbaReader();
        initializeEazyAIInference();
        initializeAmbaDisplay();

        // Configure for surveillance optimization
        configureForSurveillance();
    }

    void processSurveillanceFrame() {
        // Capture from hardware
        cmap settings;
        settings["format"] = CValue::create("RGB");
        auto frame = ambaReader_->readFrame(settings);
        if (!frame) return;

        // AI inference with EazyAI acceleration
        auto detections = runEazyAIInference(frame.value());

        // Process surveillance events
        auto events = processSurveillanceEvents(detections);

        // Display with privacy protection
        displayWithPrivacyBlur(frame.value(), detections, events);
    }

private:
    std::unique_ptr<AmbaReader> ambaReader_;
    std::unique_ptr<AmbaOutCore> ambaDisplay_;
    std::unique_ptr<EazyAIInference> eazyaiEngine_;
};

See Also