JetsonVideoReader Plugin¶
Description¶
JetsonVideoReader is a specialized video input plugin optimized for NVIDIA Jetson platforms. It provides hardware-accelerated video decoding and input processing specifically designed to leverage Jetson's multimedia capabilities and hardware decoders. This plugin maximizes performance on Jetson devices by utilizing dedicated NVDEC engines and zero-copy GPU operations.
The plugin is built on NVIDIA's jetson-utils library and multimedia APIs, providing direct access to hardware acceleration features that are essential for real-time AI processing applications on resource-constrained edge devices.
Key Features¶
- Hardware-Accelerated Decoding: Utilizes Jetson's dedicated NVDEC engines for video decoding
- Zero-Copy Operations: Efficient GPU memory management with minimal data transfers
- Multi-Format Support: Hardware acceleration for H.264, H.265, VP8, VP9, and more
- Real-Time Processing: Optimized for real-time AI inference applications
- Camera Integration: Direct integration with Jetson camera interfaces via V4L2
- CUDA Integration: Leverages CUDA runtime for GPU operations
- Platform Optimization: ARM architecture and GPU-specific optimizations
- Concurrent Streams: Multiple video streams with hardware acceleration
When to Use¶
- Running video processing on NVIDIA Jetson devices
- Maximizing performance on edge computing platforms
- Processing high-resolution video streams in real-time
- Minimizing power consumption through hardware acceleration
- Integrating with Jetson camera modules and CSI cameras
- Building efficient edge AI applications with video input
Requirements¶
Hardware Requirements¶
- NVIDIA Jetson Devices: Nano, Xavier NX, Xavier AGX, Orin Nano, Orin NX, Orin AGX
- Memory: Minimum 4GB RAM (8GB+ recommended for high-resolution streams)
- Storage: SSD recommended for high-bitrate video files
Software Dependencies¶
- JetPack: Version compatibility varies by Jetson model
- Jetson Nano: JetPack 4.6.1
- Xavier AGX and Orin (older CVEDIA-RT releases): JetPack 5.1
- Xavier AGX and Orin (newer CVEDIA-RT releases): JetPack 6.0
- NVIDIA Multimedia API: Jetson multimedia libraries
- CUDA Runtime: NVIDIA CUDA runtime for GPU operations
- jetson-utils: NVIDIA's utility library for Jetson platforms
- OpenCV: Computer vision operations
- Video4Linux (V4L2): Camera interface support
TensorRT Compatibility¶
- Supports TensorRT versions 7.1.3 through 8.6.1
- Optimal performance with TensorRT 8.6.1 (native compatibility)
- Legacy mode operation on older JetPack versions
Configuration¶
Basic Configuration¶
{
"jetsonvideoreader": {
"uri": "jetson:///path/to/video.mp4",
"real_time": false,
"sampling_rate": 0
}
}
Advanced Configuration with Scaling¶
{
"jetsonvideoreader": {
"uri": "jetson:///dev/video0",
"real_time": true,
"sampling_rate": 30,
"scale_width": 1920,
"scale_height": 1080
}
}
Configuration Schema¶
Parameter | Type | Default | Description |
---|---|---|---|
uri | string | required | Video source URI with "jetson://" scheme |
real_time | boolean | false | Slow down disk-based playback to match the source FPS (simulates a live stream) |
sampling_rate | integer | 0 | Override FPS when real_time is false (0 = use source FPS) |
scale_width | integer | 0 | Output width in pixels (0 = original width) |
scale_height | integer | 0 | Output height in pixels (0 = original height) |
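The schema defaults can also be applied client-side before saving a configuration. Below is a hypothetical pure-Lua helper (not part of the plugin API) that fills in the defaults from the table above and rejects URIs missing the jetson:// scheme:

```lua
-- Hypothetical helper (not part of the plugin API): apply schema
-- defaults and validate the URI scheme before saving a config.
local DEFAULTS = {
    real_time = false,
    sampling_rate = 0,
    scale_width = 0,
    scale_height = 0,
}

local function normalizeConfig(cfg)
    assert(type(cfg.uri) == "string", "uri is required")
    assert(cfg.uri:sub(1, 9) == "jetson://",
        "uri must use the jetson:// scheme")
    local out = { uri = cfg.uri }
    for key, default in pairs(DEFAULTS) do
        if cfg[key] == nil then
            out[key] = default
        else
            out[key] = cfg[key]
        end
    end
    return out
end
```

For example, `normalizeConfig({ uri = "jetson:///data/video.mp4" })` returns a table with `real_time = false`, `sampling_rate = 0`, `scale_width = 0`, and `scale_height = 0` filled in.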
API Reference¶
C++ API¶
The JetsonVideoReader implements the InputHandler interface:
class JetsonVideoReader {
public:
// Frame reading operations
cvec readNextFrame();
cvec readFrame();
cvec getNextFrame();
// Playback control
void setCurrentFrame(int frameNum);
int getCurrentFrame();
int getFrameCount();
bool setFPS(float fps);
float getFPS(FPSType fpsType);
// Stream management
bool openUri(std::string const& uri);
void close();
bool canRead();
bool canSeek();
// State management
bool isPaused();
bool isEnded();
void pause(bool state);
// Metadata access
double getCurrentTimestamp();
std::string getSourceURI();
std::string getSourceDesc();
// Configuration
struct config {
bool real_time = false;
int sampling_rate = 0;
int scale_width = 0;
int scale_height = 0;
};
};
URI Registration¶
extern "C" EXPORT void registerHandler() {
api::input::registerUriHandler("jetson", &JetsonVideoReaderHandler::create);
}
Lua API¶
JetsonVideoReader is typically used through the Input plugin interface:
-- Create input instance for Jetson video
local instance = api.thread.getCurrentInstance()
local input = api.factory.input.create(instance, "Input")
-- Configure for hardware-accelerated decoding
local config = {
uri = "jetson:///path/to/video.h264",
real_time = true,
scale_width = 1280,
scale_height = 720
}
input:saveConfig(config)
input:setSourceFromConfig()
Supported Video Formats¶
Hardware-Accelerated Formats¶
Based on NVDEC compatibility, the plugin supports:
Format | Max Resolution | Jetson Models | Notes |
---|---|---|---|
H.264/AVC | 4096x4096 | All Jetson | Full hardware acceleration |
H.265/HEVC | 8192x8192 | Xavier, Orin | Higher efficiency codec |
VP8 | 4096x4096 | Xavier, Orin | Web streaming format |
VP9 | 8192x8192 | Xavier, Orin | Advanced web codec |
MPEG-1/MPEG-2 | 2048x2048 | All Jetson | Legacy format support |
AV1 | 8192x8192 | Orin series | Latest generation codec |
Camera Support¶
- CSI Cameras: Direct integration with Jetson camera modules
- USB Cameras: V4L2 compatible cameras
- IP Cameras: RTSP streams with hardware decoding
- Custom Cameras: Through V4L2 interface
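Every source type above is addressed through the jetson:// scheme, so a tiny helper can build URIs from local device or file paths. This is a hypothetical convenience function, not part of the plugin:

```lua
-- Hypothetical convenience function (not part of the plugin API):
-- prefix an absolute device or file path with the jetson:// scheme.
local function jetsonUri(path)
    assert(path:sub(1, 1) == "/", "expected an absolute path")
    return "jetson://" .. path
end

-- jetsonUri("/dev/video0")      -> "jetson:///dev/video0"
-- jetsonUri("/data/video.h264") -> "jetson:///data/video.h264"
```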
Examples¶
Basic Video File Processing¶
-- Create Jetson video input
local instance = api.thread.getCurrentInstance()
local input = api.factory.input.create(instance, "Input")
-- Configure for H.264 file with hardware acceleration
local config = {
uri = "jetson:///data/video.h264",
real_time = false, -- Process as fast as possible
sampling_rate = 0 -- Use original FPS
}
input:saveConfig(config)
input:setSourceFromConfig()
-- Process frames with hardware acceleration
while input:canRead() do
local frames = input:readMetaFrames(false)
if frames and #frames > 0 then
-- Frames are already in GPU memory for efficient processing
processJetsonFrame(frames[1])
end
end
Real-Time Camera Processing¶
-- Configure for CSI camera input
local instance = api.thread.getCurrentInstance()
local input = api.factory.input.create(instance, "Input")
local config = {
uri = "jetson:///dev/video0", -- CSI camera
real_time = true, -- Real-time processing
sampling_rate = 30, -- 30 FPS
scale_width = 1920, -- Scale to Full HD
scale_height = 1080
}
input:saveConfig(config)
input:setSourceFromConfig()
-- Real-time camera processing
while input:canRead() do
local frames = input:readMetaFrames(false)
if frames and #frames > 0 then
-- Process camera frames in real-time
analyzeFrameRealTime(frames[1])
end
end
High-Resolution Video Scaling¶
-- Process 4K video with hardware scaling
local instance = api.thread.getCurrentInstance()
local input = api.factory.input.create(instance, "Input")
local config = {
uri = "jetson:///data/4k_video.h265",
real_time = false,
sampling_rate = 0,
scale_width = 1920, -- Scale down from 4K to 1080p
scale_height = 1080 -- for faster processing
}
input:saveConfig(config)
input:setSourceFromConfig()
-- Monitor processing performance
local frame_count = 0
local start_time = os.time()
while input:canRead() do
local frames = input:readMetaFrames(false)
if frames and #frames > 0 then
frame_count = frame_count + 1
-- Log performance every 100 frames
if frame_count % 100 == 0 then
local elapsed = os.time() - start_time
if elapsed > 0 then
local fps = frame_count / elapsed
api.logging.LogInfo("Processing FPS: " .. fps)
end
end
end
end
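Choosing scale_width and scale_height by hand risks distorting the image. A small pure-Lua sketch (hypothetical, not part of the plugin) that derives a height preserving the source aspect ratio for a given target width:

```lua
-- Hypothetical helper (not part of the plugin): compute a
-- scale_width/scale_height pair that preserves the source aspect
-- ratio for a given target width.
local function scaleToWidth(srcW, srcH, targetW)
    local h = math.floor(targetW * srcH / srcW + 0.5)
    -- round up to an even height; video pipelines commonly
    -- require even dimensions
    if h % 2 == 1 then h = h + 1 end
    return targetW, h
end

-- scaleToWidth(3840, 2160, 1920) -> 1920, 1080
```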
Multi-Stream Processing¶
-- Process multiple video streams with hardware acceleration
local streams = {}
local stream_configs = {
{
name = "stream1",
uri = "jetson:///data/stream1.h264",
scale_width = 1280,
scale_height = 720
},
{
name = "stream2",
uri = "jetson:///data/stream2.h265",
scale_width = 1920,
scale_height = 1080
}
}
-- Create multiple Jetson input instances
local instance = api.thread.getCurrentInstance()
for i, config in ipairs(stream_configs) do
streams[i] = api.factory.input.create(instance, "Input" .. i)
local stream_config = {
uri = config.uri,
real_time = true,
scale_width = config.scale_width,
scale_height = config.scale_height
}
streams[i]:saveConfig(stream_config)
streams[i]:setSourceFromConfig()
end
-- Process all streams until every stream has ended
local active = true
while active do
    active = false
    for i, stream in ipairs(streams) do
        if stream:canRead() then
            active = true
            local frames = stream:readMetaFrames(false)
            if frames and #frames > 0 then
                processMultiStream(i, frames[1])
            end
        end
    end
end
Best Practices¶
Hardware Optimization¶
- Use hardware scaling when possible to reduce processing load
- Keep data in GPU memory to avoid CPU-GPU transfers
- Match resolution to your AI model requirements
- Monitor decode engine usage to avoid overloading hardware
Performance Tuning¶
- Profile on target hardware - Different Jetson models have varying capabilities
- Use appropriate video formats - H.265 is more efficient but requires more compute
- Configure scaling appropriately - Hardware scaling is more efficient than software
- Monitor system resources - Balance between multiple streams and processing
Memory Management¶
- Monitor GPU memory usage - Jetson devices have limited GPU memory
- Use zero-copy when possible - Minimize memory transfers
- Configure appropriate buffer sizes - Balance latency and memory usage
- Clean up resources properly - Ensure proper resource cleanup
Real-Time Processing¶
- Set appropriate FPS targets - Match your processing capabilities
- Use real_time mode for live streams and cameras
- Monitor frame drops - Ensure processing keeps up with input
- Optimize inference models - Use TensorRT optimization for AI models
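Monitoring frame drops, as recommended above, can be approximated by comparing the number of frames actually processed against the number expected from the target FPS. A hypothetical pure-Lua sketch:

```lua
-- Hypothetical drop-rate estimate: compare frames actually processed
-- against the number expected from the target FPS over a time window.
local function dropRate(processedFrames, elapsedSeconds, targetFps)
    local expected = elapsedSeconds * targetFps
    if expected <= 0 then return 0 end
    local dropped = math.max(0, expected - processedFrames)
    return dropped / expected
end

-- dropRate(240, 10, 30) -> 0.2 (20% of expected frames were missed)
```

If the rate climbs over time, the processing stage is not keeping up with the input and the resolution, FPS target, or model complexity should be reduced.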
Troubleshooting¶
Common Issues¶
- "NVDEC not available" error
    - Verify the JetPack installation is complete
    - Check the NVIDIA Multimedia API installation
    - Ensure the hardware supports the video format
    - Test with a simpler video format (H.264)
- Poor video performance
    - Check whether hardware acceleration is actually in use
    - Monitor GPU utilization with tegrastats
    - Reduce video resolution or frame rate
    - Verify sufficient cooling for sustained performance
- Memory allocation failures
    - Check available GPU memory
    - Reduce buffer sizes or the number of concurrent streams
    - Monitor memory usage with tegrastats
    - Ensure adequate system memory
- Camera not detected
    - Verify the camera connection and power
    - Check V4L2 device availability (/dev/video*)
    - Test the camera with the v4l2-ctl utility
    - Ensure proper camera module configuration
Performance Issues¶
- Frame drops or stuttering
    - Check CPU and GPU utilization
    - Verify adequate cooling and power
    - Reduce the concurrent processing load
    - Optimize the processing pipeline
- High latency
    - Reduce buffer sizes for lower latency
    - Use hardware scaling instead of software scaling
    - Optimize decode settings
    - Check for thermal throttling
Platform-Specific Issues¶
- Jetson Nano limitations
    - Limited to H.264 hardware decoding
    - Reduced concurrent stream capacity
    - Monitor power consumption carefully
    - Use lower-resolution streams when possible
- Xavier/Orin performance tuning
    - Utilize multiple decode engines
    - Take advantage of higher memory bandwidth
    - Use advanced codecs (H.265, VP9)
    - Optimize for the specific model's capabilities
Debugging Tips¶
-- Monitor Jetson video reader performance
local function debugJetsonVideo(input)
local stats = {
current_frame = input:getCurrentFrame(),
total_frames = input:getFrameCount(),
fps = input:getFPS(3), -- Real FPS (type 3)
timestamp = input:getCurrentTimestamp(),
can_read = input:canRead(),
can_seek = input:canSeek()
}
local json = dofile(luaroot .. "/api/json.lua")
api.logging.LogDebug("Jetson Video Stats: " .. json.encode(stats))
-- Note: System resource monitoring requires external tools
-- Use tegrastats command-line tool for Jetson-specific metrics:
-- os.execute("tegrastats --interval 1000")
-- Or implement custom monitoring through /proc filesystem
end
-- Monitor hardware acceleration through system tools
local function checkHardwareAccel()
-- Use tegrastats to monitor hardware utilization
-- This requires parsing tegrastats output or using system monitoring
-- Example of running tegrastats (output needs parsing):
local handle = io.popen("tegrastats --interval 1000 | head -1")
if handle then
local result = handle:read("*a")
handle:close()
-- Parse result for GPU/NVDEC usage
api.logging.LogInfo("Tegrastats output: " .. result)
end
end
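The tegrastats output captured in checkHardwareAccel() can be parsed with Lua string patterns. The sketch below assumes a line format typical of common JetPack releases; the exact fields vary between versions, so treat it as illustrative only:

```lua
-- Hypothetical tegrastats parser: pull RAM usage and GPU (GR3D_FREQ)
-- utilization out of one output line. The line format here follows
-- common JetPack releases, but fields vary between versions.
local function parseTegrastats(line)
    local used, total = line:match("RAM (%d+)/(%d+)MB")
    local gpu = line:match("GR3D_FREQ (%d+)%%")
    return {
        ram_used_mb  = tonumber(used),
        ram_total_mb = tonumber(total),
        gpu_percent  = tonumber(gpu),
    }
end

local sample = "RAM 2057/3964MB (lfb 123x4MB) CPU [12%@1479] GR3D_FREQ 45%"
local stats = parseTegrastats(sample)
-- stats.ram_used_mb -> 2057, stats.gpu_percent -> 45
```

A sustained high GR3D_FREQ value during decode-only workloads suggests software fallback; with hardware decoding active, most of the load should sit on NVDEC rather than the GPU.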
Integration Examples¶
Integration with TensorRT Inference¶
-- Optimize video input for TensorRT processing
local instance = api.thread.getCurrentInstance()
local input = api.factory.input.create(instance, "Input")
local inference = api.factory.inference.create(instance, "Inference")
-- Configure input to match TensorRT model requirements
local config = {
uri = "jetson:///data/input.h264",
real_time = true,
scale_width = 640, -- Match model input size
scale_height = 640
}
input:saveConfig(config)
input:setSourceFromConfig()
-- Process with hardware-accelerated pipeline
while input:canRead() do
local frames = input:readMetaFrames(false)
if frames and #frames > 0 then
-- Frame stays in GPU memory for efficient inference
-- Note: Actual inference method depends on your inference plugin
-- inference:runInference(frames[1]) or similar
-- processInferenceResults(results)
end
end
Edge AI Application¶
-- Complete edge AI application with Jetson optimization
local instance = api.thread.getCurrentInstance()
local input = api.factory.input.create(instance, "Input")
-- Solution is accessed through instance, not api.solutions
local solution = instance:getSolution()
-- Configure for edge deployment
local config = {
uri = "jetson:///dev/video0", -- Local camera
real_time = true,
sampling_rate = 15, -- Balanced performance
scale_width = 1280,
scale_height = 720
}
input:saveConfig(config)
input:setSourceFromConfig()
-- Edge processing loop
while input:canRead() do
local frames = input:readMetaFrames(false)
if frames and #frames > 0 then
-- Process frames with your AI pipeline
-- Actual processing depends on your solution implementation
end
end