Motion Plugin¶
Description¶
Motion is a motion detection plugin for CVEDIA-RT that identifies areas of movement in video streams. It uses background subtraction and morphological processing to detect moving objects and regions of interest.
The plugin provides MOG2 (Mixture of Gaussians) background subtraction, configurable sensitivity thresholds, shadow detection, and privacy masking. It is designed for real-time surveillance applications where accurate motion detection is critical for security and monitoring systems.
Key Features¶
- Advanced Background Subtraction: Multiple algorithms including MOG2 for adaptive background modeling
- Real-Time Processing: Optimized for real-time motion detection in video streams
- Privacy Masking: Configurable inclusion and exclusion masks for privacy protection
- Shadow Detection: Built-in shadow detection to reduce false positives
- Noise Reduction: Configurable Gaussian blur and morphological operations
- Scalable Processing: Configurable downscaling for performance optimization
- Multiple Detection Areas: Support for up to 50 simultaneous motion areas
- Adaptive Learning: Configurable learning rate for background model adaptation
- Debug Visualization: Access to preprocessing and postprocessing frames for debugging
Requirements¶
Hardware Requirements¶
- CPU: Multi-core processor (Intel/AMD x64 or ARM64)
- Memory: Minimum 2GB RAM (4GB+ recommended for high-resolution processing)
- GPU: Optional GPU acceleration support through OpenCV
Software Dependencies¶
- OpenCV: Computer vision library with background subtraction algorithms
- RTCORE: CVEDIA-RT core library for plugin infrastructure
- Threading Library: Multi-threading support for concurrent processing
- Mathematical Libraries: Linear algebra operations for image processing
Platform Requirements¶
- Linux: Primary supported platform
- Kernel: Modern Linux kernel with video processing capabilities
- Drivers: Appropriate video input drivers and hardware acceleration drivers
Configuration¶
Basic Configuration¶
{
"motion": {
"static_scene": true,
"threshold": 16,
"subtractor": "MOG2",
"max_motion_blobs": 10,
"detect_shadows": true,
"scale_down_width": 300,
"scale_down_height": 300
}
}
Advanced Configuration¶
{
"motion": {
"static_scene": true,
"history_size": 500,
"learning_rate": 0.001,
"max_motion_blobs": 50,
"threshold": 16,
"subtractor": "MOG2",
"scale_down_width": 640,
"scale_down_height": 480,
"detect_shadows": true,
"blur_size": 5,
"motion_reuse_count": 3,
"noise_reduction": {
"enabled": true,
"kernel_size": 5,
"morphological_operations": true
},
"sensitivity": {
"min_area": 100,
"max_area": 50000,
"contour_threshold": 0.02
}
}
}
Configuration Schema¶
Parameter | Type | Default | Description |
---|---|---|---|
static_scene | bool | true | Assume stationary camera for background modeling |
history_size | int | 500 | Number of frames for background model history |
learning_rate | float | -1.0 | Background adaptation rate (-1 for automatic) |
max_motion_blobs | int | 50 | Maximum number of detected motion areas |
threshold | int | 16 | Motion detection sensitivity threshold (0-255) |
subtractor | string | "MOG2" | Background subtraction algorithm ("MOG2", "KNN") |
scale_down_width | float | 300.0 | Processing resolution width for performance |
scale_down_height | float | 300.0 | Processing resolution height for performance |
detect_shadows | bool | true | Enable shadow detection to reduce false positives |
blur_size | int | 0 | Gaussian blur kernel size for noise reduction |
motion_reuse_count | int | 0 | Number of frames to reuse stable motion results |
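The same keys accepted in the JSON configuration are accepted by the Lua configure() call. The sketch below is illustrative only: the clampThreshold helper and the chosen values are assumptions, not plugin defaults beyond those listed in the table.
-- Hedged sketch: build a configuration table that stays inside the documented ranges.
-- clampThreshold is a local helper defined here, not part of the plugin API.
local function clampThreshold(value)
    if value < 0 then return 0 end   -- threshold is documented as 0-255
    if value > 255 then return 255 end
    return value
end
local motion = api.factory.motion.create(instance, "schema_example")
motion:configure({
    static_scene = true,
    history_size = 500,             -- schema default
    learning_rate = -1.0,           -- -1 selects automatic adaptation
    max_motion_blobs = 50,
    threshold = clampThreshold(16),
    subtractor = "MOG2",            -- or "KNN"
    scale_down_width = 300,
    scale_down_height = 300,
    detect_shadows = true,
    blur_size = 0,                  -- 0 disables Gaussian blur
    motion_reuse_count = 0          -- 0 disables result reuse
})
motion:initialize()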
API Reference¶
C++ API (MotionManagedImpl)¶
Core Motion Detection¶
class MotionManagedImpl : public MotionManaged {
public:
// Initialization and configuration
expected<void> initialize() override;
expected<pCValue> getConfigDescriptors() override;
// Motion detection processing
expected<cvec> detectMotion(cbuffer frame) override;
expected<cvec> getDetections() override;
expected<pCValue> getDetection(int detectionId) override;
// Detection management
expected<void> deleteDetection(int detectionId) override;
expected<void> deleteDetections() override;
// Mask configuration
expected<void> setMask(std::vector<std::vector<cv::Point2f>> mask, bool isPrivacyMask) override;
expected<void> calculateMask() override;
// Debug and diagnostics
expected<cbuffer> getPreprocessFrame() override;
expected<cbuffer> getPostprocessFrame() override;
expected<pCValue> getStats() override;
};
Configuration Structure¶
struct MotionConfig {
bool static_scene = true; // Stationary camera assumption
int history_size = 500; // Background model history
double learning_rate = -1.0; // Adaptation rate
int max_motion_blobs = 50; // Maximum motion areas
int threshold = 16; // Detection sensitivity
std::string subtractor = "MOG2"; // Background subtractor type
float scale_down_width = 300.0f; // Processing width
float scale_down_height = 300.0f; // Processing height
bool detect_shadows = true; // Shadow detection
int blur_size = 0; // Noise reduction blur
int motion_reuse_count = 0; // Result reuse frames
};
Statistics Structure¶
struct MotionStats {
uint32_t num_detections = 0; // Total detections processed
int motion_count = 0; // Current number of motion areas
double processing_time_ms = 0.0; // Last processing time
double fps = 0.0; // Processing frame rate
size_t background_model_size = 0; // Background model memory usage
};
Lua API¶
Motion Detection Setup¶
-- Create motion detection instance
local motion = api.factory.motion.create(instance, "motion_detector")
-- Configure motion detection
motion:configure({
static_scene = true,
threshold = 20,
subtractor = "MOG2",
max_motion_blobs = 15,
detect_shadows = true,
scale_down_width = 320,
scale_down_height = 240
})
-- Initialize motion detection
local success = motion:initialize()
if success then
api.logging.LogInfo("Motion detection initialized successfully")
else
api.logging.LogError("Failed to initialize motion detection")
end
Motion Processing¶
-- Process frame for motion detection
function processMotionFrame(frame)
local detections = motion:detectMotion(frame)
if detections then
api.logging.LogInfo("Detected " .. #detections .. " motion areas")
for i, detection in ipairs(detections) do
api.logging.LogDebug("Motion Area " .. i .. ":")
api.logging.LogDebug(string.format(" Bounding Box: %d,%d,%d,%d", detection.x, detection.y, detection.w, detection.h))
api.logging.LogDebug(" Area: " .. detection.area)
api.logging.LogDebug(" Confidence: " .. detection.confidence)
api.logging.LogDebug(" Timestamp: " .. detection.timestamp)
end
return detections
end
return {}
end
Privacy Masking¶
-- Configure privacy masks
function setupPrivacyMasks()
-- Define privacy mask areas (normalized coordinates)
local privacyMasks = {
-- Rectangular privacy area
{
{x = 0.1, y = 0.1},
{x = 0.3, y = 0.1},
{x = 0.3, y = 0.3},
{x = 0.1, y = 0.3}
},
-- Triangular privacy area
{
{x = 0.7, y = 0.2},
{x = 0.9, y = 0.2},
{x = 0.8, y = 0.4}
}
}
-- Apply privacy masks
motion:setMask(privacyMasks, true) -- true = privacy mask
motion:calculateMask()
api.logging.LogInfo("Privacy masks configured")
end
Examples¶
Basic Motion Detection System¶
#include "motionmanaged.h"
#include "rtcore.h"
// Basic motion detection implementation
class MotionDetectionSystem {
public:
// Motion area result type; declared here so it can be used as the
// return type of processFrame() below.
struct MotionArea {
cv::Rect boundingBox;
float confidence;
double timestamp;
int area;
bool isValid() const {
return boundingBox.area() > 0 && confidence > 0.5f;
}
};
void initialize() {
// Create motion detector
motion_ = std::unique_ptr<MotionManaged>(
static_cast<MotionManaged*>(
MotionManaged::create("motion_system").release()
)
);
// Configure motion detection
auto config = CValue::create();
config->set("static_scene", true);
config->set("threshold", 20);
config->set("subtractor", "MOG2");
config->set("max_motion_blobs", 10);
config->set("detect_shadows", true);
config->set("scale_down_width", 320);
config->set("scale_down_height", 240);
motion_->setConfig(config);
// Initialize motion detection
auto result = motion_->initialize();
if (!result) {
LOGE << "Failed to initialize motion detection: " << result.error().message();
return;
}
LOGI << "Motion detection system initialized successfully";
}
std::vector<MotionArea> processFrame(const cbuffer& frame) {
std::vector<MotionArea> motionAreas;
// Detect motion in frame
auto detections = motion_->detectMotion(frame);
if (!detections) {
LOGE << "Motion detection failed: " << detections.error().message();
return motionAreas;
}
// Process motion detections
for (const auto& detection : detections.value()) {
MotionArea area = processMotionDetection(detection);
if (area.isValid()) {
motionAreas.push_back(area);
}
}
LOGI << "Processed " << motionAreas.size() << " motion areas";
return motionAreas;
}
private:
std::unique_ptr<MotionManaged> motion_;
MotionArea processMotionDetection(pCValue detection) {
MotionArea area;
// Extract bounding box
area.boundingBox.x = detection->get("x").getInt();
area.boundingBox.y = detection->get("y").getInt();
area.boundingBox.width = detection->get("w").getInt();
area.boundingBox.height = detection->get("h").getInt();
// Extract metadata
area.confidence = detection->get("confidence").getFloat();
area.timestamp = detection->get("timestamp").getDouble();
area.area = detection->get("area").getInt();
return area;
}
};
Advanced Motion Detection with Privacy Masking¶
// Advanced motion detection with privacy protection
class AdvancedMotionSystem {
public:
void initializeWithPrivacyMasks() {
// Initialize motion detection
initializeMotionDetection();
// Setup privacy masks
setupPrivacyMasks();
// Configure advanced settings
configureAdvancedSettings();
LOGI << "Advanced motion system with privacy masking initialized";
}
void processVideoStream() {
while (running_) {
auto frame = getNextFrame();
if (frame) {
processMotionWithPrivacy(frame.value());
}
std::this_thread::sleep_for(std::chrono::milliseconds(33)); // ~30 FPS
}
}
private:
std::unique_ptr<MotionManaged> motion_;
std::vector<cv::Point2f> privacyMaskPoints_;
std::atomic<bool> running_{true};
void setupPrivacyMasks() {
// Define privacy zones (normalized coordinates)
std::vector<std::vector<cv::Point2f>> privacyMasks = {
// Building entrance area
{
{0.0f, 0.0f}, {0.3f, 0.0f}, {0.3f, 0.5f}, {0.0f, 0.5f}
},
// Sensitive area in center
{
{0.4f, 0.3f}, {0.6f, 0.3f}, {0.6f, 0.7f}, {0.4f, 0.7f}
}
};
// Apply privacy masks
auto result = motion_->setMask(privacyMasks, true);
if (result) {
motion_->calculateMask();
LOGI << "Privacy masks configured successfully";
}
}
void processMotionWithPrivacy(const cbuffer& frame) {
// Detect motion
auto detections = motion_->detectMotion(frame);
if (!detections) {
return;
}
// Filter detections outside privacy zones
std::vector<MotionEvent> validMotionEvents;
for (const auto& detection : detections.value()) {
if (isOutsidePrivacyZone(detection)) {
MotionEvent event = createMotionEvent(detection);
validMotionEvents.push_back(event);
// Trigger security alerts if needed
if (event.confidence > 0.8f && event.area > 1000) {
triggerSecurityAlert(event);
}
}
}
// Process valid motion events
if (!validMotionEvents.empty()) {
processMotionEvents(validMotionEvents);
}
}
bool isOutsidePrivacyZone(pCValue detection) {
cv::Rect motionRect(
detection->get("x").getInt(),
detection->get("y").getInt(),
detection->get("w").getInt(),
detection->get("h").getInt()
);
// Check if motion area intersects with privacy zones
// Implementation would check polygon intersection
return !intersectsPrivacyMask(motionRect);
}
};
Complete Motion-Based Security System¶
-- Complete motion-based security system with Lua scripting
local motion = api.factory.motion.create(instance, "security_motion")
local storage = api.factory.storage.create(instance, "security_storage")
local alerts = api.factory.alerts.create(instance, "security_alerts")
-- Security system configuration
local securityConfig = {
motion_sensitivity = 25,
min_motion_area = 500,
privacy_zones = {
{name = "entrance", points = {{0.0, 0.0}, {0.2, 0.0}, {0.2, 0.4}, {0.0, 0.4}}},
{name = "office", points = {{0.6, 0.3}, {0.9, 0.3}, {0.9, 0.7}, {0.6, 0.7}}}
},
alert_cooldown = 30, -- seconds
recording_duration = 60 -- seconds
}
-- Motion detection statistics
local motionStats = {
total_detections = 0,
false_positives = 0,
security_events = 0,
last_motion_time = 0
}
-- Initialize security motion system
function initializeSecuritySystem()
api.logging.LogInfo("Initializing motion-based security system")
-- Configure motion detection for security
motion:configure({
static_scene = true,
threshold = securityConfig.motion_sensitivity,
subtractor = "MOG2",
max_motion_blobs = 20,
detect_shadows = true,
history_size = 1000, -- Longer history for stable background
learning_rate = 0.0005, -- Slow adaptation for security
scale_down_width = 640,
scale_down_height = 480
})
-- Initialize motion detection
local success = motion:initialize()
if not success then
api.logging.LogError("Failed to initialize motion detection")
return false
end
-- Setup privacy zones
setupPrivacyZones()
-- Start motion processing
startMotionMonitoring()
api.logging.LogInfo("Security system initialized successfully")
return true
end
-- Setup privacy protection zones
function setupPrivacyZones()
local privacyMasks = {}
for _, zone in ipairs(securityConfig.privacy_zones) do
table.insert(privacyMasks, zone.points)
end
motion:setMask(privacyMasks, true)
motion:calculateMask()
api.logging.LogInfo("Configured " .. #privacyMasks .. " privacy zones")
end
-- Main motion monitoring loop
function startMotionMonitoring()
-- Implement motion processing in main loop
-- Process frames at desired interval
local lastStatsLog = os.time()
-- In your main processing loop:
-- processSecurityMotion() for motion detection
-- Check elapsed time for statistics logging:
-- if os.time() - lastStatsLog > 60 then logMotionStatistics() end
end
-- Process motion for security events
function processSecurityMotion()
local currentTime = os.time()
-- Get current frame (would be provided by video input)
local frame = getCurrentSecurityFrame()
if not frame then
return
end
-- Detect motion
local detections = motion:detectMotion(frame)
if not detections or #detections == 0 then
return
end
motionStats.total_detections = motionStats.total_detections + #detections
-- Process each motion detection
for _, detection in ipairs(detections) do
processMotionDetection(detection, currentTime)
end
end
-- Process individual motion detection
function processMotionDetection(detection, currentTime)
local motionArea = detection.area or (detection.w * detection.h)
-- Filter by minimum area
if motionArea < securityConfig.min_motion_area then
return
end
-- Check if motion is in secure area (not privacy zone)
if isInSecureArea(detection) then
-- Update motion statistics
motionStats.last_motion_time = currentTime
-- Create security event
local securityEvent = {
type = "motion_detected",
timestamp = currentTime,
location = {
x = detection.x,
y = detection.y,
w = detection.w,
h = detection.h
},
area = motionArea,
confidence = detection.confidence,
zone = determineSecurityZone(detection)
}
-- Process security event
processSecurityEvent(securityEvent)
end
end
-- Check if motion is in a secure (monitored) area
function isInSecureArea(detection)
local motionCenter = {
x = detection.x + detection.w / 2,
y = detection.y + detection.h / 2
}
-- Check against privacy zones (motion should be outside these)
for _, zone in ipairs(securityConfig.privacy_zones) do
if isPointInPolygon(motionCenter, zone.points) then
return false -- Motion is in privacy zone
end
end
return true -- Motion is in secure area
end
-- Process security event
function processSecurityEvent(event)
local currentTime = event.timestamp
-- Check alert cooldown
local timeSinceLastAlert = currentTime - (motionStats.last_alert_time or 0)
if timeSinceLastAlert < securityConfig.alert_cooldown then
return
end
motionStats.security_events = motionStats.security_events + 1
motionStats.last_alert_time = currentTime
api.logging.LogWarning("Security Event: Motion detected in " .. event.zone .. " (" .. event.area .. " px², conf: " .. event.confidence .. ")")
-- Trigger security actions
triggerSecurityActions(event)
end
-- Trigger security response actions
function triggerSecurityActions(event)
-- Send alert notification
alerts:sendAlert({
type = "motion_security",
priority = "high",
message = string.format("Motion detected in security zone: %s", event.zone),
location = event.location,
timestamp = event.timestamp
})
-- Start recording
storage:startRecording({
duration = securityConfig.recording_duration,
trigger = "motion_detection",
metadata = event
})
-- Log security event
api.logging.LogWarning("SECURITY: Motion event recorded - Zone: " .. event.zone .. ", Area: " .. event.area .. ", Time: " .. os.date("%Y-%m-%d %H:%M:%S", event.timestamp))
end
-- Determine which security zone the motion occurred in
function determineSecurityZone(detection)
local center = {
x = (detection.x + detection.w / 2) / frame_width,
y = (detection.y + detection.h / 2) / frame_height
}
-- Define security zones
if center.x < 0.3 then
return "perimeter_left"
elseif center.x > 0.7 then
return "perimeter_right"
elseif center.y < 0.3 then
return "perimeter_top"
elseif center.y > 0.7 then
return "perimeter_bottom"
else
return "center_area"
end
end
-- Log motion detection statistics
function logMotionStatistics()
local stats = motion:getStats()
if stats then
api.logging.LogInfo("Motion Detection Statistics:")
api.logging.LogInfo(" Total Detections: " .. motionStats.total_detections)
api.logging.LogInfo(" Security Events: " .. motionStats.security_events)
api.logging.LogInfo(" Processing FPS: " .. (stats.fps or "N/A"))
api.logging.LogInfo(" Current Motion Areas: " .. (stats.motion_count or 0))
api.logging.LogInfo(" Background Model Size: " .. (stats.background_model_size or "N/A"))
end
end
-- System health monitoring
function monitorMotionHealth()
-- Implement periodic health monitoring in main loop
-- Check every 5 minutes (300 seconds)
local function checkHealth()
local currentTime = os.time()
local timeSinceLastMotion = currentTime - motionStats.last_motion_time
-- Check if system is responsive
if timeSinceLastMotion > 3600 then -- 1 hour
api.logging.LogWarning("No motion detected for over 1 hour - checking system health")
-- Perform system health check
local debugFrame = motion:getPreprocessFrame()
if not debugFrame then
api.logging.LogError("Motion system appears unresponsive")
-- Trigger system restart or alert
end
end
end
-- Return the check so the main loop can invoke it periodically (e.g. every 300 seconds)
return checkHealth
end
-- Initialize the complete security system
initializeSecuritySystem()
local checkSecurityHealth = monitorMotionHealth() -- call checkSecurityHealth() periodically from the main loop
api.logging.LogInfo("Motion-based security system is active")
Best Practices¶
Performance Optimization¶
- Resolution Scaling: Use scale_down_width and scale_down_height values that match your performance requirements (see the sketch after this list)
- Algorithm Selection: MOG2 is generally more accurate but slower than KNN
- History Size: Balance between adaptation speed and stability (500-1000 frames typical)
- Learning Rate: Use slower learning rates (0.001-0.01) for security applications
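A minimal sketch of the resolution-scaling advice, assuming a 1920x1080 source, a 300-pixel cap, and an already-created motion instance as shown in the Lua API section; computeScaleDown is an illustrative local helper, not part of the plugin API.
-- Pick processing dimensions that preserve the source aspect ratio
-- while capping the larger side at a target size.
local function computeScaleDown(srcWidth, srcHeight, target)
    local scale = target / math.max(srcWidth, srcHeight)
    if scale >= 1.0 then
        return srcWidth, srcHeight -- already small enough
    end
    return math.floor(srcWidth * scale), math.floor(srcHeight * scale)
end
local w, h = computeScaleDown(1920, 1080, 300) -- 300 x 168
motion:configure({
    scale_down_width = w,
    scale_down_height = h,
    motion_reuse_count = 3, -- reuse stable results to save CPU
    history_size = 500
})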
Accuracy Improvements¶
- Shadow Detection: Enable shadow detection to reduce false positives
- Noise Reduction: Use Gaussian blur for noisy video feeds
- Threshold Tuning: Adjust detection threshold based on lighting conditions
- Morphological Operations: Apply opening/closing operations to reduce noise (see the combined sketch below)
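A hedged sketch combining these accuracy settings for a noisy feed; the specific values are illustrative, and the nested noise_reduction block simply mirrors the advanced configuration shown earlier.
-- Illustrative values for a noisy, variable-light scene
motion:configure({
    detect_shadows = true,  -- suppress shadow-induced false positives
    blur_size = 5,          -- Gaussian blur to smooth sensor noise
    threshold = 24,         -- raise the threshold for bright, noisy scenes
    history_size = 750,     -- more history for a steadier background model
    noise_reduction = {
        enabled = true,
        kernel_size = 5,
        morphological_operations = true -- opening/closing to remove speckle
    }
})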
Security Considerations¶
- Privacy Masking: Implement proper privacy zones for sensitive areas
- False Positive Reduction: Use minimum area thresholds and temporal filtering (sketched after this list)
- Alert Management: Implement cooldown periods to prevent alert flooding
- Backup Systems: Consider multiple detection methods for critical applications
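As a sketch of the temporal-filtering and cooldown advice above: filterMotion, its counters, and the chosen thresholds are illustrative assumptions, not part of the plugin API.
-- Require N consecutive frames with significant motion before alerting,
-- and honour a cooldown between alerts.
local consecutiveFrames = 0
local lastAlertTime = 0
local REQUIRED_FRAMES = 3
local MIN_AREA = 500
local COOLDOWN_SECONDS = 30
function filterMotion(detections)
    local significant = {}
    for _, d in ipairs(detections or {}) do
        local area = d.area or (d.w * d.h)
        if area >= MIN_AREA then
            table.insert(significant, d)
        end
    end
    if #significant > 0 then
        consecutiveFrames = consecutiveFrames + 1
    else
        consecutiveFrames = 0
    end
    local now = os.time()
    if consecutiveFrames >= REQUIRED_FRAMES and now - lastAlertTime >= COOLDOWN_SECONDS then
        lastAlertTime = now
        return significant -- worth alerting on
    end
    return {}
end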
Integration Guidelines¶
- Event Handling: Integrate with event action systems for automated responses
- Storage Management: Configure appropriate recording triggers and durations
- Monitoring: Implement health checks and performance monitoring
- Configuration Management: Use appropriate settings for different environments (see the sketch after this list)
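For the configuration-management point, one possible approach is to keep per-environment presets and apply them at startup. The preset names and values below are purely illustrative assumptions.
-- Hedged sketch: per-environment presets applied at startup.
local presets = {
    indoor_office = { threshold = 16, detect_shadows = true, blur_size = 3, learning_rate = -1.0 },
    outdoor_perimeter = { threshold = 28, detect_shadows = true, blur_size = 5, learning_rate = 0.0005 }
}
local function applyPreset(name)
    local preset = presets[name]
    if not preset then
        api.logging.LogError("Unknown motion preset: " .. name)
        return false
    end
    motion:configure(preset)
    return motion:initialize()
end
applyPreset("outdoor_perimeter")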
Troubleshooting¶
Common Issues¶
High False Positive Rate¶
// Adjust sensitivity and filtering parameters
auto config = CValue::create();
config->set("threshold", 25);        // Increase threshold
config->set("detect_shadows", true); // Enable shadow detection
config->set("blur_size", 5);         // Add noise reduction
config->set("history_size", 1000);   // Increase background stability
motion_->setConfig(config);
// Add minimum area filtering
if (detection_area < 200) {
continue; // Skip small detections
}
Poor Performance¶
- Reduce Processing Resolution: Lower scale_down dimensions
- Increase Motion Reuse: Use motion_reuse_count to reuse stable results
- Optimize Algorithm: Consider switching from MOG2 to KNN for speed (see the sketch after this list)
- Hardware Acceleration: Enable OpenCV GPU acceleration if available
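A hedged sketch of these speed-oriented adjustments; the specific values are illustrative and assume a motion instance created as in the Lua API section.
-- Speed-oriented settings: smaller processing resolution,
-- KNN subtractor, and reuse of stable motion results.
motion:configure({
    subtractor = "KNN",        -- typically faster than MOG2
    scale_down_width = 240,
    scale_down_height = 180,
    motion_reuse_count = 3     -- reuse stable results for 3 frames
})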
Background Model Issues¶
- Slow Adaptation: Increase learning rate for dynamic scenes
- Fast Adaptation: Decrease learning rate for stable scenes
- History Size: Adjust history size based on scene stability
Memory Usage¶
- Monitor Background Model Size: Check memory usage through the statistics API (see the sketch after this list)
- Reduce History Size: Lower history_size parameter if memory is limited
- Process Resolution: Use lower processing resolution for memory-constrained systems
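A sketch of the memory check described above, using the statistics fields documented earlier; the 64 MB limit and the reduced values are arbitrary illustrative choices.
-- Watch the background model size and shrink the history if it grows too large.
local MAX_MODEL_BYTES = 64 * 1024 * 1024 -- illustrative limit
local stats = motion:getStats()
if stats and stats.background_model_size and stats.background_model_size > MAX_MODEL_BYTES then
    api.logging.LogWarning("Motion background model is large; reducing history_size")
    motion:configure({
        history_size = 250,      -- smaller history -> smaller model
        scale_down_width = 240,  -- lower processing resolution also helps
        scale_down_height = 180
    })
end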
Debugging Tools¶
// Motion detection diagnostics
void diagnoseMotionDetection(MotionManaged* motion) {
// Get preprocessing frame for debugging
auto preprocessFrame = motion->getPreprocessFrame();
if (preprocessFrame) {
cv::imshow("Motion Preprocess", preprocessFrame.value());
}
// Get postprocessing frame
auto postprocessFrame = motion->getPostprocessFrame();
if (postprocessFrame) {
cv::imshow("Motion Postprocess", postprocessFrame.value());
}
// Check motion statistics
auto stats = motion->getStats();
if (stats) {
LOGI << "Motion Statistics:";
LOGI << " Detection Count: " << stats->get("num_detections").getInt();
LOGI << " Current Motion Areas: " << stats->get("motion_count").getInt();
LOGI << " Processing Time: " << stats->get("processing_time_ms").getDouble() << "ms";
LOGI << " FPS: " << stats->get("fps").getDouble();
}
}
Integration Examples¶
Video Management System Integration¶
// Complete VMS integration with motion detection
class VMSMotionIntegration {
public:
void initialize() {
// Initialize motion detection
initializeMotionDetection();
// Setup event handling
setupEventHandling();
// Configure recording triggers
setupRecordingTriggers();
}
void processVideoFeed(const VideoFrame& frame) {
// Detect motion
auto motionAreas = detectMotionInFrame(frame);
// Process motion events
for (const auto& area : motionAreas) {
processMotionForVMS(area, frame);
}
}
private:
void processMotionForVMS(const MotionArea& area, const VideoFrame& frame) {
// Create VMS event
VMSEvent event;
event.type = EventType::Motion;
event.timestamp = getCurrentTimestamp();
event.boundingBox = area.boundingBox;
event.confidence = area.confidence;
// Trigger recording
if (area.confidence > 0.7f) {
triggerRecording(event, frame);
}
// Update VMS database
updateVMSDatabase(event);
}
};
See Also¶
- EventAction Plugin - Event processing and response
- Processing Plugins Overview - All processing plugins
- Plugin Overview - Complete plugin ecosystem
- Input Plugins - Video input integration
- Security Guide - Security system integration patterns