Paravision Plugin¶
Description¶
The Paravision plugin integrates Paravision's proprietary face recognition technology into CVEDIA-RT, providing facial recognition and biometric identification capabilities.
It supports both Windows and specialized hardware acceleration platforms (such as Hailo AI processors) for high-performance, real-time, enterprise-grade face recognition applications.
Key Features¶
- Advanced Face Recognition: State-of-the-art facial recognition using Paravision's proprietary algorithms
- Biometric Processing: Comprehensive biometric data processing, extraction, and analysis
- Hardware Acceleration: Support for specialized hardware acceleration (Hailo AI processors, GPU acceleration)
- Real-time Processing: High-performance real-time face detection and recognition processing
- Identity Management: Integration with identity databases and management systems
- Multi-Platform Support: Optimized implementations for Windows and embedded platforms
- Security Features: Enhanced security and privacy protection mechanisms
- Quality Assessment: Advanced face quality assessment and filtering
- Embedding Generation: High-quality face embeddings for similarity matching
- Asynchronous Processing: Async inference support for high-throughput applications
- Scalable Architecture: Support for multi-threaded and distributed processing
- Performance Monitoring: Built-in performance monitoring and optimization
Requirements¶
Hardware Requirements¶
Windows Platform¶
- CPU: Multi-core x64 processor (Intel/AMD)
- Memory: Minimum 8GB RAM (16GB+ recommended for high-throughput workloads)
- GPU: NVIDIA GPU with CUDA support (recommended for acceleration)
- Storage: SSD storage for model and database access
Hailo Platform¶
- Hailo AI Processor: Hailo-8 or Hailo-15 AI acceleration chips
- Memory: Sufficient system memory for model loading and processing
- Host Processor: ARM or x86 host processor for system management
Software Dependencies¶
Core Dependencies¶
- Paravision SDK: Proprietary Paravision face recognition libraries and runtime
- CVEDIA-RT Core: Base plugin infrastructure and interfaces
- Threading Libraries: Multi-threading support for concurrent processing
- Mathematical Libraries: Linear algebra and statistical computation libraries
Platform-Specific Dependencies¶
Windows Platform¶
- Visual C++ Redistributable: Microsoft Visual C++ runtime libraries
- CUDA Runtime: NVIDIA CUDA runtime (if GPU acceleration used)
- OpenCV: Computer vision library for image processing
- DirectX: DirectX runtime for GPU operations
Hailo Platform¶
- HailoRT: Hailo runtime library for AI processor communication
- Hailo SDK: Hailo development kit and drivers
- ARM Libraries: ARM-specific libraries for embedded deployment
Licensing Requirements¶
- Paravision License: Valid Paravision SDK license for face recognition algorithms
- Hardware License: Appropriate hardware acceleration licenses (if applicable)
- CVEDIA-RT License: Valid CVEDIA-RT commercial license
Configuration¶
Basic Configuration¶
{
  "paravision": {
    "model_path": "/models/paravision_face_model.bin",
    "detection_threshold": 0.7,
    "recognition_threshold": 0.8,
    "max_faces": 10,
    "enable_quality_filter": true,
    "platform": "auto"
  }
}
Advanced Configuration¶
{
  "paravision": {
    "model_path": "/models/paravision_face_model.bin",
    "detection_threshold": 0.75,
    "recognition_threshold": 0.85,
    "quality_threshold": 0.6,
    "max_faces": 20,
    "enable_quality_filter": true,
    "enable_pose_filter": true,
    "platform": "hailo",
    "hardware_acceleration": {
      "enabled": true,
      "device_type": "hailo15",
      "batch_size": 4,
      "async_processing": true
    },
    "processing_options": {
      "input_width": 224,
      "input_height": 224,
      "color_format": "RGB",
      "normalization": true,
      "face_alignment": true
    },
    "performance": {
      "pool_size": 4,
      "async_results_duration": 1000,
      "enable_profiling": true,
      "memory_optimization": true
    },
    "quality_assessment": {
      "min_face_size": 80,
      "max_face_size": 400,
      "blur_threshold": 0.5,
      "brightness_range": [30, 200],
      "pose_angle_limit": 30
    }
  }
}
Configuration Schema¶
| Parameter | Type | Default | Description |
|---|---|---|---|
| model_path | string | required | Path to Paravision face recognition model |
| detection_threshold | float | 0.7 | Face detection confidence threshold (0.0-1.0) |
| recognition_threshold | float | 0.8 | Face recognition similarity threshold (0.0-1.0) |
| quality_threshold | float | 0.5 | Face quality assessment threshold (0.0-1.0) |
| max_faces | int | 10 | Maximum number of faces to process per frame |
| enable_quality_filter | bool | true | Enable face quality filtering |
| enable_pose_filter | bool | false | Enable face pose filtering |
| platform | string | "auto" | Target platform ("auto", "windows", "hailo") |
| batch_size | int | 1 | Processing batch size for hardware acceleration |
| async_processing | bool | false | Enable asynchronous processing |
| input_width | int | 224 | Model input width in pixels |
| input_height | int | 224 | Model input height in pixels |
| color_format | string | "RGB" | Input color format ("RGB", "BGR") |
| normalization | bool | true | Enable input normalization |
| face_alignment | bool | true | Enable face alignment preprocessing |
| pool_size | int | 1 | Inference engine pool size |
| async_results_duration | int | 1000 | Async results collection duration (ms) |
| enable_profiling | bool | false | Enable performance profiling |
| memory_optimization | bool | true | Enable memory usage optimization |
| min_face_size | int | 50 | Minimum face size in pixels |
| max_face_size | int | 500 | Maximum face size in pixels |
| blur_threshold | float | 0.5 | Blur detection threshold |
| pose_angle_limit | int | 45 | Maximum pose angle in degrees |
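The same keys can be applied from Lua through the configure call documented in the Lua API section below; note that in the advanced JSON example some of them appear grouped (for example batch_size under hardware_acceleration and pool_size under performance). A minimal sketch using flat keys (whether flat keys are accepted for every parameter depends on your deployment):
-- Minimal sketch: applying schema parameters from Lua (flat keys assumed)
local paravision = api.factory.inference.create(instance, "paravision_engine")
paravision:configure({
    engine = "paravision",
    model_path = "/models/paravision_face_model.bin",
    detection_threshold = 0.7,
    recognition_threshold = 0.8,
    max_faces = 10,
    platform = "auto"
})
paravision:loadModel()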
API Reference¶
C++ API (ParavisionManaged)¶
Core Inference Methods¶
class ParavisionManaged : public iface::InferenceEngine {
public:
// Model management
expected<void> loadModelFromConfig() override;
expected<void> loadModel(std::string const& path) override;
expected<void> loadModel(std::string const& path, CValue& handlerConfig) override;
// Inference operations
expected<cvec> runInference(cvec const& jobs) override;
expected<void> runInferenceAsync(cvec const& jobs, int ttl = 60) override;
expected<cvec> getAsyncInferenceResults() override;
// Configuration and capabilities
expected<pCValue> getCapabilities() override;
expected<pCValue> getModelConfig() override;
void setBackendConfig(pCValue conf) override;
// Model information
ssize_t inputBatchSize() override;
ssize_t inputWidth() override;
ssize_t inputHeight() override;
ssize_t inputChannels() override;
std::vector<int> inputShape() override;
std::vector<int> outputShape() override;
};
Face Recognition Data Structures¶
// Face bounding box (platform-specific)
#ifdef WIN32
using FaceBBox = Paravision::Recognition::BoundingBox;
#elif HAILO_15_GREENBASE_BUILD
using FaceBBox = Paravision::BoundingBox;
#endif
// Face embedding result
struct ParavisionInferenceResult {
std::shared_ptr<Paravision::Recognition::Embedding> embedding;
float quality;
float confidence;
FaceBBox boundingBox;
};
// Inference job data
struct InferenceJobData {
std::shared_ptr<FaceBBox> bbox;
cbuffer frame;
std::string job_id;
double timestamp;
};
Platform-Specific Components¶
#if HAILO_15_GREENBASE_BUILD
class ParavisionManaged {
private:
std::unique_ptr<Paravision::CoreComponents> coreComponents_;
std::unique_ptr<Paravision::Detection::SDK> detectionSdk_;
std::unique_ptr<Paravision::Recognition::SDK> recognitionSdk_;
};
#else
class ParavisionManaged {
private:
std::unique_ptr<Paravision::Recognition::SDK> recognitionSdk_;
};
#endif
Async Processing¶
class ParavisionManaged {
public:
// Async inference management
expected<void> setAsyncResultsCollectionDuration(int durationMs) override;
expected<void> setPoolSize(int poolSize) override;
private:
std::unique_ptr<AsyncInferenceWorker> asyncWorker_;
std::atomic<bool> modelLoaded_ = false;
int inferenceCount_ = 0;
};
Lua API¶
Model Loading and Configuration¶
-- Create Paravision inference engine
local paravision = api.factory.inference.create(instance, "paravision_engine")
-- Configure for face recognition
paravision:configure({
engine = "paravision",
model_path = "/models/paravision_face.bin",
detection_threshold = 0.75,
recognition_threshold = 0.85,
platform = "auto",
hardware_acceleration = true
})
-- Load model
local success = paravision:loadModel()
if success then
print("Paravision model loaded successfully")
-- Get model information
local modelConfig = paravision:getModelConfig()
print("Input size:", modelConfig.input_width .. "x" .. modelConfig.input_height)
print("Batch size:", paravision:inputBatchSize())
else
print("Failed to load Paravision model")
end
Face Recognition Processing¶
-- Process face recognition
function processFaceRecognition(frame, faceDetections)
local recognitionJobs = {}
-- Create recognition jobs from face detections
for _, detection in ipairs(faceDetections) do
local job = {
frame = frame,
bbox = {
x = detection.x,
y = detection.y,
w = detection.w,
h = detection.h
},
job_id = "face_" .. detection.id,
timestamp = api.system.getCurrentTime()
}
table.insert(recognitionJobs, job)
end
-- Run face recognition inference
local results = paravision:runInference(recognitionJobs)
if results then
for _, result in ipairs(results) do
processRecognitionResult(result)
end
end
end
-- Process recognition result
function processRecognitionResult(result)
print("Face Recognition Result:")
print(" Job ID:", result.job_id)
print(" Quality:", result.quality)
print(" Confidence:", result.confidence)
if result.embedding then
print(" Embedding size:", #result.embedding)
-- Compare with database embeddings
local match = findBestMatch(result.embedding, faceDatabase)
if match and match.similarity > 0.85 then
print(" Match found:", match.identity, "(" .. match.similarity .. ")")
else
print(" No match found")
end
end
end
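The findBestMatch helper used above is not part of the plugin API. A minimal sketch, assuming faceDatabase is an array of entries with identity and embedding fields, using the same cosine-similarity approach shown later in the identity management example:
-- Hypothetical helper assumed by the example above;
-- faceDatabase is an array of { identity = ..., embedding = {...} } entries
function findBestMatch(queryEmbedding, database)
    local best = nil
    for _, entry in ipairs(database) do
        if #entry.embedding == #queryEmbedding then
            -- cosine similarity between the query and the stored embedding
            local dot, n1, n2 = 0.0, 0.0, 0.0
            for i = 1, #queryEmbedding do
                dot = dot + queryEmbedding[i] * entry.embedding[i]
                n1 = n1 + queryEmbedding[i] * queryEmbedding[i]
                n2 = n2 + entry.embedding[i] * entry.embedding[i]
            end
            local magnitude = math.sqrt(n1) * math.sqrt(n2)
            local similarity = magnitude > 0 and dot / magnitude or 0.0
            if not best or similarity > best.similarity then
                best = { identity = entry.identity, similarity = similarity }
            end
        end
    end
    return best
end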
Async Processing¶
-- Configure async processing
paravision:setPoolSize(4)
paravision:setAsyncResultsCollectionDuration(1000)
-- Submit async jobs
function submitAsyncRecognition(faceJobs)
local success = paravision:runInferenceAsync(faceJobs, 60) -- 60 second TTL
if success then
print("Submitted", #faceJobs, "face recognition jobs")
end
end
-- Collect async results
function collectAsyncResults()
local results = paravision:getAsyncInferenceResults()
if results and #results > 0 then
print("Retrieved", #results, "async results")
for _, result in ipairs(results) do
processRecognitionResult(result)
end
end
end
-- Process with async pattern - implement in main loop
-- Check every N frames or use a frame counter
-- Example: if frame_count % 15 == 0 then collectAsyncResults() end
Examples¶
Basic Face Recognition System¶
#include "paravision_managed.h"
// Basic face recognition implementation
class FaceRecognitionSystem {
public:
void initialize() {
// Create Paravision inference engine
paravision_ = std::unique_ptr<ParavisionManaged>(
static_cast<ParavisionManaged*>(
ParavisionManaged::create("face_recognition").release()
)
);
// Configure for face recognition
auto config = CValue::create();
config->set("model_path", "/models/paravision_face.bin");
config->set("detection_threshold", 0.75);
config->set("recognition_threshold", 0.85);
config->set("max_faces", 10);
config->set("enable_quality_filter", true);
paravision_->setBackendConfig(config);
// Load model
auto loadResult = paravision_->loadModelFromConfig();
if (!loadResult) {
LOGE << "Failed to load Paravision model: " << loadResult.error().message();
return;
}
// Get model information
auto capabilities = paravision_->getCapabilities();
if (capabilities) {
LOGI << "Paravision model capabilities:";
LOGI << " Input size: " << paravision_->inputWidth()
<< "x" << paravision_->inputHeight();
LOGI << " Batch size: " << paravision_->inputBatchSize();
}
LOGI << "Face recognition system initialized";
}
std::vector<RecognitionResult> recognizeFaces(const cbuffer& frame,
const std::vector<FaceDetection>& faces) {
std::vector<RecognitionResult> results;
// Prepare inference jobs
cvec jobs;
for (const auto& face : faces) {
auto job = createRecognitionJob(frame, face);
if (job) {
jobs.push_back(job);
}
}
if (jobs.empty()) {
return results;
}
// Run inference
auto inferenceResults = paravision_->runInference(jobs);
if (!inferenceResults) {
LOGE << "Face recognition inference failed: "
<< inferenceResults.error().message();
return results;
}
// Process results
for (const auto& result : inferenceResults.value()) {
auto recognition = processInferenceResult(result);
if (recognition.has_value()) {
results.push_back(recognition.value());
}
}
LOGI << "Recognized " << results.size() << " faces from "
<< faces.size() << " detections";
return results;
}
private:
std::unique_ptr<ParavisionManaged> paravision_;
struct FaceDetection {
int x, y, w, h;
float confidence;
std::string id;
};
struct RecognitionResult {
std::string face_id;
std::vector<float> embedding;
float quality;
float confidence;
FaceDetection detection;
};
pCValue createRecognitionJob(const cbuffer& frame, const FaceDetection& face) {
auto job = CValue::create();
// Set frame data
job->set("frame", frame);
// Set bounding box
auto bbox = CValue::create();
bbox->set("x", face.x);
bbox->set("y", face.y);
bbox->set("w", face.w);
bbox->set("h", face.h);
job->set("bbox", bbox);
// Set metadata
job->set("job_id", face.id);
job->set("timestamp", getCurrentTimestamp());
return job;
}
std::optional<RecognitionResult> processInferenceResult(pCValue result) {
if (!result || result->isNull()) {
return std::nullopt;
}
RecognitionResult recognition;
// Extract basic information
recognition.face_id = result->get("job_id").getString();
recognition.quality = result->get("quality").getFloat();
recognition.confidence = result->get("confidence").getFloat();
// Extract embedding
auto embeddingValue = result->get("embedding");
if (embeddingValue.isArray()) {
for (const auto& val : embeddingValue.getVector()) {
recognition.embedding.push_back(val->getFloat());
}
}
// Extract bounding box
auto bboxValue = result->get("bbox");
if (!bboxValue.isNull()) {
recognition.detection.x = bboxValue.get("x").getInt();
recognition.detection.y = bboxValue.get("y").getInt();
recognition.detection.w = bboxValue.get("w").getInt();
recognition.detection.h = bboxValue.get("h").getInt();
}
return recognition;
}
double getCurrentTimestamp() {
return std::chrono::duration_cast<std::chrono::milliseconds>(
std::chrono::steady_clock::now().time_since_epoch()).count() / 1000.0;
}
};
High-Performance Async Face Recognition¶
// High-performance async face recognition system
class AsyncFaceRecognitionSystem {
public:
void initialize() {
// Initialize Paravision with async support
paravision_ = std::unique_ptr<ParavisionManaged>(
static_cast<ParavisionManaged*>(
ParavisionManaged::create("async_face_recognition").release()
)
);
// Configure for high-performance processing
auto config = CValue::create();
config->set("model_path", "/models/paravision_fast.bin");
config->set("detection_threshold", 0.7);
config->set("recognition_threshold", 0.8);
config->set("batch_size", 8);
config->set("async_processing", true);
config->set("hardware_acceleration", true);
paravision_->setBackendConfig(config);
paravision_->loadModelFromConfig();
// Configure async processing
paravision_->setPoolSize(8); // 8 concurrent workers
paravision_->setAsyncResultsCollectionDuration(500); // 500ms collection
// Start result collection thread
startResultCollection();
LOGI << "Async face recognition system initialized";
}
void submitFaceRecognitionBatch(const std::vector<FaceJob>& faceJobs) {
// Convert to CValue jobs
cvec jobs;
for (const auto& faceJob : faceJobs) {
auto job = createJobFromFaceJob(faceJob);
jobs.push_back(job);
}
// Submit for async processing
auto result = paravision_->runInferenceAsync(jobs, 60); // 60 second TTL
if (result) {
LOGI << "Submitted " << jobs.size() << " face recognition jobs";
totalJobsSubmitted_ += jobs.size();
} else {
LOGE << "Failed to submit async jobs: " << result.error().message();
}
}
private:
std::unique_ptr<ParavisionManaged> paravision_;
std::thread resultCollectionThread_;
std::atomic<bool> running_{true};
std::atomic<int> totalJobsSubmitted_{0};
std::atomic<int> totalResultsProcessed_{0};
struct FaceJob {
std::string job_id;
cbuffer frame;
FaceDetection detection;
double submission_time;
};
void startResultCollection() {
resultCollectionThread_ = std::thread([this]() {
collectAsyncResults();
});
}
void collectAsyncResults() {
while (running_) {
auto results = paravision_->getAsyncInferenceResults();
if (results && !results.value().empty()) {
processAsyncResults(results.value());
}
std::this_thread::sleep_for(std::chrono::milliseconds(100));
}
}
void processAsyncResults(const cvec& results) {
for (const auto& result : results) {
processRecognitionResult(result);
}
totalResultsProcessed_ += results.size();
// Log performance metrics
if (totalResultsProcessed_ % 100 == 0) {
double processingRate = static_cast<double>(totalResultsProcessed_) /
(getCurrentTimestamp() - startTime_);
LOGI << "Face recognition rate: " << processingRate << " faces/sec";
}
}
// Same timestamp helper as in the basic example above
double getCurrentTimestamp() {
return std::chrono::duration_cast<std::chrono::milliseconds>(
std::chrono::steady_clock::now().time_since_epoch()).count() / 1000.0;
}
double startTime_ = getCurrentTimestamp();
};
Complete Identity Management Integration¶
-- Complete face recognition and identity management system
local paravision = api.factory.inference.create(instance, "identity_system")
local database = api.factory.database.create(instance, "identity_db")
local detector = api.factory.inference.create(instance, "face_detector")
-- Identity database management
local identityDatabase = {
known_faces = {},
known_count = 0,
recognition_log = {},
active_sessions = {}
}
-- Initialize complete system
function initializeIdentitySystem()
print("Initializing Paravision identity management system")
-- Configure face detection
detector:configure({
engine = "tensorrt",
model_path = "/models/face_detection.trt",
detection_threshold = 0.7,
nms_threshold = 0.4
})
-- Configure Paravision face recognition
paravision:configure({
engine = "paravision",
model_path = "/models/paravision_identity.bin",
detection_threshold = 0.75,
recognition_threshold = 0.85,
quality_threshold = 0.6,
platform = "hailo",
hardware_acceleration = true,
async_processing = true,
pool_size = 6
})
-- Load models
detector:loadModel()
paravision:loadModel()
-- Initialize database
database:connect("sqlite:///data/identity.db")
loadKnownIdentities()
-- Start async result collection
startAsyncProcessing()
print("Identity system initialized with " .. #identityDatabase.known_faces .. " known identities")
end
-- Load known identities from database
function loadKnownIdentities()
local query = "SELECT id, name, embedding, metadata FROM identities WHERE active = 1"
local results = database:query(query)
for _, row in ipairs(results) do
identityDatabase.known_faces[row.id] = {
id = row.id,
name = row.name,
embedding = api.json.decode(row.embedding),
metadata = api.json.decode(row.metadata or "{}")
}
identityDatabase.known_count = identityDatabase.known_count + 1
end
end
-- Process frame for identity recognition
function processFrameForIdentity(frame)
local startTime = api.system.getCurrentTime()
-- Detect faces
local detections = detector:runInference({frame})
if not detections or #detections == 0 then
return {}
end
-- Filter quality faces
local qualityFaces = filterFacesByQuality(detections[1].detections)
if #qualityFaces == 0 then
return {}
end
-- Submit for async recognition
local recognitionJobs = {}
for _, face in ipairs(qualityFaces) do
local job = {
frame = frame,
bbox = face.bbox,
job_id = "face_" .. face.id .. "_" .. startTime,
timestamp = startTime,
detection_confidence = face.confidence
}
table.insert(recognitionJobs, job)
end
-- Submit async jobs
paravision:runInferenceAsync(recognitionJobs, 60)
-- Debug: Submitted N faces for recognition
return qualityFaces
end
-- Filter faces by quality criteria
function filterFacesByQuality(detections)
local qualityFaces = {}
for _, detection in ipairs(detections) do
local face = detection
-- Size filter
local faceSize = math.max(face.bbox.w, face.bbox.h)
if faceSize < 80 or faceSize > 400 then
goto continue
end
-- Confidence filter
if face.confidence < 0.7 then
goto continue
end
-- Aspect ratio filter
local aspectRatio = face.bbox.w / face.bbox.h
if aspectRatio < 0.7 or aspectRatio > 1.4 then
goto continue
end
table.insert(qualityFaces, face)
::continue::
end
return qualityFaces
end
-- Start async result processing
function startAsyncProcessing()
-- Implement periodic checking in your main processing loop:
-- call checkAsyncResults() every few frames
function checkAsyncResults()
local results = paravision:getAsyncInferenceResults()
if results and #results > 0 then
for _, result in ipairs(results) do
processRecognitionResult(result)
end
end
end
end
-- Process individual recognition result
function processRecognitionResult(result)
if not result.embedding or result.quality < 0.6 then
-- Debug: Low quality recognition result, skipping
return
end
-- Find best match in database
local bestMatch = findBestIdentityMatch(result.embedding)
local identityResult = {
job_id = result.job_id,
timestamp = result.timestamp,
quality = result.quality,
confidence = result.confidence,
bbox = result.bbox,
embedding = result.embedding
}
if bestMatch and bestMatch.similarity > 0.85 then
-- Known identity recognized
identityResult.identity_id = bestMatch.identity.id
identityResult.identity_name = bestMatch.identity.name
identityResult.similarity = bestMatch.similarity
identityResult.match_type = "known"
print("Identity recognized: " .. bestMatch.identity.name .. " (" .. bestMatch.similarity .. " similarity, " .. result.quality .. " quality)")
-- Update session tracking
updateActiveSession(bestMatch.identity.id, identityResult)
else
-- Unknown identity
identityResult.identity_id = "unknown"
identityResult.identity_name = "Unknown"
identityResult.similarity = bestMatch and bestMatch.similarity or 0.0
identityResult.match_type = "unknown"
-- Potentially add to unknown faces database
if result.quality > 0.8 then
addUnknownFace(identityResult)
end
end
-- Log recognition event
logRecognitionEvent(identityResult)
-- Trigger alerts if needed
checkForAlerts(identityResult)
end
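-- logRecognitionEvent, checkForAlerts and addUnknownFace are application-level
-- helpers, not plugin functions. A minimal in-memory sketch (persistence and
-- alert delivery are assumptions left to the host application):
function logRecognitionEvent(identityResult)
table.insert(identityDatabase.recognition_log, identityResult)
end
function addUnknownFace(identityResult)
-- Keep high-quality unknown faces for later review or enrollment
identityDatabase.unknown_faces = identityDatabase.unknown_faces or {}
table.insert(identityDatabase.unknown_faces, identityResult)
end
function checkForAlerts(identityResult)
-- Example policy: flag high-quality unknown faces
if identityResult.match_type == "unknown" and identityResult.quality > 0.8 then
print("ALERT: unrecognized face " .. identityResult.job_id)
end
end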
-- Find best matching identity
function findBestIdentityMatch(queryEmbedding)
local bestMatch = nil
local bestSimilarity = 0.0
for _, identity in pairs(identityDatabase.known_faces) do
local similarity = calculateCosineSimilarity(queryEmbedding, identity.embedding)
if similarity > bestSimilarity then
bestSimilarity = similarity
bestMatch = {
identity = identity,
similarity = similarity
}
end
end
return bestMatch
end
-- Calculate cosine similarity between embeddings
function calculateCosineSimilarity(embedding1, embedding2)
if #embedding1 ~= #embedding2 then
return 0.0
end
local dotProduct = 0.0
local norm1 = 0.0
local norm2 = 0.0
for i = 1, #embedding1 do
dotProduct = dotProduct + embedding1[i] * embedding2[i]
norm1 = norm1 + embedding1[i] * embedding1[i]
norm2 = norm2 + embedding2[i] * embedding2[i]
end
local magnitude = math.sqrt(norm1) * math.sqrt(norm2)
return magnitude > 0 and dotProduct / magnitude or 0.0
end
-- Update active session tracking
function updateActiveSession(identityId, recognitionResult)
local currentTime = api.system.getCurrentTime()
if not identityDatabase.active_sessions[identityId] then
-- New session
identityDatabase.active_sessions[identityId] = {
identity_id = identityId,
start_time = currentTime,
last_seen = currentTime,
recognition_count = 1,
avg_quality = recognitionResult.quality,
avg_similarity = recognitionResult.similarity
}
else
-- Update existing session
local session = identityDatabase.active_sessions[identityId]
session.last_seen = currentTime
session.recognition_count = session.recognition_count + 1
-- Update averages
session.avg_quality = (session.avg_quality + recognitionResult.quality) / 2
session.avg_similarity = (session.avg_similarity + recognitionResult.similarity) / 2
end
end
-- System performance monitoring
function monitorSystemPerformance()
-- Implement periodic monitoring in your main loop
-- (call checkPerformance() on a frame counter or a time check with os.time())
function checkPerformance()
local stats = {
known_identities = identityDatabase.known_count,
active_sessions = 0,
recognition_rate = 0
}
-- Count active sessions (seen in last 60 seconds)
local currentTime = api.system.getCurrentTime()
for _, session in pairs(identityDatabase.active_sessions) do
if currentTime - session.last_seen < 60 then
stats.active_sessions = stats.active_sessions + 1
end
end
print("Identity System Stats:")
-- Print stats details
end
end
-- Initialize the complete system
initializeIdentitySystem()
monitorSystemPerformance()
print("Paravision identity management system ready")
Best Practices¶
Model Optimization¶
- Platform-Specific Models: Use models optimized for the target platform (Windows/Hailo); see the sketch after this list
- Quality Assessment: Implement comprehensive face quality filtering
- Batch Processing: Use appropriate batch sizes for hardware acceleration
- Memory Management: Optimize memory usage for embedded deployments
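For example, platform-specific model selection can be wired up ahead of loadModel; a minimal sketch (the model file names are placeholders):
-- Sketch: choose a platform-specific model before loading (file names are placeholders)
local platform = "hailo"  -- or "windows" / "auto", matching the platform parameter
local modelPaths = {
    windows = "/models/paravision_face_windows.bin",
    hailo = "/models/paravision_face_hailo.bin"
}
paravision:configure({
    engine = "paravision",
    platform = platform,
    model_path = modelPaths[platform] or "/models/paravision_face_model.bin"
})
paravision:loadModel()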
Performance Guidelines¶
- Async Processing: Use async inference for high-throughput applications
- Hardware Acceleration: Leverage available hardware acceleration (GPU, Hailo)
- Pool Sizing: Configure appropriate pool sizes based on concurrent load (see the sketch after this list)
- Quality Filtering: Filter low-quality faces before recognition
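Pool size and result collection can be tuned together against the expected concurrent load; a minimal sketch using the async methods documented above (values are illustrative):
-- Sketch: size the worker pool to the expected concurrent load (values are illustrative)
local concurrentStreams = 4
paravision:setPoolSize(concurrentStreams)
paravision:setAsyncResultsCollectionDuration(500)
-- recognitionJobs built as in the Lua API examples above
paravision:runInferenceAsync(recognitionJobs, 60)
local results = paravision:getAsyncInferenceResults()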
Integration Strategies¶
- Face Detection: Combine with high-quality face detection for optimal results
- Database Integration: Implement efficient identity database management
- Session Tracking: Track identity sessions for analytics and monitoring
- Alert Systems: Integrate with alert systems for security applications
Troubleshooting¶
Common Issues¶
Model Loading Failures¶
// Check model path and permissions
if (!std::filesystem::exists(modelPath)) {
LOGE << "Paravision model file not found: " << modelPath;
return;
}
// Verify Paravision SDK installation
if (!isParavisionSDKAvailable()) {
LOGE << "Paravision SDK not properly installed";
return;
}
Hardware Acceleration Issues¶
- Hailo Platform: Verify HailoRT installation and device accessibility (see the fallback sketch below)
- GPU Acceleration: Check CUDA installation and GPU compatibility
- Driver Issues: Ensure proper hardware drivers are installed
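If the accelerated platform cannot be initialized, a pragmatic fallback is to retry with a different platform setting; a minimal sketch, assuming loadModel() returns a falsy value on failure as in the Lua API example above:
-- Sketch: fall back when Hailo acceleration cannot be initialized
paravision:configure({
    engine = "paravision",
    model_path = "/models/paravision_face.bin",
    platform = "hailo"
})
if not paravision:loadModel() then
    print("Hailo initialization failed, retrying with platform = \"auto\"")
    paravision:configure({
        engine = "paravision",
        model_path = "/models/paravision_face.bin",
        platform = "auto"
    })
    paravision:loadModel()
end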
Recognition Accuracy Problems¶
- Quality Thresholds: Adjust quality and confidence thresholds (see the tuning sketch below)
- Face Alignment: Ensure proper face alignment preprocessing
- Lighting Conditions: Address poor lighting conditions in input images
- Database Quality: Ensure high-quality reference images in database
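Threshold tuning is usually the first lever; a minimal sketch that relaxes the recognition and quality thresholds in small steps (step sizes are illustrative, and whether reconfiguration requires reloading the model depends on your deployment):
-- Sketch: relax thresholds when too many genuine faces are rejected (values are illustrative)
local thresholds = { recognition_threshold = 0.85, quality_threshold = 0.6 }
function relaxThresholds()
    thresholds.recognition_threshold = math.max(0.75, thresholds.recognition_threshold - 0.05)
    thresholds.quality_threshold = math.max(0.4, thresholds.quality_threshold - 0.05)
    paravision:configure({
        engine = "paravision",
        model_path = "/models/paravision_face.bin",
        recognition_threshold = thresholds.recognition_threshold,
        quality_threshold = thresholds.quality_threshold
    })
    paravision:loadModel()
end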
Debugging Tools¶
// Paravision system diagnostics
void diagnoseParavisionSystem() {
// Check SDK availability
if (!checkParavisionSDK()) {
LOGE << "Paravision SDK not available";
}
// Check platform capabilities
auto caps = paravision_->getCapabilities();
if (caps) {
LOGI << "Paravision Capabilities:";
for (const auto& [key, value] : caps->getMap()) {
LOGI << " " << key << ": " << value.toString();
}
}
// Monitor performance
auto startTime = std::chrono::high_resolution_clock::now();
// ... run test inference ...
auto endTime = std::chrono::high_resolution_clock::now();
auto duration = std::chrono::duration_cast<std::chrono::milliseconds>(endTime - startTime);
LOGI << "Test inference time: " << duration.count() << "ms";
}
Integration Examples¶
Enterprise Access Control System¶
// Complete access control system with Paravision
class AccessControlSystem {
public:
void initialize() {
// Initialize face recognition
initializeParavision();
// Load authorized personnel database
loadAuthorizedPersonnel();
// Setup access control logic
initializeAccessControl();
// Start monitoring
startSystemMonitoring();
}
void processAccessRequest(const VideoFrame& frame,
const std::vector<FaceDetection>& faces) {
for (const auto& face : faces) {
auto recognition = recognizeFace(frame, face);
if (recognition.has_value()) {
processAccessDecision(recognition.value());
}
}
}
private:
void processAccessDecision(const RecognitionResult& result) {
if (result.confidence > 0.85 &&
isAuthorizedPersonnel(result.identity_id)) {
grantAccess(result.identity_id);
logAccessEvent(result, "GRANTED");
} else {
denyAccess();
logAccessEvent(result, "DENIED");
}
}
};
See Also¶
- Inference Plugins Overview - AI inference integration and engines
- Platform Plugins Overview - All platform-specific plugins
- Plugin Overview - Complete plugin ecosystem
- JetsonUtils Plugin - NVIDIA Jetson platform integration
- Identity Management Guide - Identity and biometric system integration