FaceLog6 (Real-time face identification)

Description

FaceLog6 is an advanced analytic that tracks unique faces in a video stream, classifies them and, optionally, identifies them against a supplied watchlist. The plugin offers multithreading support and can process several high-resolution video streams on a multi-core CPU.

  • Product name: FaceLog6
  • Version: 2.0

Configure and Create a FaceLog6 instance

All configuration should be done during initialisation using the functions of PFaceLog6Parameters.

// Include the header file
#include <PapillonCore.h>

// Create a default set of parameters
PFaceLog6Parameters faceLogParameters;
// Process every frame (no frame-rate cap)
faceLogParameters.SetMaxFaceDetectorFR(-1);
// Use the first NVidia GPU for the face detector
faceLogParameters.SetFaceDetectorGpuId(0);
// Use the second NVidia GPU for face recognition
faceLogParameters.SetGpuId(1);

// Load a watchlist to use for face recognition and enable face recognition
PWatchlist watchlist;
PWatchlist::ReadFromFile("/path/to/watchlist.bin", watchlist).OrDie("Failed to load watchlist");
P_LOG_INFO << "Loaded watchlist with " << watchlist.GetNumberOfDescriptors() << " descriptors";
faceLogParameters.SetFaceRecognition(true);
faceLogParameters.SetWatchlist(watchlist);

// Finally, create the analytic
PAnalytics faceLog6;
PAnalytics::Create("FaceLog6", faceLogParameters, faceLog6).OrDie();

We aim to provide a sensible set of default parameters that works well in many scenarios; however, given the range of use-cases for this analytic, this is not always possible. For the various options, please read the documentation for PFaceLog6Parameters.

Input Data

Input data is a single PFrame object, see PAnalytics::Apply().
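
For example, frames can be pulled from a PInputVideoStream and handed to the analytic one at a time (a minimal sketch reusing the faceLog6 instance created above; the file name is illustrative):

PInputVideoStream ivs;
PInputVideoStream::Open("myvideo.avi", ivs).OrDie();
PFrame frame;
PList events;
while (ivs.GetFrame(frame).Ok()) {
    faceLog6.Apply(frame, events); // the returned list may be empty for some frames
}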

Face Recognition

If a PWatchlist is provided to FaceLog6 as a parameter (PFaceLog6Parameters::SetWatchlist), then face recognition is performed on each detection. Inside a watchlist, each subject is assigned a unique id (subject-id) when a description is added. If a match is found (above a user-set threshold), the face/sighting is labelled with the subject-id from the watchlist. A confidence of match is also assigned.

Note that the most reliable identity comes from the Sighting event, as it is aggregated over all detections in the sighting.
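
For example, the identity carried by an event can be resolved against the watchlist like this (a sketch; the 0.5 threshold is purely illustrative, tune it for your use-case):

// Resolve an event's subject-id against the watchlist (sketch)
PGuid subjectId;
event.GetPayload().Get("SubjectId", subjectId);
double confidence = event.GetPayload().GetDouble("SubjectIdConfidence");
PDescription description;
PString name = "Unknown"; // an unknown subject-id (PGuid::Null()) is simply absent from the watchlist
if (watchlist.GetDescription(subjectId, description).Ok() && confidence > 0.5) // 0.5: example threshold
    name = description.GetName();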

Face Recognition Performance

The face recognition algorithm in Papillon is based on a powerful technique called Deep Learning. It is vastly superior to what was available a few years ago; however, it still has limitations.

You will get the best performance when the faces in the video are within +/-30 degrees of frontal, the faces are diffusely illuminated (with no strong shadows) and there is a minimum of 50 pixels between the eyes. The algorithms will still function outside these parameters, but performance will be degraded. Ideally, face recognition should be used as an aid to a human operator, helping them reduce the amount of data they have to search or prioritise the data they have to review.

Read Face Recognition to learn more.

Face MetaData Classifier

Face-meta classifiers are applied to some face detections in each sighting. This provides extra information about the faces in the track and sighting. Currently this is limited to gender and mask information.

Example: how to extract meta information from a "Face" event?

#include <PapillonCore.h>

PString ExtractMask(const PEvent& event) {
    if (event.GetType() != "Face")
        return PString::Empty();
    const PProperties& eventPayload = event.GetPayload();
    PDescription description;
    if (eventPayload.Get("Description", description).Failed())
        return PString::Empty(); // this is an error, you can add an error message here
    PFaceMetaData faceMetaData = PFaceMetaData::FromDescription(description);
    const PString classificationNameToUse{"mask"};
    // get classification in the form of class + score
    int32 classIndex{-1};
    float classConfidence{0.f};
    if (faceMetaData.GetClassIndex(classificationNameToUse, classIndex, classConfidence).Failed())
        return PString::Empty(); // this is an error, you can add an error message here
    // get classification label
    PString classLabel;
    if (faceMetaData.GetClassLabel(classificationNameToUse, classLabel).Failed())
        return PString::Empty(); // this is an error, you can add an error message here
    return classLabel;
}

Speed

Running several instances of FaceLog6 on the same machine can cause slowdowns. If you encounter them, check CPU and GPU usage: if 100% is reached, you should adjust parameters to reduce resource usage and get better load balancing between analytics engines. For example, you can limit the video stream resolution and/or the number of frames processed per second (see the MaxFaceDetectorFR parameter below and the sketch that follows). Another option is to upgrade your hardware and use a CPU with more cores and/or a bigger GPU.
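
For example, capping the detector frame rate is a one-line change at configuration time (a sketch; 8 fps is an arbitrary illustrative value):

// Process at most ~8 frames per second in the face detector;
// the remaining frames produce FrameSkipped events instead of being analysed
faceLogParameters.SetMaxFaceDetectorFR(8.0);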

Simple Example

The code below shows a small snippet of how to create a FaceLog6 analytic, attach a watchlist and run it on the frames of a video. A more complex example is shown at the end of this page.

#include <PapillonCore.h>

// Create and use a FaceLog6 analytic
void DoFaceLog6(const PWatchlist& watchlist, PInputVideoStream& inputVideoStream) {
    // Set face-log options
    PFaceLog6Parameters faceLogOptions;
    faceLogOptions.Set("Watchlist", watchlist);
    PAnalytics faceLog;
    PAnalytics::Create("FaceLog6", faceLogOptions, faceLog).OrDie();

    // Loop through the video and get face-log events
    PFrame frame;
    PList events;
    while (inputVideoStream.GetFrame(frame).Ok()) {
        // run FaceLog on the current image
        faceLog.Apply(frame, events);
        // process events (see Events section below)
        for (int32 i = 0, ni = events.Size(); i < ni; i++) {
            PEvent event;
            if (events.Get(i, event).Failed())
                continue;
            const PProperties& eventProperties = event.GetPayload();
            const PString& eventType = event.GetType();
            if (eventType == "Sighting") {
                PGuid subjectId;
                eventProperties.Get("SubjectId", subjectId);
                PDescription watchlistDescription;
                bool subjectKnown = false;
                if (watchlist.GetDescription(subjectId, watchlistDescription).Ok())
                    subjectKnown = true;
                PString detectionName = "Unknown";
                if (subjectKnown)
                    detectionName = watchlistDescription.GetName();
                // ...
            } else if (eventType == "Face") {
                PDetection detection;
                if (eventProperties.Get("Detection", detection).Failed())
                    continue;
                PRectanglef detectionBbox = detection.GetFaceRectangle().GetRectangle();
                // ...
            }
        }
    }
    // Mop up any left-over events
    faceLog.Finish(events);
    // process final events...
}

Events

This analytic provides the following events:

  • FrameStart: start to process a frame
  • FrameSkipped: skip a frame (see MaxFaceDetectorFR parameter below)
  • FrameEnd: the analysis of a frame is finished
  • Face: a face has been detected in a frame
  • Sighting: a person has been seen in a video stream (and is no longer visible)

The type of event can be checked through PEvent::GetType().

Each of these events has identifiers (PGuid) that make it possible to link them together (e.g. to know that a face belongs to the same person across consecutive frames). See the Face Events section below to learn more.

Because of asynchronous processing, several frame events can sometimes be released at the same time. Conversely, the list of events can be empty for some calls to PAnalytics::Apply(), so consumers should simply iterate over whatever is returned, as sketched below.
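
A typical consumer dispatches on the event type (a minimal sketch):

PList events;
faceLog6.Apply(frame, events); // may emit zero, one or several events
for (int32 i = 0, ni = events.Size(); i < ni; i++) {
    PEvent event;
    if (events.Get(i, event).Failed())
        continue;
    const PString& type = event.GetType();
    if (type == "FrameStart") { /* processing of a frame begins */ }
    else if (type == "FrameSkipped") { /* frame dropped, see MaxFaceDetectorFR */ }
    else if (type == "FrameEnd") { /* no more events for this frame */ }
    else if (type == "Face") { /* a face detection */ }
    else if (type == "Sighting") { /* a person's sighting has ended */ }
}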

What is a sighting?

A sighting represents a person seen in a video stream. It is a collection of "faces" of the same subject. The faces do not need to come from successive frames; there can be a break (gap) between them.

For example, the track of a person walking through a video can often be lost, e.g. if they turn their head or become obscured by someone walking in front of them. In FaceLog6 we attempt to keep track of a person with the assistance of face recognition.

A sighting event is generated when a person has left the scene for more than 10 frames (the default value, see the MaxGap parameter below and the sketch that follows).
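
The default can be changed at configuration time. A minimal sketch, assuming MaxGap is exposed through the generic PFaceLog6Parameters::Set() (check the PFaceLog6Parameters documentation for the exact parameter name):

// Close a sighting only after the subject has been gone for 25 frames
// ("MaxGap" key is an assumption; see PFaceLog6Parameters)
faceLogParameters.Set("MaxGap", 25);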

Frame Events

These events are introduced to put "milestones" in the event stream:

  • "FrameStart" event signals that next events were generated from processing of this frame.
  • "FrameSkipped" shows that the frame was skipped from processing. This normally happens when the limit on number of frames to be processed is set in order to reduce CPU/GPU load.
  • "FrameEnd" event marks that there will be no more events generated from this frame.

The payload of the frame events includes the following information (use PEvent::GetPayload() to access these data):

Parameter  Type    Description
Frame      PFrame  The frame

Example: how to get frame number and timestamp for "FrameStart" event?

if (event.GetType() == "FrameStart") {
    PFrame frame;
    if (event.GetPayload().Get("Frame", frame).Ok()) {
        int frameNumber = frame.GetFrameNumber();
        PDateTime timestamp = frame.GetTimestampUTC();
        // ...
    }
}

Face Events

If PEvent::GetType() returns "Face", then the event is a face event. A face event is a detection of a face in an image.

The payload of the face event includes the following information (use PEvent::GetPayload() to access these data):

Parameter            Type          Description
Thumbnail            PImage        Thumbnail of the face
Detection            PDetection    The detection
SubjectId            PGuid         Id of the subject if identified (PGuid::Null() if unknown)
SubjectIdConfidence  PFloat        Confidence of identification (see below)
SightingId           PGuid         Id of the sighting (this id is the link between several faces of the same person)
Description          PDescription  Description of this detection together with the sighting

If no PWatchlist was supplied to the FaceLog6 analytic, then the subject information will contain PGuid::Null() and the confidence will be set to -1.

About SubjectIdConfidence: this is a float number representing the identification score (see the sketch after this list).

  • For Unknown people, this score will be -1 for the first two frames, then it will become 1 from the third frame, indicating that an "Unknown" person has been detected and this detection is confirmed. The -1 value should be interpreted as "probably a face"; according to your needs, you can choose to display bounding boxes on early detection or prefer to wait for "confirmed" faces.
  • For Known people, this score is typically in the range ]-1, 1] but can sometimes exceed 1, depending on the underlying algorithm.
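
In code, this convention can be handled as follows (sketch):

double confidence = event.GetPayload().GetDouble("SubjectIdConfidence");
if (confidence < 0) {
    // early, unconfirmed detection ("probably a face"):
    // choose whether to display it now or to wait for confirmation
} else {
    // confirmed: either a confirmed Unknown (score 1) or a watchlist match score
}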

About Description:

  • The description will contain information about the subject in the sighting that can be stored in a watchlist and later searched (see the sketch after this list).
  • The description will also contain meta-data about the face in sighting, e.g. its gender.
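
For example, a description can be enrolled into a watchlist for later identification (a sketch, assuming PWatchlist::Add() accepts a PDescription; Add() is referenced in the sighting payload table below):

// Enrol the description carried by the event into a watchlist (sketch)
PDescription description;
if (event.GetPayload().Get("Description", description).Ok()) {
    watchlist.Add(description); // the subject can now be identified in later sightings
}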

Sighting Events

If PEvent::GetType() returns "Sighting", then the event is a sighting event.

The payload of a sighting event contains the following information (use PEvent::GetPayload() to access these data):

Parameter            Type          Description
Thumbnail            PImage        A thumbnail that represents the sighting
StartFrame           PInt64        The start frame index
EndFrame             PInt64        The end frame index
NumberOfFrames       PInt64        The number of frames
StartTime            PDateTime     The start time of the track
EndTime              PDateTime     The end time of the track
SubjectId            PGuid         The global id of the subject in the sighting, PGuid::Null() if not known
SubjectIdConfidence  PFloat        The confidence of the identity (see below)
SightingId           PGuid         Id of the sighting (this id is the link between several faces of the same person)
Description          PDescription  The description of the subject (you can store this in a watchlist, see PWatchlist::Add())

About SubjectIdConfidence: this is a float number representing the identification score (see the sketch after this list).

  • For Unknown people, this score will be -1 if the person has been seen in fewer than 3 frames (meaning the sighting is not confirmed), and 1 otherwise (a confirmed sighting).
  • For Known people, this score represents the confidence of identification (a typical score for a good confidence is 0.5 to 0.8; a value > 0.8 represents an extremely high confidence).
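
Putting the two cases together, a sighting consumer might interpret the score like this (sketch; the 0.5 and 0.8 thresholds come from the guidance above):

const PProperties& payload = event.GetPayload();
PGuid subjectId;
payload.Get("SubjectId", subjectId);
double confidence = payload.GetDouble("SubjectIdConfidence");
PDescription description;
if (watchlist.GetDescription(subjectId, description).Ok()) {
    if (confidence > 0.8)
        P_LOG_INFO << description.GetName() << ": extremely high confidence";
    else if (confidence >= 0.5)
        P_LOG_INFO << description.GetName() << ": good confidence";
    else
        P_LOG_INFO << description.GetName() << ": weak match";
} else if (confidence >= 1) {
    P_LOG_INFO << "Confirmed sighting of an unknown person";
}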

Full Example

/*
* Copyright (C) 2015-2016 Digital Barriers plc. All rights reserved.
* Contact: http://www.digitalbarriers.com/
*
* This file is part of the Papillon SDK.
*
* You can't use, modify or distribute any part of this file without
* the explicit written agreements of Digital Barriers plc.
*/
#include <PTimer.h>
#include <PapillonCore.h>
#include <algorithm>
#include <map>
#include <queue>
#include <sstream>
#include <unordered_map>
#define DODEBUG 0
#if DODEBUG
#define ONDEBUG(x) x
#else
#define ONDEBUG(x)
#endif
USING_NAMESPACE_PAPILLON
const PString SAMPLE_DIR = PPath::Join(PUtils::GetEnv("PAPILLON_INSTALL_DIR"),
"Data",
"Samples"); // path to find sample data: $PAPILLON_INSTALL_DIR/Data/Samples
const float SCALE_FACTOR = 1.0f;
PResult HandleSighting(const PEvent& sighting, const PWatchlist& watchlist, bool writeVideo, bool display = false) {
PGuid sightingId;
sighting.GetPayload().Get("SightingId", sightingId);
PString shortSightingId = sightingId.ToString().Substring(0, 3);
PString videoFile = PString("sighting_%0.avi?encode_with=opencv&fourcc=XVID&fps=10").Arg(shortSightingId);
PString thumbnailFile = PString("thumbnail_%0.jpg").Arg(shortSightingId);
if (writeVideo) {
POutputVideoStream ovs; // output stream (declaration missing in the original snippet)
if (POutputVideoStream::Open(videoFile, ovs).Failed()) {
P_LOG_ERROR << "Failed to open " << videoFile << " for writing";
writeVideo = false;
}
}
// Each sighting has a thumbnail
PImage thumbnail;
sighting.GetPayload().Get(papillon::C_OPTION_THUMBNAIL, thumbnail);
if (display)
thumbnail.Display("Sighting", 40);
// We can get subjectId from sighting
PGuid subjectId;
sighting.GetPayload().Get("SubjectId", subjectId);
// We can get confidence we are of subject
double subjectConfidence = sighting.GetPayload().GetDouble("SubjectIdConfidence");
// We can also get the description
PDescription description;
sighting.GetPayload().Get("Description", description);
// In the description we can find the face meta-data
PFaceMetaData faceMetaData = PFaceMetaData::FromDescription(description);
// We go to the watchlist to get the identity
PString name = "Unknown";
PDescription wd;
if (watchlist.GetDescription(subjectId, wd).Ok()) {
name = PString("%0: %1").Arg(wd.GetName()).Arg(subjectConfidence, 0, 'g', 2);
}
// Pick a colour scheme for the label from the gender meta-data
// (the default value below is an assumption; the original snippet did not declare colourScheme)
auto colourScheme = PUtils::E_ORANGE;
if (faceMetaData.GetGender() == PFaceMetaData::E_GENDER_FEMALE) {
colourScheme = PUtils::E_PINK;
} else if (faceMetaData.GetGender() == PFaceMetaData::E_GENDER_UNKNOWN) {
colourScheme = PUtils::E_ORANGE;
}
int32 startFrame;
sighting.GetPayload().Get("StartFrame", startFrame);
int32 endFrame;
sighting.GetPayload().Get("EndFrame", endFrame);
int32 numberOfFrames;
sighting.GetPayload().Get("NumberOfFrames", numberOfFrames);
P_LOG_INFO << PString("Sighting Id: %0 Start: %1 End: %2 Length: %3 Subject Id: %4 Confidence: %5")
.Arg(sightingId.ToString().Substring(0, 3))
.Arg(startFrame).Arg(endFrame).Arg(numberOfFrames)
.Arg(subjectId.ToString().Substring(0, 3))
.Arg(subjectConfidence);
return PResult::C_OK;
}
PResult ProcessEvents(const PList& events, const PWatchlist& watchlist, bool writeVideo, bool display) {
// Go through each event
for (int32 i = 0, ni = events.Size(); i < ni; ++i) {
PEvent event;
events.Get(i, event);
if (event.GetType() == "Sighting") {
HandleSighting(event, watchlist, writeVideo, display);
}
}
return PResult::C_OK;
}
class CTrack {
public:
CTrack(int startFrame, const PPoint2Df& point)
: m_startFrame{startFrame}
, m_lastFrame{startFrame} {
m_track.push_back(point);
static int colourIdx = 0;
m_colour = PColour3i::C_COLOUR_TABLE[colourIdx];
colourIdx = (colourIdx + 1) % (sizeof(PColour3i::C_COLOUR_TABLE) / sizeof(PColour3i));
}
void AddPoint(int curFrame, PPoint2Df point) {
for(int i = m_lastFrame; i < curFrame; i++) {
m_track.push_back(point);
}
m_lastFrame = curFrame;
}
std::vector<PPoint2Df> m_track;
int m_startFrame;
int m_lastFrame;
PColour3i m_colour;
};
class CShowEvents {
public:
explicit CShowEvents(bool skipPlotting, bool showPoints, bool drawTracks)
: m_skipImage(false)
, m_framesTotal(0)
, m_framesSkipped(0)
, m_totalFaces(0)
, m_skipPlotting(skipPlotting)
, m_showPoints(showPoints)
, m_drawTracks(drawTracks) {}
void SetWatchList(const PWatchlist& watchlist) { m_watchlist = watchlist; }
void SetRoi(const PRectanglei& roi) { m_roi = roi; }
void StartTimer() {
m_timer = PTimer();
m_framesTotal = m_framesSkipped = m_totalFaces = 0;
}
void QueueEvents(const PList& events) {
for(int32 i = 0, ni = events.Size(); i < ni; i++) {
PEvent event;
events.Get(i, event);
m_eventQueue.push(event);
}
}
bool ShowEvents() {
bool imageReady = false;
while(!m_eventQueue.empty()) {
PEvent event = m_eventQueue.front();
m_eventQueue.pop();
const PString& eventType = event.GetType();
P_LOG_TRACE << "Processing event:" << eventType;
if(eventType == "Face") {
m_totalFaces++;
if(!m_skipPlotting) {
const PProperties& eventProperties = event.GetPayload();
PDetection detection;
if(!eventProperties.Get("Detection", detection)) {
P_LOG_ERROR << "Failed to get detection";
continue;
}
// show detection
// P_LOG_INFO << "detection:" << detection.GetFaceRectangle().GetRectangle();
// get meta data
PGuid subjectId;
if(!eventProperties.Get("SubjectId", subjectId))
P_LOG_ERROR << "Failed to get subjectId";
double subjectIdConfidence = 0;
if(!eventProperties.Get("SubjectIdConfidence", subjectIdConfidence))
P_LOG_ERROR << "Failed to get subjectIdConfidence";
P_LOG_TRACE << "Subject:" << subjectId << " confidence:" << subjectIdConfidence;
PDescription description;
if(!eventProperties.Get("Description", description))
P_LOG_ERROR << "Failed to get description";
PDescription subjectDescription;
bool subjectKnown = false;
if(m_watchlist.GetDescription(subjectId, subjectDescription).Ok())
subjectKnown = true;
// Draw bounding box
PRectanglei detRect =
detection.GetFeatureMap().GetBoundingBox().Translated(detection.GetOrigin()).ToPRectanglei();
if(!m_workImage.DrawRectangle(detRect, PColour3i::Red(), 1)) {
P_LOG_ERROR << "Failed to draw detection rectangle";
}
// If we have a mask classifier
PPoint2Di at{int32(detRect.GetCentre().GetX()), detRect.GetY() + detRect.GetHeight()};
PFaceMetaData faceMetaData = PFaceMetaData::FromDescription(description);
PString maskLabel;
if(faceMetaData.GetClassLabel("mask", maskLabel).Ok()) {
PColour3i colour = PColour3i::Green(); // default label colour (the original snippet did not declare it)
if(maskLabel == "NoMask") {
colour = PColour3i::Red();
}
m_workImage.DrawLabel(maskLabel, at, colour, PColour3i::White());
at = at.Translated(0, -21);
}
// If we have a gender classifier
PString genderLabel;
if(faceMetaData.GetClassLabel("gender", genderLabel).Ok()) {
if(!m_workImage.DrawLabel(genderLabel, at, PColour3i::Green(), PColour3i::White())) {
P_LOG_ERROR << "Failed to draw classification label";
}
}
//PUtils::DrawDetection(m_workImage, detection, detectionColour, false, !m_showPoints);
// Draw name label
{
PColour3i fontColour(0, 0, 0);
PString tagColour = "#95a5a6";
PString message;
if(subjectKnown) {
PString tag;
PString name =
PString("%0: %1").Arg(subjectDescription.GetName()).Arg(subjectIdConfidence, 0, 'g', 2);
if(subjectDescription.GetProperties().Get("Tag", tag).Ok()) {
subjectDescription.GetProperties().Get("TagColour", tagColour);
message = PString("%0 @ %1").Arg(name).Arg(tag);
} else {
message = name;
}
} else {
message = "Unknown";
}
PPoint2Di at{int32(detRect.GetCentre().GetX()), detRect.GetY()};
if(!m_workImage.DrawLabel(message, at, PColour3i(tagColour), fontColour)) {
P_LOG_ERROR << "Failed to draw name label";
}
}
// add face to tracks
PGuid sightingId;
if(!eventProperties.Get("SightingId", sightingId)) {
P_LOG_ERROR << "Failed to get sightingId";
continue;
}
// track the face by the centre of its bounding box
// (assumed; the original snippet was truncated at these two calls)
auto track = m_tracks.find(sightingId);
if(track == m_tracks.end()) {
m_tracks.emplace(sightingId, CTrack(detection.GetFrame().GetFrameNumber(), detRect.GetCentre()));
} else {
track->second.AddPoint(detection.GetFrame().GetFrameNumber(), detRect.GetCentre());
}
// show thumbnail for the track
//{
// PImage thumbnail;
// event.GetPayload().Get(papillon::C_PROPERTY_THUMBNAIL, thumbnail);
// thumbnail.Display(sightingId.ToString());
//}
}
} else if(eventType == "FrameStart") {
if(!m_skipPlotting) {
const PProperties& eventProperties = event.GetPayload();
PFrame frame;
if(!eventProperties.Get("Frame", frame)) {
P_LOG_ERROR << "Failed to get frame";
continue;
}
P_LOG_TRACE << "Went on new frame:" << frame.GetFrameNumber();
static int lastFrameNo = 0;
if(frame.GetFrameNumber() - lastFrameNo > 1) {
P_LOG_ERROR << "Missed frames between: " << lastFrameNo << " - " << frame.GetFrameNumber();
}
lastFrameNo = frame.GetFrameNumber();
m_workImage = frame.GetImage().Clone();
// show roi
if(m_roi.IsValid()) {
m_workImage.DrawRectangle(m_roi, PColour3i(255, 0, 0), 1);
}
}
m_skipImage = false;
} else if(eventType == "FrameEnd") {
// draw tracks
const PProperties& eventProperties = event.GetPayload();
PFrame frame;
if(!eventProperties.Get("Frame", frame)) {
P_LOG_ERROR << "Failed to get frame";
continue;
}
int startFrame = frame.GetFrameNumber() - 50;
if(m_drawTracks) {
for(const auto& track : m_tracks) {
int startIdx = std::max(0, startFrame - track.second.m_startFrame);
if(startIdx >= int(track.second.m_track.size())) {
continue;
}
PPoint2Di prevPoint = track.second.m_track[startIdx].ToPPoint2Di();
for(int i = startIdx + 1; i < int(track.second.m_track.size()); i++) {
PPoint2Di curPoint = track.second.m_track[i].ToPPoint2Di();
m_workImage.DrawLine(prevPoint.GetX(), prevPoint.GetY(), curPoint.GetX(), curPoint.GetY(),
track.second.m_colour, 2);
prevPoint = curPoint;
}
}
}
// clean old tracks
int frameThreshold = frame.GetFrameNumber() - 50;
auto it = m_tracks.cbegin();
while(it != m_tracks.cend()) {
if(it->second.m_lastFrame < frameThreshold) {
it = m_tracks.erase(it);
} else {
++it;
}
}
++m_framesTotal;
if(!m_skipImage) {
m_readyImage = m_workImage;
imageReady = true;
break;
}
} else if(eventType == "FrameSkipped") {
++m_framesSkipped;
m_skipImage = true;
}
}
return imageReady;
}
const PImage& GetImage() const { return m_readyImage; }
const PString GetFpsString() const {
return PString("fps %0 (%1) faces per sec %2 faces per frame %3")
.Arg(double(m_framesTotal - m_framesSkipped) / m_timer.ElapsedSec(), 4, 'f', 2)
.Arg(double(m_framesTotal) / m_timer.ElapsedSec(), 4, 'f', 1)
.Arg(double(m_totalFaces) / m_timer.ElapsedSec(), 6, 'f', 1)
.Arg(double(m_totalFaces) / double(m_framesTotal - m_framesSkipped), 4, 'f', 1);
}
}
private:
PImage m_workImage;
bool m_skipImage;
PImage m_readyImage;
PWatchlist m_watchlist;
PTimer m_timer;
int64 m_framesTotal;
int64 m_framesSkipped;
int64 m_totalFaces;
PRectanglei m_roi;
bool m_skipPlotting;
bool m_showPoints;
bool m_drawTracks{false};
std::map<PGuid, CTrack> m_tracks;
std::queue<PEvent> m_eventQueue;
};
int CountProcessedFrames(const PList& events) {
int result = 0;
for (int32 i = 0, ni = events.Size(); i < ni; i++) {
PEvent event;
events.Get(i, event);
const PString& eventType = event.GetType();
if (eventType == "FrameEnd") {
++result;
} else if (eventType == "FrameSkipped") {
--result;
}
}
return result;
}
PResult RunDemo(int argc, char** argv) {
POption opt(argc, argv);
opt.AddStandardOptions(); // set-up logging
PLog::SetConsoleLevel(papillon::PLog::E_LEVEL_INFO);
PLog::SetConsoleFormat("date (sev) [sf:line]: msg");
PString inputVideoFile =
opt.String("inputVideo,iv", PPath::Join(SAMPLE_DIR, "busy_office.avi"), "Input video file");
int32 maxFrames = opt.Int(",f", 0, "Maximum number of frames to process");
int32 skip = opt.Int("skip,skip", 0, "Skip this number of frames at the beginning");
bool display = !opt.Option("noDisplay,nd", "Do not display video as we go");
int fdGpuId = opt.Int("fdGpuId,fdGpuId", -1, "Face detector gpuId");
int dGpuId = opt.Int("dGpuId,dGpuId",-1,"Describers and classifiers gpuId.");
double maxFrameRate = opt.Double("frameRate,fr", -1, "Maximum processing frame rate");
PString watchlistFile = opt.String("watchlist,w", "", "Use watchlist when running FaceLog tests");
ReturnIfFailed(opt.Check());
if (opt.Has("h") || opt.Has("help")) {
P_LOG_INFO << opt.ToStringHelp();
return PResult::C_OK;
}
CShowEvents eventShow(false, true, true);
// Open video stream
PInputVideoStream ivs; // input stream (declaration missing in the original snippet)
ReturnIfFailed(PInputVideoStream::Open(inputVideoFile, ivs));
PFaceLog6Parameters faceLogParameters;
faceLogParameters.SetMaxFaceDetectorFR(maxFrameRate);
faceLogParameters.SetFaceDetectorGpuId(fdGpuId);
faceLogParameters.SetGpuId(dGpuId);
PWatchlist watchlist;
if(!watchlistFile.IsEmpty()) {
PWatchlist::ReadFromFile(watchlistFile, watchlist).OrDie("Failed to load watchlist");
P_LOG_INFO << "Loaded watchlist with " << watchlist.GetNumberOfDescriptors() << " descriptors";
faceLogParameters.SetFaceRecognition(true);
faceLogParameters.SetWatchlist(watchlist);
eventShow.SetWatchList(watchlist);
}
bool writeVideo = false;
// Finally create the face-logger
PAnalytics faceLog;
ReturnIfFailed(PAnalytics::Create("FaceLog6", faceLogParameters, faceLog));
PList events;
int frameNumber = 0;
while (true) {
// Get the frame
PFrame frame;
if (!ivs.GetFrame(frame).Ok())
break;
// Apply face-log to frame (asynchronous operation)
if (faceLog.Apply(frame, events).Failed())
continue;
frameNumber++;
if (frameNumber < skip)
continue;
// Queue events
eventShow.QueueEvents(events);
if (display) {
if(eventShow.ShowEvents()) {
eventShow.GetImage().Display("Face Log", 1);
}
}
// Process the events
ProcessEvents(events, watchlist, writeVideo, display);
// Check if we are exiting early
if (frameNumber == maxFrames) {
P_LOG_INFO << "Maximum number of frames reached.";
break;
}
} // end while
// OK, we have finished the video; let's check we have no events hanging around
faceLog.Finish(events);
eventShow.QueueEvents(events);
// P_LOG_DEBUG<< "(" << frameNumber << "/" << maxFrames << ") found " << events << " events.";
if (display) {
if(eventShow.ShowEvents()) {
eventShow.GetImage().Display("Face Log", 1);
}
}
P_LOG_INFO << eventShow.GetFpsString();
ProcessEvents(events, watchlist, writeVideo, display);
// Quit the SDK
return PResult::C_OK;
}
int main(int argc, char** argv) {
PapillonSDK::Initialise().OrDie(); // by default, open a console logger with INFO level
RunDemo(argc, argv).LogIfError();
return 0;
}