SafeZone2D (Intelligent Motion Detection)


SafeZoneEdge2D from Digital Barriers: this plugin performs advanced, plug-and-play motion detection on a video stream; it needs no configuration or calibration.
It is typically used to detect human activity while efficiently filtering out artifacts caused by weather conditions (e.g. wind, rain or snow).

The output is a list of alarms: the PAnalytics::Apply() method returns a PList of PEvent objects which represent alarms and detected objects.

The analytics engine of this plugin comes from SmartVis SafeZone: see presentation

  • Plugin: PPluginAnalyticsSafeZoneEdge2d
  • Product name: SafeZoneEdge2d
  • Version: 2.0


Technical requirements to get good results using SafeZoneEdge2D

  • Use in a sterile zone ONLY – it does not work in a busy area and will stop detecting.
  • An object must be visible for at least 2 seconds in the scene.
  • Works best with fixed cameras (small vibrations are tolerated).
  • Using SafeZone with a PTZ camera is possible, but has limitations. Once the camera stops moving it can take a few seconds for the analytics to learn the new background, and any object already in the scene will likely be treated as background.
  • There must be suitable contrast between background and objects to detect.
  • To detect an object, it must be:
    • Taller than 15 pixels
    • Not taller than 60% of the height of the image
    • Not bigger than 35% of the area of the image
  • Images should be provided at a framerate as close as possible to 8 fps. Other framerates lead to degraded performance, both in detection accuracy and false alarm rate.
  • To avoid interference from insects, we recommend not using the built-in IR lighting of the camera (especially cameras with sun shield).
  • Raindrops on the camera lens may strongly reduce detection and false alarm mitigation performance. Avoid fixed domes for outdoor use and, when necessary, install a shield on top of the camera to keep raindrops off the lens.
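The object-size constraints above can be expressed as a simple predicate. The sketch below is plain C++ (IsDetectable is a hypothetical helper, not part of the SDK), using the bounding-box area as a stand-in for the object area:

```cpp
#include <cassert>

// Hypothetical helper (not an SDK function): checks whether a bounding box
// satisfies the size constraints listed above.
bool IsDetectable(int boxHeightPx, int boxWidthPx, int imageWidthPx, int imageHeightPx)
{
    if (boxHeightPx <= 15)
        return false; // must be taller than 15 pixels
    if (boxHeightPx > 0.60 * imageHeightPx)
        return false; // must not be taller than 60% of the image height
    double boxArea   = static_cast<double>(boxWidthPx) * boxHeightPx;
    double imageArea = static_cast<double>(imageWidthPx) * imageHeightPx;
    if (boxArea > 0.35 * imageArea)
        return false; // must not be bigger than 35% of the image area
    return true;
}
```

For example, in a 176x144 image, a 10x30-pixel person-sized box passes all three checks, while a 5x12 box is rejected as too short.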

CPU requirements

CPU Mark on Windows (figures on Linux can vary slightly):

  • QCIF analysis, 4:3 image: 150 CPU Mark, 50 MB RAM
  • QCIF analysis, 16:9 image: 200 CPU Mark, 50 MB RAM

Input requirements

Image resolution: QCIF-like. The most common resolutions, depending on the source aspect ratio:

Aspect Ratio   Resolution
16/9           256 x 144
16/10          224 x 140
8/5            224 x 140
4/3            192 x 144
5/4            180 x 144
11/9           176 x 144

For optimal performance, it is HIGHLY recommended to use a QCIF like resolution (e.g. 176x144 pixels) at 8 frames per second.
In practice, this means you have to control the framerate by calling the PAnalytics::Apply() method about 8 times per second on a live stream. If you want to process a video file, make sure the video has been recorded at 6 to 10 FPS. Below 6 FPS, detection may fail. Above 10 FPS, you should skip frames to approximate 8 FPS (e.g. at 24 FPS, skip 2 frames out of every 3).
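The frame-skipping rule above can be sketched as a small standalone function (ComputeSkipFrames is illustrative, not an SDK call); it mirrors the integer arithmetic used in the sample code at the end of this page:

```cpp
#include <cassert>

// Sketch (not an SDK function): how many frames to skip between two calls
// to PAnalytics::Apply() so the effective framerate approximates 8 FPS.
int ComputeSkipFrames(int sourceFps)
{
    const int targetFps = 8;
    if (sourceFps <= targetFps)
        return 0;                     // process every frame
    return sourceFps / targetFps - 1; // e.g. 24 FPS -> skip 2 frames out of 3
}
```

For a 24 FPS source this returns 2 (process 1 frame out of 3, i.e. 8 FPS); for 30 FPS it returns 2 as well (an effective 10 FPS, the closest integer approximation).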

A good design runs the SafeZone2D analytics in a separate thread, so that SafeZone2D has enough CPU power to process images at the right framerate.
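One way to structure this is a capture thread feeding a queue consumed by an analytics thread. The sketch below uses plain C++11 primitives and an int in place of PFrame; in real code the worker would call safeZoneEdge2d.Apply(frame, events) where indicated:

```cpp
#include <condition_variable>
#include <mutex>
#include <queue>
#include <thread>

// Minimal threading sketch (no SDK types): a capture loop pushes frames
// into a queue while a dedicated analytics thread consumes them, so a
// slow analysis step never blocks frame capture.
// Returns the number of frames the analytics thread processed.
int RunPipeline(int frameCount)
{
    std::queue<int> frames;          // stands in for a queue of PFrame objects
    std::mutex m;
    std::condition_variable cv;
    bool done = false;
    int processed = 0;

    std::thread analytics([&] {
        for (;;)
        {
            std::unique_lock<std::mutex> lock(m);
            cv.wait(lock, [&] { return !frames.empty() || done; });
            if (frames.empty())
                return;              // capture finished and queue drained
            frames.pop();            // real code: safeZoneEdge2d.Apply(frame, events)
            ++processed;
        }
    });

    for (int i = 0; i < frameCount; ++i) // capture loop
    {
        { std::lock_guard<std::mutex> lock(m); frames.push(i); }
        cv.notify_one();
    }
    { std::lock_guard<std::mutex> lock(m); done = true; }
    cv.notify_one();
    analytics.join();
    return processed;
}
```

In production you would also bound the queue (dropping the oldest frames when analysis falls behind) so memory stays constant on a live stream.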

You do not have to resize input images; the plugin does this automatically, using the specified resolution (see parameters below).

Please contact us if you have any trouble (e.g. poor performance) while trying to use this plugin with different video streams.

How to create a PAnalytics SafeZoneEdge2d object?

// assumes: using namespace papillon;
PAnalytics safeZoneEdge2d;
PSafeZoneEdge2dParameters safeZoneEdge2dParameters = PSafeZoneEdge2dParameters::BuildDefaultParameters();
PResult ret = PAnalytics::Create("SafeZoneEdge2d", safeZoneEdge2dParameters, safeZoneEdge2d);
if (ret.Failed()) ...

See PAnalytics to learn more.


For simpler usage, use PSafeZoneEdge2dParameters to set up the video analytics parameters. This class provides a set of Set/Get methods to configure the algorithm easily, without knowing the raw parameter names described below.

In this section, ROI stands for "Region Of Interest". It is the rectangle area where detection is performed.

Important note: all parameters must be passed as PString.

The following parameters are mandatory:

Parameter Type Description
camera-type string type of sensor: can be "dayNight" (colour sensor), "BW" (black-and-white) or "thermal" (thermal sensor)
image-width, image-height string resolution in pixels used to perform video analytics. Typical (optimal) parameters are QCIF resolution, i.e. image-width="176" and image-height="144". Note: actual process resolution might be slightly different because of aspect ratio of input images
sensitivity string a value in "0".."1" (default "0.5") which specifies the detection sensitivity. Higher sensitivity (towards "1") means more detections but also more false alarms
filter-headlights string either "true", indicating that there is headlight illumination in the scene (headlight filtering is activated unless the camera type is thermal), or "false"

Other optional parameters:

Parameter Type Description
min-object-width string filter: minimum object width (width of the bounding box), in pixels
max-object-width string filter: maximum object width (width of the bounding box), in pixels
min-object-height string filter: minimum object height (height of the bounding box), in pixels
max-object-height string filter: maximum object height (height of the bounding box), in pixels
post-alarm-time-ms string an integer value in milliseconds, "0" by default - amount of time during which alarm will be kept after disappearance of a target (used to smooth alarms)
use-roi string either "true" (use a Region Of Interest) or "false" (analyse the full image); "false" by default: the entire image will be analyzed
detection-roi string a string representing a PRectanglef (can be used to set the "roi-x", "roi-y", "roi-w", "roi-h" values at once). Overwrites the "roi-?" parameters if both are set. No detection-roi by default. Note: the ROI is automatically clamped to fit inside the analysed images.
roi-x string a string representing a normalised float value (in "0".."1") = x-coordinate of the upper-left corner of the ROI (Region Of Interest)
roi-y string a string representing a normalised float value (in 0..1) = y-coordinate of the upper-left corner of the ROI (Region Of Interest)
roi-w string a string representing a normalised float value (in 0..1) = width of the ROI (Region Of Interest): "1" corresponds to the original image width
roi-h string a string representing a normalised float value (in 0..1) = height of the ROI (Region Of Interest): "1" corresponds to the original image height

Filtering notes:
  • Small objects are filtered if their width is smaller than min-object-width AND their height is smaller than min-object-height.
  • Large objects are filtered if their width is larger than max-object-width AND their height is larger than max-object-height.
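To illustrate how a normalised ROI maps to pixels and gets clamped inside the analysed image (as described for detection-roi above), here is a standalone sketch; PixelRect and RoiToPixels are illustrative names, not SDK types:

```cpp
#include <algorithm>

// Illustration (not SDK code): convert a normalised ROI (values in 0..1)
// to a pixel rectangle, clamping it so it stays inside the image.
struct PixelRect { int x, y, w, h; };

PixelRect RoiToPixels(double x, double y, double w, double h, int imgW, int imgH)
{
    x = std::min(std::max(x, 0.0), 1.0);
    y = std::min(std::max(y, 0.0), 1.0);
    w = std::min(std::max(w, 0.0), 1.0 - x); // clamp so the ROI cannot
    h = std::min(std::max(h, 0.0), 1.0 - y); // extend past the image border
    return { static_cast<int>(x * imgW), static_cast<int>(y * imgH),
             static_cast<int>(w * imgW), static_cast<int>(h * imgH) };
}
```

For example, on a 176x144 analysis image, roi-x="0.5", roi-y="0.5", roi-w="0.75", roi-h="0.25" yields a 88x36-pixel rectangle at (88, 72): the width is clamped from 0.75 to 0.5 so the ROI stays inside the image.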

Output Events

Each time an object (e.g. Person) is detected, an alarm (PEvent) will be generated.

The output of this plugin is a list (PList) containing only PEvent objects with the following type:

  • Type = "metadata"
  • Annotation = an XML document containing the metadata associated with the current frame (use GetAnnotation() to retrieve it).

Example 1: XML document when no detection is performed. Note: OSD_SIZE can differ slightly from the original resolution because SafeZone2D tries to optimise the resolution while preserving the aspect ratio.

<NODE NAME="ivs_results" REFERENTIAL="0" SITE="" OSD_SIZE="256x144" VERSION="5.0.0">
<TIMESTAMP> 2016-08-22T12-08-21-072-000000005_XXXXXXXXXX </TIMESTAMP>
<ZONE NAME="zone-1">
<POINT NUMBER="0" X2D="252" Y2D="142" X3D="0.0" Y3D="0.0" Z3D="0.0"/>
<POINT NUMBER="1" X2D="0" Y2D="142" X3D="0.0" Y3D="0.0" Z3D="0.0"/>
<POINT NUMBER="2" X2D="0" Y2D="0" X3D="0.0" Y3D="0.0" Z3D="0.0"/>
<POINT NUMBER="3" X2D="252" Y2D="0" X3D="0.0" Y3D="0.0" Z3D="0.0"/>
</ZONE>
</NODE>

Example 2: XML document when an actor (a PERSON) is detected. The bounding box of the person is given in pixels by the _2D_BBOX element.

<NODE NAME="ivs_results" REFERENTIAL="0" SITE="" OSD_SIZE="256x144" VERSION="5.0.0">
<TIMESTAMP> 2016-08-22T12-08-45-713-000000202_XXXXXXXXXX </TIMESTAMP>
<ACTOR ID="5" TYPE="PERSON" TIMESTAMP="2016-08-22T12-08-45-713-000000202_XXXXXXXXXX" IS_MAINTAINED="false">
<_2D_BBOX XMIN="122" XMAX="131" YMIN="75" YMAX="103"/>
<_3D_POSITION X="122" Y="75" Z="0"/>
<_3D_SIZE WIDTH="0" HEIGHT="0"/>
<ALERT ID="0" TYPE="intrusion" ZONE_NAMES="zone-1" ACTOR_ID="5" SCENARIO_NAME="intrusion-1" BEGIN-TIMESTAMP="2016-08-22T12-08-41-338-000000167_XXXXXXXXXX" END-TIMESTAMP="2016-08-22T12-08-45-713-000000202_XXXXXXXXXX">
<ZONE NAME="zone-1">
<POINT NUMBER="0" X2D="252" Y2D="142" X3D="0.0" Y3D="0.0" Z3D="0.0"/>
<POINT NUMBER="1" X2D="0" Y2D="142" X3D="0.0" Y3D="0.0" Z3D="0.0"/>
<POINT NUMBER="2" X2D="0" Y2D="0" X3D="0.0" Y3D="0.0" Z3D="0.0"/>
<POINT NUMBER="3" X2D="252" Y2D="0" X3D="0.0" Y3D="0.0" Z3D="0.0"/>
</ZONE>
</ALERT>
</ACTOR>
</NODE>

See PEvent to learn more.
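If you only need the bounding box from an event like Example 2, you can pull it out of the annotation string returned by GetAnnotation(). The sketch below is plain C++ string handling (GetBBoxAttribute is a hypothetical helper); a real application should use a proper XML parser instead:

```cpp
#include <string>

// Sketch (not SDK code): extract one integer attribute (e.g. XMIN) from
// the _2D_BBOX element of an event annotation. Returns -1 when the
// annotation contains no bounding box or the attribute is missing.
int GetBBoxAttribute(const std::string& xml, const std::string& attr)
{
    std::size_t bbox = xml.find("_2D_BBOX");
    if (bbox == std::string::npos)
        return -1;                                   // no detection in this frame
    std::size_t pos = xml.find(attr + "=\"", bbox);
    if (pos == std::string::npos)
        return -1;
    pos += attr.size() + 2;                          // skip past attr="
    return std::stoi(xml.substr(pos, xml.find('"', pos) - pos));
}
```

Applied to the annotation of Example 2, GetBBoxAttribute(xml, "XMIN") yields 122 and GetBBoxAttribute(xml, "YMAX") yields 103.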


* Copyright (C) 2015-2017 Digital Barriers plc. All rights reserved.
* Contact:
* This file is part of the Papillon SDK.
* You can't use, modify or distribute any part of this file without
* the explicit written agreements of Digital Barriers.
#include <PapillonCore.h>
#include "PSafeZoneEdge2dParameters.h"
const PString SAMPLE_DIR = PPath::Join(PUtils::GetEnv("PAPILLON_INSTALL_DIR"), "Data", "Samples"); // path to find sample data: $PAPILLON_INSTALL_DIR/Data/Samples
static void RunDemo()
{
    // ************************************************************************
    // 1. Open Video Stream
    // ************************************************************************
    PInputVideoStream ivs;
    PInputVideoStream::Open(PPath::Join(SAMPLE_DIR, "vegetation.avi"), ivs).OrDie();

    // ************************************************************************
    // 2. Create SafeZoneEdge2d video analytics engine
    //    See @ref plugin_analyticsSafeZoneEdge2D to learn more
    // ************************************************************************
    PSafeZoneEdge2dParameters safeZoneEdge2dParameters = PSafeZoneEdge2dParameters::BuildDefaultParameters();
    PAnalytics safeZoneEdge2d;
    PAnalytics::Create("SafeZoneEdge2d", safeZoneEdge2dParameters, safeZoneEdge2d).OrDie();

    // ************************************************************************
    // 3. Analyse the video stream and retrieve the alarms/events detected
    //    by SafeZoneEdge2d
    // ************************************************************************
    // Get the framerate of the video stream
    int fps = static_cast<int>(ivs.GetFrameratePerSecond());
    if (fps <= 0)
        fps = 8; // unknown framerate: assume the optimal one

    // Analytics should be performed at about 8 FPS for best results
    // => skip some frames if the source framerate is too high
    int skipFrames = 0;
    if (fps >= 12)
        skipFrames = fps / 8 - 1;

    PFrame frame;
    while (true)
    {
        PTimer timer;

        // Skip some frames if FPS is too high, to get close to 8 FPS
        for (int i = 0; i < skipFrames; ++i)
            ivs.GetFrame(frame);

        // Get the frame to be analysed; stop at the end of the stream
        if (ivs.GetFrame(frame).Failed())
            break;

        // Apply video analytics: a PEvent object is generated each time
        // an object (e.g. a person) is detected
        PList events;
        if (safeZoneEdge2d.Apply(frame, events).Failed())
            continue;

        // Has SafeZone2D detected something in this frame?
        if (!events.IsEmpty())
        {
            PEvent event;
            events.Get(0, event);
            P_LOG_INFO << event; // display XML metadata
        }

        // Display the video stream and simulate a real-time stream:
        // wait so that the loop runs at roughly the source framerate
        int pause = 1000 / (fps / (skipFrames + 1)) - static_cast<int>(timer.ElapsedMs());
        if (pause < 0)
            pause = 0;
        frame.Display("Papillon SDK - SafeZoneEdge2d example", pause);
    }
}

int main()
{
    RunDemo();
    return 0;
}