[tool] [tutorial] Free AI Person Detection for Blue Iris

Discussion in 'Customizing' started by GentlePumpkin, Mar 19, 2019.

    This program analyzes motion in Blue Iris cameras in real-time using Artificial Intelligence. It notifies you if a human or an object of your choice is detected and can send alert images to Telegram.

    It is based on DeepQuestAI, a free multipurpose AI that you can install on your own server (e.g. the server Blue Iris is running on).


    processing1.53.png

    Version history:

    v1.55
    bug b6 fixed; contains bug b4

    v1.54
    bug in the camera tab fixed, contains bug b4 and b6

    v1.53
    added privacy mask feature, clean redesign of the history tab, detected objects now can be marked with a rectangle around them (replacing the image cutout feature), AI Tool now can be resized/maximized, some more visual enhancements; contains bug b4

    v1.43
    BIG upgrade: introducing user interface, running in background, statistics etc; still contains bug b4 and b5

    v0.6:
    fixed bug b3; change regarding b2: program will no longer try to upload an image to Telegram again because retrying to upload does not help to resolve the problem; contains b4

    v0.5: added the possibility to call multiple trigger urls; contains bug b3, b4

    v0.4: like v0.3, fixed bug b2; contains bug b4

    v0.3:
    added Telegram image notifications (optional); contains bug b2, b4

    v0.2:
    like v0.1, fixed the bug b1

    v0.1:
    first release; contains bug b1

    Known bugs:

    b6: sometimes the log says the resolution of the camera mask .png is too small although it isn't.

    b5: the image cutout feature (stores cutouts containing the detected elements in 'Output Path' if specified) is completely buggy, as I haven't found time to get to the bottom of it yet. As long as no 'Output Path' is specified, this feature is disabled anyway.

    b4: Telegram image upload sometimes fails. This probably has something to do with Telegram's anti-spam measures. Not fixed yet.

    b3: If no trigger URL is specified, the program crashes. Occurred in v0.5, fixed since v0.6.

    b2: Sometimes Telegram is not reachable, which caused the program to crash. Starting from v0.4, the program will try to upload the alert photo again; if that does not work, the upload is skipped. Annoyingly, the underlying problem that Telegram uploads sometimes fail remains. Occurred in v0.3.

    b1: When more images were found and analyzed after the first analysis, the program would not delete them. Occurred in v0.1, fixed since v0.2.

    NEW version main enhancements:
    • new handy GUI
    • huge speed enhancement (one image analysis just takes 2 seconds)
    • running in background (Windows tray)
    • a camera cooldown feature (one alert per event)
    • detection mask for selected areas
    • statistics
    • simplified DeepQuest AI install

    Key features:
    • analyze images stored in an input folder using DeepQuestAI for selected objects *and humans ;)
    • call one or multiple trigger urls if a specified object is found
    • send alert images to Telegram using a bot (optional)
    • it can be configured which objects trigger an alert, for example person, bicycle, car etc.
    • a cooldown can be set so that only one alert is sent per event
      • If the software sends an alert, the cooldown time starts (e.g. 3 min). If more objects are detected during the following 3 minutes, the cooldown is reset. During this time, no further alerts will be triggered. If nothing is detected during the cooldown time, the specific camera will be "rearmed".
    • create statistics for every camera
    • camera-specific profiles
    • mask image areas where nothing should be detected
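    The cooldown behaviour can be sketched in a few lines of Python. This is an illustrative model only (the name `CameraCooldown` is made up; this is not AI Tool's actual code):

```python
import time

class CameraCooldown:
    """Illustrative model of the per-camera cooldown ("one alert per event").

    After an alert fires, every further detection inside the cooldown
    window restarts the timer instead of firing a new alert.
    """

    def __init__(self, cooldown_seconds):
        self.cooldown_seconds = cooldown_seconds
        self.last_detection = None  # time of the most recent detection

    def on_detection(self, now=None):
        """Return True if this detection should fire an alert."""
        now = time.monotonic() if now is None else now
        if self.cooldown_seconds == 0:  # 0 minutes = feature disabled
            return True
        armed = (self.last_detection is None
                 or now - self.last_detection >= self.cooldown_seconds)
        self.last_detection = now       # every detection resets the timer
        return armed
```

    With a 3-minute cooldown, `CameraCooldown(180)` fires on the first detection and then stays silent until the camera has seen nothing for 180 seconds.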

    So how does the AI detection with Blue Iris work in general:
    If Blue Iris detects an alert on a camera (motion detection or external input), a still picture is stored in the input folder of the AI Software. The AI Software first associates this image with the matching camera profile (which tells the software which objects should cause an alert on this camera, whether a trigger URL should be called, etc.). Then the image is analyzed and, in case a relevant object was detected, the actions configured in the camera profile are executed, e.g. triggering the camera.

    Blue Iris only creates alert images that can be analyzed when the camera is triggered, so we have to duplicate the camera we want to monitor with AI to ensure that an alert is only caused if the AI actually detected something. Therefore we create a camera duplicate in Blue Iris that acts as the AI input and is configured to be quite sensitive, to catch possible movements. The stills coming from this camera are then analyzed by the AI Software, and only if something relevant is detected is the actual camera triggered. This camera, of course, should not be triggered by motion detection but only by the AI Software.

    This might sound a bit difficult, but as Blue Iris is a very sophisticated piece of software, the camera duplicates don't even need additional resources. Chapeau!

    So the principle procedure is:
    1. the camera duplicate that acts as the inputting camera detects motion and saves an image into the 'Input Path'
    2. the AI Software detects the new image, associates it with a configured camera profile and sends the image to the locally installed DeepQuestAI instance for analysis.
    3. DeepQuestAI analyzes the image and returns the results to the AI Software
    4. If the image contains one of the 'relevant objects' specified in the camera tab for this individual camera, the actions specified for this camera are executed (call trigger url, send image to Telegram).
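    Step 2 (associating an image with a camera profile) boils down to a filename-prefix match. A minimal illustrative sketch (hypothetical helper and data layout, not AI Tool's actual code):

```python
def match_camera(filename, profiles):
    """Associate an alert image with a camera profile by filename prefix.

    `profiles` maps a camera name to its 'Input file begins with' value
    (hypothetical representation). Returns the matching camera name, or
    'Default' when nothing matches, mirroring AI Tool's default profile.
    """
    for camera, prefix in profiles.items():
        if prefix and filename.startswith(prefix):
            return camera
    return "Default"
```

    Blue Iris names the stills after the camera, so the duplicated camera's prefix is enough to route each image to the right profile.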

    Interface Overview:
    Overview Tab
    Shows the current state (Processing/Running), the software version, and informs you in case an error occurs. Clicking on the red error message will then open the Log file.
    If you use the Telegram feature, you have to expect about one error per week, caused by the Telegram upload bug: Telegram blocks the upload of the image, the AI Software detects that, informs you and stores the image that could not be uploaded in [Software Main Directory]/errors/.

    overview.png
    Stats Tab
    Obviously it is planned to extend this tab in the future.
    Currently, the 'Input Rates' statistic contains per-camera information on how many of the images that were checked contained a) no objects ('False Alerts'), b) irrelevant objects ('irrelevant Alerts') or c) relevant objects that finally caused an alert ('Alerts').
    This is meant to help you configure the sensitivity of BI's motion detection and to compare the effectiveness of different configurations and technologies (software motion detection vs. PIR sensors).

    stats1.53.png
    History Tab
    Shows alert images that recently passed analysis together with the time and date the still was taken, the camera it was associated with and whether (✓) or not (X) relevant objects were detected in it.

    For convenient browsing, the left list can show the exact objects that were detected if the AI Tool Window is larger (i.e. maximized).

    Enabling 'Show Objects' will mark all relevant detected objects in the image with red rectangles and 'Show Mask' will additionally show the areas of the selected camera where detections will not cause an alert.

    history1.53.png

    historyfull1.53.png

    Cameras Tab
    Here you can give every camera its own profile, specifying which objects cause an alert, which actions should be executed when one of these objects is detected on this specific camera and the individual camera cooldown time.

    cameras1.53.png
    Settings Tab
    Here you tell the software where Blue Iris saves the alert images ('Input Path') and the DeepStack URL ("localhost:81").
    In case you want to use the Telegram feature, you input the Telegram credentials here as well.
    If you experience errors, you can enable 'Log everything' for troubleshooting purposes. The AI Software will then log most of what it does into the Log.txt file, which you can find in the Software Main Directory (where 'aitool.exe' is stored).

    settings1.53.png

    Setup:
    The setup seems to be quite complicated, but actually it's quite simple; I'm just writing a lot to help you understand how the software works.

    Overview:
    1. Install DeepQuestAI
    2. Configure Blue Iris
    3. Configure the AI Software according to your setup and requirements

    optional: Send alert images to Telegram
    optional: Create a detection mask
    optional: Run AI Tool as a Windows Service

    1. Install DeepQuestAI

    DeepQuestAI recently released a Windows installer, so we no longer need the complicated Docker installation. If you have the Docker version running, you should uninstall it first.

    1.1 Although free, DeepQuestAI needs an API key, so we have to register an account. Create an account at Sign Up, choose the free plan (that is sufficient for our use), go to the portal (Dashboard), click 'Install DeepStack', select 'Windows' and download the installer. While downloading, you can return to the dashboard and copy your API key. Notice that on the Dashboard it says 'Expires: Unlimited'.

    1.2 As soon as DeepQuestAI is installed, start DeepQuestAI (from now on I'll call it DQAI :D ), hit 'Start Server', input your API key, select the 'Detection API' and change the Port from 80 (Blue Iris needs this port for UI3) to e.g. 81. Finally click 'Start Now'.

    1.3 You will rarely need it, but the web interface of DQAI is now accessible by opening "localhost:81" with your web browser. Other devices on your network can access the interface using your Blue Iris IP and port 81, e.g. "192.168.178.2:81". Notice that the web interface now gives an expiry date (2 years). I don't know if the API expires or not, but getting a new API key every 2 years isn't a great problem imho.

    Now the actual software that analyzes the images is already running.
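    For the curious, the detection endpoint DQAI exposes can be called directly. The sketch below assumes the port chosen in step 1.2 and the third-party requests library; the endpoint path and JSON shape follow the DeepStack documentation, but verify them against your installed version:

```python
DEEPSTACK_URL = "http://localhost:81/v1/vision/detection"  # port from step 1.2

def detect_objects(image_path, url=DEEPSTACK_URL):
    """POST one alert image to the local DeepStack detection endpoint.

    DeepStack expects the image as a multipart form field named 'image'
    and answers with JSON like {"success": true, "predictions": [...]}.
    """
    import requests  # third-party: pip install requests
    with open(image_path, "rb") as f:
        return requests.post(url, files={"image": f}, timeout=30).json()

def relevant_labels(result, relevant):
    """Pick out the detected objects that should cause an alert."""
    return [p["label"] for p in result.get("predictions", [])
            if p["label"] in relevant]
```

    Each prediction also carries a confidence value and a bounding box, which is what AI Tool uses to draw the red rectangles in the History tab.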

    2. Configure Blue Iris

    I assume you are familiar with Blue Iris, so I'll keep the description simple. You have to do steps 2.3 - 2.6 for every camera that you want analyzed by the AI Software.

    2.1 Create 'Input Path' folder
    We need a directory where BI stores all the images possibly containing alerts. We can already add this path to Blue Iris by opening the settings of Blue Iris, then 'Clips and archiving', then clicking on one of the aux folders in the list on the left (if you click on e.g. 'Aux_7' and don't move the cursor for 1-2 s, you will be able to change the displayed name) and then creating a new folder in the Blue Iris main directory. We can name this folder for example "aiinput". We can furthermore limit the folder size to e.g. 5 GB, so that old images are automatically removed.

    2.2 Enable URL triggering feature in Blue Iris
    URL triggering is disabled by default, so to be able to trigger a camera in Blue Iris via URL, you have to do the following in Blue Iris:
    1. go to Setting->Webserver->Advanced and disable 'use secure session keys and login page'.
    2. go to Settings->Users and either select a user and copy the password, or create a new administrator user. The credentials will be needed in step 3.4.5 to build the trigger URL.
    2.3 Duplicate camera
    Now we have to create a camera duplicate whose only purpose is to save an image when a motion is detected, so that the AI Software can analyze it. So add a new camera, give it a name that makes sense (e.g. if your original camera was called 'frontyard', call it 'aifrontyard'), and under type select 'copy from another camera' and choose the appropriate one.

    2.4 Disable unnecessary stuff in the duplicated camera
    Keep in mind that this camera's only job is to detect motion and then save a still image into the folder we created in 2.1, so disable all features on this camera that are not needed (recording, pre-trigger, etc). Because Blue Iris is already prepared to work with camera clones, it is not necessary to lower the resolution to save CPU resources. Quite the opposite: if the camera stream URL isn't changed, there will be zero additional CPU usage, whereas changing the stream URL to e.g. a profile with a lower resolution will cause additional CPU load.

    Additionally you can go to the 'General' tab and check 'Hidden', which will hide this duplicate camera from the Blue Iris UI3 (otherwise you suddenly have twice as many cameras as before). This is really useful, as it keeps your Live View page tidy.

    2.5 Store alert images in 'Input Path'
    Now go to Record, check 'JPEG snapshot each (mm:ss)', select the folder you created in step 2.1, check the box 'Only when triggered' and set the interval to e.g. 0:05.0 (one image every 5 seconds). Furthermore, you might want to disable 'Create Alert list images when triggered', because otherwise a lot of false-alarm images (remember we set the motion detection to be very sensitive) will be stored in your alerts folder.
    Now go to 'Trigger' and set the break time 'End trigger unless retriggered' to e.g. 4 s, so that a short alert only causes one image to analyze. If you think that the AI Software might overlook an object "on first sight" because it's only partly visible (which most of the time is no problem at all for the AI Software), you can also make the break time longer than the 5 s interval. In this case, multiple images will be analyzed by the AI Software.

    2.6 Disable motion detection for original camera
    Finally, we have to disable motion detection and other triggers on the original camera ('frontyard'), so that nothing except the AI Software triggers the original camera. To do that, we open the camera settings of our original camera, go to 'Trigger' and uncheck all boxes in the 'Sources' area.
    As the new AI Software continuously runs in the system tray, it is no longer necessary to call the program every time a new image is created, so in case you already used previous versions of the AI Software, you should disable 'run a program or execute a script' in the 'Alerts' tab.

    Furthermore, in case you are working with multiple profiles, ensure to apply all changes to all profiles.
    So now you should have every camera twice, one that inputs potential alerts and the second one that is only triggered if there is actually something detected.


    3. Configure the program according to your setup and requirements

    Setup and configure the AI Software
    3.1
    Download the latest version of the attached program, unzip it where you want and start 'aitool.exe'.

    3.2 Go to the 'Settings' tab, add the 'Input Path' we created in step 2.1 and ensure that the 'Deepstack URL' is correct. Hit 'Save'.

    3.3 The AI Software already contains a default profile, and as long as no other profile matches an inputted image, whatever is specified in the Default profile will happen. That is useful, but we will want to give every camera its own profile, so that the specific camera can be triggered in Blue Iris etc.

    Configure the individual camera profiles
    3.4
    The following steps must be repeated for every camera and will be conducted using the example cam 'frontyard', of which we got the inputting camera 'aifrontyard' after following the steps 2.1 to 2.6. So the alert image names start with 'aifrontyard' and the camera 'frontyard' is the actual camera that we use to record and watch via UI3.

    .1 Open the 'Cameras' tab, click 'Add Camera', name the camera "frontyard" and hit ENTER.

    .2 Blue Iris will store the alert images under names that start with the camera name, e.g. 'aifrontyard.20180326_054241.0.64.jpg'. As we store the alert images of multiple cameras in one input folder, we have to filter out the images belonging to this camera profile. Select the created entry 'frontyard' and type "aifrontyard" into the field 'Input file begins with:' to ensure that all images from the duplicated camera 'aifrontyard' are allocated to this profile.

    .3 Check all objects that you want to trigger an alert on this camera, e.g. 'Person' and 'Car'.

    .4 Input a 'Cooldown Time'; you could try 3 minutes. 0 minutes means that the cooldown feature is disabled on this camera.

    .5 If we don't specify one or multiple 'Trigger URL(s)', no trigger call will be made. If we want to call multiple URLs, we have to separate them with commas. Every URL should start with 'http://'. The trigger URL is not limited to Blue Iris: practically anything that offers a URL one can call for a trigger (e.g. home automation) can be triggered.

    Take the following url template and replace user, password (both from step 2.2) and the short cam name with yours:

    Code:
     http://localhost:80/admin?trigger&camera=[short cam name]&user=[user]&pw=[password] 

    In our example with the admin account name "admin" and the password "todsicher":
    Code:
    http://localhost:80/admin?trigger&camera=frontyard&user=admin&pw=todsicher
    If you filled in everything, copy/paste the whole url into your webbrowser and make sure it causes an alert on the camera 'frontyard'. Finally, input the URL into the 'Trigger URL(s)' field.

    .6 Hit 'Save'.

    Leave 'Send alert images to Telegram' unchecked for now, you can activate it later.
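    For testing, the trigger URL from step 3.4.5 can also be built and fired from a short Python script (illustrative; any HTTP client works):

```python
from urllib.parse import quote

def build_trigger_url(host, camera, user, password):
    """Assemble the Blue Iris trigger URL from the step 2.2 credentials.

    quote() protects special characters in the user name or password.
    """
    return (f"http://{host}/admin?trigger&camera={quote(camera)}"
            f"&user={quote(user)}&pw={quote(password)}")

# Firing the trigger is then a plain HTTP GET, e.g.:
#   urllib.request.urlopen(build_trigger_url("localhost:80", "frontyard",
#                                            "admin", "todsicher"))
```

    This produces exactly the URL shown above, so it is an easy way to script the same browser test.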


    4. Send alert images to Telegram (optional)

    The program can send trigger images using a Telegram bot. The program needs two strings to connect to Telegram: 1. the Telegram API key (token) and 2. the chat ID of the chat between you and the bot.
    4.1 To create a bot and get the API key, see: Bots: An introduction for developers
    4.2 Now contact the bot you created with the telegram account you want to receive the notifications on.
    4.3 Retrieve the chat-id: https://stackoverflow.com/questions/32423837/telegram-bot-how-to-get-a-group-chat-id
    4.4 open the AI Software, head over to 'Settings', input 'Telegram Token' and 'Telegram Chat ID' and hit 'Save'.
    4.5 Enable the option 'Send alert images to Telegram' for all cameras of which you want to send alert images using Telegram.​
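    Under the hood this uses the Telegram Bot API's sendPhoto method. A hedged sketch of what such an upload looks like (assumes the third-party requests library; this is not AI Tool's actual code):

```python
def send_photo_url(token):
    """Telegram Bot API endpoint for uploading a photo with your bot token."""
    return f"https://api.telegram.org/bot{token}/sendPhoto"

def send_alert_photo(token, chat_id, image_path):
    """Upload one alert image to the chat between you and your bot."""
    import requests  # third-party: pip install requests
    with open(image_path, "rb") as photo:
        r = requests.post(send_photo_url(token),
                          data={"chat_id": chat_id},
                          files={"photo": photo},
                          timeout=30)
    return r.json()
```

    The occasional upload failures described as bug b4 show up here as non-OK responses from this endpoint.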

    Create a detection mask (optional)

    You can define a detection mask for every camera to keep detections in the masked areas from causing alerts. This is very useful if the AI detection keeps finding false objects in one area.

    The privacy mask currently must be created using an external paint program, p.e. Paint.Net.
    The mask file needs to have the exact same resolution as the camera images that are saved into the Input folder. The mask must be stored as a .png file in the subdirectory ./cameras/ (where the camera profile files are stored as well).
    The mask image needs to have the same name as the profile file for the selected camera. So if the profile file is 'Garage indoor.txt', the mask file for this camera needs to be called 'Garage indoor.png'.

    All areas in the image that have an opacity of 10 or more are masked (where 255 is 100% solid), so you can paint with an opacity of e.g. 150 so that you can later still see through the masked areas of the overlay in the History tab of AI Tool. You can choose any color; each one will work.

    Using Paint.Net, the following is very convenient:
    1. load a still of the selected camera (e.g. from the Input folder)
    2. add a second layer and paint the mask in the 2nd layer
    3. then remove the first layer containing the camera image
    4. save the image using the correct name as a .png into ./cameras/​
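    The masking rule above (alpha >= 10 out of 255 counts as masked) is easy to model. A small illustrative sketch, assuming the mask's alpha channel has been read into a row-major 2D list (hypothetical representation):

```python
def is_masked(alpha, threshold=10):
    """A pixel counts as masked once its alpha reaches the threshold.

    Per the rule above, alpha >= 10 (out of 255) means 'masked', so a
    semi-transparent paint opacity like 150 both masks the area and
    stays see-through in the History tab overlay.
    """
    return alpha >= threshold

def detection_suppressed(mask_alpha, x, y):
    """True if a detection at pixel (x, y) falls inside the mask."""
    return is_masked(mask_alpha[y][x])
```

    In practice AI Tool compares a detection's position against the mask; this sketch only shows the per-pixel threshold.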

    Run AI Tool as a Windows service (optional)

    For AI Tool to be able to run as a Windows service, a third-party program is required: NSSM (the Non-Sucking Service Manager).

    As the DQAI Windows version doesn't support autostart yet, the DQAI Docker version is required for the following (otherwise AI Tool will run as a service, but DQAI won't be started). You can find the install guide for DQAI Docker in the 'deprecated CMD Version (v0.1 - v0.6)' spoiler.

    Please follow these steps:

    1. Download NSSM from here: Direct Download or open Download Page
    2. Extract it to a folder on your hard drive
    3. Open an administrative command prompt
    3.1 Win 10: press the Search button, Win7: open the Start menu
    3.2 Type in cmd
    3.3 Right click on Command Prompt and select Run as administrator
    1.png

    3.4 Click Yes on the prompt
    2.png

    4. Within the CMD, navigate to where you extracted NSSM (e.g. cd /, press Enter; cd nssm-2.24-101-g897c7ad, press Enter; cd win32, press Enter)
    3.png

    5. In CMD now type nssm.exe install AITool and press enter
    4.png

    6. You will be presented with the NSSM GUI. You need to:
    6.1 Browse to the AITool path and double click on the AITool.exe
    6.2 Ensure the startup directory is auto-filled with the path to the AITool.exe folder
    6.3 Ensure the Service name is correct
    5.png

    6.4 Click on Details and fill out Display name and description (for example AITool in both)
    6.png

    6.5 Click on Log On and select This account and enter your Windows username and password (password needs to be entered twice in the correct boxes)
    7.png

    6.6 Press Install service. If you get a success message, press OK and reboot your Windows PC.
    8.png
    7. After reboot check services and ensure the AITool service is running
    9.png

    8. Without manually running AITool, generate some valid alerts and ensure they are being sent to your mobile/tablet device.

    Many thanks to MnM for testing and describing this solution!


    Updating to a new version

    This will keep all the camera profiles and the software settings:
    1. Close the old AI Tool.
    2. Delete everything in the software main folder (where the old aitool.exe is located), except the /cameras subfolder.
    3. Open the zip containing the new version.
    4. Extract everything except the /cameras subfolder into the software main folder.
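    The four steps can also be scripted. An illustrative Python sketch (the `upgrade` helper and its paths are examples, not an official updater):

```python
import shutil
import zipfile
from pathlib import Path

def upgrade(main_dir, new_zip):
    """Sketch of the four update steps above (use at your own risk).

    Deletes everything in the software main folder except ./cameras,
    then extracts the new version's zip, again skipping /cameras.
    """
    main = Path(main_dir)
    for item in main.iterdir():
        if item.name == "cameras":
            continue                    # keep camera profiles and masks
        if item.is_dir():
            shutil.rmtree(item)
        else:
            item.unlink()
    with zipfile.ZipFile(new_zip) as zf:
        for name in zf.namelist():
            if not name.startswith("cameras/"):
                zf.extract(name, main)
```

    Close AI Tool before running anything like this, exactly as step 1 says.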

    I hope that the AI Software works fine for you. If you have trouble with the tutorial or the software, don't hesitate to write me a PM!




    deprecated CMD Version (v0.1 - v0.6)

    Screenshot of the program running and the output folder in the background:
    screen2.png



    Key features:
    • analyze images stored in an input folder using DeepQuestAI for certain objects *and humans ;)
    • call one or multiple urls if a triggering object is found (optional although this actually is the main functionality)
    • it can be configured which objects trigger an alert, for example person, bicycle, car etc.
    • save cutouts containing the detected objects in an output folder (optional)
      • objects that trigger an alert are saved in the output folder
      • objects that do not trigger an alert but were detected anyway are saved in a subfolder '/other objects/' of the output folder
    • send alert images to Telegram using a bot (optional)
    BlueIris specific features:
    • only analyze images in the input folder that start with a certain string
      • blue iris alert images natively start with the camera name, so using this feature the program only analyzes images from a specific camera
    • delay the analysis start by x seconds and run another analysis if, during the previous analysis, new images were saved in the input folder
    • delete images that were analyzed
    A screenshot of the program running:
    screen.png


    Who is this for:
    Everyone who
    a) has qualms about sending private CCTV data to third parties and
    b) does not want to pay a monthly fee (because this AI solution presented here is completely free) and
    c) maybe even has an NVidia GPU that speeds up AI analysis from multiple seconds (without a GPU) to a fraction of a second.
    Who this is not necessarily for:
    Everyone who
    a) does not have enough CPU performance left (Docker running the AI Software is demanding) and
    b) needs high reliability and stability. This is the first larger program I've written, so you might have to expect unexpected behavior;) and sudden crashes.​

    So how does the AI detection with Blue Iris work in general:
    The software can only analyze .jpg images, so we have to get such an image every time Blue Iris thinks there might be motion. Blue Iris actually has this feature integrated (it is used to work with Sentry AI), but it is not accessible by programs like mine, so we have to find another way.

    We simply duplicate the camera in Blue Iris. In the camera duplicate, we disable recording and all other CPU-heavy features (Optimizing Blue Iris's CPU Usage | IP Cam Talk) and then we configure the motion detection to be quite sensitive. Furthermore, we configure that every time a motion is detected, an image should be taken and the AI analysis program should be started.

    The program then analyzes the image taken and, in case a person (or whatever we configured to cause an alert) is found in it, the program will call the Blue Iris alert URL for the original camera, so this camera (not the duplicate we created) is triggered. Because the AI software needs some time (per image: a fraction of a second with a decent NVidia GPU and 5-15 s without one), we have to set the pre-trigger buffer of the original camera to cover the delay time.



    Setup:
    The setup seems to be very complicated, but actually it's quite simple; I'm just writing a lot to help you understand how the software works.

    Overview:
    1. Install Docker and install and start DeepQuestAI container in Docker
    2. Configure the program according to your setup and requirements
    3. Configure Blue Iris

    optional: Send alert images to Telegram


    1. Install Docker and install and start DeepQuestAI container in Docker

    Docker can be installed on Linux, macOS and Windows. As BI users most likely :D will have a Windows PC running already, I will describe the setup process for Windows 7 (because imho someone who runs a Win10 PC containing all sensitive CCTV files probably does not care that much about privacy #nooffense). If you want to use your NVidia GPU, as far as I know, you need to run Docker on Linux. Here is a guide on how to use DeepQuestAI with the GPU: Using DeepStack with NVIDIA GPU — DeepStack 0.1 documentation .
    1.1 First we want to download and install Docker Toolbox: Docker Toolbox overview

    1.2 Install DeepQuestAI in Docker: open the Docker Quickstart Terminal (shortcut on the desktop) and enter 'docker pull deepquestai/deepstack' as soon as Docker finishes setting up everything. The installation of DeepQuestAI takes some time as well; meanwhile you can proceed with step 1.3.

    1.3 Although free, DeepQuestAI needs an API key, so we have to register an account (email not needed). Create an account at Sign Up, choose the free plan (that is sufficient for our use), go to the portal (Dashboard) and copy your API key. Notice that on the Dashboard it says 'Expires: Unlimited'.

    1.4 As soon as DeepQuestAI is installed, start DeepQuestAI(from now on I'll call it DQAI :D ) using the command 'docker run --restart=always -e VISION-DETECTION=True -v localstorage:/datastore -p 80:5000 deepquestai/deepstack'

    1.5 Docker natively uses a strange IP address, 192.168.99.100, so open this address with your web browser and you'll see the DQAI interface; now input the API key and activate it. Notice that now an expiry date is given (in 2 years). I don't know if the API expires or not, but getting a new API key every 2 years isn't a great problem imho.
    Now the actual software that analyzes the images is already running as a web service, so if Docker hadn't set it up with this unusual IP address, we would be able to access DQAI not only from the server it is running on, but from every computer in the network.



    2. Configure the program according to your setup and requirements

    2.1 Download the attached program and unzip it. The program is designed to analyze images from one camera (or at least to call only one trigger url [of one camera]), but you can run multiple programs, one for every camera, if you wish.

    2.2 open the config.txt using Notepad++ or some other text editor that does not drive you crazy :D. Now what you see looks like the following:​
    Code:
    DeepStack URL and Port: "192.168.99.100:80" (format: "url: port", example: "192.168.99.100:80")
    Trigger URL: "http://192.168.1.133:80/admin?trigger&camera=frontyard&user=admin&pw=secretpassword" (format: "url", example: "http://192.168.1.133:80/admin?trigger&camera=frontyard&user=admin&pw=secretpassword")
    Relevant objects: "person, bicycle" (format: "object, object, ...", options: see below, example: "person, bicycle, car")
    Input path: "C:/BlueIris/New/" (example: "C:/BlueIris/New/")
    Output path: "C:/BlueIris/AIDetections/" (empty to disable saving cutouts with detected objects, example: "C:/BlueIris/AIDetections/")
    Continue after detection of relevant object: "no" (options: "yes" or "no", explanation: if the first image with relevant objects is detected, analyse the remaining images aswell?)
    Input file begins with: "" (only analyze images which names start with this text, leave empty to disable the feature, example: "backyardcam")
    Start delay: "" (input how many seconds the program shall wait before starting, example: "3")
    
    Telegram option (leave empty to disable):
    Telegram Token: ""
    Telegram Chat ID: ""
    
    possible trigger objects: person, bicycle, car, motorcycle, airplane,bus, train, truck, boat, traffic light, fire hydrant, stop_sign,parking meter, bench, bird, cat, dog, horse, sheep, cow, elephant,bear, zebra, giraffe, backpack, umbrella, handbag, tie, suitcase,frisbee, skis, snowboard, sports ball, kite, baseball bat, baseball glove,skateboard, surfboard, tennis racket, bottle, wine glass, cup, fork,knife, spoon, bowl, banana, apple, sandwich, orange, broccoli, carrot,hot dog, pizza, donot, cake, chair, couch, potted plant, bed, dining table,toilet, tv, laptop, mouse, remote, keyboard, cell phone, microwave,oven, toaster, sink, refrigerator, book, clock, vase, scissors, teddy bear, hair dryer, toothbrush
    Please read through the explanations and modify all parameters according to your needs and save the file. When setting up your configuration, please mind the following information:

    Input path: The input path is where Blue Iris stores the alert images. We can already add this path to Blue Iris by opening the settings of Blue Iris, then 'Clips and archiving', then clicking on one of the aux folders in the list on the left (if you click on e.g. 'Aux_7' and don't move the cursor for 1-2 s, you will be able to change the displayed name) and then setting it to the input path we also specified in the config.txt.
    If we want the input path to be a subfolder of the folder where the program is located, we can use a relative path, e.g. "./input/" would create a new folder 'input' in the program's base directory.

    Output path: If we don't specify an output folder, the program won't store the image cutouts containing detected objects (saves resources). The output path can be a relative path just like the input path.

    Relevant objects: Please note that despite the fact that they are listed, trigger objects consisting of multiple words (like 'fire hydrant') probably won't work due to the way the config file is processed by the program. It's the first actual program I wrote, so please excuse that.

    Input file begins with: Blue Iris will store the alert images under names that start with the camera name, e.g. 'aifrontyard.20180326_054241.0.64.jpg'. If we store the alert images of multiple cameras in that folder, we can filter out the images from the duplicated camera by setting the option 'Input file begins with:' to the name of our duplicated camera, in this case 'aifrontyard'.

    Start delay: It makes sense to set 'Start delay' to maybe 2-5 s, because BI needs a short time to save the images. If we set Blue Iris to take e.g. a snapshot every second for the 5 s following an alert, we additionally benefit from the fact that, after analyzing the first images, the program checks whether new images were saved to the folder while it was busy. Unless configured otherwise, it will analyze the new images as well.

    Continue after detection of relevant object: This option matters. If we just want an alert when something is detected, "no" is a good choice; but it also means that as soon as the first person is detected, the program calls the alert URL and stops analyzing the remaining images (less CPU-heavy). If we want to see ALL object cutouts from ALL alert images in the output path, we should set this to "yes".

    Trigger URL(s): If we don't specify one or more 'Trigger URL(s)', no trigger call is made. To call multiple URLs, separate them with commas. Every URL has to start with 'http://'. The trigger URL is not limited to Blue Iris: practically anything that exposes such a URL can be triggered.
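    To make the comma-separated format concrete, here is a hypothetical sketch of splitting and validating such a value, then firing each URL with a plain GET request (the second URL is a made-up placeholder; the tool's real behavior may differ in details):

```python
import urllib.request

# Sketch: split a comma-separated 'Trigger URL(s)' value and validate it.
def parse_trigger_urls(value: str) -> list[str]:
    urls = [u.strip() for u in value.split(",") if u.strip()]
    for u in urls:
        if not u.startswith("http://"):
            raise ValueError("trigger URL must start with 'http://': " + u)
    return urls

def fire_triggers(urls: list[str]) -> None:
    # Each URL is simply requested; the response body is ignored.
    for u in urls:
        urllib.request.urlopen(u, timeout=10)

urls = parse_trigger_urls(
    "http://192.168.1.133:80/admin?trigger&camera=frontyard&user=admin&pw=pw,"
    "http://homeautomation.local/trigger")
print(len(urls))  # 2
```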

    Alerting Blue Iris with the Trigger URL: To use the trigger URL, do the following in Blue Iris:
    1. Go to Settings -> Webserver -> Advanced and disable 'use secure session keys and login page'.
    2. Go to Settings -> Users and either select a user and copy the password, or create a new administrator user.
    3. Open the properties of the camera you want to run the AI analysis on and, under General, copy the short camera name (e.g. 'frontyard').
    4. Take the following URL and fill in the Blue Iris IP, user, password and short camera name: http://[Blue Iris IP]:80/admin?trigger&camera=[short cam name]&user=[user]&pw=[password] .

    Once you have filled in everything, copy/paste the whole URL into your web browser and make sure it causes an alert.
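    Assembling this URL can also be scripted. The helper below is a hypothetical illustration, not part of the tool; the IP, user and password are the placeholder values from the example above:

```python
from urllib.parse import quote

# Sketch of step 4: building the Blue Iris trigger URL from its parts.
# All credential values here are placeholders.
def build_trigger_url(ip: str, camera: str, user: str, password: str,
                      port: int = 80) -> str:
    # Percent-encode the variable parts in case they contain special characters.
    return (f"http://{ip}:{port}/admin?trigger"
            f"&camera={quote(camera)}&user={quote(user)}&pw={quote(password)}")

url = build_trigger_url("192.168.1.133", "frontyard", "admin", "secretpassword")
print(url)
# http://192.168.1.133:80/admin?trigger&camera=frontyard&user=admin&pw=secretpassword
```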
    Telegram option: Leave this empty for now. If you would like to receive Telegram notifications, you can set them up after getting everything else to run properly.

    Restore config.txt: If you accidentally messed up the config.txt, just delete it and run testAI.exe once; it will recreate a working template.

    A configuration might, for example, look like this:
    Code:
    DeepStack URL and Port: "192.168.99.100:80" (format: "url: port", example: "192.168.99.100:80")
    Trigger URL: "http://192.168.1.133:80/admin?trigger&camera=frontyard&user=admin&pw=secretpassword" (format: "url", example: "http://192.168.1.133:80/admin?trigger&camera=frontyard&user=admin&pw=secretpassword")
    Relevant objects: "person, bicycle, car, truck" (format: "object, object, ...", options: see below, example: "person, bicycle, car")
    Input path: "C:/BlueIris/New/" (example: "C:/BlueIris/New/")
    Output path: "C:/BlueIris/AIDetections/" (empty to disable saving cutouts with detected objects, example: "C:/BlueIris/AIDetections/")
    Continue after detection of relevant object: "no" (options: "yes" or "no", explanation: if the first image with relevant objects is detected, analyse the remaining images aswell?)
    Input file begins with: "aifrontyard" (only analyze images which names start with this text, leave empty to disable the feature, example: "backyardcam")
    Start delay: "3" (input how many seconds the program shall wait before starting, example: "3")
    
    Telegram option (leave empty to disable):
    Telegram Token: ""
    Telegram Chat ID: ""
    
    possible trigger objects: person, bicycle, car, motorcycle, airplane, bus, train, truck, boat, traffic light, fire hydrant, stop_sign, parking meter, bench, bird, cat, dog, horse, sheep, cow, elephant, bear, zebra, giraffe, backpack, umbrella, handbag, tie, suitcase, frisbee, skis, snowboard, sports ball, kite, baseball bat, baseball glove, skateboard, surfboard, tennis racket, bottle, wine glass, cup, fork, knife, spoon, bowl, banana, apple, sandwich, orange, broccoli, carrot, hot dog, pizza, donut, cake, chair, couch, potted plant, bed, dining table, toilet, tv, laptop, mouse, remote, keyboard, cell phone, microwave, oven, toaster, sink, refrigerator, book, clock, vase, scissors, teddy bear, hair dryer, toothbrush
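    For anyone curious how such a file could be read, here is a hypothetical parser sketch for the 'Key: "value" (comment)' line format shown above. The real AI Tool may parse its config differently; this only illustrates the format:

```python
import re

# Each setting line looks like:  Key: "value" (explanatory comment)
# The regex captures the key (everything before the first colon) and the
# first quoted string; comment text after the value is ignored.
LINE_RE = re.compile(r'^([^:]+):\s*"([^"]*)"')

def parse_config(text: str) -> dict:
    settings = {}
    for line in text.splitlines():
        m = LINE_RE.match(line.strip())
        if m:
            settings[m.group(1).strip()] = m.group(2)
    return settings

sample = 'Input path: "C:/BlueIris/New/" (example: "C:/BlueIris/New/")'
print(parse_config(sample))  # {'Input path': 'C:/BlueIris/New/'}
```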


    3. Configure Blue Iris

    I assume you are familiar with Blue Iris, so I'll keep the description brief.

    3.1 First we have to create a camera whose only purpose is to start the AI detection program when motion is detected. So add a new camera, give it a sensible name (e.g. if your original camera is called 'frontyard', call it 'aifrontyard'), and under type select 'copy from another camera' and choose the appropriate one.

    3.2 Keep in mind that this camera's only job is to detect motion and then start the AI program, so disable all features on it that aren't needed (recording, pre-trigger, etc.). Because Blue Iris is built to work with camera clones, it is not necessary to lower the resolution to save CPU. Quite the opposite: if the camera stream URL isn't changed, there is zero additional CPU usage, while changing the stream URL to e.g. a lower-resolution profile will cause additional CPU load.

    3.3 Now go to Record, check 'JPEG snapshot each (mm:ss)', select the folder you defined as the input folder in the AI detection program, check 'Only when triggered' and set the interval to e.g. 0:01.0 (one image every second). You might also want to disable 'Create Alert list images when triggered', because otherwise a lot of false-alarm images (remember, we set the motion detection to be very sensitive) will be stored in your alerts folder.

    3.4 Go to 'Trigger' and set the break time 'End trigger unless retriggered' to e.g. 5 s. When setting this value, remember: if you set the interval in step 3.3 to 1 s, then 5 s means that 5 images are created and have to be analyzed one after the other.
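    The arithmetic behind this relationship is simple; a tiny sketch using the example values above:

```python
# Number of snapshots produced per trigger event: break time divided by
# the JPEG snapshot interval from step 3.3. Values are the examples above.
def images_per_trigger(break_time_s: float, snapshot_interval_s: float) -> int:
    return int(break_time_s // snapshot_interval_s)

print(images_per_trigger(5, 1))  # 5 images to analyze per event
```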

    3.5 Last but not least, go to 'Alerts', check 'run a program or execute a script', click 'configure' and select testAI.exe (in the attachment) as the file to run. For the initial phase of using the AI, it can be useful to set the window to 'normal' rather than 'hide', as this makes troubleshooting easier.
    Now walk around in front of your cameras or, if you are feeling lazy, right-click on the new camera we created and select 'Trigger now'. After the AI program has done its analysis, check the trigger clips of the original camera and the image cutouts in the output folder. (If you just clicked 'Trigger now', you should get no output images and no alert, because hopefully no one is sneaking around your property trying to steal your car. Otherwise get the shotgun and .... no no no, that's a bad attitude :D)

    If you are still too lazy to go outside, you can take any JPG picture containing an object you configured to trigger an alert, put it into the input folder and prefix the image name with the camera name (e.g. 'aifrontyard'). Then manually trigger the camera.

    If everything works, you have now lowered your false-alert rate significantly while improving the rate of correct detections.

    4. Configure Telegram notifications (optional)

    The program can send trigger images via a Telegram bot. It needs two strings to connect to Telegram: 1. the Telegram API key (token) and 2. the chat ID of the chat between you and the bot.
    4.1 To create a bot and get the API key, see: Bots: An introduction for developers
    4.2 Now contact the bot you created from the Telegram account you want to receive the notifications on.
    4.3 Retrieve the chat ID: https://stackoverflow.com/questions/32423837/telegram-bot-how-to-get-a-group-chat-id
    4.4 Open the config.txt of the AI program and fill in 'Telegram Token' and 'Telegram Chat ID'.
    4.5 Start testAI.exe and check that under Options it says Telegram notifications are enabled.
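    Under the hood, sending an alert image boils down to a call to the official Telegram Bot API's sendPhoto method. The stand-alone sketch below is illustrative, not the tool's code: the token and chat ID are placeholders, and the multipart encoding is deliberately minimal:

```python
import urllib.request
import uuid

# The Bot API endpoint for sending photos; the token comes from step 4.1.
def api_url(token: str) -> str:
    return f"https://api.telegram.org/bot{token}/sendPhoto"

# Minimal multipart/form-data upload of one JPEG with a chat_id field.
# Not production-grade: no error handling, retries, or MIME detection.
def send_photo(token: str, chat_id: str, image_path: str) -> None:
    boundary = uuid.uuid4().hex
    with open(image_path, "rb") as f:
        image = f.read()
    body = (
        f'--{boundary}\r\nContent-Disposition: form-data; name="chat_id"\r\n\r\n'
        f'{chat_id}\r\n'
        f'--{boundary}\r\nContent-Disposition: form-data; name="photo"; '
        f'filename="alert.jpg"\r\nContent-Type: image/jpeg\r\n\r\n'
    ).encode() + image + f'\r\n--{boundary}--\r\n'.encode()
    req = urllib.request.Request(
        api_url(token), data=body,
        headers={"Content-Type": f"multipart/form-data; boundary={boundary}"})
    urllib.request.urlopen(req, timeout=30)

print(api_url("123456:ABC-DEF"))  # endpoint the upload is POSTed to
```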
     

    Attached Files:

    Last edited: Jul 21, 2019 at 7:00 AM
  2. GentlePumpkin

    GentlePumpkin Getting the hang of it

    Joined:
    Sep 4, 2017
    Messages:
    39
    Likes Received:
    45
    I just updated the software because there was a bug: when additional images were found and analyzed after the first analysis, the program would not delete them.
     
  3. GentlePumpkin

    GentlePumpkin Getting the hang of it

    Joined:
    Sep 4, 2017
    Messages:
    39
    Likes Received:
    45
    The latest version, v0.5, can send alert images to Telegram and call multiple trigger URLs (e.g. home automation server, Blue Iris, etc.).
     
  4. MnM

    MnM Young grasshopper

    Joined:
    May 14, 2014
    Messages:
    44
    Likes Received:
    3
    Hi - good work on this.
    I have installed it and given it a test. Daytime detection seems to work really well. Nighttime is not working as it's supposed to: the real camera detects movement at night, but the cloned camera + DeepStack cannot detect anything.

    I wonder if I am doing something wrong?
    I plan to test different values for Object Size and Contrast; maybe that will make a difference.

    How is your night time detection?
     
  5. Cameraguy

    Cameraguy Getting comfortable

    Joined:
    Feb 15, 2017
    Messages:
    828
    Likes Received:
    463
    Cool, I'll be keeping an eye on this thread
     
  6. GentlePumpkin

    GentlePumpkin Getting the hang of it

    Joined:
    Sep 4, 2017
    Messages:
    39
    Likes Received:
    45
    Hello, and sorry for the late response. I was very busy, which is why the completely new GUI version wasn't released until now.

    I have two ideas regarding your problem:
    1. Maybe you have a night profile and forgot to enable 'run a program or execute a script' in the camera's night profile. That would mean the software isn't even started. I guess you didn't forget that, but I just wanted to make sure.
    2. Maybe the contrast at nighttime is too low, so that DeepStack doesn't detect the objects? My experience has been extremely positive even when the contrast between object and background was nearly zero, but maybe?
    Actually, though, I would recommend upgrading to the version I finally released today. I rewrote it nearly from scratch, so many teething problems from the previous versions should be gone. If your problem persists, just write a PM and we'll see what we can do.
     
  7. GentlePumpkin

    GentlePumpkin Getting the hang of it

    Joined:
    Sep 4, 2017
    Messages:
    39
    Likes Received:
    45
    Huge upgrade: GUI, running in background, statistics, one alert per event and more. Available with v1.43.
     
  8. MnM

    MnM Young grasshopper

    Joined:
    May 14, 2014
    Messages:
    44
    Likes Received:
    3
    Hi,

    Thank you again for your great work on this. I think my earlier issue (nighttime detection) was down to my BI settings. I managed to solve it.

    Now, after installing the new version, I keep getting this in the logs (full logging enabled) from all cameras.

    [08.07.2019, 06:38:38]: Starting analysis of D:\Alerts/Cam8AI.20190708_063837.0.2.jpg
    [08.07.2019, 06:38:38]: 1. uploading image to DeepQuestAI Server ...
    [08.07.2019, 06:38:38]: 2. Waiting for results ...
    [08.07.2019, 06:38:38]: 3. Received results.
    [08.07.2019, 06:38:38]: ERROR: Processing the image D:\Alerts/Cam8AI.20190708_063837.0.2.jpg failed. Enabling 'Log everything' might give more information.

    I have looked over all my settings and all looked OK.
     
  9. Forid200

    Forid200 n3wb

    Joined:
    Jul 11, 2019
    Messages:
    23
    Likes Received:
    3
    Location:
    London
    I just gave this a shot and I'm having the same problem, tried running the program as Admin too.

    @GentlePumpkin Any ideas? Unfortunately "log everything" isn't helping me much trying to determine what the issue here is.
     
    Last edited: Jul 11, 2019
  10. MnM

    MnM Young grasshopper

    Joined:
    May 14, 2014
    Messages:
    44
    Likes Received:
    3
    This seems to be a DeepStackAI issue.
    If I point AI Tool at my Docker DeepStackAI (running on a separate Linux virtual server), it works just fine.
    Maybe go to my thread 'Windows10 1903 - DeepstackAI crashes' and post there so we can get some traction on this issue?
    I've had that thread open for a while, but no word from the developers yet.
     
  11. Forid200

    Forid200 n3wb

    Joined:
    Jul 11, 2019
    Messages:
    23
    Likes Received:
    3
    Location:
    London
    Weird; ideally I'd like it running on the same box. I can open the DeepStack local web page in a browser just fine, so it appears to be at least somewhat up. I've not had any crashes, it's just not working. I'd give the Docker version a shot, but I use ESXi for virtualisation and it's getting quite late here, so I'll need to leave this project for now.
     
  12. Forid200

    Forid200 n3wb

    Joined:
    Jul 11, 2019
    Messages:
    23
    Likes Received:
    3
    Location:
    London
    In a bizarre twist of events, I just restarted my machine and upgraded Blue Iris to the latest version, and it appears to have started working... It's definitely picking up false alerts and logging them in the stats. I'll continue to monitor and see how it goes.

    Code:
    [11.07.2019, 23:29:41]: Starting analysis of G:\BlueIris\AI_ID/PorchAI.20190711_232940805.jpg
    [11.07.2019, 23:29:41]: 1. uploading image to DeepQuestAI Server ...
    [11.07.2019, 23:29:41]: 2. Waiting for results ...
    [11.07.2019, 23:29:42]: 3. Received results.
    [11.07.2019, 23:29:42]:    Detected objects:
    [11.07.2019, 23:29:42]: Response success
    [11.07.2019, 23:29:42]: 5. Camera Porch AI caused a false alert, nothing detected.
    [11.07.2019, 23:29:42]: Adding false to history list.
    
    EDIT: Just an update: everything seems to be working perfectly. However, I need to test the AI capabilities a bit more; on initial tests, it doesn't appear to recognise partial bodies. I'll see how this goes. I really needed AI in my setup: I'm sick of false alerts, and this appears to actually work, unlike Sentry...
     
    Last edited: Jul 11, 2019
    GentlePumpkin likes this.
  13. MnM

    MnM Young grasshopper

    Joined:
    May 14, 2014
    Messages:
    44
    Likes Received:
    3
    What version of Windows are you running?
     
  14. Forid200

    Forid200 n3wb

    Joined:
    Jul 11, 2019
    Messages:
    23
    Likes Received:
    3
    Location:
    London
    Windows 10 Pro version 1803, up to date with Windows updates.
     
  15. MnM

    MnM Young grasshopper

    Joined:
    May 14, 2014
    Messages:
    44
    Likes Received:
    3
    Ahhh OK - I am running the latest, which is 1903.
    I suspect you will encounter the same issue as me after you update to 1903.
     
  16. Forid200

    Forid200 n3wb

    Joined:
    Jul 11, 2019
    Messages:
    23
    Likes Received:
    3
    Location:
    London
    Did you try a reboot? I had the exact same errors as you, but the AI didn't crash. After a reboot it all started to work like magic.

    I might experiment with the Docker version to see if it's any faster; I'll just need a few extra steps as my server runs ESXi. It's already pretty fast, taking just under a second running locally on my Blue Iris box. That's still pretty insane.
     
  17. MnM

    MnM Young grasshopper

    Joined:
    May 14, 2014
    Messages:
    44
    Likes Received:
    3
    Yes, tried a reboot - it didn't help.
    With the Docker setup I am getting sub-second image analysis, but I would like to see what I get with a local Docker.
     
  18. Forid200

    Forid200 n3wb

    Joined:
    Jul 11, 2019
    Messages:
    23
    Likes Received:
    3
    Location:
    London
    It's about 600 ms for me, which is pretty nice. I might even lower the image capture interval to give the AI a better chance at detecting blurry or bad images.
     
  19. Tinbum

    Tinbum Young grasshopper

    Joined:
    Sep 5, 2017
    Messages:
    55
    Likes Received:
    4
    Location:
    UK
    Just installed this last night and I must say I really like it. Thank you.

    At the moment I haven't duplicated the cameras and I'm letting BI5 run as it always has, but I'm now getting Telegram alerts, which is great. Telegram took a little bit of setting up, but only because I couldn't tell the difference between a 0 and an O in the token.

    I have a lot of my cameras with images of stationary (permanent) cars in. Will the AI learn that these aren't alerts?

    It hasn't quite grasped that a horse isn't a person, though.

    Still setting it all up though.

    Could I make one suggestion: add the possibility to resize the console?
     
  20. MnM

    MnM Young grasshopper

    Joined:
    May 14, 2014
    Messages:
    44
    Likes Received:
    3
    Tinbum - what Windows version are you running?