[tool] [tutorial] Free AI Person Detection for Blue Iris

I have attached some .bat files that GentlePumpkin created.

start ai tool - will stop the service and start the exe so that changes to the AITool.exe config can be made
start ai service - after you close AITool.exe, run this .bat file to restart the service
Both .bat files will need to be edited (in Notepad, for example) to enter your path to AITool.exe - see the screenshot for an example

In regards to the docker version - are you trying to install it on the same Windows server as BI? I have never done that - I run it in a separate Ubuntu 18.04 virtual machine without any issues
 


Thank you. I've not tried it yet, but I'm guessing the only way to look at the history will be in the log.

Yes I was.
 
  • It is not possible to adjust the confidence threshold of DeepStack detection
-> This is definitely a planned feature
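Until a threshold setting lands in AITool, anyone calling DeepStack directly can filter on the confidence value that the detection endpoint already returns per prediction. A minimal sketch, assuming the standard /v1/vision/detection response shape; the helper name is my own:

```python
# DeepStack's /v1/vision/detection endpoint returns JSON shaped like:
#   {"success": true, "predictions": [{"label": "person", "confidence": 0.92,
#    "x_min": ..., "y_min": ..., "x_max": ..., "y_max": ...}, ...]}

def filter_predictions(response, min_confidence=0.6):
    """Keep only predictions at or above the given confidence."""
    return [p for p in response.get("predictions", [])
            if p["confidence"] >= min_confidence]

sample = {"success": True, "predictions": [
    {"label": "person", "confidence": 0.92},
    {"label": "person", "confidence": 0.41},
]}
print(filter_predictions(sample, 0.6))  # -> only the 0.92 detection survives
```

The same filter works no matter where DeepStack runs (Windows or docker), since it is purely client-side.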

  • Sometimes AITool uses more than 30% of CPU (I have an i5-4590), which is strange because the detection itself is done by DeepStack. Does anyone know why this is happening?
-> That is really strange. If you go through the history tab images fast, then AITool might reserve a few hundred MB of RAM for the images (for the time you're viewing them plus a few seconds), but I've never seen high CPU utilisation. Please PM me with details such as the duration of the high usage and - if possible - what AITool was doing at the moment.

Source Code:

I sincerely have no idea how this works with GitHub and Visual Studio projects, but I think it would be a good thing if you could give me some tips. :)
 
Request - Would it be possible to control the telegram notifications (on and off) and link it to the Blue Iris profiles?

Probably, but it's unlikely that I'll find time to implement such a thing soon (usage of the BI APIs etc.). In the meantime:
1. make a second camera duplicate in BI and set it as active while your "I'm home" profile is active
2. disable your first camera duplicate in BI during the times when the "I'm home" profile is active
3. add a camera profile in AITool for your newly created camera clone and disable Telegram message upload for it.
 
-> That is really strange. If you go through the history tab images fast, then AITool might reserve a few hundred MB of RAM for the images (for the time you're viewing them plus a few seconds), but I've never seen high CPU utilisation. Please PM me with details such as the duration of the high usage and - if possible - what AITool was doing at the moment.

The highest CPU I've seen on mine for AITool has been 8%, and that was when it was working hard while I was trying (and failing) to install the DeepStack docker.
 
The AI Tool interface barely uses anything for me, though I am running it on a pretty beefy system. DeepStack, however, is making me tear my hair out: I've been having constant memory leaks and have nailed it down to DeepStack, which eventually uses up over 16GB of RAM on its own and even crashes Windows processes like explorer.

How have you guys got the docker version working? I've tried following the guide on their website and keep getting 'thread aborting' errors. I've had this on both Ubuntu and Debian now...
 
How have you guys got the docker version working? I've tried following the guide on their website and keep getting 'thread aborting' errors.

This is what I do to install an Ubuntu Server.

- I use the minimal Ubuntu install image (18.04) and only install the SSH server
- Apply all Ubuntu updates after finishing the install
- I install Docker CE (How To Install and Use Docker on Ubuntu 18.04 | DigitalOcean - follow this if you can)
- Define a network in Docker (mine is called internal)
- I install Docker Compose (How To Install and Use Docker Compose on Ubuntu 18.04 | Linuxize)
- I set up my docker-compose file (at /opt/docker-compose.yml) like the one below. Adjust "/local path to your config dir" to a path on your Ubuntu server where you want to keep the DeepStackAI config. Indentation is important. I use port 5000

Code:
version: '3'
services:
    # ----------------------------------------
    # DeepStackAI
    # ----------------------------------------
    deepstackai:
        image: deepquestai/deepstack:latest
        container_name: deepstackai
        hostname: deepstackai
        volumes:
            - /local path to your config dir:/datastore
            - /etc/localtime:/etc/localtime:ro
        environment:
            - VISION-DETECTION=True
        networks:
            - internal
        ports:
            - 5000:5000
        restart: unless-stopped
networks:
  internal:
    external:
      name: internal

- cd to /opt dir if you are not already there then run

Code:
sudo docker-compose up -d deepstackai

DeepStackAI should now be installing and then running.

Using a NEW email address, register a new account with DeepStackAI to get a new activation code. Browse to http://your server ip:5000 and enter your new code

Check DeepStackAI logs with:

Code:
sudo docker logs -f deepstackai

Using the above all works for me with no issues.

One thing to note - make sure your hardware supports AVX. If it is not supported, replace image: deepquestai/deepstack:latest in your docker-compose file with image: deepquestai/deepstack:noavx (see Docker Hub).
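On Linux you can check for AVX by looking at the CPU flags in /proc/cpuinfo. The helper below is my own sketch (not part of any tool in this thread) mapping those flags to the image tag suggested above:

```python
def pick_deepstack_tag(cpuinfo_text):
    """Return the suggested DeepStack image tag based on CPU flag tokens."""
    flags = cpuinfo_text.split()  # "avx2" is a separate token, so no false match
    tag = "latest" if "avx" in flags else "noavx"
    return f"deepquestai/deepstack:{tag}"

# On a real host you would pass open("/proc/cpuinfo").read() instead:
sample_flags = "flags : fpu vme sse sse2 avx avx2"
print(pick_deepstack_tag(sample_flags))  # -> deepquestai/deepstack:latest
```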
 
Just a bit of info for anyone thinking of doing what I have done here.

I have different profiles for my cameras depending on if I am at home or not or day and night.

My drive camera, for example, needs to detect people at night but not during the day, when I am often walking across it. But I want to detect cars all the time.

I have set it to save snapshots, but with different file names, so AITool can have two different cameras set up within it, each with a different profile. The problem I found was that the snapshots would not show up under the camera in BI, as BI had problems recognising the file names for the camera. On speaking to Ken, it turns out the filename has to begin or end with a number; this is now mentioned in the new help PDF for BI.

So, for example, this won't work:

DriveCars.%Y%m%d_%H%M%S%t.&CAM
DrivePeople.%Y%m%d_%H%M%S%t.&CAM


I've found the following will work:

1DriveCars.%Y%m%d_%H%M%S%t.&CAM
2DrivePeople.%Y%m%d_%H%M%S%t.&CAM
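For illustration, assuming BI's date macros behave like C strftime and &CAM expands to the camera short name (the BI-specific %t macro is left out here), a working mask would expand roughly like this; expand_mask is a hypothetical helper, not a BI API:

```python
from datetime import datetime

def expand_mask(mask, cam, now):
    """Rough approximation of how BI expands a snapshot filename mask."""
    return now.strftime(mask).replace("&CAM", cam)

now = datetime(2020, 8, 1, 14, 30, 5)
print(expand_mask("1DriveCars.%Y%m%d_%H%M%S.&CAM", "drive", now))
# -> 1DriveCars.20200801_143005.drive  (starts with a digit, so BI accepts it)
```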
 
Idea... I am working on my own Python script to do generic people and vehicle detection that works with minimal requirements and gets half-second responses from the YOLOv3 model.

What about storing the coordinates of the car(s) in the image and comparing them to the last alert image? This would prevent duplicate alerts if, say, a bug or passing headlights sets off the motion alert in Blue Iris while a car is parked in your driveway or parking lot. The only issue I have found is that the bounding boxes of the car change slightly between alert images (even if the car remains in the same spot). I'm not sure how to account for this - you could say "ignore if it is within +/- X pixels", but then it may ignore legitimately moving vehicles.
 
I had a reply back from Ken re the * and clones, where the * would move; below is what he said. As I'm away at the mo, I've not had the chance to test.

“There is a little known checkbox on the Video tab ... "designated group master" ... this can be used to force a specific camera in a clone group to always be the "real" camera.

However, the way clone works it just copies the video stream over to each clone. Each clone then operates independently with regard to triggering and alerts.”
 
I'm trying to install the docker version of DeepStack, but when I open localhost in a web browser and enter my API key, the page just hangs with three colour-changing dots, as if it can't authenticate. Any ideas? The same key as the Windows version shows as 'activations exceeded', but a new key just gets this hanging.

I got an email yesterday saying that the DeepStack key can now be used multiple times, and I also saw an update for Windows Docker Desktop, so I gave it another try and had no problem installing and running it. :)

Initial findings are that I think it's using less CPU, and it will be interesting to see if it's any more accurate.
 
Hey @Tinbum, are you saying the Windows version of Docker was updated and that gave you better CPU utilisation? I tried updating the DeepStack container running on Ubuntu, but there was no update for the deepquestai/deepstack:latest image.

Code:
john@dockerVM:~/docker-deepstack$ docker-compose pull
Pulling deepstackai (deepquestai/deepstack:latest)...
latest: Pulling from deepquestai/deepstack
Digest: sha256:383c1ad7e7c0dda01d7bd3fdc3c4bfa97c6421f502a311fcbaf6e2fa2d0d5b6e
Status: Image is up to date for deepquestai/deepstack:latest
 
I tried installing Windows Docker Desktop but couldn't get it to work - it always hung after putting in my API key. That was on the 30th of July. Today I noticed the update to Windows Docker Desktop, so I thought I would give it another try. Up till now I've been using the DeepQuestAI Windows installer.

Yes, I think it's using the CPU less, but I haven't done any definitive test.
 
Idea... I am working on my own Python script to do generic people and vehicle detection that works with minimal requirements and gets half-second responses from the YOLOv3 model.

What about storing the coordinates of the car(s) in the image and comparing them to the last alert image? This would prevent duplicate alerts if, say, a bug or passing headlights sets off the motion alert in Blue Iris while a car is parked in your driveway or parking lot. The only issue I have found is that the bounding boxes of the car change slightly between alert images (even if the car remains in the same spot). I'm not sure how to account for this - you could say "ignore if it is within +/- X pixels", but then it may ignore legitimately moving vehicles.

Yes, I thought about this too - good idea. Maybe a reference image (containing everything that should not cause an alert) would do the job as well. We'll see :D
 
update v1.56
  • 9 detection points, instead of just 1, are now used to determine whether an object is covered by the privacy mask
  • fixed the bug that caused unsaved changes to disappear if an alert was analyzed while editing the camera settings (b7)
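The changelog doesn't say how the 9 points are chosen, so this is only a guess at the idea: sample a 3x3 grid inside the object's bounding box and count how many points fall in the privacy mask. The function names and the coverage threshold are hypothetical, not AITool's actual code:

```python
def grid_points(box, n=3):
    """n x n grid of evenly spaced sample points inside an
    (x_min, y_min, x_max, y_max) box, avoiding the box edges."""
    x_min, y_min, x_max, y_max = box
    xs = [x_min + (x_max - x_min) * (i + 1) / (n + 1) for i in range(n)]
    ys = [y_min + (y_max - y_min) * (j + 1) / (n + 1) for j in range(n)]
    return [(x, y) for y in ys for x in xs]

def is_masked(box, mask_contains, min_covered=5):
    """Treat the object as masked if most of the 9 points fall in the mask.
    mask_contains is any callable mapping an (x, y) point to True/False."""
    covered = sum(1 for p in grid_points(box) if mask_contains(p))
    return covered >= min_covered
```

With a single centre point, an object half-covered by the mask is an all-or-nothing call; sampling 9 points makes the decision much less sensitive to exactly where the box centre lands.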
 
Thank you.

Is there a way to see what is going on in AITool when it's run as a service? I know it's possible to look at the log, but you can't see the picture/mask and object.