[tool] [tutorial] Free AI Person Detection for Blue Iris

I keep getting errors in AI Tool when using the Telegram cooldown. The error message is "ERROR sending image to telegram". It happens when the Telegram cooldown is set to, say, 20 seconds while Blue Iris is dumping JPEGs every 5 seconds during a trigger. I want those images to keep being processed and flags issued to Blue Iris; I just want Telegram messages no more than every 20 seconds. I don't understand why it throws an error in this situation: the behaviour is as expected and it isn't supposed to be sending to Telegram during the cooldown, so why log it as an error?

Has anyone come up with a solution to this? I am experiencing the same issue running the latest version from GitHub.
 
I am struggling to get my head around how triggers, snapshots, and break time all link together.

I've got a camera right on my front door (well, I've got 8 cameras, but I'm trying to sort out the front door first).

From my understanding, Blue Iris triggers an alert, say motion at the front door; I set the trigger end time to 8 seconds and snapshots every 5 seconds.

So Blue Iris creates two JPG snapshots, AI Tool analyses those two images, and then, because it triggers via URL, it tells Blue Iris to take another two images?

Edit: is there any reason to include the "trigger" command in the URL? Leaving it out results in far fewer images being created and analysed.


Unless I am missing something, it does not make sense to have AI Tool trigger Blue Iris, as it results in so many images being created. In a real-life scenario, if there are 10-20 images in the queue with a person somewhere toward the end, it will re-trigger Blue Iris again, creating additional CPU load and work, even though the person was last detected 20-30 seconds ago.
 
Not quite.
The motion Blue Iris detects is what causes the snapshots to be taken, or, in a single-camera setup, it actually records all the motion clips.
The AI part sends a trigger command back to Blue Iris, which then flags the footage and sends an alert (single-camera setup) or, with the cloned-camera method, the trigger URL actually starts the motion recording.
Continued motion within the break timeout is what causes further snapshots to be taken either way, and that is a function of Blue Iris's detection, not the AI.
 
@austwhite can type faster than me, so I won't re-answer, but in reference to your edit: how many snapshots are taken is determined by how often you tell BI to take them, along with all the other ways you can limit that. AI Tool is not triggering BI; it is simply sending the images to DeepStack (that is my understanding). If your queue is filling up, you may need to update your BI box. I have no idea of your setup, so I'm making an assumption here; mine is all on one computer (no Docker), an old-ass Dell Optiplex, and I have never had queueing issues myself (9 cameras).

As for how to write the trigger, do a search here, or check out one of the several threads on GitHub or some of the alternate setups on YouTube; there is no shortage of examples. I struggle with that topic myself simply because I have no flippin' idea how to write them. :) Currently I am using these:

Code:
[BlueIrisURL]/admin?trigger&camera=[camera]&user=[Username]&pw=[Password]
[BlueIrisURL]/admin?trigger&camera=[camera]&user=[Username]&pw=[Password]&flagalert=1&memo={Detection}

and in the Cancel block this one:

Code:
[BlueIrisURL]/admin?camera=[camera]&user=[Username]&pw=[Password]&flagalert=0

This is exactly how they are entered; the user/password info is in a different location in AI Tool and gets pulled in from there (at least in the version I use; yours may be newer or older).

**Again, I am no expert, so if I've jacked anything up someone PLEASE correct me, so there is no bad info out there.
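To make those concrete, here is roughly what the first one ends up as once AI Tool substitutes the placeholders. The host, camera name and credentials below are made-up examples (assuming Blue Iris's default web server port of 81), not values from any real setup:

Code:
http://192.168.1.50:81/admin?trigger&camera=FrontDoorHD&user=aiuser&pw=secret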
 
memo={Detection} should be memo=[Detection], I think. I personally use only one trigger URL; what is the reason to use two?
 
What is your processing time per pic? And pic size?
I followed the suggestion to resize the image to the maximum available (1280x1024 from the substream) and this improved the recognition percentage significantly (at least 30% on average). I am running 5 CPU Docker instances now at about 350 ms/pic.
Recording the substream full-time, mainstream triggered.
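In case it helps anyone replicate this: the five instances are just the same container started several times on different host ports, which AI Tool can then be pointed at (newer builds accept more than one Deepstack URL, as I understand it). A minimal sketch of two of them, assuming the stock deepquestai/deepstack image; the container names and host ports here are examples:

Code:
docker run -d --name deepstack1 -e VISION-DETECTION=True -p 5000:5000 deepquestai/deepstack
docker run -d --name deepstack2 -e VISION-DETECTION=True -p 5001:5000 deepquestai/deepstack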
Sorry, just catching up after a few days away.
Code:
[GIN] 2020/12/29 - 16:26:52 | 200 |    128.7103ms |      172.17.0.1 | POST     /v1/vision/detection
[GIN] 2020/12/29 - 16:26:56 | 200 |    144.0097ms |      172.17.0.1 | POST     /v1/vision/detection
[GIN] 2020/12/29 - 16:26:59 | 200 |    139.4709ms |      172.17.0.1 | POST     /v1/vision/detection
[GIN] 2020/12/29 - 16:27:03 | 200 |    142.1914ms |      172.17.0.1 | POST     /v1/vision/detection
[GIN] 2020/12/29 - 16:27:06 | 200 |    127.6803ms |      172.17.0.1 | POST     /v1/vision/detection
[GIN] 2020/12/29 - 16:27:10 | 200 |    127.9357ms |      172.17.0.1 | POST     /v1/vision/detection
[GIN] 2020/12/29 - 16:27:13 | 200 |    125.9889ms |      172.17.0.1 | POST     /v1/vision/detection
[GIN] 2020/12/29 - 16:27:17 | 200 |    124.4378ms |      172.17.0.1 | POST     /v1/vision/detection
[GIN] 2020/12/29 - 16:27:20 | 200 |    129.2105ms |      172.17.0.1 | POST     /v1/vision/detection
[GIN] 2020/12/29 - 16:27:24 | 200 |    133.2759ms |      172.17.0.1 | POST     /v1/vision/detection
[GIN] 2020/12/29 - 16:27:27 | 200 |    145.9042ms |      172.17.0.1 | POST     /v1/vision/detection

That's on HIGH mode on Deepstack GPU running in Docker on WSL2. This is the other instance I run:
Code:
[GIN] 2021/01/01 - 06:27:03 | 200 |    117.6835ms |      172.17.0.1 | POST     /v1/vision/detection
[GIN] 2021/01/01 - 06:27:14 | 200 |    135.3589ms |      172.17.0.1 | POST     /v1/vision/detection
[GIN] 2021/01/01 - 06:27:17 | 200 |    125.6809ms |      172.17.0.1 | POST     /v1/vision/detection
[GIN] 2021/01/01 - 06:27:19 | 200 |    120.2531ms |      172.17.0.1 | POST     /v1/vision/detection
[GIN] 2021/01/01 - 06:27:22 | 200 |      119.41ms |      172.17.0.1 | POST     /v1/vision/detection
[GIN] 2021/01/01 - 06:30:36 | 200 |    135.4541ms |      172.17.0.1 | POST     /v1/vision/detection
[GIN] 2021/01/01 - 06:30:39 | 200 |    132.1701ms |      172.17.0.1 | POST     /v1/vision/detection
[GIN] 2021/01/01 - 06:30:41 | 200 |    146.6058ms |      172.17.0.1 | POST     /v1/vision/detection
[GIN] 2021/01/01 - 06:30:44 | 200 |    107.0353ms |      172.17.0.1 | POST     /v1/vision/detection
[GIN] 2021/01/01 - 06:30:59 | 200 |    136.0961ms |      172.17.0.1 | POST     /v1/vision/detection
[GIN] 2021/01/01 - 06:31:02 | 200 |     118.547ms |      172.17.0.1 | POST     /v1/vision/detection
[GIN] 2021/01/01 - 06:31:04 | 200 |    105.7385ms |      172.17.0.1 | POST     /v1/vision/detection
[GIN] 2021/01/01 - 06:31:07 | 200 |    107.7303ms |      172.17.0.1 | POST     /v1/vision/detection

I've just noticed nothing has been logged for a few days. That's not right...
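Incidentally, since each of those [GIN] lines is just a detection request hitting the API, you can sanity-check an instance by hand. A minimal example, assuming the container is published on the default port 5000 and you have a local test image named test.jpg:

Code:
curl -X POST -F image=@test.jpg http://localhost:5000/v1/vision/detection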

To answer the other question: I have BI save the JPEGs at 10% quality and 1280x720, from a cloned HD stream with a motion trigger that never records video.
 
I might have missed this somewhere, but has there been a Deepstack release for Windows that can either run as a service or auto-start on boot?
I've tried Googling and not found anything on this.
I am trying to move away from running it in a virtual machine and run everything natively in Windows.
I run Deepstack in an auto-restarting Docker container in WSL2. I do this using Task Scheduler as per the answer in this thread.

I have nothing against VMs. I use them for running PiHole, Home Assistant, my NAS, my system monitor, etc. But why do it if you don't have to?
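For reference, the container side of that is just a restart policy, so Docker brings Deepstack back up whenever the daemon starts; the Task Scheduler job only has to get WSL2/Docker running at boot. A minimal sketch, assuming the stock deepquestai/deepstack image and default port:

Code:
docker run -d --name deepstack --restart unless-stopped -e VISION-DETECTION=True -p 5000:5000 deepquestai/deepstack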
 
memo={Detection} should be memo=[Detection], I think. I personally use only one trigger URL; what is the reason to use two?
Thanks. I am not getting any errors with the incorrect {} vs [], but maybe it isn't doing what it should. I will change it and see if anything changes. I was only using one as well, then I read something about an alternate way of doing this. Anyway, I *think* the second one is so BI will flag all the snapshots, but if it is not a valid detection, it is placed in the cancelled alerts folder; that URL and the one in the Cancel block, along with the current settings on my cameras, are all supposed to work together, as I understand what I read. Then again, that may all be BS.

I am trying to understand how the triggers work, but I really do not understand what writing X vs Y actually means or does, and I have searched and searched for something to help me understand it without luck. Based on the snapshots in my capture folder (aiinput) and what is being pushed to Telegram, the system is doing what it should; I seldom catch a snapshot that should have been flagged and pushed to Telegram that wasn't.
 
I run Deepstack in an auto-restarting Docker container in WSL2. I do this using Task Scheduler as per the answer in this thread.

I have nothing against VMs. I use them for running PiHole, Home Assistant, my NAS, my system monitor, etc. But why do it if you don't have to?
I have read that some people have written a script to get it all running on boot.
 
Wondering if someone can help me. I've set up single cameras recording continuously and sending the triggered snapshots to AI Tool via the Post section in BI.
The triggers work fine; however, AI Tool doesn't seem to flag all valid motion on the timeline. The events show in the clips but with no flags. Is something wrong with my setup? I'm posting screenshots of my camera setup and also the BI screens that don't show the flagged event.
The event in question here is at 1:48pm, detected by DS and AI Tool as a dog (it's actually my cat, but that's fine). The last flagged event was at 12:39pm, and it shows fine in the Alerts area.
Thanks much in advance.

Camera setup: (two screenshots attached)

BI timeline: (three screenshots attached)

AI Tool config: (screenshot attached)
 
Is there a reason you're posting the images using the Post tab rather than sending them using the Alert tab? Maybe that is the cause, as I don't see anything else especially wrong with your config. You might also want to remove the "trigger" statement from the Trigger URL, because you don't want to re-trigger the camera (and if it is already in a triggered state, you might not want to resend a trigger command).
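For reference, a flag-only version of the Trigger URL, using the same placeholder style as the examples earlier in the thread, would just drop the trigger statement:

Code:
[BlueIrisURL]/admin?camera=[camera]&user=[Username]&pw=[Password]&flagalert=1&memo=[Detection]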
 
Is anyone trying the DOODS object-detection AI with the latest version of AI Tool? I'd like to compare it to Deepstack and AWS Rekognition. I have it installed in Docker on my QNAP, but I haven't been able to get it running just yet. Thanks
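(For anyone else trying it: the basic quickstart in the DOODS README is a single container listening on port 8080; whether QNAP's Container Station needs anything extra I don't know, so treat this as a sketch rather than a working recipe:)

Code:
docker run -d --name doods -p 8080:8080 snowzach/doods:latest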
 
Is there a reason you're posting the images using the Post tab rather than sending them using the Alert tab? Maybe that is the cause, as I don't see anything else especially wrong with your config. You might also want to remove the "trigger" statement from the Trigger URL, because you don't want to re-trigger the camera (and if it is already in a triggered state, you might not want to resend a trigger command).
I was just following someone's instructions. I made the changes to use the Alerts tab instead of the Post tab and also removed the "trigger" element from the Trigger URL, and it seems to be working OK. I'll do more testing tomorrow.

One more question, do I still need a "Cancel URL", or is that not necessary?

Thanks!
 
I'm still experimenting with the Cancel URL myself, but it seems that if I use it (in a setup recording 24x7 with motion-detection JPEG snapshots), then for any false alerts (i.e. no object of interest detected) the Cancel URL removes the alert image from the Alert list in Blue Iris. This gives you a 'clean' alert list with only confirmed detections in BI, but it also means you need to review the history in AI Tool, rather than in BI itself, to see whether Deepstack missed any valid detections. For me this is the best approach. But as I say, I'm still experimenting.
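Concretely, the Cancel URL I'm describing is the flagalert=0 variant posted earlier in the thread (same placeholders as before):

Code:
[BlueIrisURL]/admin?camera=[camera]&user=[Username]&pw=[Password]&flagalert=0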
 
This gives you a 'clean' alert list with only confirmed detections in BI, but it also means you need to review the history in AI Tool, rather than in BI itself, to see whether Deepstack missed any valid detections.

You can view them in Blue Iris too: just click on "Cancelled alerts" and they will appear there.

 
I run Deepstack in an auto-restarting Docker container in WSL2. I do this using Task Scheduler as per the answer in this thread.

I have nothing against VMs. I use them for running PiHole, Home Assistant, my NAS, my system monitor, etc. But why do it if you don't have to?
I gave up on the Windows version of Deepstack. I just couldn't get it to run right, and it was slow compared to the Ubuntu VM I had been using previously. WSL2 is basically a virtual machine running Linux under Windows, so I didn't end up going that route either.

I ended up running Deepstack as a Docker container on HassOS alongside Home Assistant: I installed the Portainer add-on in Home Assistant and pulled the Deepstack Docker image, so Deepstack loads with Home Assistant.
So I ended up with Blue Iris and AI Tool running on the host OS and a Type 2 hypervisor running Home Assistant on the same machine. Home Assistant runs 24/7, so it was going to be there regardless of which approach I took.
BI is snappier this way, but I still ended up virtualising HA and Deepstack.