[tool] [tutorial] Free AI Person Detection for Blue Iris

Personally, I don't actually find the 24/7 low-res recording of much use and I am moving away from it.
You can run all this without cloning cameras, on one camera, using version 1.67 or the latest pre-release from VorlonCD. Without a clone you still save all the motion, but it will only flag recordings with detected objects/persons. If you are okay with that and have the storage space, there is a description of how to do it on the first page of this thread using AITool 1.67 or higher. It's pretty simple: you set up motion recording as normal in Blue Iris, but make it quite sensitive, then add saving a JPEG every few seconds on the same camera you record with, so AITool has something to send to Deepstack and recordings can be flagged.

As I only want to record the events specifically flagged by AItools/Deepstack and not every motion event, I will still use a cloned camera setup and hide the clones.
The advantage of an actual cloned camera over Rob's (The Hookup) setup is that the cloned camera uses no extra bandwidth or CPU time. As long as the clone has identical streams to the main camera it is cloned from, it will use no extra resources.
The low-res stream Rob uses doesn't consume many resources, but it all adds up if you have several cameras, and the low-res JPEGs can sometimes be less accurate, especially if you have a camera observing a wide area, say a camera up on the second floor looking out over a yard.

This is a super helpful response, thank you! What is the process for actually cloning the camera, rather than doing Rob/The Hookup's method (creating a second set of camera streams)? I'm powering through the thread, but it is slow going; I'm only on page 62/147. I've so far seen a screenshot of a radio button to mark "Master Clone" or something to that effect, but nothing about the process of actually cloning (I assume it's during setup?). I assume I should back up BI first before deleting the low-res cameras (or maybe start from scratch)? Any advice would be most welcome.

One question I've seen discussed is the maximum optimal size of the JPEG to analyze. Someone in the 50s of the thread thought 720p was the optimal size, and that 4K would take too long to process without any added benefit over 720p. Any takes on that?

If I understand your method correctly, you are still creating (and hiding) cloned cams, but they are identical resolution (i.e. in my case, 4K). In the end, Blue Iris will record 24/7 on the 4K "main" stream and flag all events (what clip size do you set here? 8 hr? 1 hr? And I assume you save ALL footage for a day?), and then on the cloned stream you will record all these motion events for longer-term storage. Is that right?
 
How are you guys handling alerts, specifically email alerts? I would like to get emails with the relevant images from AI Tools instead of the normal alerts from BI. Additionally, it would be nice if the alert images could contain the same AI indications that are shown in the AI Tools history.
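One hedged option, outside of both BI's and AITool's built-in alerting, is a small script that mails the annotated alert image yourself. Everything below (addresses, camera name, the function name) is hypothetical and not from this thread; sending would be done afterwards with smtplib.

```python
from email.message import EmailMessage

def build_alert_email(sender, recipient, camera, jpeg_bytes, filename="alert.jpg"):
    """Build an email carrying the AI-annotated alert image as an attachment.

    Send the result with smtplib.SMTP("your.smtp.server").send_message(msg).
    """
    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = recipient
    msg["Subject"] = f"AI alert: {camera}"
    msg.set_content(f"Object/person detected on camera {camera}.")
    # Attach the JPEG that AITool/Deepstack flagged (annotated copy if you save one).
    msg.add_attachment(jpeg_bytes, maintype="image", subtype="jpeg",
                       filename=filename)
    return msg
```

How you obtain the annotated JPEG depends on your AITool settings (e.g. where it saves history images), so that part is left as a byte string argument.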
 
I've noticed the latest version(s) of BI breaks flagged and confirmed tagging. I've tried it manually in a browser to verify it wasn't AITool and it still didn't work (as of 5.3.7.10).

I rolled it back to 5.3.7.5 and it fixed the issue.

I'm also going to try the confirmed alert tag which will put the Sentry "S" logo in the thumbnails. This might be good for those who don't want to flag verified clips.

Code:
/admin?camera=x&flagalert=x&memo=text&jpeg=path
x = 0 mark the most recent alert as cancelled (if not previously confirmed).
x = 1 mark the most recent alert as flagged.
x = 2 mark the most recent alert as confirmed.
x = 3 mark the most recent alert as flagged and confirmed.
x = -1 reset the flagged, confirmed, and cancelled states
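As a sketch of how those flag values might be driven from a script, here is a helper that builds the admin URL; the host, port, and credentials are placeholders you would replace with your own.

```python
from urllib.parse import urlencode

# Placeholder Blue Iris server address; adjust to your own install.
BI_BASE = "http://127.0.0.1:81"

def flag_alert_url(camera, flag, memo="", user="user", pw="password"):
    """Build the Blue Iris admin URL that flags the most recent alert.

    flag: -1 reset, 0 cancel, 1 flag, 2 confirm, 3 flag + confirm
    """
    params = {"camera": camera, "flagalert": flag, "memo": memo,
              "user": user, "pw": pw}
    return f"{BI_BASE}/admin?{urlencode(params)}"

# To actually fire it, something like:
#   urllib.request.urlopen(flag_alert_url("FrontDoor", 3, "person 92%"))
```

urlencode also takes care of escaping spaces and special characters in the memo text, which is easy to get wrong when pasting URLs by hand.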
 
Yes, I can confirm the latest BI broke &flagalert. It drove me crazy; I thought I had messed things up.
 
It seems the documentation added a variable called flagclip, which seems to flag it properly when I trigger the camera manually and run the URL twice in a browser. I will test to see if it works when BI detects someone.

Code:
http://127.0.0.1:81/admin?camera=FrontDoor&flagalert=3&memo=testmessage&user=user&pw=password&flagclip

http://127.0.0.1:81/admin?camera=[camera]&flagalert=3&memo=[summary]&user=[Username]&pw=[Password]&flagclip

Update:
It seems it stopped working again. Something must be broken in the newer updates. Reverted back to 5.3.7.5.
 
I was scratching my head over why certain sections of video would seem to freeze or skip during the recordings.
I thought it was processing issues, but CPU usage is less than 20%. Deepstack and AITools processing time was less than a second and no repeat triggers being sent.
Then I found this setting in Other under the "Viewer" section called "Skip dead-air during timeline playback".
I didn't really think it would solve it, but I thought what the hell and unticked it. All of a sudden, no more skipped parts in clip playback.
Not sure if that was the issue, but I am not changing things again now that it is running perfectly smoothly.

B.
 

Cloning a camera is just adding a new camera and choosing the "copy from existing camera" option. If the clone has identical streams to the original, it doesn't use extra bandwidth or resources. You would then modify the trigger section of the clone, identical to Rob's method, but choosing never record and only using it to take snapshots.

I use 640x480 resolution JPEGs with a camera that is on the third floor, and it picks up people on the ground with about 98% accuracy. I would not use anything above 720p for the JPEG resolution unless you want to do full facial recognition, as it takes Deepstack longer to process higher-resolution images.

If you do not need 24/7 recording, make sure your clone camera has the Video option unticked on the Record tab, as it will only be used to take JPEG images. So there are basically only two differences in the setup versus Rob's 24/7 recording setup: 1. the clone has identical feeds to the original, rather than only being a substream; 2. video recording is turned off.
That way the clone only records JPEGs and not video. All video will be recorded on the original camera, and only when AITool sends a trigger command.

If you want 24/7 recording, you will likely want a low resolution recording stream unless you have huge amounts of storage space to save 4K video or you can tolerate only storing for a very short time and lowering the lifespan of your recording media.

Edit: Just as a note, you can hide the cloned cameras and they will still act as triggers and record JPEGs. In camera settings, on the General tab, tick Hidden (make sure Enabled remains ticked).
 
Is resizing the JPEG snapshots less CPU intensive than having DeepStack process the larger images?
 
Is resizing the JPEG snapshots less CPU intensive than having DeepStack process the larger images?
Deepstack uses less CPU time and processes a lot quicker with the smaller images, compared to the time Blue Iris spends resizing the image.
It's a big time saving if you can send Deepstack the resized image and it works out okay for your situation.
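As a rough sketch of the resizing rule being discussed (cap snapshots at 720p while keeping aspect ratio; the function name is my own, not anything in BI or AITool):

```python
def snapshot_size(width, height, max_height=720):
    """Scale a frame down so its height is at most max_height (720p by default),
    preserving aspect ratio. Frames already small enough pass through unchanged."""
    if height <= max_height:
        return width, height
    scale = max_height / height
    return round(width * scale), max_height

# A 4K frame (3840x2160) would be sent to DeepStack as 1280x720;
# a 640x480 snapshot is left alone.
```

In practice you would just enter the target size in BI's JPEG snapshot settings, but the arithmetic above is what that resize amounts to.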
 
Sorry, just catching up after a few days away.
Code:
[GIN] 2020/12/29 - 16:26:52 | 200 |    128.7103ms |      172.17.0.1 | POST     /v1/vision/detection
[GIN] 2020/12/29 - 16:26:56 | 200 |    144.0097ms |      172.17.0.1 | POST     /v1/vision/detection
[GIN] 2020/12/29 - 16:26:59 | 200 |    139.4709ms |      172.17.0.1 | POST     /v1/vision/detection
[GIN] 2020/12/29 - 16:27:03 | 200 |    142.1914ms |      172.17.0.1 | POST     /v1/vision/detection
[GIN] 2020/12/29 - 16:27:06 | 200 |    127.6803ms |      172.17.0.1 | POST     /v1/vision/detection
[GIN] 2020/12/29 - 16:27:10 | 200 |    127.9357ms |      172.17.0.1 | POST     /v1/vision/detection
[GIN] 2020/12/29 - 16:27:13 | 200 |    125.9889ms |      172.17.0.1 | POST     /v1/vision/detection
[GIN] 2020/12/29 - 16:27:17 | 200 |    124.4378ms |      172.17.0.1 | POST     /v1/vision/detection
[GIN] 2020/12/29 - 16:27:20 | 200 |    129.2105ms |      172.17.0.1 | POST     /v1/vision/detection
[GIN] 2020/12/29 - 16:27:24 | 200 |    133.2759ms |      172.17.0.1 | POST     /v1/vision/detection
[GIN] 2020/12/29 - 16:27:27 | 200 |    145.9042ms |      172.17.0.1 | POST     /v1/vision/detection

That's on HIGH on a Deepstack GPU running in Docker on WSL2. This is the other instance that runs:
Code:
[GIN] 2021/01/01 - 06:27:03 | 200 |    117.6835ms |      172.17.0.1 | POST     /v1/vision/detection
[GIN] 2021/01/01 - 06:27:14 | 200 |    135.3589ms |      172.17.0.1 | POST     /v1/vision/detection
[GIN] 2021/01/01 - 06:27:17 | 200 |    125.6809ms |      172.17.0.1 | POST     /v1/vision/detection
[GIN] 2021/01/01 - 06:27:19 | 200 |    120.2531ms |      172.17.0.1 | POST     /v1/vision/detection
[GIN] 2021/01/01 - 06:27:22 | 200 |      119.41ms |      172.17.0.1 | POST     /v1/vision/detection
[GIN] 2021/01/01 - 06:30:36 | 200 |    135.4541ms |      172.17.0.1 | POST     /v1/vision/detection
[GIN] 2021/01/01 - 06:30:39 | 200 |    132.1701ms |      172.17.0.1 | POST     /v1/vision/detection
[GIN] 2021/01/01 - 06:30:41 | 200 |    146.6058ms |      172.17.0.1 | POST     /v1/vision/detection
[GIN] 2021/01/01 - 06:30:44 | 200 |    107.0353ms |      172.17.0.1 | POST     /v1/vision/detection
[GIN] 2021/01/01 - 06:30:59 | 200 |    136.0961ms |      172.17.0.1 | POST     /v1/vision/detection
[GIN] 2021/01/01 - 06:31:02 | 200 |     118.547ms |      172.17.0.1 | POST     /v1/vision/detection
[GIN] 2021/01/01 - 06:31:04 | 200 |    105.7385ms |      172.17.0.1 | POST     /v1/vision/detection
[GIN] 2021/01/01 - 06:31:07 | 200 |    107.7303ms |      172.17.0.1 | POST     /v1/vision/detection

I've just noticed nothing has been logged for a few days. That's not right...
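Those [GIN] lines are easy to scan programmatically. A quick sketch (the regex is written against the log format shown above, nothing more) that averages the reported response times:

```python
import re

# Matches lines like:
# [GIN] 2020/12/29 - 16:26:52 | 200 |    128.7103ms |      172.17.0.1 | POST /v1/vision/detection
LINE = re.compile(r"\[GIN\]\s+\S+ - \S+\s+\|\s+(\d{3})\s+\|\s+([\d.]+)ms")

def average_latency_ms(log_text):
    """Average response time across all requests found in a GIN-format log.

    Returns None when the text contains no matching lines."""
    times = [float(m.group(2)) for m in LINE.finditer(log_text)]
    return sum(times) / len(times) if times else None
```

Useful for comparing the CPU and GPU Deepstack instances side by side, or for spotting a gap in logging like the one mentioned above (an empty result means no detections were logged at all).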

To the other question: I get BI to save the JPEGs at 10% quality and 1280x720 from a cloned HD stream with a motion trigger that never records video.
Happy New Year! Thank you for your stats. I had thought the GPU version would be about 10x faster than the CPU version. I may try that cloned HD stream for the jpegs as well.
 
Anyone having issues with flagged and confirmed alerts etc. not showing up? Running Blue Iris 5.3.7.11.

Yes, it started around the Jan 2 update.


I rolled back to 5.3.7.5 and they work fine.
 
I got one flagged alert this morning and then they stopped, even though I can see in AI TOOL that it has positive detections. Is something wrong with my setup in AI TOOL?

[Screenshot: Capture.PNG, AI Tool camera settings]

EDIT: I've rolled back to 5.3.7.5 and will report back if it's working again
 
Cloning a camera is just adding a new camera and choosing the "copy from existing camera" option. If the clone has identical streams to the original, it doesn't use extra bandwidth or resources. You would then modify the trigger section of the clone, identical to Rob's method, but choosing never record and only using it to take snapshots.

Ahhh, ty! I think it finally clicked. So, since it has to be the same resolution to be a clone, were people adding the low-res substream to the main camera, and THEN cloning in BI to record 24/7 and take snapshots? The answer to this question actually leads me to the next part of your follow-up post:

Deepstack uses less CPU time and processes a lot quicker with the smaller images compared to the time Blue Iris uses to resize the image.
Big time saving if you can send Deepstack the resized image and it works out okay for your situation

Would it then make sense to turn the substream of my camera (in the camera's own settings, e.g. Amcrest) into a 720p stream (it's SD right now) and enact the method above (but without 24/7 recording)? That way BI isn't converting anything, AI/DS isn't being fed a JPEG greater than 720p, and I can still record 4K on confirmed alerts?

In any event, thanks so much for everything. I'm still (very slowly) plowing through this thread; I'm at about 70/150, and I'd say the last 3 pages' worth of info is worth 10-15 pages in the 60s/70s...
 
Hello, this program has been doing well for me, but after some time I started getting these errors in the log file:

Camera has no mask, the object is OUTSIDE of the masked area.
System.Threading.Tasks.TaskCanceledException | A task was canceled. (code: -2146233029 )

This always occurs after checking the object is outside the masked area.
I've read a lot about this error but wasn't able to find a solution.

Is anyone else getting this error?
It's curious that I used this program for about 2-3 months without getting this error, not even once, but after some time I started getting it quite often.
Hopefully someone here can help me with this issue.

Thanks!
 
An awful lot has changed with AITool and Deepstack since the first pages.:)
I'm getting better at skipping over tech support posts to find the info nuggets!

Edit: OOO chris dodge just showed up, I'm at The Two Towers section of the saga!
 