[tool] [tutorial] Free AI Person Detection for Blue Iris

Yeah, I have 24x7 recording of the substream. Might have to shell out some cash and buy another camera though, just have to convince the wifey.
This is obviously a personal install for you; unless you have an immediate need, you can improve your system over time and find out what works best for you.
If you wait a year for the second camera, there will likely be better models. That's the good thing about Blue Iris: the cameras don't need to be the same, they can be from completely different vendors.
Andy from EmpireTech can be found on these forums; I have purchased a few cameras from him at good prices.

I was like you, keeping the sub-stream 24x7, then I moved to keeping the main stream 24x7 based on the following:
  • Recommendations from these forums
  • BI supporting dual streams; it basically runs on the smell of an oily rag now
  • Cheap storage (found a decent deal on 6 TB drives), and drives just seem to be getting cheaper
  • H.265 compression with CBR (although it can come with a performance and slight quality hit with most vendors). Caveat: where I use an older CPU that doesn't support H.265 in hardware, I use H.264 with VBR instead, with similar storage usage to H.265 CBR. H.265 VBR always artifacts at night with motion for me.
  • Stopped chasing megapixels and maximum bit rates (still have some 4Ks, but went more for the 2K low-light models)
  • Adopted DeepQuest AI to flag key events
  • Disappointed with the quality of the sub-stream; the main stream is so much better. If an event happens and you only have sub-stream footage, you will be disappointed.
  • Set more realistic retention periods. I think when I started I was keeping all footage forever on the cloud. It wasn't a bandwidth issue; it's just that for home I have no real interest in footage older than a month, and I came to the conclusion it was very wasteful. For me the cams are a visual deterrent, a hobby, and peace of mind to monitor pets etc. when away, like being on holiday and checking in to view the chickens in the back yard, or checking a parcel wasn't left on the doormat.
  • Reduced the quality of the sub-streams to just enough to detect motion. The snapshots sent to DeepStack are from the main stream (but I have BI downsize them first), which seems more efficient.

Most importantly, enjoy and continue to learn and share. There are many people here that know a heck of a lot more than me, and I try to learn from them. I also experiment after firmware/software updates as things change.

I must say that while I was always happy with Blue Iris motion detection and the app (the key reason I switched to Blue Iris), this thread about Free AI Person Detection has been an absolute game changer.
I have Telegram notifications enabled when I am away and the detection rate has been astoundingly good. We have had some suspicious activity, and the time spent reviewing flagged events vs. triggered events has made a huge difference in terms of quality of life and time saved.
 
The latest compiled but not 'officially released' version looks to be v2.0.526, found in an open issue on VorlonCD's GitHub here: [Feature Request] Add Pushover Notifications · Issue #126 · VorlonCD/bi-aidetection
With this version I am getting an error.
I tried versions 2.0.522.7681 and 2.0.526; when I open the new settings tab and set up the URL I get this error:

Deepstack returned 'Invalid value for min_confidence' - http status code 'BadRequest' (400) in 38ms: Bad Request.

... AI URL for 'DeepStack' failed '6' times. Disabling: ''


My setup uses the IP of a Jetson Nano.
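
For reference, DeepStack's detection endpoint takes min_confidence as a decimal between 0 and 1, and the 400 above is DeepStack rejecting whatever value it received. A minimal sketch of a valid request (the Jetson's address and port below are placeholders) looks like this:

Code:
# Minimal sketch of a valid DeepStack detection request, assuming the Jetson
# Nano runs DeepStack at 192.168.1.50:80 (placeholder address and port).
import requests

DEEPSTACK_URL = "http://192.168.1.50:80/v1/vision/detection"

with open("snapshot.jpg", "rb") as image:
    response = requests.post(
        DEEPSTACK_URL,
        files={"image": image},
        # min_confidence must be a decimal between 0 and 1, e.g. 0.4; DeepStack
        # answers 400 "Invalid value for min_confidence" if it can't parse it.
        data={"min_confidence": "0.4"},
        timeout=30,
    )

for prediction in response.json().get("predictions", []):
    print(prediction["label"], prediction["confidence"])

If a request like this works from the command line, the problem is more likely in how the new settings tab is passing the value than in the Jetson itself.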
 
  • Reduced the quality of the sub-streams to just enough to detect motion. The snapshots sent to DeepStack are from the main stream (but I have BI downsize them first), which seems more efficient.

Spammenotinoz,

I am interested in your configuration of BI to capture a JPEG from the main stream. My current configuration uses a cloned camera for motion detection, with the main and sub-streams in their standard corresponding locations. The clone triggers on motion and generates a JPEG from the sub-stream. The clone master records 24x7 and is triggered by an external URL from my home automation system.

The only way I have been able to get a JPEG from the main stream is to set up a second camera using only the main stream. My CPU usage has increased using the main stream on this camera for motion detection. One other question: what have you found to be an acceptable image resolution that provides a good balance between DeepStack processing speed and object identification accuracy?
 
Thank you for all the recommendations. I don't have any 4K cameras. After reading through this forum I went with 4x IP5M-T1179EW-28MM and 2x IPC-T5442TM-AS 2.8mm (the driveway is one of them).
My current BI machine is an i5 10th-gen with 16GB of RAM, a 256GB NVMe drive for the Windows 10 OS and a 2TB Seagate SkyHawk for storage, which then offloads old recordings to my Unraid NAS (plenty of storage there).
I'm going to play with main stream 24x7 recording. My cameras are currently set as below with H264H, since H265 was making me lose frames on the clips. Any suggestions for these settings?

cap2.PNG

Question: is there even a point to having the sub-stream active if 24x7 recording and AI Tool snapshots are taken from the main stream?
Never mind, I see that BI uses the sub-stream for motion detection.
 
I'm used to all this, and have been pushing through this for a while without finding an answer. For some reason, every time there's a motion event it takes a snapshot of the event, and then the AI Tool says it is unable to open the image because it is already being used by another program.
 
Spammenotinoz,

I am interested in your configuration of BI to capture a JPEG from the main stream. My current configuration uses a cloned camera for motion detection, with the main and sub-streams in their standard corresponding locations. The clone triggers on motion and generates a JPEG from the sub-stream. The clone master records 24x7 and is triggered by an external URL from my home automation system.

The only way I have been able to get a JPEG from the main stream is to set up a second camera using only the main stream. My CPU usage has increased using the main stream on this camera for motion detection. One other question: what have you found to be an acceptable image resolution that provides a good balance between DeepStack processing speed and object identification accuracy?
Sorry, I can only provide a quick response now, but three features were only recently added that relate to your use case:
1. Dual-stream support (there are guides here with some critical settings, such as aligning frame rates between the main and sub-streams)
2. They later added the ability to "scale/resize the JPEG" within BI
3. There was an update so that in a dual-stream setup the JPEGs now come automatically from the main stream (originally they came from the sub-stream)
I have used the clone setup before, and it does work quite well, so nothing against it; it's just more complex.

For your other question, I am still learning, but I am using 1272x720. I am using the GPU version (6 instances) and have no resource issues, so there's no point in going lower. I was actually thinking of going back to 1080p to increase the quality of my Telegram alerts.
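
If anyone wants to find their own balance point, a rough sketch like the one below (the source snapshot, DeepStack address and widths are all placeholders) times the detection call at a few snapshot sizes:

Code:
# Rough sketch for comparing DeepStack latency at different snapshot widths.
# The source image, DeepStack address and widths are placeholders.
import io
import time

import requests
from PIL import Image

DEEPSTACK_URL = "http://127.0.0.1:80/v1/vision/detection"
SOURCE = "mainstream_snapshot.jpg"

for width in (1920, 1280, 720):
    img = Image.open(SOURCE)
    height = int(img.height * width / img.width)
    buffer = io.BytesIO()
    img.resize((width, height)).save(buffer, format="JPEG", quality=80)
    buffer.seek(0)

    start = time.monotonic()
    resp = requests.post(
        DEEPSTACK_URL,
        files={"image": buffer},
        data={"min_confidence": "0.4"},
        timeout=60,
    )
    elapsed_ms = (time.monotonic() - start) * 1000
    people = [p for p in resp.json().get("predictions", [])
              if p["label"] == "person"]
    print(f"{width}px wide: {elapsed_ms:.0f} ms, {len(people)} person(s) detected")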
 
AI Tool ---> Cameras ---> Actions/Settings ---> 'cooldown time'

Does the camera cooldown setting only come into effect when there is a positive result and the URL is called? Or is it on any analyzed JPG?

I would like to set it to the minimum length of my triggered HD recordings, to help eliminate extra analyses.
 
I believe it goes into timeout after a positive detection.
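
In other words, the behaviour as described would look roughly like this (a sketch of that reading, not AI Tool's actual code; the cooldown value is a placeholder):

Code:
# Sketch of the described behaviour, not AI Tool's actual code: the cooldown
# clock starts only when an image yields a relevant detection, and further
# trigger calls are suppressed while it is running.
import time

COOLDOWN_SECONDS = 30                    # placeholder value
last_positive = float("-inf")

def handle_image(image_path, detect, trigger):
    """detect() stands in for the DeepStack call, trigger() for the BI trigger URL."""
    global last_positive
    if not detect(image_path):
        return "no relevant object"
    if time.monotonic() - last_positive < COOLDOWN_SECONDS:
        return "relevant object, but still cooling down"
    last_positive = time.monotonic()
    trigger(image_path)
    return "triggered BI"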
 
This writeup has a way to get the exact JPG (that the AI triggered on). He customized the AI app to give additional settings.

Adding &jpeg=[ImagePathEscaped] to the trigger URL copies the AI image to the BI Alerts folder. BI then uses this image for its alert email action.

I'm glad I'm doing this as there's no way I would be staying up at 1am adjusting confidence levels after determining that my umbrella is NOT a cat!

FrontDoor.20210113_190813.112978.19.jpg
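
To make the mechanics concrete, this is roughly what the trigger URL with &jpeg=[ImagePathEscaped] appended ends up looking like when AI Tool calls it; the BI address, credentials, camera short name and image path below are made up, and AI Tool substitutes the escaped path itself:

Code:
# Roughly what the trigger URL with &jpeg=[ImagePathEscaped] expands to.
# BI address, port, user/pw, camera short name and image path are placeholders.
import urllib.parse

import requests

BI_HOST = "http://192.168.1.10:81"
image_path = r"D:\aiinput\FrontDoor.20210113_190813.112978.19.jpg"  # example path

trigger_url = (
    f"{BI_HOST}/admin?trigger&camera=FrontDoor&user=aiuser&pw=secret"
    # [ImagePathEscaped] becomes the URL-encoded path of the analysed image,
    # which BI copies into its Alerts folder and uses for the alert e-mail.
    f"&jpeg={urllib.parse.quote(image_path)}"
)

requests.get(trigger_url, timeout=10)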
 
Anyone know what these errors mean when compiling?


Code:
1>------ Build started: Project: UI, Configuration: Debug Any CPU ------
1>CSC : error CS1617: Invalid option '8.0' for /langversion. Use '/langversion:?' to list supported values.
========== Build: 0 succeeded, 1 failed, 0 up-to-date, 0 skipped ==========
 
Adding &jpeg=[ImagePathEscaped] to the trigger URL copies the AI image to the BI Alerts folder. BI then uses this image for its alert email action.

I'm glad I'm doing this as there's no way I would be staying up at 1am adjusting confidence levels after determining that my umbrella is NOT a cat!

View attachment 79695
This doesn't work 100% of the time (at least with SMS notifications). I was playing around with it, and the same camera would sometimes use the snapshot from AI Tool and other times the one from BI.
I'm using BI 5.3.8.3.

Screen Shot 2021-01-15 at 9.04.04 PM.png
 
Adding &jpeg=[ImagePathEscaped] to the trigger URL copies the AI image to the BI Alerts folder. BI then uses this image for its alert email action.

I'm glad I'm doing this as there's no way I would be staying up at 1am adjusting confidence levels after determining that my umbrella is NOT a cat!

Wow, this small change has made the Alerts so much more relevant. Thank you!!
 
Hi there,
I just configured AI integration with BI and it works great, especially with Telegram notifications.

The only issue I don't know how to solve:
I have one camera for which I don't want Telegram notifications while I'm at home. How can I configure it to notify me by Telegram only while I'm not at home?
There are profiles in BI which I can use for that, so I can configure different alerts (push/email), but there is nothing similar for Telegram notifications from AI Tool.

Any suggestions?

I tried to use this fork:
but I can't use alert images from AI Tool in that case.

Thanks!
 
You could set a profile in Blue Iris so that when you are home it doesn't capture a JPEG or alert pic on that particular camera; that way AI Tool won't process anything.
 
Actually, this was working fine for me with the original, but it doesn't work for me with the Vorlon version. (Will re-test with the beta.)
When it was working, I had a different recording path in BI for when I was away, and in AI Tool I created a duplicate camera (different name) set to look at the AWAY path with the Telegram option enabled.
Do note that for AI Tool to still trigger BI, you can't use the [CAM] variable; you need to manually put the correct camera name in the URL (see the sketch below).

Anyway, what is happening in the Vorlon version is that it only processes against the first path; it seems to be completely confused and never analyses a single image in the AWAY path.
Will re-test with the beta and advise.
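
As a concrete sketch of that working setup (the camera short name, folder paths and BI address are made up), the two AI Tool camera entries watch different folders but both trigger the same BI camera by its hard-coded short name:

Code:
# Sketch of the home/away workaround described above. The camera short name
# "FrontDoor", the folder paths and the BI address are made-up placeholders.
aitool_cameras = {
    "FrontDoor_Home": {
        "input_path": r"D:\aiinput\home",
        # [CAM] cannot be used here, so the BI short name is hard-coded:
        "trigger_url": "http://192.168.1.10:81/admin?trigger&camera=FrontDoor&user=aiuser&pw=secret",
        "telegram_enabled": False,
    },
    "FrontDoor_Away": {
        "input_path": r"D:\aiinput\away",
        "trigger_url": "http://192.168.1.10:81/admin?trigger&camera=FrontDoor&user=aiuser&pw=secret",
        "telegram_enabled": True,
    },
}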
 
I re-tested with the Vorlon beta version, and it still didn't work. (AI Tool detects the images in the away path, but always processes them against the original CAM name.)

"This isn't recommended and will lead to other issues", but I even tried changing the Recording Name in BI settings to (ie: Adding an "A" for away and "H" for home in the Name and updating the Paths in AI Tool to commence with
&CAM.A.%Y%m%d_%H%M%S%t
&CAM.H.%Y%m%d_%H%M%S%t
BI functioned correctly,

After updating the paths in AI Tool and restarting, alas, AI Tool detects the images in the new path but again processes them against the old camera (i.e. when away, the AI Tool logs show it picks up the correct AWAY file names, but processes them against the ORIGINAL cam name).

Cloning cameras with completely different names would also work. It's probably due to the way the Vorlon version stores the config in a database: it can't have different cameras starting with the same name, whereas this isn't a problem for Gentle Pumpkin's version.

I even tried deleting the original CAM. I am thinking that perhaps if I blow away the entire AI Tool setup and add the cams with the .A and .H suffixes, that may work, as creating them without these (using defaults) may have done something funny in the database. Note: the AI Tool camera name and BI camera names within AI Tool were tested with made-up values to be sure there wasn't a conflict... :)
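
If the tool matches incoming images to camera entries by filename prefix, which is what the behaviour above suggests (an assumption on my part, not something confirmed in the code), the symptom is easy to reproduce:

Code:
# Hypothetical illustration of the suspected matching problem: if images are
# matched to cameras by filename prefix, "&CAM.A...." also matches a camera
# called "&CAM", and the first (original) entry wins.
cameras = ["&CAM", "&CAM.A", "&CAM.H"]        # order as stored in the config

def match_camera(filename):
    for name in cameras:                      # first prefix match wins
        if filename.startswith(name):
            return name
    return None

print(match_camera("&CAM.A.20210115_210404.jpg"))   # -> "&CAM", not "&CAM.A"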
 
This doesn't work 100% of the time (at least with SMS notifications). I was playing around with it, and the same camera would sometimes use the snapshot from AI Tool and other times the one from BI.
I'm using BI 5.3.8.3.

View attachment 79721

Yep, I noticed the same thing. Though, when I check the BI Alerts folder, all the images there are AI Tool images with overlays. It seems that BI randomly fails to grab the Alert images.