[tool] [tutorial] Free AI Person Detection for Blue Iris

So the top one, which uses the [UK] Home profile, is the Master and is the one that takes JPEGs and sends them to AI Tool on ALL motion? And the bottom one, which uses the [Spain] Home profile, is the Clone and is the one that gets triggered by AI Tool confirmed events?

Along the same line: which logical camera do you mark as Clone Master on the General tab?

edit: what does the "Also re-triggers" button do?

Sorry, I should have shown them with the same profile, which is how they operate. My system is programmed with three profiles: 1. Home UK, 2. Home Spain, 3. Away from home. I have cameras installed both in the UK and Spain, all connected to one BI system using a LAN-to-LAN link. The surveillance system is automated using a small program I have written with Tasker for Android. It enables the system to automatically select a profile depending on my location. It is similar to BI geofencing, but allows me to support my two locations and three profiles.
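Purely as an illustration of the same idea in script form (not how Tasker does it internally), here is a rough Python sketch of switching the active profile remotely. It assumes BI's built-in web server is enabled; the address, credentials and profile numbers are placeholders to swap for your own.

import urllib.request

BI_SERVER = "http://192.168.1.50:81"   # placeholder Blue Iris web server address
USER, PW = "admin", "password"         # placeholder credentials

def set_profile(profile_number):
    # Blue Iris admin command to switch the active profile remotely
    url = f"{BI_SERVER}/admin?profile={profile_number}&user={USER}&pw={PW}"
    with urllib.request.urlopen(url, timeout=5) as resp:
        print(resp.read().decode())

# e.g. 1 = Home UK, 2 = Home Spain, 3 = Away from home
set_profile(3)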

In reference to choosing the Clone or the Master as your trigger, it is totally your choice; it makes no difference to the operation. If you have more than one camera, I suggest you pick one convention and stick to it to avoid confusion. Presently my Master is set to receive external commands and the Clone is the trigger, but it could be reversed. So long as you have one camera for generating the trigger and another to accept the external command, you are all set.

Re-triggers enable video to be captured as one file. For example, if AI Tool sends an external trigger, recording will stop 60 seconds later in my setup. If AI Tool continues to send external trigger events within that 60-second period, the timer is reset each time. For example, when my gardener is working in the yard I will often end up with a single video file an hour long covering the whole period he is working.
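To make the external trigger side concrete, this is roughly the kind of request AI Tool fires at BI for a confirmed detection; the address, credentials and camera short name below are placeholders. Each call that lands within the camera's break time resets the timer, which is why repeated confirmations merge into one clip.

import urllib.request

BI_SERVER = "http://192.168.1.50:81"   # placeholder Blue Iris address
USER, PW = "admin", "password"         # placeholder credentials

def external_trigger(camera_short_name):
    # BI admin trigger command; each call within the break time
    # resets the recording timer, so the clip keeps growing.
    url = f"{BI_SERVER}/admin?camera={camera_short_name}&trigger&user={USER}&pw={PW}"
    with urllib.request.urlopen(url, timeout=5) as resp:
        resp.read()

external_trigger("DriveClone")   # hypothetical short name of the clone camera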
 
Just out of curiosity, has anybody heard any rumors about "AITool"-type functionality eventually getting built directly into Blue Iris? I know BI has a service of its own, but surely the writing is on the wall regarding what the customers want (and Sentry isn't it), so I think the motivation is there; just wondering if anyone has heard anything.

Note: I'm not talking about the AI server, just the middleman service of passing photos and results back and forth.
BI already has that functionality today (sending images to Sentry and LPR). I assume the biggest blocker would be any current commercial agreements (if any) with Sentry.
From a financial point of view, though, it would be risky to pass images directly to DeepStack. They would need something far more reliable and out of beta.
Even with a robust AI engine, the support costs to manage user expectations would be insane. No offence, but it's not like BI has decent support (from Blue Iris themselves). I have logged many support calls over the years and never received a single response. Any issues/queries have been resolved via forums, or just worked around until fixed.
 
Thanks, I'm currently taking stills from the 640x480 substream, hence the relatively low image quality. That's already running at its highest bitrate and framerate so I'm not sure I can improve that, but I'll have a play with the exposure and contrast. The road comes from the left of shot and the car will do a right-left S-turn onto the drive, so it should give a reasonable side shot at some point, but maybe my trigger interval means it's missing the "money shot".

Resolution is fine, but increase the bit-rate (quality) of whatever stream you are sending to AI Tool; there is a lot of artifacting.
Experiment with exposure options such as HLC.
Experiment with contrast. Detecting a car at night 100% of the time from that angle will be tough.
 
For AI to work it needs to be able to clearly see and make out the object. With that picture being blinded by headlights, the car itself is dark, so the AI isn't seeing the car. As someone else mentioned, some HLC could help with that. Bumping up brightness could as well. Does the camera have built-in AI? That would probably give a higher probability of detection, since it analyzes video rather than a single picture.
 
Frame rate doesn't need to be high.
 
Thanks, there's no AI on the cameras unfortunately, mostly cheap RLC-410 Reolinks at the moment. Obviously I understand it's tricky because of the camera being blinded by headlights, but I naively thought it might even recognise that pattern of two lights and a reflective thing in the middle as a car without needing to see the outline :D

Not a massive issue. I've not had time to play with settings and see the result, but I've now got a second camera in that vicinity from another angle, so I might set them up to trigger as a group so there's twice the chance of catching that type of incident.

Okay, those cameras are actually not too bad. With that much artifacting I bet you are using RTSP; move to RTMP (for some reason with Reolinks RTMP is more stable and has less artifacting).
I believe the format is as follows:
rtmp://192.168.1.100/bcs/channel0_main.bcs?channel=0&stream=0&user=admin&password=password
rtmp://192.168.1.100/bcs/channel0_sub.bcs?channel=0&stream=0&user=admin&password=password

On some models the sub-stream is
rtmp://192.168.1.100/bcs/channel0_sub.bcs?channel=0&stream=1&user=admin&password=password

Link to the doco
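If you want to sanity-check one of those RTMP URLs outside of BI, here is a quick sketch, assuming OpenCV is installed and you substitute your own IP and credentials for the placeholders.

import cv2

URL = "rtmp://192.168.1.100/bcs/channel0_sub.bcs?channel=0&stream=0&user=admin&password=password"

cap = cv2.VideoCapture(URL)            # open the Reolink RTMP stream
ok, frame = cap.read()                 # grab a single frame
if ok:
    print(f"Got a frame: {frame.shape[1]}x{frame.shape[0]}")
    cv2.imwrite("test_frame.jpg", frame)   # save a still, like the JPEGs BI hands to AI Tool
else:
    print("Could not read the stream - check the URL, credentials, and that RTMP is enabled")
cap.release()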
 
I am running version 2.0.721. I would like to set different confidence levels for each object being detected. At night a dog will occasionally get detected as a human with a confidence level of 42%. I want the human threshold set to, say, 60% and the dog at 40%. Is the workaround setting up two instances of the camera in AI Tool? If I set up two instances, will the image be processed more than once by DeepStack? I know I could accomplish this with a second clone in BI, but I want to avoid the additional load on DeepStack and BI.
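For what it's worth, DeepStack returns every detected object with its own confidence from a single call, so per-label thresholds can be applied to one response rather than running the image twice. A rough sketch, assuming the standard /v1/vision/detection endpoint, the requests library, and a placeholder host and thresholds:

import requests

THRESHOLDS = {"person": 0.60, "dog": 0.40}     # per-label minimum confidence

def qualifying_objects(image_path, url="http://192.168.1.60:5000/v1/vision/detection"):
    with open(image_path, "rb") as f:
        resp = requests.post(url, files={"image": f}, timeout=30)
    hits = []
    for pred in resp.json().get("predictions", []):
        label, conf = pred["label"], pred["confidence"]
        if conf >= THRESHOLDS.get(label, 1.0):   # one image, one DeepStack call
            hits.append((label, conf))
    return hits

print(qualifying_objects("snapshot.jpg"))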
 

Thank you so much for that color on your system. Sounds very thorough and interesting. I did some work in Tasker back when I had the first Moto X, but abandoned it when it was misreading my battery state and one of my low-battery profiles was paradoxically crushing my battery whenever it was close to dead; it was one of the most frustrating troubleshooting experiences of my life. But still, every time I move over to a new phone (I think this Pixel 5 is the 7th one?) I bring over my Tasker with all its disabled profiles, so I guess I can't quite "quit it". Maybe it's time to dive back in :)

For some reason, when I was in the 60s or so of the thread, I got the impression from multiple posts with screenshots that it was VERY important to get the Clone Master checkbox checked on the right camera, but I could never figure out which one it was. Glad to hear it's not that important. I'm curious what it does then, but I'll just trust you and move forward without clarification :)

Finally, thanks for the concise and extremely clear explanation of re-triggers, and thanks so much for all your help. I'm usually a pretty good googler, but I've found BI kind of tough to learn. If I may ask, how did you learn? Just futzing around, or is there some kind of concise explainer out there? I attempted the manual but found it very unwieldy. Maybe some day...
 
One issue I am having is that when my car is parked in the driveway the alert goes off over and over. Thoughts?
I suspect that your trigger sensitivity may be set too high. Try rolling it back a bit to see if it improves. I personally have no experience of using it, but an automated mask may resolve the issue!
 
Dynamic masking doesn't work on the latest release. Compile the latest debug release yourself, or you can find links to compiled versions in earlier posts.

Also, for dynamic masking to work you need to be sending regular images within the history period.
 
@seth-feinberg
The latest version of Tasker has improved considerably over earlier versions, hang in there ;)

In respect to Clones, just to finally push home how inconsequential the choice of which camera is the clone is:
You don't even need to have Clone selected! The ONLY reason for its existence is to avoid processing the same camera stream twice, thereby saving CPU overhead.

As for experience, a lot of reading, trial and error + some great input from members of this forum.
 
AITool Version 2.0.703.7716

I have attached a link below to the latest version of AI Tool for anyone who would like to test the new version but is unable to compile it themselves. Please be aware this version is in BETA; it is NOT released!

Please discuss / post issues here

At this time I recommend you do NOT delete the default camera. I have found it deletes the next camera in the list and reinstates the default.

Download Link
Not working. Can you please provide a working link? Thanks
 

I will give Tasker another go, you've convinced me; now just to find a use case :cool:
I would assume (and think I'm seeing) that regardless of whether you have "Clone Master" checked, the CPU overhead is saved (i.e. it's not acting as if it's running 2 cameras instead of 1 and a clone), so I guess Clone Master does nothing? So funny, because in the middle of this gargantuan thread people were VERY insistent that you mark the CORRECT camera as Clone Master... haha. Well, in any event, I'm moving on :)

Dang re: the lot of reading, I was hoping you had a 12-minute YouTube clip that would teach me everything I know :)

FWIW: I found this and I'll be doing what it suggests ::shrug:: Create a clone camera
 
If you do not select clone camera, then it will be pulling the stream from the camera for each "clone" and thus contributing to CPU overhead (unless your machine is new enough, or you have few enough cameras, for it not to make a difference). Notice how my clone cameras do not show a bitrate. Pull up the Blue Iris status window and the camera tab, and yours will show bitrates for all the cameras, including the clones. Or maybe yours doesn't, but it will at some point after an update or a reboot, and it will pull the stream twice...

[Screenshot: Blue Iris status window, camera tab, showing the clone cameras with no bitrate]
 
@seth-feinberg

Here is a Tasker challenge for you to contemplate. Make a task to change your profile when you leave the house and change it back on your return ;)

You do actually need one of the cameras with the same IP address to be marked 'Clone' to save resources.

"There are four parts of setting up a clone camera for backup" This does not appear to be a standard Clone configuration!
 
I am running AI Tool version 2.0.721. Is there a way to receive multiple MQTT topics for the same image? An example would be a person triggering motion and then a car driving into the frame. I would like to be able to receive an MQTT topic and payload for each qualifying object identified.
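Outside of AI Tool itself, this is easy to prototype: one MQTT message per qualifying object from the same detection result. A rough sketch using paho-mqtt, with a placeholder broker address and a hypothetical topic layout:

import json
from paho.mqtt import publish

detections = [                       # e.g. parsed from one DeepStack response
    {"label": "person", "confidence": 0.81},
    {"label": "car", "confidence": 0.74},
]

# Build one (topic, payload, qos, retain) tuple per qualifying object
msgs = [(f"aitool/driveway/{d['label']}", json.dumps(d), 1, False) for d in detections]
publish.multiple(msgs, hostname="192.168.1.70")   # placeholder MQTT broker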