[tool] [tutorial] Free AI Person Detection for Blue Iris

JNDATHP

Pulling my weight
Joined
Oct 16, 2018
Messages
338
Reaction score
249
Location
USA

Neil Sidhu

Young grasshopper
Joined
Mar 9, 2019
Messages
62
Reaction score
3
Location
Toronto
Today I tested this tool some more and masked out the street in front of my door. The issue is, if I mask the street in the back, it also masks the person standing at the door, as the street is right behind them... not sure how others are doing this.

Also, if the shadow of a person walking on the sidewalk in bright sun sets off the trigger in BI, the AI will then look at the picture, see a person, and alert me. To me this is a false alert.
 

nstig8

n3wb
Joined
Jan 21, 2015
Messages
23
Reaction score
11
Today I tested this tool some more and masked out the street in front of my door. The issue is, if I mask the street in the back, it also masks the person standing at the door, as the street is right behind them... not sure how others are doing this.

Also, if the shadow of a person walking on the sidewalk in bright sun sets off the trigger in BI, the AI will then look at the picture, see a person, and alert me. To me this is a false alert.
The only thing I can think of to improve your situation is to set the mask on the Blue Iris camera first, so it only detects motion in the very small area that is your porch. When people step onto the porch, it will generate the picture that the AI then processes. Then unmask enough of the DeepStack image so it can identify them as a person. It just seems like your camera angle is very poor and captures too much false-positive area, given all the requirements you have about what you don't want to get alerts for.
 

Neil Sidhu

Young grasshopper
Joined
Mar 9, 2019
Messages
62
Reaction score
3
Location
Toronto
The only thing I can think of to improve your situation is to set the mask on the Blue Iris camera first, so it only detects motion in the very small area that is your porch. When people step onto the porch, it will generate the picture that the AI then processes. Then unmask enough of the DeepStack image so it can identify them as a person. It just seems like your camera angle is very poor and captures too much false-positive area, given all the requirements you have about what you don't want to get alerts for.
This is the cam I am working with. If I mask out the area in the back, the AI does not see the person at the door, as the back half of the picture is masked.
 

Attachments

nstig8

n3wb
Joined
Jan 21, 2015
Messages
23
Reaction score
11
This is the cam I am working with. If I mask out the area in the back, the AI does not see the person at the door, as the back half of the picture is masked.
Mask the area in the back in Blue Iris, so it only detects motion on your property. DeepStack has a separate mask for where it looks for people (or whatever you set); make that mask cover a larger area so it can identify the person at the door. This way you only analyze images when there is motion on your property, but you can accurately tell whether there is a person there or not.
 

Neil Sidhu

Young grasshopper
Joined
Mar 9, 2019
Messages
62
Reaction score
3
Location
Toronto
Mask the area in the back in Blue Iris, so it only detects motion on your property. DeepStack has a separate mask for where it looks for people (or whatever you set); make that mask cover a larger area so it can identify the person at the door. This way you only analyze images when there is motion on your property, but you can accurately tell whether there is a person there or not.
Sorry, the thing with DeepStack is: if I don't mask a big enough area in the back for the AI, and there is a person or kid playing on the street, I will get all those alerts whenever BI sees motion close to the door.
 

Neil Sidhu

Young grasshopper
Joined
Mar 9, 2019
Messages
62
Reaction score
3
Location
Toronto
Sorry, the thing with DeepStack is: if I don't mask a big enough area in the back for the AI, and there is a person or kid playing on the street, I will get all those alerts whenever BI sees motion close to the door.
This is masked... it cuts so much off. There has to be a better way to do all this. If I unmask the area in the back, the road will be visible, and there is a lot of activity there that the AI will detect. I don't want the AI to see people on the sidewalk.
 

Attachments

sdaly

n3wb
Joined
Feb 11, 2019
Messages
1
Reaction score
0
Location
Dublin
To solve that problem, I wonder whether AI Tool could be updated to add something similar to the "Confidence limit": a "Size limit" (min & max, maybe a per-object value). Since we already have the coordinates of each detected object, it should be easy to compute. It would then only give a positive result if the size is > min and < max, so you can make sure it doesn't give a positive result for "small" people in the background.

And if you wanted to go another level, you could have a new value (maybe called Zone, like in Blue Iris). It could start as something simple, like a text field with the coordinates you want to monitor, and only give a positive result if the object is located in it, or maybe only partly in it, or require a percentage of the object to be inside that area. A future bonus feature would be a UI drag box to create the coordinates to monitor using an existing image.

If we were to add these two new features, then maybe turn the Object checkboxes into a list where you add each monitored Object (like how adding cameras works) and set the Confidence limit, Size limit, and maybe the Zone coordinates per object. I'd be OK with Zone being per camera and not per object, but ideally it could be both, falling back to the camera's Zone value if it's not specified on the object. Then we could say: if there is a car spotted here, or a person spotted there, give a positive result. We could also monitor for two car objects in different areas, of different sizes, with different confidence limits, so having it as a list makes it more flexible.
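For what it's worth, the size-limit and zone checks described above could be sketched roughly like this. AI Tool itself is C#; this is an illustrative Python sketch, and all the names (`Detection`, `ObjectRule`, `is_relevant`) are invented for the example, not the tool's actual code:

```python
# Illustrative sketch only -- these types and names are invented,
# not part of AI Tool. A detection passes only if its label,
# confidence, bounding-box size, and (optionally) zone all qualify.
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class Detection:
    label: str
    confidence: float  # percentage, 0-100, DeepStack-style
    x_min: int
    y_min: int
    x_max: int
    y_max: int

@dataclass
class ObjectRule:
    label: str
    min_confidence: float
    min_size: int  # min/max bounding-box area in pixels
    max_size: int
    zone: Optional[Tuple[int, int, int, int]] = None  # None = anywhere

def is_relevant(det: Detection, rule: ObjectRule) -> bool:
    if det.label != rule.label or det.confidence < rule.min_confidence:
        return False
    area = (det.x_max - det.x_min) * (det.y_max - det.y_min)
    if not (rule.min_size < area < rule.max_size):
        return False  # drops "small" people in the background
    if rule.zone is not None:
        zx0, zy0, zx1, zy1 = rule.zone
        cx = (det.x_min + det.x_max) / 2  # require the box centre
        cy = (det.y_min + det.y_max) / 2  # to fall inside the zone
        if not (zx0 <= cx <= zx1 and zy0 <= cy <= zy1):
            return False
    return True
```

Keeping the rules as a list, one per monitored object, would then naturally support "car here, person there" combinations.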

And since my brain is in overdrive thinking about making this awesome tool even better, we could add a tag field to each Object and send the tags in the trigger URLs as a parameter (e.g. http://myip?foo=bar&tags={tags}), and maybe in the Telegram alerts. Then I could do something in Home Assistant based on the exact type of alert.
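The {tags} substitution suggested here is straightforward; a minimal sketch (the `build_trigger_url` helper is hypothetical, in Python for illustration):

```python
from urllib.parse import quote

def build_trigger_url(template: str, tags: list) -> str:
    """Replace a {tags} placeholder in a trigger-URL template with a
    comma-separated, URL-encoded tag list (hypothetical helper)."""
    return template.replace("{tags}", quote(",".join(tags)))

url = build_trigger_url("http://myip?foo=bar&tags={tags}",
                        ["person", "frontdoor"])
# url is "http://myip?foo=bar&tags=person%2Cfrontdoor"
```

The receiving end (e.g. Home Assistant) would then split the parameter on commas to branch on the alert type.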

Cooldown, Trigger URLs & Send alert images to Telegram could also be per-object values, falling back to the parent camera's values, but we need to be careful not to spam the actions, so this would be lower priority (for me). Still, having a per-object opt-in to Telegram alerts would be nice: maybe I want to call the trigger URL to tell Blue Iris to record for any of my Objects, but only want Telegram alerts for a person object.

BTW, @GentlePumpkin, I am really loving the tool. It has helped a lot with reducing my false positives, so a big thanks for that. I might jump in and help with the code (C# dev myself) if I get the free time.
 

DuHads

n3wb
Joined
May 8, 2020
Messages
2
Reaction score
0
Location
Saint Louis
I’m guessing that AVX support isn’t passed through VirtualBox to the VM. Have you tried running DeepStack with the noavx tag?


Tried just about every combination I could think of, unfortunately.
 
Joined
Aug 6, 2018
Messages
4
Reaction score
4
Location
Louisiana
I know this is a long shot, but is there any chance I can get AI Detection for Blue Iris running under Linux? I'm a weirdo and don't have a Windows machine. I've been running BI in Docker with Linux for a while now and would like to give this a go.
 

Tanaban

n3wb
Joined
Jul 7, 2017
Messages
13
Reaction score
7
Location
Texas
DeepStack should work great with its Linux build, but I think AI Tool is Windows-only. I'd try looking into Wine, or, if you have to, a lightweight Windows virtual machine under Linux to run just AI Tool.
 

Neil Sidhu

Young grasshopper
Joined
Mar 9, 2019
Messages
62
Reaction score
3
Location
Toronto
Another example of how this does not work too well... see attached. I have almost the whole area masked; there was movement (a light change), BI triggered motion, and the AI looked at the image, saw my car, which is PARKED, and sent me an alert. That was a false alert: the car was already parked there, yet it alerted based on it.
 

Attachments

morten67

n3wb
Joined
Jan 15, 2020
Messages
13
Reaction score
1
I was never able to get DeepStack to run on the Windows VM, so I just used the HASS-deepstack docker. I also installed the coral-pi-rest server and am able to have the Coral USB Accelerator do the work now instead of sending the image off to DeepStack (my internet is very slow). One thing I'm noticing, though, is that the call to trigger the camera is taking a long time. Any ideas how to speed it up? As you can see, it is taking a full two seconds to trigger. Example below.

[20.05.2020, 15:25:11.506]: (4/6) Checking if detected object is relevant and within confidence limits:
[20.05.2020, 15:25:11.514]: truck (91.02%):
[20.05.2020, 15:25:11.639]: (5/6) Performing alert actions:
[20.05.2020, 15:25:11.649]: trigger url:http://localhost:81/admin?trigger&camera=garageHD&user=------&pw=-------
[20.05.2020, 15:25:13.651]: -> Trigger URL called.
[20.05.2020, 15:25:13.662]: (6/6) SUCCESS.
Hello. If I understand your post correctly, you have changed AI Tool to use (send the image to) the Coral USB Accelerator instead of DeepStack? If that's the case, could you please describe how you did it? Thanks in advance.
 

GentlePumpkin

IPCT Contributor
Joined
Sep 4, 2017
Messages
105
Reaction score
162
Another example of how this does not work too well... see attached. I have almost the whole area masked; there was movement (a light change), BI triggered motion, and the AI looked at the image, saw my car, which is PARKED, and sent me an alert. That was a false alert: the car was already parked there, yet it alerted based on it.
AI Tool checks 9 points of every object to determine whether it's in a masked area or not. If >= 5 points are not masked, it will alert you.
Considering your posted image, I really can't understand why you configured AI Tool to detect cars anyway. If the part that isn't masked ever contained a whole car, you certainly would not need the Blue Iris alert to notice that.
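That 9-point rule can be illustrated with a small sketch. A 3x3 grid over the object's bounding box is assumed here; AI Tool's actual point placement may differ, and `should_alert` is an invented name:

```python
def should_alert(mask, x_min, y_min, x_max, y_max):
    """Illustrative sketch of the 9-point mask check.
    mask[y][x] is True where the pixel is masked out.
    Sample an (assumed) 3x3 grid over the bounding box and alert
    only when at least 5 of the 9 points land on unmasked pixels."""
    xs = (x_min, (x_min + x_max) // 2, x_max - 1)
    ys = (y_min, (y_min + y_max) // 2, y_max - 1)
    unmasked = sum(not mask[y][x] for y in ys for x in xs)
    return unmasked >= 5
```

This is why an object that is mostly inside the masked region is suppressed, while one that merely touches the mask edge still alerts.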

I know this is a long shot, but is there any chance I can get AI Detection for Blue Iris running under Linux? I'm a weirdo and don't have a Windows machine. I've been running BI in Docker with Linux for a while now and would like to give this a go.
I regret that I did not use Qt or another multi-platform software development environment, not least because supporting other platforms wouldn't be bad. But BI naturally requires Windows anyway, so it should not be a problem to run AI Tool in the same VM as Blue Iris.

This is masked.. cuts so much off.. has to be a better way to this all. If i unmask the area in the back, then the road will be seen and there is alot of acitivity there in which AI will detect I dont want AI to see people on the sidewalk.
I don't really get the problem: you click the "Show mask" toggle button above the window again and the mask is gone.


To solve that problem, I wonder whether AI Tool could be updated to add something similar to the "Confidence limit": a "Size limit" (min & max, maybe a per-object value). Since we already have the coordinates of each detected object, it should be easy to compute. It would then only give a positive result if the size is > min and < max, so you can make sure it doesn't give a positive result for "small" people in the background.

And if you wanted to go another level, you could have a new value (maybe called Zone, like in Blue Iris). It could start as something simple, like a text field with the coordinates you want to monitor, and only give a positive result if the object is located in it, or maybe only partly in it, or require a percentage of the object to be inside that area. A future bonus feature would be a UI drag box to create the coordinates to monitor using an existing image.

If we were to add these two new features, then maybe turn the Object checkboxes into a list where you add each monitored Object (like how adding cameras works) and set the Confidence limit, Size limit, and maybe the Zone coordinates per object. I'd be OK with Zone being per camera and not per object, but ideally it could be both, falling back to the camera's Zone value if it's not specified on the object. Then we could say: if there is a car spotted here, or a person spotted there, give a positive result. We could also monitor for two car objects in different areas, of different sizes, with different confidence limits, so having it as a list makes it more flexible.

And since my brain is in overdrive thinking about making this awesome tool even better, we could add a tag field to each Object and send the tags in the trigger URLs as a parameter (e.g. http://myip?foo=bar&tags={tags}), and maybe in the Telegram alerts. Then I could do something in Home Assistant based on the exact type of alert.

Cooldown, Trigger URLs & Send alert images to Telegram could also be per-object values, falling back to the parent camera's values, but we need to be careful not to spam the actions, so this would be lower priority (for me). Still, having a per-object opt-in to Telegram alerts would be nice: maybe I want to call the trigger URL to tell Blue Iris to record for any of my Objects, but only want Telegram alerts for a person object.

BTW, @GentlePumpkin, I am really loving the tool. It has helped a lot with reducing my false positives, so a big thanks for that. I might jump in and help with the code (C# dev myself) if I get the free time.
I think AI Tool should work out of the box, without the need to fine-tune settings for days. In the future, I'd like to hide fine-tuning settings like confidence limits, and potentially others, from the first impression (maybe in an advanced menu), so that new users aren't overwhelmed.
Regarding size limit: Why is it generally bad to detect small people in the background?

Regarding zones: Do you mean something like BI's Hotspot feature? I admittedly don't see much use for the majority of users.

object list: very good idea, looks way tidier! I support this 100%.

per-object confidence settings etc.: generally yes, but how do we implement this without a messy UI?

object tags: implemented (on GitHub) a week ago and being tested at the moment. Includes detection and confidence.

My key goal at the moment is rather to enhance AI Tool to detect only changes in an image, because this would extend the use cases, e.g. monitoring your driveway for cars and alerting only if a new car parks there, or maybe if your car leaves (hopefully not without you ;D ).
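One plausible shape for that change-detection idea: keep the previous frame's detections and flag only boxes that have no strong overlap with any of them. This is an illustrative Python sketch with invented names, not the planned implementation:

```python
def iou(a, b):
    """Intersection-over-union of two boxes (x_min, y_min, x_max, y_max)."""
    ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
    ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix1 - ix0) * max(0, iy1 - iy0)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def new_objects(current, previous, threshold=0.6):
    """Return detections in `current` that do not overlap any box in
    `previous` by at least `threshold` -- e.g. a newly parked car.
    A car that was already parked matches its old box and is ignored."""
    return [box for box in current
            if all(iou(box, old) < threshold for old in previous)]
```

The same comparison run in reverse (boxes in `previous` with no match in `current`) would cover the "car leaves" case.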
 

GentlePumpkin

IPCT Contributor
Joined
Sep 4, 2017
Messages
105
Reaction score
162
This is the cam I am working with. If I mask out the area in the back, the AI does not see the person at the door, as the back half of the picture is masked.
Ok honestly, why monitor the street in the first place? I know this is legal in the States, and I'm really happy that it isn't where I live. I'd like to be able to go for a walk without being spied on, analyzed, and recorded by potentially weird people. That's a type of freedom I'm very thankful for.

What people do on my property certainly concerns me. But what happens off my property really isn't my business. If people can no longer walk freely in public, we'll have a serious Orwell problem. :)
 

Tinbum

Getting the hang of it
Joined
Sep 5, 2017
Messages
246
Reaction score
40
Location
UK
This is the cam I am working with. If I mask out the area in the back, the AI does not see the person at the door, as the back half of the picture is masked.
Have you thought of repositioning the camera and/or zooming in to narrow the field of view?
 

Tinbum

Getting the hang of it
Joined
Sep 5, 2017
Messages
246
Reaction score
40
Location
UK
Just updated VCRUNTIME as in post 27, along with Blue Iris 5.2.8 (May 20, 2020 - "Direct-to-disc BVR recording will now include metadata for video overlays"),

and am now getting this error: LoadCameras() failed.

Strangely, it is loading 4 cameras but not the rest. The cameras it loads all begin with a number, as in 1Camera2; the ones that don't load begin with a letter.
If I add a camera named 1test it will load it, but not if it's named test. Strange!!

I can still see the list of cameras in Program Files\AI Tool\Cameras.

SOLVED.
I'm not sure how, but a text file had been saved in the Cameras folder that shouldn't have been there. AI Tool loaded the cameras until it got to that file, then stopped. The file began with the letter A.
 
Last edited:

Jose Roca

n3wb
Joined
May 22, 2020
Messages
1
Reaction score
0
Location
Chile
Hi All,
I have been running this tool for about a month without any problem, but I added detection for dogs (now that I have a dog :) ) and it's not identifying it as a dog (or a cat)... I attached two images. I know the dog is small, but even with dogs or cats enabled, it doesn't detect it. Do you know what I could configure to make it work? It's strange, because it does detect it as an object... Regards
 

Attachments

Neil Sidhu

Young grasshopper
Joined
Mar 9, 2019
Messages
62
Reaction score
3
Location
Toronto
Ok honestly, why monitor the street in the first place? I know this is legal in the States, and I'm really happy that it isn't where I live. I'd like to be able to go for a walk without being spied on, analyzed, and recorded by potentially weird people. That's a type of freedom I'm very thankful for.

What people do on my property certainly concerns me. But what happens off my property really isn't my business. If people can no longer walk freely in public, we'll have a serious Orwell problem. :)
I am not trying to monitor the road in the back - that's my issue. I am trying to mask the entire area in the back, but as shown in my pictures, when I mask the back, the front door area also gets masked, so the person at the door is cut off from the top half while I am trying to block the street behind them. Am I doing something wrong with the mask?

As per the pic, you can see that I have masked the whole street and sidewalk in the back. But now, if you notice, my front porch is masked as well. So if a person comes to the door, the top half of them will be cut off. How would I address this?

Thank you,
 

Attachments

kumar2020

n3wb
Joined
May 8, 2020
Messages
9
Reaction score
1
Location
Iowa
Several people have given you advice on this setup, and I'm not sure if you've tried it or not. Again, you could set up zones in BI so that only motion in the area of the porch triggers the AI. I'd probably use two zones and set up zone crossing so that people just walking past won't trigger. For example, the "A" zone would be the whole image and the "B" zone would be the area that's not masked in the image you shared. Set up BI to trigger the AI when there's motion from A>B.

Then basically remove the left half of your AI mask so that DeepStack gets the full person (I'm not sure that's even necessary, as in my experience it can detect "partial" people if it can see legs). It's still going to see people in the left background, but they won't trigger the camera/alerts unless someone approaches.

If you can't get some combination of these to work then I don't think there's anything this tool (or users here) can help you with. Fundamentally, the camera angle you have set up makes this a difficult proposition.
 