[tool] [tutorial] Free AI Person Detection for Blue Iris

Chris Dodge

Pulling my weight
Joined
Aug 9, 2019
Messages
92
Reaction score
115
Location
Massachusetts
@WildDoktor - It depends. I still mask out the street in BI so I don't get alerts all the time for passing cars. Dynamic masking is only useful when you get repeated detections in the same area.
 

jeffarese

n3wb
Joined
Aug 7, 2020
Messages
6
Reaction score
4
Location
Spain
Is it going to be possible to make BI get the screenshot from the main stream when detecting motion on the substream?

Right now, since I use substreams to reduce CPU usage, the screenshots I get are very low res. I would prefer to get the full main stream screenshot.

Is it possible to request the feature?
 

IAmATeaf

Known around here
Joined
Jan 13, 2019
Messages
3,308
Reaction score
3,293
Location
United Kingdom
Is it going to be possible to make BI get the screenshot from the main stream when detecting motion on the substream?

Right now, since I use substreams to reduce CPU usage, the screenshots I get are very low res. I would prefer to get the full main stream screenshot.

Is it possible to request the feature?
You need to email BI support, as I've done, to request this. The more people that do it, the more likely it is to be implemented, unless there's a technical reason why it can't be.
 

Tinbum

Pulling my weight
Joined
Sep 5, 2017
Messages
448
Reaction score
126
Location
UK
Since I'm using AI Tools VorlonCD, do I need to use the "zones and hot spot" editor in BI - camera - motion sensor - configure to mask out the area where my flag waves? Because all the jpegs that are sent to the folder that AI Tools VorlonCD monitors have the full flag showing.
I do, and I would. I find Vmmem very CPU intensive. You can also still use the normal mask feature in AITool.
 

Eatoff

n3wb
Joined
Aug 28, 2020
Messages
19
Reaction score
3
Location
Australia
Ok, getting this set up I have hit a roadblock and need some help.

I've followed the hook-up tutorial, and I'm at the point of inputting the DeepStack server address. When I trigger, it gets stuck processing, then the error log says the server is unreachable. But when I copy-paste the address into Chrome, the DeepStack server page comes up. Any clues as to what I have done wrong?
 

jeffarese

n3wb
Joined
Aug 7, 2020
Messages
6
Reaction score
4
Location
Spain
You need to email BI support, as I've done, to request this. The more people that do it, the more likely it is to be implemented, unless there's a technical reason why it can't be.
I just sent them a support email explaining why it would be a nice addition.
In order to overcome this right now I'm using the third stream, which is 720p.
However, because of that, I can't use H265+ or smart events, so that's an inconvenience.
 

B-Murda

Getting the hang of it
Joined
Jun 16, 2020
Messages
32
Reaction score
26
Location
USA
Here is an update to my fork of @GentlePumpkin 's awesome tool.

Change log:
  • Camera option for 'Trigger Cancels' - Basically, rather than the URL triggering an event in BI, it will only be called when the detection is CANCELED. Note that you MUST change &flagalert=1 to &flagalert=0. As I understand it, this is how Sentry AI works - it just cancels a detection sent by BI. I haven't tested the 'Trigger cancels' camera checkbox yet since I don't have that configuration right now - let me know if it works.
So you're close, though I've not tested with just your method, so I wanted to share this anyway.
Sentry sends both a 1 and a 0 to confirm or reject. They of course don't use flag but another parameter, which is how they get a special icon. A timeout or failure to confirm/reject leaves the alert as-is in BI, without the cancel or verified icon. They also send extra data to replace the alert image, if the user enables the option, but making that work with what we are doing would require BI changes.
I haven't tested your change yet, so I don't know whether just using the cancel flag is enough to clear a "bad" alert, and whether not needing to affirm one is acceptable.

Of note for people who are going to test/use the cancel: make sure you re-enable triggers for motion (right now you probably only have external triggers enabled), enable the disarm time period, and set it long enough to allow the AI to respond.

Also use caution with taking snapshots every x seconds during a trigger, because a snapshot sent to the AI does not equal an alert existing in BI. BI only logs the alert image that initially triggered, and the continued images aren't technically new alert images.
For example, say you're taking the trash down and motion is detected for the trash can before your body is in frame. BI sees motion and fires an alert; the AI rejects this first image because there's no human. Two seconds later your body is in frame and, depending on your cancel/retrigger settings and snapshot interval, it snaps another picture. HOWEVER, this is NOT a new alert, so it isn't logged as one in BI. The AI gets the new picture, sees a valid detection, and you get NO notification - because the first alert BI started was canceled, and the second one technically doesn't exist in BI as an alert, since it's the same alert still going.

Not using the cancel method, it does roughly the inverse: the AI tool ignores the first image of the can; the second image, with the human, it flags/triggers; BI sends the email, but the image is of the can, since that's the actual initial alert image. The alert itself is valid but the image is useless, hah. This is more of an issue with email; push notifications etc. are less affected since you can see it in the app anyway.
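The confirm/cancel flow above comes down to which value the &flagalert parameter carries in the trigger URL. A minimal sketch of building such a URL - only the flagalert parameter comes from the discussion above; the host, port, user, and password are hypothetical placeholders, and the exact parameter layout may differ from what AI Tool actually sends:

```python
# Sketch of building a Blue Iris admin trigger URL for the confirm/cancel flow.
# Host, port, user, and password below are hypothetical placeholders.
from urllib.parse import urlencode

def build_trigger_url(camera, confirmed, host="127.0.0.1:81", user="ai", pw="secret"):
    """Return an admin URL that confirms (flagalert=1) or cancels (flagalert=0) an alert."""
    params = {
        "camera": camera,
        "user": user,
        "pw": pw,
        "flagalert": 1 if confirmed else 0,
    }
    return f"http://{host}/admin?trigger&{urlencode(params)}"
```

With this, the 'Trigger Cancels' behaviour described above would call the URL with `confirmed=False`, producing `&flagalert=0`, while the normal confirm path would use `confirmed=True`.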

You need to email BI support, as I've done, to request this. The more people that do it, the more likely it is to be implemented, unless there's a technical reason why it can't be.
Ken and I spoke about this in some detail last week, spitballing ideas. The issue is that getting a high-quality image means decoding must happen on the main stream - and if that decoding is happening, that's CPU time, so what's the point of the sub-stream if you have to decode the main stream anyway? We had a few ideas which may work; they'd still cost CPU time, but as big spikes rather than constant decoding. I suspect he'll keep tinkering with the idea because he seemed interested in it, but it's a little complex. He's a smart guy, though, so even if it's not perfect, the fact that he's exploring it is great!
 

Peter Myers

Young grasshopper
Joined
Dec 17, 2017
Messages
75
Reaction score
8
Ok, getting this set up I have hit a roadblock and need some help.

I've followed the hook-up tutorial, and I'm at the point of inputting the DeepStack server address. When I trigger, it gets stuck processing, then the error log says the server is unreachable. But when I copy-paste the address into Chrome, the DeepStack server page comes up. Any clues as to what I have done wrong?

1. Visit 192.168.1.12:82 (or your IP and the DeepStack port you set; I set mine to 82)
2. Go to Sign In
3. Copy the activation key (see attached image)
4. Paste it into the DeepStack page at your 192.168.1.12:82 URL (see attached image)
5. Make sure the necessary check boxes are ticked in the DeepStack window.

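If the server responds in a browser but AI Tool reports it unreachable, it can also help to hit the detection endpoint directly from code. A rough sketch - the /v1/vision/detection path and the "image" form field follow DeepStack's documented API, the 192.168.1.12:82 address comes from the steps above, and the helper names and boundary string are purely illustrative:

```python
# Sketch: POST a JPEG to DeepStack's detection endpoint using only the stdlib.
import json
import urllib.request

def build_multipart(image_bytes, boundary="----deepstack-check"):
    """Build a multipart/form-data body with a single 'image' field."""
    body = (
        f"--{boundary}\r\n"
        'Content-Disposition: form-data; name="image"; filename="frame.jpg"\r\n'
        "Content-Type: image/jpeg\r\n\r\n"
    ).encode() + image_bytes + f"\r\n--{boundary}--\r\n".encode()
    headers = {"Content-Type": f"multipart/form-data; boundary={boundary}"}
    return headers, body

def detect(image_path, url="http://192.168.1.12:82/v1/vision/detection"):
    """Send one frame to DeepStack and return the parsed JSON response."""
    with open(image_path, "rb") as f:
        headers, body = build_multipart(f.read())
    req = urllib.request.Request(url, data=body, headers=headers)
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.loads(resp.read())
```

If an unactivated server is the problem (as it was here), the JSON response should say so rather than returning detections.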
 

cepler

Getting the hang of it
Joined
Aug 13, 2020
Messages
47
Reaction score
78
Location
Allison Park, PA
I finally got around to trying this out today and it seems to be working pretty well. I was wondering, though: the Telegram support sounded interesting, but the image transfers get denied. Looking at the Telegram API, it looks like it only supports 320-pixel max images. Is this why it's failing (AI Tool isn't reducing the images prior to sending)? Or is that feature abandoned? It sounded like an interesting idea and I was going to try it out.

Right now I'm running dual streams with motion activation for recording, and it appears to be working pretty well, but it would be nice to see Blue Iris support something like this internally. It seems like it would be pretty easy on their part and would give a smoother experience, but who knows what their deal is with Sentry and whether there's a potential conflict there, I guess... I'd also like to see more ways to flag things - like how Sentry has its own icons - and the ability to sort/filter on that in Blue Iris.
 

mayop

n3wb
Joined
Jul 20, 2020
Messages
29
Reaction score
22
Location
Canada
I was wondering, though: the Telegram support sounded interesting, but the image transfers get denied. Looking at the Telegram API, it looks like it only supports 320-pixel max images. Is this why it's failing (AI Tool isn't reducing the images prior to sending)?
I've had no issues so far with images being sent. When I started, I had it send the full-res camera image (2688x1520) until I switched to the 640x360 sub stream, which still works.

 

cepler

Getting the hang of it
Joined
Aug 13, 2020
Messages
47
Reaction score
78
Location
Allison Park, PA
I've had no issues so far with images being sent. When I started, I had it send the full-res camera image (2688x1520) until I switched to the 640x360 sub stream, which still works.

Do I need anything else underlying to get it working, or do I just make the bot and join the chat? I made one with BotFather and entered the token into AI Tool, but I get this in the logs:

[29.08.2020, 00:15:10.857]: -> 2 trigger URLs called.
[29.08.2020, 00:15:10.857]: Uploading image to Telegram...
[29.08.2020, 00:15:10.883]: uploading image to chat "REDACTED"
[29.08.2020, 00:15:12.011]: ERROR: Could not upload image D:\Blue Iris Video\AI-Input\IPCAM1.20200829_001507034.jpg to Telegram.
[29.08.2020, 00:15:12.253]: -> Sent image to Telegram.
[29.08.2020, 00:15:12.257]: (6/6) SUCCESS.
[29.08.2020, 00:15:12.257]: Adding detection to history list.


And nothing shows on my Telegram client..
 

cepler

Getting the hang of it
Joined
Aug 13, 2020
Messages
47
Reaction score
78
Location
Allison Park, PA
OK, I figured out that you have to have a GROUP and invite the bot into it, and the group # is what you put into AI Tool for the Chat ID - not the bot name... It was a little confusing but makes sense now. How did you get the identifier boxes/names on yours? Mine just has images without any notations...
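For anyone else wiring this up by hand: the Bot API method involved is sendPhoto, and a group's chat ID is a number (typically negative for groups), not the bot name. A small sketch of the request construction - the token and chat ID values are placeholders, and the helper names are illustrative rather than AI Tool's actual code:

```python
# Sketch: constructing a Telegram Bot API sendPhoto request target.
# The token and chat ID used in the tests are placeholders; group chat
# IDs are typically negative numbers, as noted above.
def send_photo_endpoint(token):
    """Return the Bot API URL for the sendPhoto method."""
    return f"https://api.telegram.org/bot{token}/sendPhoto"

def photo_params(chat_id, caption=""):
    """Form fields sent alongside the multipart 'photo' field."""
    params = {"chat_id": str(chat_id)}
    if caption:
        params["caption"] = caption
    return params
```

The actual upload would then POST the image as a multipart 'photo' field to that endpoint with any HTTP client.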
 

mayop

n3wb
Joined
Jul 20, 2020
Messages
29
Reaction score
22
Location
Canada
OK, I figured out that you have to have a GROUP and invite the bot into it, and the group # is what you put into AI Tool for the Chat ID - not the bot name... It was a little confusing but makes sense now. How did you get the identifier boxes/names on yours? Mine just has images without any notations...
I modified and compiled the code to add the bounding boxes and include a caption on the image saying what it detected.
 

Eatoff

n3wb
Joined
Aug 28, 2020
Messages
19
Reaction score
3
Location
Australia
1. Visit 192.168.1.12:82 (or your IP and the DeepStack port you set; I set mine to 82)
2. Go to Sign In
3. Copy the activation key (see attached image)
4. Paste it into the DeepStack page at your 192.168.1.12:82 URL (see attached image)
5. Make sure the necessary check boxes are ticked in the DeepStack window.

Thank you so much - I feel pretty stupid for not spotting that. I had used the key when trialling the Windows version, but had totally forgotten to do it the second time around with the Docker version.

Thanks for the help
 

Chris Dodge

Pulling my weight
Joined
Aug 9, 2019
Messages
92
Reaction score
115
Location
Massachusetts
Our new mod of AITOOL lets you specify as many DeepStack URLs as you like in settings. If you have more than one image in the queue, they will be processed in parallel across the DeepStack servers.
The log gets a little more complicated to view since results from each URL are mixed together, but it seems to work well in my initial testing. (@GentlePumpkin )
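A rough sketch of how such fan-out could work - round-robin assignment of queued images across several server URLs, with a thread pool running the requests in parallel. The function and queue names are illustrative, not the mod's actual code:

```python
# Sketch: distributing queued images across multiple DeepStack URLs.
# round_robin() is a pure helper; process_queue() shows how a thread
# pool could run the requests in parallel across servers.
from concurrent.futures import ThreadPoolExecutor
from itertools import cycle

def round_robin(images, server_urls):
    """Pair each queued image with a server URL, cycling through servers."""
    return list(zip(images, cycle(server_urls)))

def process_queue(images, server_urls, send):
    """Run send(image, url) for every pair, in parallel across servers."""
    pairs = round_robin(images, server_urls)
    with ThreadPoolExecutor(max_workers=len(server_urls)) as pool:
        return list(pool.map(lambda p: send(*p), pairs))
```

Here `send` would be whatever function actually POSTs an image to one DeepStack instance; with two servers and a deep queue, each server ends up handling roughly half the images.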
 

cepler

Getting the hang of it
Joined
Aug 13, 2020
Messages
47
Reaction score
78
Location
Allison Park, PA
Is the show mask supposed to display something (I have a dynamic mask showing as set) or is that broken?
 

Tinbum

Pulling my weight
Joined
Sep 5, 2017
Messages
448
Reaction score
126
Location
UK
Is the show mask supposed to display something (I have a dynamic mask showing as set) or is that broken?
It will show a mask set up as per the AITool instructions, as an image in the camera's folder.
 