Yet Another Free Extension for Blue Iris Adding AI Object Detection/Reduction in False Alarms/Enhanced Notification of Activity/On Guard

MolsonB

n3wb
Joined
Nov 30, 2020
Messages
10
Reaction score
0
Location
Ontario, Canada
I am using a software library that sends a "string" (text) to your MQTT server. The terms in the braces are substituted with the On Guard values. The only things that On Guard recognizes are the ones in the MQTT setup screen. What happens is that the recognized terms are substituted for the values On Guard has. Anything else is exactly like you entered it (literal text). Note that the terms in braces must be there exactly (including case).

So, for the motion started message topic, {Camera} gets the camera name, {Area} gets the area name, and {Motion} gets "on". For the payload, {File} gets the motion file name, {Confidence} gets the confidence level, {Object} gets the object label found by DeepStack, and {Motion} gets "on".

With these exceptions (slightly different for the motion stopped message) I send what you enter. The library does the rest. I don't know or care (much) what the JSON message looks like since I don't form it myself. By the time things get to the library there is just plain text.

I'm not sure if this answers your question or not.
It's just coincidence that both JSON and you use "{" as the special character. All that is good, no issues there. This is for people who integrate this into Home Assistant. For us to use the file name, for example, it needs two backslashes "\\" so that it is recognized on the Home Assistant end of things as 1 backslash, since Home Assistant uses JSON when receiving multiple data fields in 1 MQTT message. If you use a JSON library, it will contain something like "json_encode()"; that function will take care of all the special characters for you. OR, since the file name is the only issue I can see, just adding two backslashes instead of one will work. I'm probably not explaining myself well; we can chat offline to keep the clutter down here.
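For anyone following along, here is a minimal sketch of the escaping issue, using Python's standard json module as a stand-in for whatever JSON library an integration might use (the path is just sample data):
Code:
import json

# A Windows path as it would be substituted for {File} (sample data).
file_name = "E:\\Security\\AIInput\\Frontyard_aii.2021020.jpg"

# Hand-built payload: the raw single backslashes make this invalid JSON,
# so the Home Assistant side will reject or mangle it.
broken = '{ "Motion": "ON", "File": "' + file_name + '" }'

# Library-built payload: json.dumps() escapes each backslash to \\,
# which the receiving end decodes back to a single backslash.
valid = json.dumps({"Motion": "ON", "File": file_name})
print(valid)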
 

Ken98045

IPCT Contributor
Joined
Aug 3, 2020
Messages
219
Reaction score
74
Location
Seattle
With the MQTT payload, can you put in an if statement to condition the data?

If the payload starts and ends with "{", could you json_encode() the whole payload? The FileName has single backslashes in it, which are escape characters in JSON. With Home Assistant, you can send multiple data variables in JSON format.

Payload
Code:
{ "Motion": "ON", "File": "{File}" }
MQTT translation
Code:
{ "Motion": "ON", "File": "E:\Security\AIInput\Frontyard_aii.2021020.jpg" }
json_encode() translation
Code:
{ "Motion": "ON", "File": "E:\\Security\\AIInput\\Frontyard_aii.2021020.jpg" }
I am using a software library that sends a "string" (text) to your MQTT server. The terms in the braces are substituted with the On Guard values. The only things that On Guard recognizes are the ones in the MQTT setup screen. What happens is that the recognized terms are substituted for the values On Guard has. Anything else is exactly like you entered it (literal text). Note that the terms in braces must be there exactly (including case).

So, for the motion started message topic, {Camera} gets the camera name, {Area} gets the area name, and {Motion} gets "on". For the payload, {File} gets the motion file name, {Confidence} gets the confidence level, {Object} gets the object label found by DeepStack, and {Motion} gets "on".

With these exceptions (slightly different for the motion stopped message) I send what you enter. The library does the rest. I don't know or care (much) what the JSON message looks like since I don't form it myself. By the time things get to the library there is just plain text.

I'm not sure if this answers your question or not.
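To make the substitution concrete, here is a hedged sketch in Python; the paho-mqtt client, the topic layout, and the sample values are all stand-ins, not On Guard's actual code (On Guard does this internally before handing plain text to its MQTT library):
Code:
import paho.mqtt.publish as publish

# Hypothetical values On Guard would hold for a motion-started event.
values = {
    "{Camera}": "Frontyard",
    "{Area}": "Driveway",
    "{Motion}": "on",
    "{File}": "E:\\Security\\AIInput\\Frontyard_aii.2021020.jpg",
    "{Confidence}": "61",
    "{Object}": "car",
}

def substitute(template):
    # Only the exact, case-sensitive terms are replaced; all other text is literal.
    for term, value in values.items():
        template = template.replace(term, value)
    return template

topic = substitute("OnGuard/{Camera}/{Area}")          # hypothetical topic template
payload = substitute('{ "Motion": "{Motion}", "Object": "{Object}" }')
publish.single(topic, payload, hostname="localhost")   # plain text by this point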
 

Ken98045

IPCT Contributor
Joined
Aug 3, 2020
Messages
219
Reaction score
74
Location
Seattle
Ken, I found something that may help pinpoint the issue I am seeing. I had a dog trigger the SE camera on 2 separate events about 3 hours apart. There were 7 images in total. I reviewed the camera with motion only and none of the images displayed. So while viewing them 1 at a time, I noticed the first image had a confidence level of 40%. The confidence for any mammal was set to 42%, so I lowered it to 40%. I then noticed that there were 4 of the 7 images with a confidence level of 42% or greater, which should have fired an alert. An image of the log is below, along with 2 of the images.

View attachment 81324 View attachment 81322 View attachment 81323

I then turned on detailed logging, changed the names of the images, and pasted them into the AiInput folder 1 at a time, and this time all of the images fired an alert. I did not restart OnGuard. I just modified the AOI, lowering the confidence level of any mammal. This log is OnGuard2.txt.

I then took the same 6 images I used for the test in post 417, changed the names again, and pasted them into the AiInput folder. This time only the first 2 images triggered an alert. The log is the 02a attachment. I repeated the test and got the same result. I then lowered the confidence level of any mammal for the AOI, saved it, and ran the images through again. This time all 6 images fired alerts for the car. This log is the 02c attachment.

So the problem goes away either after a restart of OnGuard or a modification of the confidence level of an AOI. Perhaps OnGuard is losing the AOI settings, and either a modification or a restart corrects this.

The problem is that interesting objects are not always marked as interesting, so they are not displayed when choosing motion only. The problem is sporadic, affecting at least 2 and maybe more of my cameras. MQTT alerts are also not fired for these images. When viewing these images without the motion only filter selected, they exceed the confidence level, and a second pass with the motion filter selected will display these images as having motion.

One thing to note about my cameras: they are fixed position and have a constant resolution from the sub stream for jpg generation.
The confidence level on animals is often lower than you might think. In addition, sometimes you will have a double definition. In my case my dog is sometimes a "sheep", sometimes "cat", and usually "dog". Often I get deer defined as "dogs". The latter might be understandable because there is no definition for "deer" in the AI.

I may need to do the same thing I do for vehicles. If it is "any mammal" and there is a double definition I should probably artificially boost the confidence level. I don't think that this is the case for you, but I'd be interested in your opinion.
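In case it helps the discussion, here is a rough sketch of what that boost could look like; the function names, the mammal list, and the 0.5 overlap cutoff are all hypothetical, not On Guard's implementation:
Code:
MAMMALS = {"dog", "cat", "sheep", "cow", "horse", "bear"}

def iou(a, b):
    # Intersection-over-union of two (x1, y1, x2, y2) boxes.
    ix = max(0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    union = ((a[2] - a[0]) * (a[3] - a[1]) +
             (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union

def any_mammal_confidence(detections):
    """detections: list of (label, confidence, box). When two mammal labels
    cover the same region (a "double definition"), boost the confidence to
    the probability that at least one of the labels is right."""
    mammals = [d for d in detections if d[0] in MAMMALS]
    best = max((conf for _, conf, _ in mammals), default=0.0)
    for i, (_, ca, ba) in enumerate(mammals):
        for _, cb, bb in mammals[i + 1:]:
            if iou(ba, bb) > 0.5:
                best = max(best, 1 - (1 - ca) * (1 - cb))
    return best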

I will respond to the rest of your concerns shortly in a separate reply.
 

Ken98045

IPCT Contributor
Joined
Aug 3, 2020
Messages
219
Reaction score
74
Location
Seattle
Ken, I found something that may help pinpoint the issue I am seeing. I had a dog trigger the SE camera on 2 separate events about 3 hours apart. There were 7 images in total. I reviewed the camera with motion only and none of the images displayed. So while viewing them 1 at a time, I noticed the first image had a confidence level of 40%. The confidence for any mammal was set to 42%, so I lowered it to 40%. I then noticed that there were 4 of the 7 images with a confidence level of 42% or greater, which should have fired an alert. An image of the log is below, along with 2 of the images.

View attachment 81324 View attachment 81322 View attachment 81323

I then turned on detailed logging, changed the names of the images, and pasted them into the AiInput folder 1 at a time, and this time all of the images fired an alert. I did not restart OnGuard. I just modified the AOI, lowering the confidence level of any mammal. This log is OnGuard2.txt.

I then took the same 6 images I used for the test in post 417, changed the names again, and pasted them into the AiInput folder. This time only the first 2 images triggered an alert. The log is the 02a attachment. I repeated the test and got the same result. I then lowered the confidence level of any mammal for the AOI, saved it, and ran the images through again. This time all 6 images fired alerts for the car. This log is the 02c attachment.

So the problem goes away either after a restart of OnGuard or a modification of the confidence level of an AOI. Perhaps OnGuard is losing the AOI settings, and either a modification or a restart corrects this.

The problem is that interesting objects are not always marked as interesting, so they are not displayed when choosing motion only. The problem is sporadic, affecting at least 2 and maybe more of my cameras. MQTT alerts are also not fired for these images. When viewing these images without the motion only filter selected, they exceed the confidence level, and a second pass with the motion filter selected will display these images as having motion.

One thing to note about my cameras: they are fixed position and have a constant resolution from the sub stream for jpg generation.
I took a look at your log file. What I am seeing here is that there were no dogs identified by the AI at all:
Code:
person 0.57180154 295 100 321 146
2021.02.02 05:53:56:3575 - (Trace) car 0.61758655 876 127 1008 348
In all the log files I'm seeing no reference to "dog" at all. For "* Any Mammal" to trigger, it needs an animal. Maybe you don't have the correct part of the log files, though.

If you are using the GPU version of DeepStack, you may want to set "--MODE High". That has definitely helped me with false triggers, but I don't know how well it does with recognizing objects that are there.

I will see if I can find a problem of the confidence level not being saved.
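For anyone digging through their own logs: each trace line ends with the label, the confidence, and the bounding box corners (x1 y1 x2 y2). A small sketch of pulling those fields out, with the format inferred from the excerpt above rather than from On Guard's source:
Code:
import re

# Matches "label confidence x1 y1 x2 y2" at the end of a trace line.
PATTERN = re.compile(r"(\w+) (\d\.\d+) (\d+) (\d+) (\d+) (\d+)\s*$")

def parse_trace(line):
    m = PATTERN.search(line)
    if m is None:
        return None
    label, conf, x1, y1, x2, y2 = m.groups()
    return label, float(conf), (int(x1), int(y1), int(x2), int(y2))

line = "2021.02.02 05:53:56:3575 - (Trace) car 0.61758655 876 127 1008 348"
print(parse_trace(line))  # ('car', 0.61758655, (876, 127, 1008, 348))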
 

Ken98045

IPCT Contributor
Joined
Aug 3, 2020
Messages
219
Reaction score
74
Location
Seattle
I took a look at your log file. What I am seeing here is that there were no dogs identified by the AI at all:
Code:
person 0.57180154 295 100 321 146
2021.02.02 05:53:56:3575 - (Trace) car 0.61758655 876 127 1008 348
In all the log files I'm seeing no reference to "dog" at all. For "* Any Mammal" to trigger, it needs an animal. Maybe you don't have the correct part of the log files, though.

If you are using the GPU version of DeepStack, you may want to set "--MODE High". That has definitely helped me with false triggers, but I don't know how well it does with recognizing objects that are there.

I will see if I can find a problem of the confidence level not being saved.
I did look to make sure that the confidence level is being saved correctly. It is being saved to the registry and into memory. There may be some unrelated issues that could have caused what you are seeing. I'll keep an eye out for that.
 

Ken98045

IPCT Contributor
Joined
Aug 3, 2020
Messages
219
Reaction score
74
Location
Seattle
It's just coincidence that both JSON and you use "{" as the special character. All that is good, no issues there. This is for people who integrate this into Home Assistant. For us to use the file name, for example, it needs two backslashes "\\" so that it is recognized on the Home Assistant end of things as 1 backslash, since Home Assistant uses JSON when receiving multiple data fields in 1 MQTT message. If you use a JSON library, it will contain something like "json_encode()"; that function will take care of all the special characters for you. OR, since the file name is the only issue I can see, just adding two backslashes instead of one will work. I'm probably not explaining myself well; we can chat offline to keep the clutter down here.
I think I do understand what you are saying. I suppose I could have used something other than braces so it doesn't get confusing for you, but maybe it is too late now (it would confuse others). Maybe I'll switch to brackets [] instead? For a while I could support both [] and {}, I guess, if you think it is important. The only functional problem I see with using {} is that if people put {} in but have a typo, then the braces wouldn't get filtered out by me. Maybe that would cause problems down the message pipeline.
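Supporting both delimiters for a while could be as simple as trying each spelling of every recognized term; a hypothetical sketch (not On Guard's code):
Code:
TERMS = ["Camera", "Area", "Motion", "File", "Confidence", "Object"]

def substitute(template, values):
    # Accept {Camera} and [Camera] alike during a transition period.
    for term in TERMS:
        value = values.get(term, "")
        template = template.replace("{%s}" % term, value)
        template = template.replace("[%s]" % term, value)
    return template

print(substitute("OnGuard/{Camera}/[Area]", {"Camera": "SE", "Area": "Yard"}))
# OnGuard/SE/Yard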
 
Joined
May 1, 2019
Messages
2,215
Reaction score
3,504
Location
Reno, NV
@Ken98045 did you say somewhere earlier that you tried a GPU video card to help with the analyzing? I have a cheap low-profile graphics card I still need to install (won at auction for $5, so a-ok for testing, win or lose).
Just curious because I'm looking at the Nvidia Jetson Nano 4GB board. Already got a Pi4 (not used yet), an i3 NUC (dedicated Supervised Home Assistant), and an i5 NUC (just installed Ubuntu over the weekend, hoping for this to be my Linux everything box). I did try DeepStack AI on the i5 4th gen NUC: over 2 seconds to analyze. Just wondering what a video card or the Jetson might do to improve that number.
As of now, I'm just using the onboard motherboard integrated video on my Windows 10 / Blue Iris machine, which comes in at around 500ms. I just do not like the spikes when DeepStack analyzes across multiple cameras.
 

jz3082

Young grasshopper
Joined
Dec 13, 2019
Messages
78
Reaction score
22
Location
Oklahoma, US
I took a look at your log file. What I am seeing here is that there were no dogs identified by the AI at all:
Code:
person 0.57180154 295 100 321 146
2021.02.02 05:53:56:3575 - (Trace) car 0.61758655 876 127 1008 348
In all the log files I'm seeing no reference to "dog" at all. For "* Any Mammal" to trigger, it needs an animal. Maybe you don't have the correct part of the log files, though.

If you are using the GPU version of DeepStack, you may want to set "--MODE High". That has definitely helped me with false triggers, but I don't know how well it does with recognizing objects that are there.

I will see if I can find a problem of the confidence level not being saved.
I checked the first attachment in my last post and found I had posted the wrong log file. The correct file showed that the dog was identified in each image, with an alert fired for each image. So it confirmed the same sporadic issue as noted above and in post 417. Two actions fix the issue: either restarting OnGuard or changing the confidence level, even if it is for an unrelated object. Note that for the last 2 log files, I changed the confidence level of the mammal because it was unrelated to the images of cars being posted. The last 2 log files show the results of the images identified before and after the AOI change. Compare those with the logs from post 417. These logs show the results from the same six car images. Two logs show either none or fewer objects identified, and the other log files show all images identified with all alerts being sent. Again, the same 6 images were processed for all 4 logs.

I only have the onboard Intel Quick Sync GPU in my PC, so I do not know if the GPU version would work. I do have the Beta CPU version running in Docker with --MODE High. With the move to Docker I get the same processing time (250ms) as I got in Windows with --MODE Medium.
 
Last edited:

jz3082

Young grasshopper
Joined
Dec 13, 2019
Messages
78
Reaction score
22
Location
Oklahoma, US
The confidence level on animals is often lower than you might think. In addition, sometimes you will have a double definition. In my case my dog is sometimes a "sheep", sometimes "cat", and usually "dog". Often I get deer defined as "dogs". The latter might be understandable because there is no definition for "deer" in the AI.

I may need to do the same thing I do for vehicles. If it is "any mammal" and there is a double definition I should probably artificially boost the confidence level. I don't think that this is the case for you, but I'd be interested in your opinion.

I will respond to the rest of your concerns shortly in a separate reply.
I am only using MQTT alerting. I am relatively new to MQTT so I may not be doing things as efficiently as I could. I use mcsMQTT as my broker in HomeSeer and I have it automatically create the devices that I want tracked. I have had various dogs identified as a horse, sheep, cow, dog and cat. It would simplify things on the automation side for me with 1 change on your end. I propose this: if any mammal is selected, return mammal as the object in the MQTT topic. If specific animals are selected as well as any mammal in the AOI, first check to see if the confidence level is met for the specific animals, and if true use the specific animal object. If the specific animals have a confidence too low but any mammal exceeds the confidence level, use mammal as the object. You could use any vehicle the same way. That way I would only need 3 devices in HomeSeer per camera: person, mammal and vehicle. I don't think it would be necessary to increase the confidence level of an animal if the same object is identified as multiple animals, unless you could easily modify your code for vehicles.
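A sketch of the proposed fallback, with hypothetical thresholds and names (not On Guard's implementation): prefer a specific animal that clears its own confidence bar, otherwise fall back to the generic mammal object.
Code:
# Per-AOI confidence thresholds; "mammal" is the "* Any Mammal" catch-all.
THRESHOLDS = {"dog": 0.60, "cat": 0.60, "mammal": 0.42}   # sample values
MAMMALS = {"dog", "cat", "sheep", "cow", "horse"}

def mqtt_object(label, confidence):
    """Return the object name to publish, or None if nothing qualifies."""
    if confidence >= THRESHOLDS.get(label, 1.1):
        return label                    # the specific animal met its own bar
    if label in MAMMALS and confidence >= THRESHOLDS["mammal"]:
        return "mammal"                 # fall back to the generic object
    return None

print(mqtt_object("sheep", 0.45))  # 'mammal' (no specific threshold for sheep)
print(mqtt_object("dog", 0.70))    # 'dog'
print(mqtt_object("dog", 0.50))    # 'mammal' (below dog's bar, above mammal's)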
 

tripp396

Getting the hang of it
Joined
Jun 18, 2020
Messages
65
Reaction score
30
Location
Minnesota
Might be the wrong place for this, but does anyone get some really bad detections from DeepStack for dogs? DeepStack detects my dog as a person 9/10 times, and at relatively high confidence (65-75%). I hesitate to raise the confidence threshold, as some of the person confidences I've tested in various spots in my yard can also fall that low. I want to keep detecting my dog when she's back there, as she catches varmints quite regularly, but I might have to turn it off.
 
Joined
May 1, 2019
Messages
2,215
Reaction score
3,504
Location
Reno, NV
Might be the wrong place for this, but does anyone get some really bad detections from DeepStack for dogs? DeepStack detects my dog as a person 9/10 times, and at relatively high confidence (65-75%). I hesitate to raise the confidence threshold, as some of the person confidences I've tested in various spots in my yard can also fall that low. I want to keep detecting my dog when she's back there, as she catches varmints quite regularly, but I might have to turn it off.
I am actually really surprised DeepStack can differentiate between cats, dogs, and various small animals at all, given the background, the contrast, the motion, and the size of various dog breeds. I would imagine the smaller pixel count for any object lowers the confidence or even causes object confusion.
 

jz3082

Young grasshopper
Joined
Dec 13, 2019
Messages
78
Reaction score
22
Location
Oklahoma, US
Might be the wrong place for this, but does anyone get some really bad detections from DeepStack for dogs? DeepStack detects my dog as a person 9/10 times, and at relatively high confidence (65-75%). I hesitate to raise the confidence threshold, as some of the person confidences I've tested in various spots in my yard can also fall that low. I want to keep detecting my dog when she's back there, as she catches varmints quite regularly, but I might have to turn it off.
I have that issue when the dog is within about 12 ft. of the camera, for one of my cameras that has more of a downward angle than the rest. The confidence level of the dog detected as a person is usually under 72%, so I have my person confidence level set to 72%. I have not done this, but the manual mentions setting up different AOIs, and then you could adjust the minimum height of the person. If you adjust the minimum height, you would want to set up multiple AOIs, because a person far away would not meet the minimum height. If DeepStack does not identify the dog as an animal and it is important that you get animal notifications of some kind, perhaps you could set up overlapping AOIs, one set without a minimum height (this would catch the dog when identified as a human) and the other with a minimum height (this would detect people as people). Maybe others have already tackled this some other way.
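To make the minimum-height idea concrete, a hypothetical sketch (the pixel threshold and names are placeholders): the AOI without a minimum height catches the dog-as-person detections, while the strict AOI only fires on boxes tall enough to be a standing person.
Code:
def aoi_matches(label, box, min_height=None):
    """box is (x1, y1, x2, y2) in pixels; min_height gates the match if set."""
    height = box[3] - box[1]
    if min_height is not None and height < min_height:
        return False
    return label == "person"

box = (295, 100, 321, 180)                          # an 80 px tall "person"
print(aoi_matches("person", box))                   # True: lenient AOI (the dog)
print(aoi_matches("person", box, min_height=120))   # False: people-only AOI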
 

Vettester

Getting comfortable
Joined
Feb 5, 2017
Messages
740
Reaction score
693
Might be the wrong place for this, but does anyone get some really bad detections from DeepStack for dogs? DeepStack detects my dog as a person 9/10 times, and at relatively high confidence (65-75%). I hesitate to raise the confidence threshold, as some of the person confidences I've tested in various spots in my yard can also fall that low. I want to keep detecting my dog when she's back there, as she catches varmints quite regularly, but I might have to turn it off.
I have a medium-sized dog and DeepStack has been spot on with confidence levels set at 50%.
 

Vettester

Getting comfortable
Joined
Feb 5, 2017
Messages
740
Reaction score
693
What model is the camera that keeps an eye on your dog? What height from the ground is it?
They are old Lowe’s Iris V2 cams mounted approximately 10’ from the ground. I use these just in my backyard. Everywhere else I’m using Dahua.
7753BDA0-E18A-49A6-B38C-A578447E4F25.jpeg

It also does well for picking up cats in the dark.
663966DA-B3BE-4919-A3A7-48A793EEA2F3.jpeg
 
Last edited:

Ken98045

IPCT Contributor
Joined
Aug 3, 2020
Messages
219
Reaction score
74
Location
Seattle
Might be the wrong place for this, but does anyone get some really bad detections from DeepStack for dogs? DeepStack detects my dog as a person 9/10 times, and at relatively high confidence (65-75%). I hesitate to raise the confidence threshold, as some of the person confidences I've tested in various spots in my yard can also fall that low. I want to keep detecting my dog when she's back there, as she catches varmints quite regularly, but I might have to turn it off.
Yes, they are pretty bad. Sometimes dog=dog, dog=cat, deer=dog, and sometimes multiple definitions overlap. In general I think the models need to be updated for animals; in particular, definitions are needed for "deer" and "elk" in my area. Sometimes I get a pretty low confidence rating.
 

Ken98045

IPCT Contributor
Joined
Aug 3, 2020
Messages
219
Reaction score
74
Location
Seattle
They are old Lowe’s Iris V2 cams mounted approximately 10’ from the ground. I use these just in my backyard. Everywhere else I’m using Dahua.
View attachment 81521

It also does well for picking up cats in the dark.
View attachment 81522
I use Jennov wireless WiFi IP PTZ security cameras. They are reasonably cheap, with excellent resolution and excellent nighttime sensitivity. I could wish that they had much better optics, however. Height-wise I think lower is better, and 8 to 12 feet is good. I like the clarity of these cameras; they are nice and sharp. A lot of people buy cameras based on how many pixels they have. However, the optics (quality of the lenses) are a much better determinant of performance. Unfortunately, this is very hard to judge when buying something online.
 

Ken98045

IPCT Contributor
Joined
Aug 3, 2020
Messages
219
Reaction score
74
Location
Seattle
I am only using MQTT alerting. I am relatively new to MQTT so I may not be doing things as efficiently as I could. I use mcsMQTT as my broker in HomeSeer and I have it automatically create the devices that I want tracked. I have had various dogs identified as a horse, sheep, cow, dog and cat. It would simplify things on the automation side for me with 1 change on your end. I propose this: if any mammal is selected, return mammal as the object in the MQTT topic. If specific animals are selected as well as any mammal in the AOI, first check to see if the confidence level is met for the specific animals, and if true use the specific animal object. If the specific animals have a confidence too low but any mammal exceeds the confidence level, use mammal as the object. You could use any vehicle the same way. That way I would only need 3 devices in HomeSeer per camera: person, mammal and vehicle. I don't think it would be necessary to increase the confidence level of an animal if the same object is identified as multiple animals, unless you could easily modify your code for vehicles.
Good ideas. I'll see about changing that.
 

Ken98045

IPCT Contributor
Joined
Aug 3, 2020
Messages
219
Reaction score
74
Location
Seattle
I checked the first attachment in my last post and found I had posted the wrong log file. The correct file showed that the dog was identified in each image, with an alert fired for each image. So it confirmed the same sporadic issue as noted above and in post 417. Two actions fix the issue: either restarting OnGuard or changing the confidence level, even if it is for an unrelated object. Note that for the last 2 log files, I changed the confidence level of the mammal because it was unrelated to the images of cars being posted. The last 2 log files show the results of the images identified before and after the AOI change. Compare those with the logs from post 417. These logs show the results from the same six car images. Two logs show either none or fewer objects identified, and the other log files show all images identified with all alerts being sent. Again, the same 6 images were processed for all 4 logs.

I only have the onboard Intel Quick Sync GPU in my PC, so I do not know if the GPU version would work. I do have the Beta CPU version running in Docker with --MODE High. With the move to Docker I get the same processing time (250ms) as I got in Windows with --MODE Medium.
I'll keep looking at this. I may need to see it here to fix it. Maybe I can spot the problem in code, but that is much more difficult than actually being able to replicate the problem.
 