Yet Another Free Extension for Blue Iris Adding AI Object Detection/Reduction in False Alarms/Enhanced Notification of Activity/On Guard

I am having an issue with 1.7.3.2 that I first saw in 1.7.2; it was corrected in 1.7.3. On motion, the initial MQTT alert is sent successfully, but subsequent alerts fail with the error message shown in the attached screenshot.


OnGuard then bogs down the MQTT broker with traffic until it pegs the CPU of the broker.
I changed nothing, or very little, with respect to MQTT in this version. This specific error means that the message cannot reach the MQTT server. That may mean that (1) the MQTT connection info isn't correct, maybe only for the motion-stopped message, (2) the server isn't up or goes down, or (3) something is interfering (anti-virus, etc.; this seems unlikely).

I will double-check that the MQTT settings are saved and read correctly. Also, I will make sure that the connection error is handled so it doesn't nuke On Guard and/or peg the CPU. I should be able to get to that soon.
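
The general shape of that fix is to catch the publish failure and back off instead of retrying in a tight loop. Here is a minimal sketch of the idea in Python with paho-mqtt (On Guard itself is a .NET app using a different MQTT library, so this is illustration only):
Code:
import time
import paho.mqtt.client as mqtt

def publish_with_backoff(client, topic, payload, max_retries=5):
    # Back off exponentially instead of retrying in a tight loop,
    # so a dead or unreachable broker doesn't get hammered.
    delay = 1.0
    for _ in range(max_retries):
        try:
            info = client.publish(topic, payload, qos=1)
            info.wait_for_publish(timeout=5)
            if info.is_published():
                return True
        except (ValueError, RuntimeError):
            pass  # not connected, queue full, or publish failed
        time.sleep(delay)
        delay = min(delay * 2, 30)  # cap the retry interval at 30 s
    return False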
 
I have put yet another release at: Release On Guard Security Assistant Version 1.7.3.3 · Ken98045/On-Guard
1. This has a fix for MQTT publishing. Things were very wrong before.
2. This should go a long way toward fixing a problem where moving vehicles were considered parked. That should happen very rarely now. However, there will still be cases where parked cars are considered to be moving; hopefully there will be fewer of those too.
 
I posted the detailed log information in post 359. Those logs were from 3 different cameras that do not have any visibility of parked cars. The moving vehicle was visible to 3 different cameras. The log showed the vehicle was removed because a parked car was found. From the log I have determined that every moving vehicle is being canceled out by a parked vehicle, so no alert is sent. The only time it currently alerts for a vehicle correctly is if you close OnGuard and reopen it. It alerts on the first vehicle correctly, and then every one after that is canceled out by a parked car. All other objects are being identified correctly.

What you are working on should correct my issue.
I have posted Release 1.7.3.3. This should fix MQTT and 95%-plus of the cases where moving cars were considered parked.
 
Darn it! Just installed 1.7.3.2 five minutes ago :) Good thing I did. When I installed the previous version a couple of nights ago, I did not remove the program first. It had not been analyzing due to the new AOI format in the latest versions.
 
Yes, you do need to uninstall first, because the installer won't replace a build when only the fourth version number changes (1.7.3 to 1.7.4 should just install, but 1.7.3.2 to 1.7.3.3 won't). When I have the time I will completely change the installer setup, but this may take me a bit.

There were some bad bugs in 1.7.3.2 with respect to both MQTT and "parked" vehicles. While not all real-life parked cars will be considered parked, very few moving vehicles should now be considered parked.

However, my test data is limited because my cameras don't have a view of a street, since we have a 400 ft driveway. I thought that 1.7.3.2 had fixed the problems, but not so much. I've done quite a bit of testing for 1.7.3.3, but getting good test photos is still an issue. Even with the whole internet out there, finding photos that are (1) relevant to the test cases, (2) in a sequence (say, of one car moving down a street), and (3) non-copyrighted is not as easy as you'd think. For instance, I've also been trying to find photos of people carrying packages so I can train the AI to recognize them. It actually isn't that easy to find them in a real-world setting with a variety of lighting and angles. Maybe I'm just looking in the wrong places, though. I had originally thought I'd find thousands in just one search.
 
guess I'll have to donate to you one of those new Dahua PTZ cameras with 60x zoom, for testing purposes of course :)
 
I have an Axis camera that is about 10 years old with great optics and 10x zoom but poor (768) resolution, and a couple of fairly new cameras with great resolution and poor optics. There are a lot of ads out there for "high resolution/4K" 6x zoom cameras, but they don't tell you that the lenses are $1.50 worth of plastic junk. Unfortunately, none of them can see through trees.
 
I have installed 1.7.3.3 and tested MQTT motion topic and payload and it is working well. I tested multiple cars and they are being detected properly, parked and moving. I do not trigger BI with OnGuard and I do not send email notifications. I will let you know if I notice anything. Thanks for all the work you put into this program.
 
Ken, have a question regarding multiple DeepStack instances. Does OnGuard rotate between the instances? If one is busy is the other used?
 
It does them one after the other. If 10 messages are in the queue and you have 5 instances it will go 1-5, 1-5. In theory I could just take one that is not in use, but I don't think you would gain much on average. Right now I'm testing with 10 instances of DeepStack with the mode set to high. It doesn't hit the GPU or CPU very hard, but it does take a lot of memory. Occasionally I will see that one instance has taken a while to process a frame, but generally I can hold down the Page Up/Down buttons and scroll through images at a reasonable rate (5-8 fps+, but I haven't measured it).

Personally, I think this is probably just a short-term issue: now that DeepStack is open source, I'm relatively sure that someone with some Python expertise will be able to make one instance of DeepStack run multiple jobs in parallel. In that case the only gain from multiple instances would be to spread the load across different machines, and even that wouldn't gain you much unless your goal is to run real-time video at 10+ frames per second.
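
For anyone curious, the rotation amounts to something like this (a rough Python sketch with made-up endpoints; On Guard's actual code is C#, but DeepStack's detection endpoint really is /v1/vision/detection):
Code:
import itertools
import requests

# Hypothetical instance list; substitute your own hosts/ports.
ENDPOINTS = ["http://127.0.0.1:8100", "http://127.0.0.1:8101"]
_next_endpoint = itertools.cycle(ENDPOINTS)

def detect(image_path):
    # Each frame goes to the next instance in the rotation: 1-5, 1-5, ...
    endpoint = next(_next_endpoint)
    with open(image_path, "rb") as f:
        r = requests.post(endpoint + "/v1/vision/detection",
                          files={"image": f}, timeout=30)
    r.raise_for_status()
    return r.json()["predictions"]  # label, confidence, x_min/y_min/x_max/y_max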
 
I installed Deepstack AI on my little Ubuntu i5 NUC via Portainer (my first day doing this Linux stuff).
Set up the DeepStack config and the OnGuard config to add a 2nd instance.
Eeeek! 2-3 seconds to analyze on the NUC (mind you, onboard video graphics with a 4th-gen CPU) over the network, versus 500 ms with my Windows DeepStack AI version running locally on the Blue Iris server.
Didn't expect such a high number. Well, ok...Plan A doesn't pan out :) But was good experience and is cool to have a backup Deepstack ready to go if necessary.
@Ken98045 is it possible for OnGuard to have a failover between instances of DeepStack? Such as: if my Windows DeepStack crashes, have OnGuard rely solely on the 2nd instance? Or is this more of a DeepStack AI question I should research?
 
Yes, I plan on doing that. Even if you have multiple instances on Windows one of them can crash. That is on the list for the next week or so. It isn't difficult and the payoff can be relatively high. I'm trying not to make too many changes too frequently so things can settle down some.
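
The failover itself can be as simple as skipping an instance that errors out and trying the next one. A rough sketch of the idea (illustrative Python, not the actual On Guard code):
Code:
import requests

def detect_with_failover(endpoints, image_path):
    # Try each DeepStack instance in turn; the first healthy one wins.
    last_error = None
    for endpoint in endpoints:
        try:
            with open(image_path, "rb") as f:
                r = requests.post(endpoint + "/v1/vision/detection",
                                  files={"image": f}, timeout=10)
            r.raise_for_status()
            return r.json()["predictions"]
        except requests.RequestException as err:
            last_error = err  # instance down or unreachable; try the next one
    raise RuntimeError(f"all DeepStack instances failed: {last_error}")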
 
Good idea. My thought on failovers would have been for a future add-in, not right now though. I'm sure you are trying to get to a nice stable release and work on bugs before pushing too much further forward.
 
I am using 1.7.3.3 and I just noticed the first bug that I have come across. A car was leaving my driveway and I did not get any notifications from 1 camera, so I went to review the images in OnGuard. I refreshed and then went through each camera, ending at the one that I did not get an alert from. I chose motion only and it skipped past the event entirely. I unchecked motion only, refreshed, and then started going through the images 1 at a time. There were 12 images taken 2 seconds apart. The camera is set to a confidence level for cars of 50%. There were 10 images over 50%, ranging from 60% to 85% confidence. The log entries for the images show 0 interesting objects found and no MQTT alerts. I did not have detailed logging enabled. The SE camera has a poor picture from the sub-stream, so I am using the mainstream; thus it takes longer to process those images, about 140 ms longer than the SW camera. The SW camera identified each image as exceeding the confidence level and fired an MQTT alert for each one.

My question: if OnGuard believes a car is parked, will the image still display when you select the motion only button and scroll through the images? If parked cars are excluded there, maybe the parked-car logic is what prevented the SE camera from flagging it.

This repeated later in the morning. The Db and SE cameras had all of their images flagged as interesting, and the SW camera had 6 images with high confidence levels that were ignored. When I reviewed the images in OnGuard with motion only for the SW camera, the event was skipped entirely. When I unselected motion only, the images identified the car with high confidence. Then when I selected motion only again, the 6 images for the SW camera showed as having motion.

Could this be a database problem, since when reviewing motion only the images are skipped until I review all images 1 at a time? It would not be a camera setting, as the problem is not tied to 1 specific camera but jumps from camera to camera.

The images below are from the SE camera. These were 3 of the 12 identified with high confidence as cars. The log file from that event and a screenshot of DeepStack are included.
Attachments: AiSE.20210201_083914453.jpg, AiSE.20210201_083924479.jpg, AiSE.20210201_083930594.jpg, Capture.JPG, DeepStack.JPG

Below is a screenshot of the log file for the second event where the SW camera ignored cars with high confidence levels but the Db and SE processed them normally.

Attachment: Later Event.JPG
 
The fact that they weren't in the motion-only view is because they weren't found interesting. It is possible that this is because of the parking filter; without the detailed log I can't know for sure. However, as you step through them using the AI they may be added.

The parking check goes back to the previous frame to see if the current location overlaps the previous one by 97%, or if the upper-left or upper-right corners are within 10 pixels. This can be a problem if a car is moving very slowly, but with frames 2 seconds apart that is doubtful.
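
Paraphrased in code, the check is roughly the following (my own Python restatement of the logic just described, using the 97% and 10-pixel thresholds; the real implementation is C#):
Code:
import math

def looks_parked(curr, prev, overlap=0.97, corner_px=10):
    # Boxes are (x_min, y_min, x_max, y_max) from two consecutive frames.
    ix = max(0, min(curr[2], prev[2]) - max(curr[0], prev[0]))
    iy = max(0, min(curr[3], prev[3]) - max(curr[1], prev[1]))
    area = (curr[2] - curr[0]) * (curr[3] - curr[1])
    if area > 0 and (ix * iy) / area >= overlap:
        return True  # the object barely moved between frames
    # Also parked if the upper-left or upper-right corner moved <= 10 px.
    ul = math.dist((curr[0], curr[1]), (prev[0], prev[1]))
    ur = math.dist((curr[2], curr[1]), (prev[2], prev[1]))
    return ul <= corner_px or ur <= corner_px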

What you can do is delete the log file, turn on detailed logging, copy off the suspect pictures, and then paste them back into your camera output directory. With detailed logging on you may be able to tell what is going on. If you can't, then attach that portion of the log file to a post, or I think you can send the files to me directly (I think IPCamTalk allows this, but I haven't checked). Before you do either you might want to remove any personal info like IP addresses (if externally visible) and/or passwords.
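
If it helps, the copy-back step can be scripted so the frames arrive one at a time. Something like this (both paths are placeholders for your own; the rename gives each copy a fresh filename):
Code:
import shutil
import time
from pathlib import Path

SAVED = Path(r"C:\temp\suspect_frames")  # wherever you copied the pictures off to
WATCH = Path(r"C:\BlueIris\AiInput")     # the camera output folder On Guard watches

for i, frame in enumerate(sorted(SAVED.glob("*.jpg"))):
    shutil.copy(frame, WATCH / f"replay{i}_{frame.name}")
    time.sleep(2)  # roughly match the original 2-second frame spacing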

I still want to do a more sophisticated parking check, but I really don't think that is the problem.

You mentioned that you have two different streams. One problem I have seen and fixed locally is that the check for the overlap with an area doesn't always work correctly if pictures taken at different resolutions are fed through the same areas. I did try to account for that, but it could fail under certain circumstances. I don't want to post that fix yet because it is too bound up with a bunch of other changes I'm making locally, and I haven't had a chance to test them all completely. If you want to be an early tester of that I can make it available to you.
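
The local fix amounts to comparing boxes in resolution-independent coordinates, along these lines (a sketch of the idea only, not the actual change):
Code:
def normalize(box, width, height):
    # Scale a pixel box to the 0-1 range so boxes from a 1920x1080 main
    # stream and a 640x360 sub stream of the same scene line up.
    x_min, y_min, x_max, y_max = box
    return (x_min / width, y_min / height, x_max / width, y_max / height)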
 
The SE camera that did not identify objects in the post above is set to only use the mainstream for motion detection and jpg generation because the sub stream is poor quality.

For my test I took the 6 images from the SW camera that OnGuard sent MQTT alerts on, and I changed each filename by adding a 3 before the hour so it would be easy to match them up in the first log in the previous post. I pasted all 6 into the AiInput folder, 1 at a time, in the order they were originally captured. No MQTT alerts were sent, and when I went to review the images for that camera, none of them showed up when I checked motion only. They do show up, with the coordinates and confidence levels below, when I view them with motion only unchecked. BTW, I triggered the camera before running this test with no cars visible or identified within the frame.

X - Y - Confidence below

876 - 127 - 61%
828 - 109 - 89%
780 - 89 - 87%
731 - 93 - 92%
710 - 93 - 91%
832 - 98 - 73%

I changed the file names again by changing the 3 to a 4, and I restarted OnGuard. I pasted the same images 1 at a time, and this time all 6 images sent MQTT alerts. I reviewed the images with motion only selected and all images showed up as having motion, without having to go through them 1 at a time.

I believe that the problem is that the images are not being recorded properly in the database. This would explain why, when selecting motion only, these images did not show up as motion events in OnGuard2.txt. When I restarted OnGuard, the images all triggered MQTT alerts like in the post above and in OnGuard4.txt. According to the log, none of the images identified a parked car. Perhaps the MQTT alert not being sent and the image not being logged as an interesting object are related.

I would be interested in helping you test your pre-release.
 

With the MQTT payload, can you put in an if statement to condition the data?

If the payload starts and ends with "{", could you json_encode() the whole payload? The FileName has single backslashes in it, which are escape characters in JSON. With Home Assistant, you can send multiple data variables in JSON format.

Payload
Code:
{ "Motion": "ON", "File": "{File}" }

MQTT translation
Code:
{ "Motion": "ON", "File": "E:\Security\AIInput\Frontyard_aii.2021020.jpg" }

json_encode() translation
Code:
{ "Motion": "ON", "File": "E:\\Security\\AIInput\\Frontyard_aii.2021020.jpg" }
 
Ken, I found something that may help pinpoint the issue I am seeing. I had a dog trigger the SE camera on 2 separate events about 3 hours apart, 7 images in total. I reviewed the camera with motion only and none of the images displayed. While viewing them 1 at a time, I noticed the first image had a confidence level of 40%. The confidence for any mammal was set to 42%, so I lowered it to 40%. I then noticed that 4 of the 7 images had a confidence level of 42% or greater, which should have fired an alert. An image of the log is below, along with 2 of the images.

Attachments: Capture.JPG, AiSE.20210202_2010443541.jpg, AiSE.20210202_2043206068.jpg

I then turned on detailed logging, changed the names of the images, and pasted them into the AiInput folder 1 at a time, and all of the images fired an alert this time. I did not restart OnGuard; I just modified the AOI, lowering the confidence level of any mammal. This log is OnGuard2.txt.

I then took the same 6 images I used for the test in post 417, changed the names again, and pasted them into the AiInput folder. This time only the first 2 images triggered an alert. The log is the 02a attachment. I repeated the test and got the same result. I then lowered the confidence level of any mammal for the AOI, saved it, and ran the images through again. This time all 6 images fired alerts for the car. This log is the 02c attachment.

So the problem goes away either after a restart of OnGuard or after a modification of the confidence level of an AOI. Perhaps OnGuard is losing the AOI settings, and either a modification or a restart corrects this.

The problem is that interesting objects are not always marked as interesting, so they are not displayed when choosing motion only. The problem is sporadic, affecting at least 2 and maybe more of my cameras. MQTT alerts are also not fired for these images. When viewing these images without the motion-only filter selected they exceed the confidence level, and a second pass with the motion filter selected will then display these images as having motion.

One thing to note about my cameras: they are fixed-position and have a constant resolution from the sub-stream for jpg generation.
 

To answer the JSON payload question: I am using a software library that sends a "string" (text) to your MQTT server. The terms in braces are substituted with the On Guard values. The only terms that On Guard recognizes are the ones in the MQTT setup screen. The recognized terms are replaced with the values On Guard has; anything else is sent exactly as you entered it (literal text). Note that the terms in braces must match exactly (including case).

So, for the motion-started message topic, {Camera} gets the camera name, {Area} gets the area name, and {Motion} gets "on". For the payload, {File} gets the motion file name, {Confidence} gets the confidence level, {Object} gets the object label found by DeepStack, and {Motion} gets "on".

With these exceptions (slightly different for the motion-stopped message) I send what you enter. The library does the rest. I don't know or care (much) what the JSON message looks like, since I don't form it myself; by the time things get to the library it is just plain text.
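
In effect the substitution is just a literal token replacement, something like this (illustrative Python with made-up values; the real code is C# and the library does the sending). Plain text replacement is also why the backslashes in {File} come through unescaped:
Code:
# Hypothetical values; On Guard fills these in from the current event.
TOKENS = {"{Camera}": "Frontyard", "{Area}": "Driveway", "{Motion}": "on",
          "{File}": r"E:\Security\AIInput\Frontyard_aii.2021020.jpg",
          "{Confidence}": "91", "{Object}": "car"}

def fill(template):
    # Recognized {Terms} are replaced literally; everything else is sent as-is.
    for token, value in TOKENS.items():
        template = template.replace(token, value)
    return template

print(fill('{ "Motion": "{Motion}", "File": "{File}" }'))
# { "Motion": "on", "File": "E:\Security\AIInput\Frontyard_aii.2021020.jpg" }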

I'm not sure if this answers your question or not.