Hell Yeah! Direct Deepstack Integration - 5.4.0 - March 31, 2021

After a day of fooling around, tinkering, scanning, running chkdsk, and some other tricks, I finally found out what was wrong with alerts.

I had enabled "burn" in the AI settings for the cameras running AI so I could see what was triggering them. Previously, alerts were nothing more than markers in the database, not actual files. With "burn" enabled, a JPG is written to the Alerts folder for each alert. I had the Alerts folder allocation set to zero since it was never used, so BI/DS would write about a meg of JPGs and then delete everything because that folder was out of space.

Bottom line is to make sure you have allocated some space for alerts. Now to see if I can get my shoe out of my mouth.
 
I am awkwardly stuck on setting up Deepstack integration. Here's what I have done.

Blue Iris Server - 192.168.23.25
Deepstack - 192.168.23.26:83 (within Ubuntu, using Portainer)

1. I'm using Portainer to manage the Deepstack container. I can confirm that I can reach the Deepstack landing page successfully.
2. I have set up the global settings "AI" tab.
3. I have a camera (driveway) that is set up to trigger alerts with a very low confidence score.
4. I have motion zone A filled with the entire frame.


The problem: I'm always getting "Alert cancelled: Nothing found."
When I look at the Deepstack logs, it looks like they are showing a 403, which I believe means forbidden. I can confirm that from within the container I can ping my Blue Iris server (192.168.23.25).
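
In case it helps with diagnosis, here is the kind of direct test that can be run from the Blue Iris box against the detection endpoint (just a sketch; test.jpg is any sample image, and the IP/port match my setup above):

curl -X POST -F image=@test.jpg http://192.168.23.26:83/v1/vision/detection

If Deepstack is reachable and object detection is enabled, that should come back with JSON containing a "predictions" list rather than an error.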

I've banged my head against this but cannot seem to figure this out. Any help is greatly appreciated!
 

Attachments

  • Screenshot 2021-04-11 105649.jpg
  • Screenshot 2021-04-11 105744.jpg
  • Screenshot 2021-04-11 105806.jpg
  • Screenshot 2021-04-11 105838.jpg
  • Screenshot 2021-04-11 105920.jpg
  • Screenshot 2021-04-11 105938.jpg

I would try downloading and installing the Windows version first, installing it to its default location. Configure it for 127.0.0.1 and the port you choose, and have BI control it. If it works, then it's probably something with the way Portainer is configured.
I have Deepstack running on my Devuan Linux server in a Docker container, on an XFS filesystem on top of a ZFS ZVOL, and I run it on port 5000. My Docker container line:
docker run --detach --name=deepstack --restart=always -e MODE=High -e VISION-DETECTION=True -e VISION-FACE=True -v localstorage:/datastore -p 5000:5000 deepquestai/deepstack:latest
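
If it helps, a quick way to sanity-check the container after it starts (just a sketch; test.jpg is any sample image, and the port matches the mapping above):

# the startup log should list which vision endpoints were activated
docker logs deepstack
# a test call to the detection endpoint should return JSON predictions rather than an error
curl -X POST -F image=@test.jpg http://127.0.0.1:5000/v1/vision/detection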
 


BTW, I figured this out.

The problem is that the docker-compose example I was following mentioned VISION-SCENE, but not VISION-DETECTION. My assumption was that VISION-SCENE was just a renamed environment variable, but it's not. The missing VISION-DETECTION setting was the culprit.
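
For anyone else who runs into this: VISION-SCENE only enables the scene endpoint, so you have to pass VISION-DETECTION=True explicitly to enable the object detection endpoint that BI relies on. A minimal sketch matching my setup (host port 83 mapped to the container's 5000; adjust names and volumes to taste):

docker run -d --restart=always --name deepstack -e VISION-DETECTION=True -v localstorage:/datastore -p 83:5000 deepquestai/deepstack:latest

Once that variable is set, the same curl test against /v1/vision/detection returns predictions instead of the 403 I was seeing.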
 
Here's a detection I didn't expect: a person and the bicycle they were riding. Cool that it can tell them apart when they're captured together. (Yes, my fence is dirty; that's a spring cleaning project for later this week.)
View attachment 86735

I haven’t noticed the different color object boxes before. What determines the orange vs yellow?
 
I know I'm going to get flak for this one, but being a somewhat logical person, I'm just trying to find a justification for using AI....

Exactly what is the point of AI in the first place? I've been running it since it was integrated into BI. It works, sometimes, but isn't entirely reliable during the day and is totally unreliable at night. I see others getting better results, and generally that seems to be the case in more urban environments, with lots of light, or when the target is relatively close to the camera, in other words a large target. I have yet to have it detect a raccoon not more than ten feet from one of my cameras, and that raccoon goes by twice every night. BI sees it, I can see it plainly, down to eyes and tail stripes, very clearly, but the AI says "nothing found" every time.

I spent hours playing with pre-trigger times, object size, object contrast, frames to look at, and confidence level. The last update to BI, or maybe the one before, changed the motion detection algorithm. Now I'm getting so many false triggers that I've got to go back and retune at least half of my cameras to get rid of them. My understanding is that the change was made to improve DS detection abilities, but I see no real improvement there at all.

When it does work, it tells us what it perceives the object to be. For it to detect anything, it needs the input of motion detection from BI, which is already detecting the target in the first place. My vision allows me to accurately identify a car, pick-up, truck, bus, person, bicycle, cat, dog, deer, raccoon, fox, opossum, elephant, zebra, and anything else that might trigger BI, so why do I need AI running too, sucking up GPU or CPU resources, to tell me what I can plainly see and already know? I can efficiently review events using UI3 with the animated clips list and easily see if it was a car, pick-up, truck, bus, yadda yadda. Nothing saved there, from what I can see. Maybe I'm wrong; beats me.

If it were reliable, it certainly could cut down on false triggers, but lacking that level of reliability vastly reduces its functionality.
 
If you have an area with lots of motion but only want to be "alerted" (live alerts to your phone, etc.) to a specific trigger, then AI can work if it's reliable and accurate. If it's not both of those things, then regular ol' motion detection works, but you will get lots of false triggers.

I've not messed with DS yet because I don't trust Ken and his pre-releases, so I'm waiting to see all the bugs before I move forward.
 
@biggen You probably made a wise choice, especially if you're in a more rural area. I've been chasing my tail for two weeks, at least proving that Einstein was correct...doing the same things over and over again looking for a different result. Reliable it ain't.
 
Yeah, I figured I'd give him time to address bugs. I learned a couple years ago that his stable channel is more suitable for me.

I actually use Frigate for my live alerting. I have a dedicated NUC running it. I get alerts to my phone whenever someone sets foot on my driveway. It's deadly accurate. When the DS functionality moves into the stable BI release channel, I'll give it a try and see if it works any better or worse than Frigate.
 
If you have an area with lots of motion but only want to be "alerted" (live alerts to your phone, etc.) to a specific trigger, then AI can work if it's reliable and accurate. If it's not both of those things, then regular ol' motion detection works, but you will get lots of false triggers.

I've not messed with DS yet because I don't trust Ken and his pre-releases, so I'm waiting to see all the bugs before I move forward.
This is why I went with AI. I only get notifications for people setting foot on my property. Otherwise, it's still useful to have triggered captures to be able to check on things, like when those ^%$@#% raccoons show up and crap by the side of the house.

I was happy to jump on the bandwagon with the DS integration and be a guinea pig. It allowed me to ditch the Sentry AI subscription. Pretty happy with how it's been working and it keeps getting better with each BI revision.
 
^^^ So you're getting 100% detection reliability at night? I'm getting .01% reliability at night at best. That's with lots of auxiliary IR.
 
This is why I went with AI. I only get notifications for people setting foot on my property. Otherwise, it's still useful to have triggered captures to be able to check on things, like when those ^%$@#% raccoons show up and crap by the side of the house.

I was happy to jump on the bandwagon with the DS integration and be a guinea pig. It allowed me to ditch the Sentry AI subscription. Pretty happy with how it's been working and it keeps getting better with each BI revision.

Well, I am having great results using it to alert me if a person steps into zone "B" on the sidewalk to my shop. I configured it to produce a doorbell sound to alert me that someone is approaching the shop door. Before DS, I had some zones set up for the person to cross, which also worked fine...BUT as soon as shadows moved from the flag and trees, or the famous cloud and then full sun, I would get false triggers. So it's all in how you apply it, and for me, DS works every time.
 
^^^ So you're getting 100% detection reliability at night? I'm getting .01% reliability at night at best. That's with lots of auxiliary IR.
Nothing gives me 100% reliability, but DS gives me far fewer false alerts and rarely misses anything. Granted, I am in a suburban environment, but I don't have street lights either, just IR. If an object is in the IR, it gets tagged and alerted. If the branches move, no alert.
 
My original idea was exactly that: to eliminate false triggers. The problem is it doesn't trigger worth a darn at night, no matter what settings I use. A confidence level of 3 still doesn't produce triggers during walk tests, which makes it totally useless for my purposes. I'd rather get a false alert or two than miss that one critical one, but that's just me.
 
I only use DS during my "daytime open hours" profile. At night there are no false triggers, so I go to Profile 3 (night) and just use normal motion detection. My area is well lit as well.
 
^^^ So you're getting 100% detection reliability at night? I'm getting .01% reliability at night at best. That's with lots of auxiliary IR.
Definitely not getting 100%, but in my use case, the detection reliability is high enough that I'm happy with it. I'm in an urban environment with lots of ambient light, and I have motion-activated security lights as well, so it's actually pretty good for detection. It's going to be different for everybody :)
 
My original idea was exactly that: to eliminate false triggers. The problem is it doesn't trigger worth a darn at night, no matter what settings I use. A confidence level of 3 still doesn't produce triggers during walk tests, which makes it totally useless for my purposes. I'd rather get a false alert or two than miss that one critical one, but that's just me.

I've sent an email to Ken with 3 possible ways to address the Deepstack/night problem. He's probably way ahead of me on it, but just in case. I figured it can't hurt for him to hear a suggestion or two (or three) from someone outside the loop...