Hell Yeah! Direct Deepstack Integration - 5.4.0 - March 31, 2021

I'm going to revert to the CPU version. I installed the GPU version and it made my desktop sound like a jet turbine :rofl: even with only 17% CPU usage.

I am running the CPU version, but may give the GPU version a try and see how it performs.
 
Great addition to BI.

I've been running BI with DeepStack in Docker Desktop (also available for Windows now), with 1 standard model and 1 custom model, for over 9 months now and it runs brilliantly: deepquestai/deepstack:latest.
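For anyone setting this up, here's a minimal sketch of how to sanity-check that the container is answering. It assumes DeepStack's default object-detection endpoint is mapped to port 80 on localhost, and the snapshot filename is just a placeholder:

```python
# Minimal sanity check of a DeepStack container (a sketch; adjust host/port to
# however you mapped them in Docker, and point it at any JPEG you have handy).
import requests

DEEPSTACK_URL = "http://localhost:80/v1/vision/detection"  # default detection endpoint

with open("snapshot.jpg", "rb") as image_file:
    response = requests.post(DEEPSTACK_URL, files={"image": image_file}, timeout=10)

response.raise_for_status()
for prediction in response.json().get("predictions", []):
    print(f"{prediction['label']}: {prediction['confidence']:.0%}")
```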
But I use the modified version of the AI tool from:


It has so many options and configurations. The developer is releasing new versions all the time and responds to requests very fast.

So I'm thinking I am better off staying with my current setup, unless someone can shed some light on what advantage this built-in BI AI option has, please.

Thanks
 
I used DeepStack with AI Tools quite a while back and also found that it worked perfectly during the day but was hit and miss at night. Since then I have stopped using it and now rely on in-camera detection, which does have more false positives, but at least nothing is missed.
Have you tried building custom models?
I was trying to track a cat at night time. The standard model missed it a lot. I built a custom model with all black-and-white night photos run through the trainer. It runs in parallel with the standard model and detects so much better now.
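In case it helps anyone trying the same, here's a rough sketch of what running the two models side by side can look like: the same snapshot goes to both the built-in detector and the custom model's endpoint. The model name "dark-cat", the host/port and the file name are made up; a custom model is served at /v1/vision/custom/<model-name> once it's registered with the container.

```python
# Sketch: query the standard detector and a custom night-time model on the same
# frame. "dark-cat", the host/port and the file name are placeholders.
import requests

BASE = "http://localhost:80/v1/vision"

def detect(endpoint: str, image_path: str) -> list:
    """POST one image to a DeepStack endpoint and return its predictions."""
    with open(image_path, "rb") as f:
        r = requests.post(f"{BASE}/{endpoint}", files={"image": f}, timeout=10)
    r.raise_for_status()
    return r.json().get("predictions", [])

night_frame = "night_snapshot.jpg"
standard = detect("detection", night_frame)       # built-in general model
custom = detect("custom/dark-cat", night_frame)   # custom model trained on night photos

print("standard:", [(p["label"], round(p["confidence"], 2)) for p in standard])
print("custom:  ", [(p["label"], round(p["confidence"], 2)) for p in custom])
```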
 
The "analyze image with DeepStack" from the ui is greyed out (it worked for me yesterday)
How can I enable this again ?

second thing, I have camera pointed at my cars, so obviously every analyze will show a car, but will DS analyze only moving objects ?

Last one, can I "train" DS ? to correct him when he's wrong ? my car isn't a boat!
 
Great addition to BI.

I've been running BI with DeepStack in Docker Desktop (also available for Windows now), with 1 standard model and 1 custom model, for over 9 months now and it runs brilliantly: deepquestai/deepstack:latest.
But I use the modified version of the AI tool from:


It has so many options and configurations. The developer is releasing new versions all the time and responds to requests very fast.

So I'm thinking I am better off staying with my current setup, unless someone can shed some light on what advantage this built-in BI AI option has, please.

Thanks


From my initial testing, whilst AI Tools has way more options and is further down the road in terms of development, I find it very CPU hungry. Whilst BI is a fair way behind in options and development, it is much less of a process-hungry solution. With Ken's (the BI dev's) history he will slowly ramp this up, but you'd need a crystal ball to say when they will both be on par. Remember BI is in this for the long game, as you never know what will happen to AI Tools in the future.
AI in cameras is the new world, so I'm guessing Ken will start putting more time and effort into this and we'll have the ultimate system. But if AI Tools works for you, I'd stick with it and keep watching the forums to see how BI is coming along.
 
With BI update 5.4.4, a new feature has been added that results in significant savings in storage. When the triggered+continuous record mode is used with a dual-streaming camera along with direct-to-disc, the result is a BVR file which will contain the sub-stream continuously recorded, but the main-stream only recorded when the camera is in a triggered state. During main-stream playback, the sub-stream will be upsampled whenever the main-stream is not available.

I tested this with one camera overnight to see how it works.

Previously, I would get about 1 hour at 3.90GB with the camera recording 24/7.

With this new feature, I got 1 hour at 120MB.

That is a tremendous savings!

As always YMMV. Obviously bitrate and FPS come into play, but it is yet another feature that can allow for higher resolutions with less storage needs.
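For anyone curious what those numbers imply, here's a quick back-of-the-envelope conversion from recording size per hour to average bitrate (assuming decimal GB, which is an assumption on my part):

```python
# Back-of-the-envelope: convert an hourly recording size into the average
# bitrate it implies. Assumes decimal GB (1 GB = 10^9 bytes).
def size_per_hour_to_mbps(gigabytes_per_hour: float) -> float:
    """Average bitrate in Mbit/s for a given recording size per hour."""
    bits = gigabytes_per_hour * 1_000_000_000 * 8
    return bits / 3600 / 1_000_000

print(f"3.90 GB/hour (main stream 24/7): ~{size_per_hour_to_mbps(3.90):.1f} Mbit/s")  # ~8.7
print(f"0.12 GB/hour (sub-stream 24/7):  ~{size_per_hour_to_mbps(0.12):.2f} Mbit/s")  # ~0.27
```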

However, I now noticed that clearly obvious daytime detections that weren't missed previously, like a delivery person, are being missed.

So I wanted to see if anyone else caught this new storage-savings update, and if so, did you notice DeepStack now missing things it caught previously?
 
I need help understanding certain settings on the camera / AI page

1) In events I see a person icon along with a green checkmark and a purple flag. What do the last two mean?
2) Is it worthwhile changing the default of "Additional real-time images to analyze"?
3) In trigger/motion sensor, my settings for minimum object size, minimum contrast and make time are fairly conservative. For example, make time is 0.6 seconds, which I feel can sometimes miss a person coming into my front yard, which is fairly small (50x20). With DeepStack, would you advise me to simply change make time to 0.1 seconds, and similarly get aggressive with object size and contrast, in the hope of not missing any events and improving the accuracy of catching people at night too?

Thx
 
Also, in AI settings, if I put car:80 in To Cancel, that would mean any alert with less than 80% confidence that it's a car won't be a confirmed alert, correct? Right after sunset I started getting car alerts even though there were no cars, just plenty of car headlights, but all of them were in the 50s and 60s.
 
Hi guys,

Going to toy with this new feature.
Can someone explain the interval settings please?
Is this set to analyze an image every second for 5 seconds?

[Screenshot: camera AI interval settings]
 
I can't wait for LPR to be integrated in Blue Iris without having to pay subscription fees
 
Hi guys,

Going to toy with this new feature.
Can someone explain the interval settings please?
Is this set to analyze an image every second for 5 seconds?

[Screenshot: camera AI interval settings]

Yes. Right now yours says it will take one more picture 5 seconds after the first one to send for analysis. You can change either value to better fit your camera's field of view and when the object enters it.
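To put that explanation in rough pseudo-code terms (this is just my reading of the two fields, not how Blue Iris actually implements it): the trigger image is analyzed immediately and each additional real-time image follows one interval later.

```python
# Sketch of how the two fields appear to interact, per the explanation above.
# Field names and timing are assumptions, not Blue Iris internals.
def analysis_times(additional_images: int, interval_seconds: float) -> list:
    """Seconds after the trigger at which snapshots get sent to DeepStack."""
    return [i * interval_seconds for i in range(additional_images + 1)]

# One additional image with a 5-second interval -> snapshots at t=0s and t=5s
print(analysis_times(additional_images=1, interval_seconds=5.0))  # [0.0, 5.0]
```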
 
Also, in AI settings, if I put car:80 in To Cancel, that would mean any alert with less than 80% confidence that it's a car won't be a confirmed alert, correct? Right after sunset I started getting car alerts even though there were no cars, just plenty of car headlights, but all of them were in the 50s and 60s.

I think you have it backwards (or maybe I do).
I think that if you put the confidence value in To Confirm as car:80, it would require a confidence level of 80% or higher to be a confirmed alert.
If To Cancel works similarly, then putting car:80 in it would require a confidence level of 80% to cancel/not confirm the alert.
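Here's a little sketch of that reading of the two fields, treating "label:NN" as a minimum confidence for that label. It's only an illustration of the behaviour described above, not confirmed Blue Iris logic, and the function names are mine.

```python
# Illustration of the To Confirm / To Cancel reading described above.
# Treats "car:80" as "label car with at least 80% confidence". Not BI internals.
def parse_rule(rule: str):
    """Split e.g. 'car:80' into ('car', 0.80)."""
    label, threshold = rule.split(":")
    return label, float(threshold) / 100

def alert_confirmed(predictions, to_confirm, to_cancel=None):
    """Confirm if a prediction meets the confirm rule and none meets the cancel rule."""
    c_label, c_conf = parse_rule(to_confirm)
    confirmed = any(p["label"] == c_label and p["confidence"] >= c_conf for p in predictions)
    if to_cancel:
        x_label, x_conf = parse_rule(to_cancel)
        if any(p["label"] == x_label and p["confidence"] >= x_conf for p in predictions):
            return False
    return confirmed

# Headlights misread as a 55%-confidence car would not confirm a car:80 rule:
predictions = [{"label": "car", "confidence": 0.55}]
print(alert_confirmed(predictions, to_confirm="car:80"))  # False
```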
 


Not bad, but this one only worked once. What about the other people's faces from later today? Why did only one face JPEG pop up?


 
I have a problem with DeepStack. I have 5 cams (a mix of 5231 and 2231); one of them works perfectly and every motion is detected and processed by DeepStack.
For the other four cams it seems that snapshots aren't sent to DeepStack (confirmed by watching the cmd window in Windows).
Motion is captured by Blue Iris and I receive a generic alert, not the AI one.

For the cam settings: main stream at 20 fps/keyframe and sub-stream at 25 fps/keyframe, continuous recording, and in the AI settings 10 real-time images processed every second.

I have also tried installing DeepStack in a container on unRAID, but nothing changed.
P.S. BI is in a VM on my unRAID server.

As you can see in the screenshots, only the Ingresso camera is correctly processed by DeepStack (confirmed and cancelled alerts).
[Screenshots: Blue Iris app alert lists for each camera]
 
Frame and key frame rates need to be the same for each camera for the main and sub streams. Set them both to 25/25 or 20/20; I use 15/15 with good results. By having them set differently, BI can't match up the start frames of motion, which will affect detection. That's mentioned in the BI help file and has been mentioned here on IPCT many times.
 