Hell Yeah! Direct Deepstack Integration - 5.4.0 - March 31, 2021

See if upping the number of DS instances in the AI tab of the console helps. I upped mine to two and cut detection times in half or even more.

Air flow is why I'll never fool with an SFF. Well, one reason anyway.
A DS server for different cameras?
 
I think BI simply assigns detections to each one on an as-needed basis. That would cut waiting/processing time significantly.
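For what it's worth, each DeepStack instance exposes the same HTTP detection endpoint, so the idea boils down to handing each frame to whichever instance is free. Below is a minimal sketch of that dispatch pattern - an illustration only, not BI's actual code; the ports and file names are made up:

```python
# Sketch of handing frames to whichever DeepStack instance is free.
# Illustration of the idea only, not Blue Iris's actual implementation;
# the ports and file names below are assumptions.
import queue
import threading
import requests

DS_SERVERS = ["http://127.0.0.1:5000", "http://127.0.0.1:5001"]
frames = queue.Queue()

def worker(server_url):
    # Each instance drains the shared queue, so a busy instance never blocks the other.
    while True:
        frame_path = frames.get()
        with open(frame_path, "rb") as f:
            r = requests.post(server_url + "/v1/vision/detection",
                              files={"image": f}, timeout=15)
        labels = [p["label"] for p in r.json().get("predictions", [])]
        print(server_url, frame_path, labels)
        frames.task_done()

for url in DS_SERVERS:
    threading.Thread(target=worker, args=(url,), daemon=True).start()

for path in ["frame1.jpg", "frame2.jpg", "frame3.jpg", "frame4.jpg"]:
    frames.put(path)
frames.join()  # wait until every queued frame has been processed
```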
 
Now this is a cabinet I like: 3 front fans, 2 top and 1 rear, 2 bays for HDD spinners, etc., and a side panel that opens.
 

Attachments

  • Screenshot_20211030_231839_com.mercadolibre.jpg
  • Screenshot_20211030_231843_com.mercadolibre.jpg
If you're going that big, have a look at the Obsidian. Two 140mm top fans, three 140mm front fans, a 120mm rear fan, and a 60mm bottom fan. Bays for eight 3.5" drives, four 2.5" SSDs, and four 5.25" drives. Clear side for your viewing pleasure. Air filter on the front cover (I'm using a foam filter to catch even more dust due to the high airflow). PSU mounts in the bottom.
 
OMG, 700 USD for the cheapest Obsidian here.
Ten times more than I'm willing to pay.
 

Attachments

  • Screenshot_20211030_234112.jpg
It wasn't inexpensive here. I bought it a few years ago and it was about $130 USD back then, but I'm glad I bought it. Enough room for just about anything I'll ever need and then some.
 
For what it's worth, I've built a few computers over the years. In general, brands like NZXT, Fractal, Corsair, Phanteks and Lian Li all make solid cases, though there are exceptions as always. I suppose on balance the best all-round case I built was the Phanteks Evolv, but these are quite expensive and heavy! In the real world my recommendation for design, quality and price would be NZXT, Fractal and probably Corsair, in that order; some of the Corsair designs share their platform with other brands.

Around 15 years ago Coolermaster used to build exceptional cases, the Praetorian, arguably some of the best ever made, even to date in fact. Literally all alloy, including the side panels and the chassis, which was 3-4mm thick, with a removable mainboard tray. Incredible quality, great airflow options, a natural heatsink thanks to the aluminium, and light weight, not that the latter matters much. Even the power buttons were made of aluminium, such was the attention to detail. I still own two and supplied several for high-end builds at the time. They only use 80mm fans and the drive mounting isn't ideal compared to new cases, but I just find it hard to throw them out knowing what I would replace them with. A couple of images from a review below:

[Coolermaster Praetorian case images from the review]


A couple of years ago I built a Fractal Node with the intention of using it for Blue Iris. These have good drive mounting options, are compact, and the cooling is pretty good. You need to be careful with the PSU choice though, as it can limit the GPU options. Space is tight, but they are tidy cases and, with Noctua or similar fans, virtually silent.

The easy answer is NZXT: great fan options, sensible price depending on country and supply, good quality. All new cases these days will mount the PSU at the bottom and most have cable management well covered; NZXT and Phanteks are probably better in this aspect.

Sadly, at the moment supply can be limited and pricing not ideal. Avoid glass unless you want to look inside; steel or aluminium will be better for heat management, and also gives more options to add sound deadening if you ever want to.

Apologies for getting off topic here!
 
Back on topic, I have learnt a few things about Deepstack these past 3-4 weeks:

1. It reduces false triggers by a huge margin for me.
2. Filtering and alerts can be relied upon more.
3. It will potentially need some CPU/GPU power to run depending on camera numbers and settings.
4. Can be interesting to set up initially :lol:
 
A question about the interval for sending additional frames to DS. Most examples I see show a rate of 750ms to 1s. My rule of thumb (self-conceived, so without any real basis) is that this interval should be no less than the typical DS processing time. In other words, it makes sense to send frames no faster than DS is able to process them; for instance, don't send frames every 400ms if DS is going to take 500ms to process each one. First of all, is that a valid theory?

Second, is there a benefit (i.e., processing frames at a faster rate) to enabling a second instance of DS? For reference, BI and DS run on an i5-6500 without a separate graphics card.
 
The AI Tool has a queue function, but yes, your reasoning holds for me. I adjust the snapshot rate relative to how fast my DS system can process frames.
Mine is about 110ms on average, but 200ms snapshots is a good interval for my setup.
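If anyone wants to put a number on their own setup, a quick way is to time a handful of detections against the DS endpoint and pick an interval with some headroom (roughly 2x the average is how 110ms lines up with 200ms snapshots). A rough sketch, assuming a local DeepStack on port 5000 and a sample image called test.jpg:

```python
# Time a few detections and suggest a snapshot interval with ~2x headroom.
# The server URL and test image are assumptions; point them at your own setup.
import time
import requests

DS_URL = "http://127.0.0.1:5000/v1/vision/detection"
SAMPLES = 10

with open("test.jpg", "rb") as f:
    image_bytes = f.read()

times_ms = []
for _ in range(SAMPLES):
    start = time.perf_counter()
    requests.post(DS_URL, files={"image": image_bytes}, timeout=15)
    times_ms.append((time.perf_counter() - start) * 1000)

avg = sum(times_ms) / len(times_ms)
print(f"average processing time: {avg:.0f} ms")
print(f"suggested snapshot interval: ~{avg * 2:.0f} ms")
```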
 
I'm using 500 or 250ms depending on the camera. Most processing times are in the 100ms range, with some in the 200ms range. That's running a GPU.

From what I've seen so far, a second instance will help reduce processing times IF you're running a GPU. A second instance with the CPU version doesn't seem to work out very well; it increases processing times significantly, usually to the point of failure.
 

Is there any way of doing this with the AI Tool? I have a custom and a standard model, one as linked and the other with its results combined with the first.
Can I add more servers here acting as load-balanced DS servers rather than linked ones?
Let's say I have 4 cams - can I use one DS instance for each cam? Just not tick any of the options (linked etc.)?

 
A question about the interval for sending additional frames to DS. Most examples I see show a rate of 750ms to 1s. My rule of thumb (self-conceived, so without any real basis) is that this interval should be no less than the typical DS processing time. In other words, it makes sense to send frames no faster than DS is able to process them; for instance, don't send frames every 400ms if DS is going to take 500ms to process each one. First of all, is that a valid theory?

Second, is there a benefit (i.e., processing frames at a faster rate) to enabling a second instance of DS? For reference, BI and DS run on an i5-6500 without a separate graphics card.
Sounds like a good theory. I have tried to work out similar methods, but they go out the window when one detection has a processing time in the low 100s of ms and another in the high 100s; in the end I settled on 250, or 500 in some cases, based on trial and error. I have a GPU, and the second instance virtually killed my processor dead, albeit an older i5, two generations behind yours, so draw your own conclusions. Try it out and see what results you get, monitoring CPU load :thumb:
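If you do try a second instance, it helps to log CPU load while detections are running rather than eyeballing Task Manager. A small sketch using the psutil package (sampling once a second for a minute, both arbitrary choices):

```python
# Log overall CPU utilisation once a second for a minute while DeepStack is working.
# psutil is a third-party package: pip install psutil
import time
import psutil

for _ in range(60):
    load = psutil.cpu_percent(interval=1)  # blocks for 1 second, returns average %
    print(f"{time.strftime('%H:%M:%S')}  CPU {load:5.1f}%")
```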
 
@Pentagano I have no idea when using separate servers. I'm using DS directly through the BI console settings. Sort of a KISS method compared to virtual servers or physical servers for each.
 
Been spending time getting to know the GPU functions/settings a bit more.
An odd thing happened - when I clicked on 'overclocking range enhancement' it warned me the screen would flicker a little, which it did, then I turned it off again.

Now my CPU is drawing less power (9W at idle now) and the idle temp dropped from 51 to 35C.
Possibly something was hogging the GPU and has now stopped using it?

 
Keeping the overclocking range enhancement on appears to keep the power draw at idle and the idle temp much lower. I have the boost clock set at 1217MHz max anyway; the GPU only needs minimal settings for Deepstack from what I've observed (and I don't game).

Shame I can't turn down the voltage on this one - fixed on the Strix apparently.

Does anyone have more info about this 'overclocking range enhancement' switch?
 
Ah, spoke too soon. It's gone back up to 55C, 28% power. Maybe Deepstack is using it, along with the power savings mode in Windows (PCIe savings mode - 3 options).
Have to monitor what's using it.
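One way to see what is actually touching the card is to poll nvidia-smi for the compute processes on it; something like the sketch below (assumes an NVIDIA GPU and that nvidia-smi is on the PATH):

```python
# Poll nvidia-smi every few seconds and print which processes are using the GPU.
# Assumes an NVIDIA card with nvidia-smi available on the PATH.
import subprocess
import time

QUERY = ["nvidia-smi",
         "--query-compute-apps=pid,process_name,used_memory",
         "--format=csv,noheader"]

for _ in range(10):
    out = subprocess.run(QUERY, capture_output=True, text=True).stdout.strip()
    print(out if out else "(no compute processes on the GPU)")
    time.sleep(5)
```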
 
Odd behavior observed in Task Manager.
The numbers of the GPUs changed. I had 0 and 1 (the Nvidia GPU and the Ryzen GPU respectively).
Now they've changed to 2 and 3??


 

@Pentagano I have no idea when using separate servers. I'm using DS directly through the BI console settings. Sort of a KISS method compared to virtual servers or physical servers for each.
Yep, I created a DS server for each camera in AI Tool. No linked server options, just individual servers with a specific camera defined for each.
Getting about 96ms analysis time, which is great, on High mode too.
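For anyone curious how the "one server per camera" idea looks conceptually, it's really just a fixed mapping from camera to DS URL, so each camera's snapshots always hit their own instance. A toy sketch of that routing - not AI Tool's actual config format; the camera names, ports and file paths are placeholders:

```python
# Illustration of routing each camera's snapshots to its own DeepStack instance.
# This does not mirror AI Tool's real settings; it only shows the concept.
import requests

CAMERA_SERVERS = {
    "driveway":  "http://127.0.0.1:5001",
    "frontdoor": "http://127.0.0.1:5002",
    "backyard":  "http://127.0.0.1:5003",
    "garage":    "http://127.0.0.1:5004",
}

def detect(camera, frame_path):
    url = CAMERA_SERVERS[camera] + "/v1/vision/detection"
    with open(frame_path, "rb") as f:
        r = requests.post(url, files={"image": f}, timeout=15)
    return r.json().get("predictions", [])

print(detect("driveway", "driveway_snapshot.jpg"))
```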
 