5.4.7 - May 25, 2021 - Support for DeepStack custom model files

View attachment 98259

so you can see the 'dark'

Also, you can see the DS time with GPU, and the almost 20 seconds to find a person; that is why I have 999 in the '+ images' setting...

That was an amazon truck that triggered and took 19+ secs for the driver to exit and be visible.

What would have happened before is that the 'people' clone would motion-trigger and keep recording due to the truck's movement, and with only 10-15 in the '+ images' setting it would never see the person.

I tried using 999 with several of my cams, but I don't think my computer was up to the task... it sent my CPU skyrocketing when two or more cameras triggered at the same time :D
Stuck using the CPU version of DS (no NVidia card).
 
One clue is whether you are seeing any lower-case objects detected (person, car, dog, etc.).

A better clue is whether the JSON box in the DeepStack Status Window lacks a populated 'objects' section.
If only the custom models are being used, there should be no 'objects' section (or an unpopulated one), and only populated 'dark', 'openlogo', etc. sections.
CAVEAT: This is how it worked when I first tested it - I'll admit I haven't revisited this since Ken first introduced the DeepStack Status Window. I'd recheck now, but can't, as I'm away from my server.
View attachment 98258
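To sanity-check this outside Blue Iris, you can look at the per-model JSON in the same spirit: DeepStack's documented response shape is `{"success": ..., "predictions": [...]}`, so an "unpopulated" section is one with an empty `predictions` list. Here is a minimal sketch; the model names ('objects', 'dark', 'openlogo') are from this thread, and the sample data is made up for illustration.

```python
# Sketch: decide from DeepStack-style JSON responses which models actually
# produced detections. Response shape follows DeepStack's documented format:
# {"success": ..., "predictions": [...]}; sample values are illustrative.

def populated_models(responses):
    """Return the model names whose 'predictions' list is non-empty."""
    return [name for name, resp in responses.items()
            if resp.get("success") and resp.get("predictions")]

# Example: the built-in 'objects' model returned nothing; the custom models did.
sample = {
    "objects":  {"success": True, "predictions": []},
    "dark":     {"success": True, "predictions": [
        {"label": "People", "confidence": 0.91,
         "x_min": 10, "y_min": 20, "x_max": 120, "y_max": 240}]},
    "openlogo": {"success": True, "predictions": [
        {"label": "amazon", "confidence": 0.83,
         "x_min": 5, "y_min": 5, "x_max": 60, "y_max": 40}]},
}

print(populated_models(sample))  # only the custom models are populated
```

If 'objects' ever shows up in that list while custom models are configured, the built-in model is still being consulted.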


My interpretation of the help PDF is that this would:
1. cancel all custom models (because a custom model 'objects.pt' does not exist) and
2. use only the default (built-in) objects model ... and the built-in faces model (if enabled).

Therefore I'd expect that using 'objects' in the custom box is the same as entering any other non-existent custom model name.
This is why I enter 'none' in the custom box to cancel use of all custom models. It seems more explicit to me.
Thanks, I appreciate the reply with the screen grab. I think the problem is processing power when the ExDark model is running. I'm also asking too much of it, since the cameras have no artificial light and so depend on IR; essentially, once darkness falls, identification relies on the AI picking out headlights and license plates.
 
View attachment 98259

so you can see the 'dark'

Also, you can see the DS time with GPU, and the almost 20 seconds to find a person; that is why I have 999 in the '+ images' setting...

That was an amazon truck that triggered and took 19+ secs for the driver to exit and be visible.

What would have happened before is that the 'people' clone would motion-trigger and keep recording due to the truck's movement, and with only 10-15 in the '+ images' setting it would never see the person.
It is clear I need to get myself a new graphics card, even if just for testing. I dug out an Nvidia 210, but that won't get close to running CUDA. Slightly frustrating, since I almost upgraded to an Nvidia 1660 back in 2020 when they were a good deal; never mind. I dropped the Nvidia 210 in anyway, and maybe it will take a bit of load off my CPU for hardware decoding, though probably nothing of any consequence.
 
I tried using 999 with several of my cams, but I don't think my computer was up to the task... it sent my CPU skyrocketing when two or more cameras triggered at the same time :D
Stuck using the CPU version of DS (no NVidia card).
I am beginning to think that even using the default model with anything more than a handful of cameras would benefit from an Nvidia card, more so if running custom models, and certainly if you want to push the AI to any degree. In fact, even just running Testing & Tuning > Analyze with DeepStack on my Intel i5 4690 with 16GB RAM, a simple clip with a single car pushes the processor straight up to 85-99%. It seems I will need to decide between a Quadro P series, a 1050, or a 1660. The temptation is always to go with the best possible, but there will be no gaming, so Blue Iris alone is driving the decision; leaning towards the 1050/1660.

On the plus side, my daytime results are still very good even when 2-3 cameras are triggered and all running DeepStack. For now I will likely switch back to simple motion at night, and test DeepStack again once I have a suitable GPU.
 
Will this PNY Quadro P400 be adequate?

Funny, it went up $10 between pasting the link and testing it in preview.
That's the one I'm using. Shop around; a local computer store had the best price, much cheaper than Amazon. Just make sure it's the V2 model.
 
Even just running Testing & Tuning > Analyze with DeepStack on my Intel i5 4690 with 16GB RAM, a simple clip with a single car pushes the processor straight up to 85-99%.
In my previous experimentation, Testing & Tuning ALWAYS used every custom model in my folder. So it always maximized CPU usage.

I emailed Ken a while back asking if he could make it an option for T&T to respect the current Trigger > AI settings (for the camera's active profile) when playing back clips and alerts. What he did was make Testing & Tuning respect the AI settings in effect when playing back an alert. I just tried it, and it does indeed still appear to use the AI settings for the camera profile (as currently defined) that was in effect at the time of the alert. To accomplish this, I believe Ken uses new entries (several of which I requested) in the alert's database record. These are accessible via the JSON clipstats command (properties of clips and alerts); I've provided an example below. I haven't yet tested whether Testing & Tuning also respects the 'To confirm', 'To cancel', and 'Zones' settings.

1628867237106.png
 
I found that adding a Quadro P400 allowed DS to run on high without a problem, which improved detection accuracy.
I am a bit torn on whether to go with a P400 or just bite the bullet on a 1050/1660. I currently have 6x 2MP cameras, with 4x 4MP being added in the next few weeks, mostly running at 12-15 fps.
 
In my previous experimentation, Testing & Tuning ALWAYS used every custom model in my folder. So it always maximized CPU usage.

I emailed Ken a while back asking if he could make it an option for T&T to respect the current Trigger > AI settings (for the camera's active profile) when playing back clips and alerts. What he did was make Testing & Tuning respect the AI settings in effect when playing back an alert. I just tried it, and it does indeed still appear to use the AI settings for the camera profile (as currently defined) that was in effect at the time of the alert. To accomplish this, I believe Ken uses new entries (several of which I requested) in the alert's database record. These are accessible via the JSON clipstats command (properties of clips and alerts); I've provided an example below. I haven't yet tested whether Testing & Tuning also respects the 'To confirm', 'To cancel', and 'Zones' settings.

View attachment 98311
This would be a useful option to have no doubt, I think lots to come in future Blue Iris releases :thumb: :clap:
 
I am a bit torn on whether to go with a P400 or just bite the bullet on a 1050/1660, currently got 6x 2MP with 4x 4MP being added in the next few weeks, mostly running at 12-15 fps.
I'm running the P400 with 4x 4MP, 3x 8MP, and 3x 2MP, all of them enabled for DS. There was someone who worked out that the P400 can handle quite a few DS streams simultaneously. The other nice thing about the P400 is that it's rated for 30W max, much lower than a gaming GPU, if power consumption matters to you...
 
I'm running the P400 with 4x 4MP, 3x 8MP, and 3x 2MP, all of them enabled for DS. There was someone who worked out that the P400 can handle quite a few DS streams simultaneously. The other nice thing about the P400 is that it's rated for 30W max, much lower than a gaming GPU, if power consumption matters to you...
Thanks for that. If the P400 can handle the job and consumes sensible power, then I could be persuaded. The current pricing means there isn't a huge margin between the P400 and the 1050 GPUs, which offer almost three times the CUDA processing; I just wonder how much more scope that gives Blue Iris with DeepStack.
 
I ordered some Dell G5 DT i7-10700F computers with RTX 2060 graphics cards for some particular tasks. I took one and repurposed it for BI to see the performance increase with DeepStack.

The DS responses when using Testing and Tuning are under 75ms, compared to over 500ms with the CPU version.

Now, how to justify this computer to remain running BI. :facepalm:
 
I ordered some Dell G5 DT i7-10700F computers with RTX 2060 graphics cards for some particular tasks. I took one and repurposed it for BI to see the performance increase with DeepStack.

The DS responses when using Testing and Tuning are under 75ms, compared to over 500ms with the CPU version.

Now, how to justify this computer to remain running BI. :facepalm:
Very useful, and as I suspected: it appears from feedback that a card above the 1050/1660 threshold will yield response times in the low 100s of ms, depending on the scene of course. My daytime average is around 300ms, but night can be anything up to 5000ms depending on settings. I will await the half-price sale on the 1660 or above :wow:

Joking aside, thoughts on whether the 1050 Ti will give plenty to play with, or should I just get a 1660 and be done? :facepalm: The latter has almost double the CUDA cores.

Decided that the 1050 Ti was getting too close in price to the GTX 1660 range, and then I started looking at the RTX 2060 series; a slippery slope. It's been many years since I had anything other than a basic graphics card in the $100-150 range, so it probably wouldn't hurt to go mad on this occasion, granted the pricing isn't the best right now!
 
I created a custom model earlier using Google Colab, very specific to my camera setup during night hours and mainly for cars. Will see how it performs later; testing looks promising :thumb:

The Google Colab notes were a bit vague in places, so I felt my way a bit; if anyone tries it and gets stuck, feel free to ask. The custom model process itself was pretty quick; the time-consuming part is preparing the images. I only used about 50, while the recommendation is 300, I believe!

Assuming these custom models work out well, with some additional GPU power I can see this being a huge step forward for Blue Iris.
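For anyone curious what "preparing the images" involves: the DeepStack custom-model training flow is YOLO-based, so each training image needs a matching `.txt` label file with one line per object, `class_id x_center y_center width height`, all normalised to 0..1. A small sketch of that conversion (the box and frame sizes below are made-up examples):

```python
# Sketch: convert a pixel-space bounding box to a YOLO-format label line,
# as used when preparing images for DeepStack custom-model training.
# Example values (box coordinates, frame size) are illustrative.

def to_yolo_label(class_id, x_min, y_min, x_max, y_max, img_w, img_h):
    """Return 'class_id x_center y_center width height', normalised to 0..1."""
    x_c = (x_min + x_max) / 2 / img_w
    y_c = (y_min + y_max) / 2 / img_h
    w = (x_max - x_min) / img_w
    h = (y_max - y_min) / img_h
    return f"{class_id} {x_c:.6f} {y_c:.6f} {w:.6f} {h:.6f}"

# A car occupying pixels (100,200)-(500,400) in a 1920x1080 frame:
print(to_yolo_label(0, 100, 200, 500, 400, 1920, 1080))
```

Annotation tools like LabelImg can write these files for you; the fiddly part is simply doing it for every object in every image, which is why 300 images takes far longer than the training run itself.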
 
Quick update on the custom model results from last night. Considering I only used a sample set of 50 images, the results were very encouraging. Confirmation rates were vastly increased over the ExDark model; not through any fault of that model itself, I might add, simply that the custom model I created was specific to my use and camera setup. Time permitting, I shall add some more images to the model and run training again to fine-tune it.

I would encourage any users, even those with minimal technical ability, to play with the custom models, especially if your results are less than encouraging with the standard setup and/or ExDark. My suspicion is that if you are located in an area with reasonable ambient lighting at night, the standard models should be perfectly acceptable.

Another night with the custom model and it seems to work pretty well: a good hit rate and only one false positive, and this is without any additional training or images.
 
Hello everyone, I am currently considering buying a Jetson Nano or a graphics card like the Nvidia P4000.

Which is more useful: DeepStack running separately on a Jetson Nano, or DeepStack with a graphics card on the Blue Iris server?
Many greetings
 
Hello everyone, I am currently considering buying a Jetson Nano or a graphics card like the Nvidia P4000.

Which is more useful: DeepStack running separately on a Jetson Nano, or DeepStack with a graphics card on the Blue Iris server?
Many greetings
Can't help with the Jetson Nano. I think you mean the Nvidia P400; a P4000 would certainly run it well though :thumb: - I will hopefully be installing a 1660 card later today, so will have some feedback on that in due course.
 
Quick update on the custom model results from last night. Considering I only used a sample set of 50 images, the results were very encouraging. Confirmation rates were vastly increased over the ExDark model; not through any fault of that model itself, I might add, simply that the custom model I created was specific to my use and camera setup. Time permitting, I shall add some more images to the model and run training again to fine-tune it.

I would encourage any users, even those with minimal technical ability, to play with the custom models, especially if your results are less than encouraging with the standard setup and/or ExDark. My suspicion is that if you are located in an area with reasonable ambient lighting at night, the standard models should be perfectly acceptable.

Another night with the custom model and it seems to work pretty well: a good hit rate and only one false positive, and this is without any additional training or images.

What did you use for your training images? Did you have previous alert images to use, or did you go outside and walk in front of the cam in different outfits?


 
What did you use for your training images? Did you have previous alert images to use, or did you go outside and walk in front of the cam in different outfits?


:lmao: :lmao: Excellent. No, I just used previous alert images for training. Don't let that stop you from walking in front of your cameras in different outfits though :lol: :wtf:

Joking aside, the best training is specific to your site and camera setup: use previous images of the objects you want to identify, or do a walk-around and then use those. Just to add, I have actually turned on smart motion detection on two of my new 5442 cameras; this works OK, but it falls down a bit in certain situations. I think DeepStack training is best if you find the default model isn't cutting it in your setup; my big problem was at night with vehicles, due to having no street lights.
 