[tool] [tutorial] Free AI Person Detection for Blue Iris

Seems like the GPU goes to sleep; after a few minutes with no new images, the next image takes much longer, ~400 ms. I cannot see any low power/sleep setting for the GPU. Win10?

I am seeing a similar thing. I have changed the Nvidia Power Management Mode setting from "Optimal Power" to "Prefer Maximum Performance" - will monitor and see if it makes any difference.
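Another workaround some people use is to keep the GPU warm by sending DeepStack a small image on a timer, so the first real detection doesn't pay the wake-up cost. A minimal sketch from the Docker/Linux side, assuming DeepStack listens on port 5000 and keepalive.jpg is any small image you have handy (the host, port, image name, and interval are all placeholders for your own setup):

Code:
# Ping DeepStack's detection endpoint every 2 minutes to keep the GPU awake
while true; do
  curl -s -X POST -F image=@keepalive.jpg http://localhost:5000/v1/vision/detection > /dev/null
  sleep 120
done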
 
The problem is Unifi Protect. It causes the keyframe rate to be too low; ideally it should be 1.00. SSH into the Protect host and open server.js, then change the three instances of idrInterval:5 to idrInterval:1 as shown below.

JavaScript:
;a.DEFAULTS=[{idrInterval:1,minClientAdaptiveBitRate:0},{idrInterval:1,minClientAdaptiveBitRate:15e4},{idrInterval:1,minClientAdaptiveBitRate:0}]

Save the file then use SSH to restart protect:

Code:
systemctl restart unifi-protect

I only have two cameras in protect so I don't know what would happen with many cameras.

Since doing this my keyframe rate is 1.00 and BI works the same as if my G3 cameras were in standalone mode. Also, you will have to edit the file every time you update Protect, since the update replaces server.js.
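To make the re-apply quick after an update, the whole edit can be scripted. A minimal sketch, assuming server.js lives at /usr/share/unifi-protect/app/service/server.js - that path is an assumption, so locate your own copy first if it doesn't match:

Code:
# Assumed path - find your own server.js first if this doesn't match:
#   find / -name server.js 2>/dev/null | grep -i protect
JS=/usr/share/unifi-protect/app/service/server.js
cp "$JS" "$JS.bak"                              # keep a backup
sed -i 's/idrInterval:5/idrInterval:1/g' "$JS"  # patch every idrInterval:5
systemctl restart unifi-protect                 # restart Protect to pick it up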


Hi, I SSHed into my Cloudkey and edited my server.js, though the syntax in my file is slightly different, perhaps because I'm on a Cloudkey Gen2+ (firmware v1.1.13), which is a bit newer than yours. This is what mine looks like in server.js:

Code:
DEFAULTS=[{fps:15,idrInterval:1,bitrate:3e6,minMotionAdaptiveBitRate:75e4,minClientAdaptiveBitRate:0},{fps:15,idrInterval:1,bitrate:12e5,minMotionAdaptiveBitRate:75e4,minClientAdaptiveBitRate:15e4},{fps:15,idrInterval:1,bitrate:2e5,minMotionAdaptiveBitRate:0,minClientAdaptiveBitRate:0}],

I changed "idrInterval:5" to "idrInterval:1" in these three spots and restarted with:

Code:
systemctl restart unifi-protect

After doing so the keyframe rate stays at 0.20, though a few times I saw it go higher, but I think only when the cameras were rebooted. After nothing changed, I rebooted both the Cloudkey and the four cameras I am testing with, and it still remains at 0.20. I also rebooted the BI server, with no change to the keyframe rate.

I also noticed my server.js file had one extra "idrInterval:5", which I changed to "idrInterval:1" as well, and it made no difference either. That code section with the extra "idrInterval" was referring to a UVC G4 Pro PTZ:

Code:
DEFAULTS[o.RESOLUTION_ID.LOW]};switch(e){case"UVC G4 PTZ":case"UVC G4 Pro":t={...o.DEFAULTS[o.RESOLUTION_ID.HIGH],fps:24,idrInterval:1,bitrate:6e6,minMotionAdaptiveBitRate:2e6,minClientAdaptiveBitRate:0}}return[t,r,s]},
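If you want to double-check that the edit caught every occurrence (including extras like this PTZ entry), a quick grep sketch, run from whatever directory holds the server.js you edited:

Code:
# Count remaining idrInterval:5 entries - should be 0 after the edit
grep -o 'idrInterval:5' server.js | wc -l
# Count the patched entries for comparison
grep -o 'idrInterval:1' server.js | wc -l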

Thanks again for the tips on what fixed this for you; it led me down the right path. I'm not sure what is wrong with mine, but my version with these changes doesn't affect the keyframe rate the same way as yours does.
 
I spoke too soon! I have this working with four Ubiquiti G3 cameras; they are getting the 1.0 keyframe rate instead of 0.20, and all four cameras are now working. I will try adding more.

For others with Ubiquiti Protect who may be having the same problem: I upgraded to the versions shown below and followed @mayop's info.

Gen2+ Cloudkey Firmware 2.0.24

Protect version is 1.17.0 Beta 9


Thanks again @mayop for your very helpful info on getting my cameras working.
 
Hi everyone, wondering if anyone can help.

I switched to using the Jetson Nano for DeepStack detection the other day. I was super happy with the performance, but have now realised that the accuracy is way down compared to the CPU version. People detected at 97-100% are now being detected as low as 40%, or not at all.

I'm using "deepquestai/deepstack:jetpack".

Does anyone know if there is anything that can be done about this? I'm really keen to get away from the CPU version.
 

I think there’s an environment variable to set speed vs accuracy.
 

As johannp0218 said, try running in High mode if you have not tried that already. The option is -e MODE=High. Here's what I use:

Code:
sudo docker run --runtime nvidia --restart=unless-stopped -e MODE=High -e VISION-DETECTION=True -p 5050:5000 deepquestai/deepstack:jetpack-2020.12

I haven't had the Nano running long enough or paid enough attention to know if it's less accurate than the CPU versions I'm running.
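If you want to sanity-check the container before pointing Blue Iris at it, you can hit the detection endpoint directly. A minimal sketch, assuming a test image test.jpg on the host and the 5050 port mapping from the command above:

Code:
# POST a test image to DeepStack and print the JSON predictions
curl -s -X POST -F image=@test.jpg http://localhost:5050/v1/vision/detection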
 

I thought High was for speed, not accuracy: High would yield fast times but poor detection, while Low would yield slower times but more accurate results.
 

“DeepStack offers three modes allowing you to trade off speed for performance. During startup, you can specify the performance mode to be “High”, “Medium”, or “Low”. The default mode is “Medium”.”

Maybe I’m reading it wrong, haha.
 
I think I was reading it wrong; after reading it again, it’s performance mode, so like Robpur said, High should get you better accuracy.
 
I believe so. But note, the drop in % confidence may not mean a drop in accuracy (in fact, it is likely more accurate / more critical). DeepStack has changed its thresholding over the recent updates, so you'll have to adjust what is an acceptable threshold in your setup, regardless of the given % value.
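One way to see where your own threshold should sit is to run a few saved alert images through the API and look at the raw confidences. A minimal sketch with curl and jq; the host, port, image name, and the 0.4 cutoff are all just example values:

Code:
# Show only detections above an example 0.4 confidence cutoff
curl -s -X POST -F image=@test.jpg http://localhost:5050/v1/vision/detection \
  | jq '.predictions[] | select(.confidence > 0.4) | {label, confidence}'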
 
Oh ok, that’s good to know, thanks.
 
What times are you getting with the Jetson?
 
I've tried Medium and High modes and it's pretty much the same, unfortunately.

I did tests on the same images between the CPU version and Jetpack one.

The differences are really too great to use the Nano.

I notice that you @robpur have 2020.12 at the end though; is that just your container name or a newer version?

Ideally the Jetson version would just use the same model as the CPU one, but I have no idea how you would go about doing that.
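For anyone wanting to reproduce this kind of comparison, a minimal sketch that sends the same frame to both instances and prints the person confidences side by side (the two URLs and the image name are placeholders for wherever your CPU and Jetson instances listen):

Code:
# Compare person confidences from two DeepStack instances on the same image
for url in http://cpu-host:5000 http://jetson-host:5050; do
  echo "== $url =="
  curl -s -X POST -F image=@same_frame.jpg "$url/v1/vision/detection" \
    | jq '[.predictions[] | select(.label == "person") | .confidence]'
done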
 

Hi guys,

Here's an example to show you what I mean, with the same image used against the CPU and Jetson versions in the same mode.

You can see that the level of confidence on the person (97% vs 41%) is wildly different and other objects were not picked up at all, such as the bowl.

Jetson

deepstack_jetson.png

CPU

deepstack_cpu.png
 
I don't understand why my AI tool randomly just keeps storing images in the queue... it's so frustrating. I try rebooting, deleting all images in the folder, etc., and it always does it randomly. At the moment I have 32 images in the queue and everything shows it is up and running.

Edit: what makes it even weirder is, if I go to Settings --> AI Server URL(s) --> Edit and upload a test image, it works straight away... it does not go into the "queue"?
 
Do you use both versions on a Jetson Nano device?
Good question. No, the CPU version I am comparing it with is a Windows version that I downloaded from the DeepStack website six months or so ago and have been using with Blue Iris since. It's only an i3 NUC (ESXi VM), so the plan was to offload the intense image processing to a Jetson, as the NUC is already kept pretty busy with Blue Iris and Home Assistant.
 