[tool] [tutorial] Free AI Person Detection for Blue Iris

kalakasan

n3wb
Joined
Jan 4, 2018
Messages
5
Reaction score
2
Yes, I did hit the save button - the log didn't throw any errors. Here's a screen grab of the log result: [attachment: AIToolsLog.jpg]
 

balucanb

Getting the hang of it
Joined
Sep 19, 2020
Messages
147
Reaction score
23
Location
TX
Yes, I did hit the save button - the log didn't throw any errors. Here's a screen grab of the log result: [attachment 77471]
Just to be clear, I mean this screen:

[attachment: settings.JPG]

If you got no messages after putting the info in here, then I am not sure why it is not working - the assumption is that you have the correct info and Telegram set up correctly, of course.
 

cjowers

Getting the hang of it
Joined
Jan 28, 2020
Messages
107
Reaction score
36
Location
AUS
[attachment 77441]

Seems like the GPU goes to sleep - after a few minutes with no new images, the next one takes much longer, ~400ms. I cannot see any low power/sleep setting for the GPU in Win10?
I probably wouldn't sweat a lost 300ms every now and then :), but you could try looking in the NVIDIA Control Panel power management settings to make sure (check the tabs for both global settings and program-specific settings - DeepStack).
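
If you want to confirm what the card is actually doing, you can also watch its performance state from a command prompt with nvidia-smi (it ships with the NVIDIA driver; the exact install path varies by driver version, so treat this as a rough sketch rather than a guaranteed recipe):

Code:
nvidia-smi --query-gpu=pstate,power.draw,utilization.gpu --format=csv -l 5

That samples the GPU's performance state, power draw and utilisation every 5 seconds. P0 means full clocks; if it sits at P8 while idle, the card has clocked down, which would line up with the slower first detection after a quiet spell.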
 

kalakasan

n3wb
Joined
Jan 4, 2018
Messages
5
Reaction score
2
Just to be clear, I mean this screen:

[attachment 77472]

If you got no messages after putting the info in here, then I am not sure why it is not working - the assumption is that you have the correct info and Telegram set up correctly, of course.
Yes, that's the one - I'm pretty sure all the entries are correct. Maybe it's a Telegram issue.
 

damaar

n3wb
Joined
Dec 19, 2020
Messages
7
Reaction score
1
Location
nz
[attachment 77441]

Seems like the GPU goes to sleep - after a few minutes with no new images, the next one takes much longer, ~400ms. I cannot see any low power/sleep setting for the GPU in Win10?
I am seeing a similar thing. I have changed the NVIDIA Power Management Mode setting from 'Optimal Power' to 'Prefer Maximum Performance' - I will monitor and see if it makes any difference.
 

Ripper99

n3wb
Joined
Dec 12, 2020
Messages
10
Reaction score
2
Location
Canada
The problem is Unifi Protect. It causes the keyframe rate to be too low. Ideally it should be 1.00 as shown below:


then change the three instances of idrInterval:5 to idrInterval:1 as shown below.

JavaScript:
;a.DEFAULTS=[{idrInterval:1,minClientAdaptiveBitRate:0},{idrInterval:1,minClientAdaptiveBitRate:15e4},{idrInterval:1,minClientAdaptiveBitRate:0}]
Save the file then use SSH to restart protect:

Code:
systemctl restart unifi-protect
I only have two cameras in protect so I don't know what would happen with many cameras.

Since doing this my keyframe rate is 1.00 and BI works the same as if my G3 cameras were in standalone mode. Also, you will have to edit the file every time you update your

Hi, I SSHed into my Cloudkey and edited my server.js, though the syntax in the file is slightly different - perhaps because I'm on Cloudkey Gen2+ firmware version v1.1.13, which is a bit newer than yours - but this is what mine looks like in server.js:

Code:
DEFAULTS=[{fps:15,idrInterval:1,bitrate:3e6,minMotionAdaptiveBitRate:75e4,minClientAdaptiveBitRate:0},{fps:15,idrInterval:1,bitrate:12e5,minMotionAdaptiveBitRate:75e4,minClientAdaptiveBitRate:15e4},{fps:15,idrInterval:1,bitrate:2e5,minMotionAdaptiveBitRate:0,minClientAdaptiveBitRate:0}],
I changed "idrInterval:5" to "idrInterval:1" in these three spots and restarted with

Code:
systemctl restart unifi-protect
After doing so the keyframe rate stays at 0.20, though a few times I saw it go higher - but I think only when cameras were rebooted. After nothing changed, I rebooted both the Cloudkey and the four cameras I am testing with, and it still remains at 0.20. I also rebooted the BI server and there was no change to the keyframe rate.

I also noticed that my server.js file had one extra "idrInterval:5", which I changed to "idrInterval:1" as well, and it made no difference either; that code section with the extra "idrInterval" was referring to a UVC G4 Pro PTZ:

Code:
DEFAULTS[o.RESOLUTION_ID.LOW]};switch(e){case"UVC G4 PTZ":case"UVC G4 Pro":t={...o.DEFAULTS[o.RESOLUTION_ID.HIGH],fps:24,idrInterval:1,bitrate:6e6,minMotionAdaptiveBitRate:2e6,minClientAdaptiveBitRate:0}}return[t,r,s]},
Thanks again for the tips on what fixed this for you - it led me down the right path. I'm not sure what is wrong with mine, but my version with these changes doesn't affect the keyframe rate the same way as yours does.
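
For anyone who ends up redoing this after a Protect update, here is a rough sketch of reapplying the edit over SSH - the path to server.js is only a placeholder (it differs between Cloudkey/Protect versions), and the copy is just a backup in case the pattern ever changes:

Code:
# back up the original before touching it (path is a placeholder - use wherever your server.js lives)
cp /path/to/server.js /path/to/server.js.bak
# swap every idrInterval:5 for idrInterval:1 in place
sed -i 's/idrInterval:5/idrInterval:1/g' /path/to/server.js
# restart Protect so the change takes effect
systemctl restart unifi-protect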
 

Ripper99

n3wb
Joined
Dec 12, 2020
Messages
10
Reaction score
2
Location
Canada
I'm not sure what is wrong with mine, but my version with these changes doesn't affect the keyframe rate the same way as yours does.
I spoke too soon - I have this working with four Ubiquiti G3 cameras and they are getting the 1.0 keyframe rate instead of 0.20. All four cameras are now working; I will try adding more.

For others with Ubiquiti Protect who may be having the same problem: I upgraded to the versions shown below and followed @mayop's info.

Gen2+ Cloudkey Firmware 2.0.24

Protect version is 1.17.0 Beta 9


Thanks again @mayop for your very helpful info on getting my cameras working.
 

AskNoOne

n3wb
Joined
Dec 20, 2020
Messages
7
Reaction score
5
Location
UK
Hi everyone, wondering if anyone can help.

I switched to using the Jetson Nano for DeepStack detection the other day. I was super happy with the performance but have now realised that the accuracy is waaay down compared to the CPU version. People detected as 97-100% are now being detected as low as 40% or not at all.

I'm using "deepquestai/deepstack:jetpack".

Does anyone know if there is anything that can be done about this? I'm really keen to get away from the CPU version.
 

johannp02180

Young grasshopper
Joined
Nov 30, 2020
Messages
39
Reaction score
5
Location
USA
Hi everyone, wondering if anyone can help.

I switched to using the Jetson Nano for DeepStack detection the other day. I was super happy with the performance but have now realised that the accuracy is waaay down compared to the CPU version. People detected as 97-100% are now being detected as low as 40% or not at all.

I'm using "deepquestai/deepstack:jetpack".

Does anyone know if there is anything that can be done about this? I'm really keen to get away from the CPU version.
I think there’s an environment variable to set speed vs accuracy


 

robpur

Getting comfortable
Joined
Jul 31, 2014
Messages
278
Reaction score
1,350
Location
Washington State
Hi everyone, wondering if anyone can help.

I switched to using the Jetson Nano for DeepStack detection the other day. I was super happy with the performance but have now realised that the accuracy is waaay down compared to the CPU version. People detected as 97-100% are now being detected as low as 40% or not at all.

I'm using "deepquestai/deepstack:jetpack".

Does anyone know if there is anything that can be done about this? I'm really keen to get away from the CPU version.
As johannp0218 said, try running in high mode if you have not tried that already. The option is -e MODE=High. Here's what I use.

sudo docker run --runtime nvidia --restart=unless-stopped -e MODE=High -e VISION-DETECTION=True -p 5050:5000 deepquestai/deepstack:jetpack-2020.12

I haven't had the Nano running long enough or paid enough attention to know if it's less accurate than the CPU versions I'm running.
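
If you want to sanity-check what the container is returning outside of AI Tool, you can post a test image straight at DeepStack's detection endpoint (assuming the 5050 port mapping from the command above; test.jpg is just a placeholder for any snapshot):

Code:
# send one image to the detection API and print the raw JSON response
curl -s -X POST -F image=@test.jpg http://localhost:5050/v1/vision/detection

The response lists each detected object with a label and a confidence value, which makes it easy to compare the same image between the Nano and a CPU install.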
 

johannp02180

Young grasshopper
Joined
Nov 30, 2020
Messages
39
Reaction score
5
Location
USA
As johannp0218 said, try running in high mode if you have not tried that already. The option is -e MODE=High. Here's what I use.

sudo docker run --runtime nvidia --restart=unless-stopped -e MODE=High -e VISION-DETECTION=True -p 5050:5000 deepquestai/deepstack:jetpack-2020.12

I haven't had the Nano running long enough or paid enough attention to know if it's less accurate than the CPU versions I'm running.
I thought High was for speed, not accuracy - like High would yield fast times but poor detection, while Low would yield slower times but more accurate results.


 

johannp02180

Young grasshopper
Joined
Nov 30, 2020
Messages
39
Reaction score
5
Location
USA
As johannp0218 said, try running in high mode if you have not tried that already. The option is -e MODE=High. Here's what I use.

sudo docker run --runtime nvidia --restart=unless-stopped -e MODE=High -e VISION-DETECTION=True -p 5050:5000 deepquestai/deepstack:jetpack-2020.12

I haven't had the Nano running long enough or paid enough attention to know if it's less accurate than the CPU versions I'm running.
“DeepStack offers three modes allowing you to trade off speed for performance. During startup, you can specify the performance mode to be “High”, “Medium” or “Low”.

The default mode is “Medium”.”

Maybe I’m reading it wrong haha
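
If you're ever unsure which mode a running container was actually started with, you can check its environment - a quick sketch, assuming you look up the container name with docker ps first (container_name is a placeholder):

Code:
# find the DeepStack container's name, then list the environment it was started with
sudo docker ps
sudo docker inspect -f '{{.Config.Env}}' <container_name>

MODE=High (or whatever was passed with -e) should show up in that list.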


 

johannp02180

Young grasshopper
Joined
Nov 30, 2020
Messages
39
Reaction score
5
Location
USA
I think I was reading it wrong - after reading it again, it's a performance mode, so like robpur said, High should get you better accuracy.


 

cjowers

Getting the hang of it
Joined
Jan 28, 2020
Messages
107
Reaction score
36
Location
AUS
I think I was reading it wrong - after reading it again, it's a performance mode, so like robpur said, High should get you better accuracy.


I believe so. But note that the drop in % confidence may not mean a drop in accuracy (in fact, it is likely more accurate / more critical). DeepStack has changed its thresholding over the recent updates, so you'll have to adjust what is an acceptable threshold in your setup, regardless of the given % value.
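
One way to work out where your new "normal" confidences sit is to run a few representative snapshots through the endpoint and just list the label/confidence pairs - a sketch assuming DeepStack is on port 5050 and jq is installed (snapshot.jpg is a placeholder):

Code:
# list every prediction's label and confidence for one snapshot
curl -s -X POST -F image=@snapshot.jpg http://localhost:5050/v1/vision/detection | jq '.predictions[] | {label, confidence}'

Do that for a handful of day and night images and set your threshold just below where the genuine detections land.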
 

johannp02180

Young grasshopper
Joined
Nov 30, 2020
Messages
39
Reaction score
5
Location
USA
I believe so. But note that the drop in % confidence may not mean a drop in accuracy (in fact, it is likely more accurate / more critical). DeepStack has changed its thresholding over the recent updates, so you'll have to adjust what is an acceptable threshold in your setup, regardless of the given % value.
Oh ok that’s good to know, thanks


 

johannp02180

Young grasshopper
Joined
Nov 30, 2020
Messages
39
Reaction score
5
Location
USA
Hi everyone, wondering if anyone can help.

I switched to using the Jetson Nano for DeepStack detection the other day. I was super happy with the performance but have now realised that the accuracy is waaay down compared to the CPU version. People detected as 97-100% are now being detected as low as 40% or not at all.

I'm using "deepquestai/deepstack:jetpack".

Does anyone know if there is anything that can be done about this? I'm really keen to get away from the CPU version.
What times are you getting with the Jetson?


 

AskNoOne

n3wb
Joined
Dec 20, 2020
Messages
7
Reaction score
5
Location
UK
I thought High was for speed, not accuracy - like High would yield fast times but poor detection, while Low would yield slower times but more accurate results.


I've tried the Medium and High modes and it's pretty much the same, unfortunately.

I did tests on the same images between the CPU version and Jetpack one.

The differences are really too great to use the Nano.

I notice that you @robpur have 2020.12 at the end though - is that just your container name or a newer version?

Ideally the Jetson version would just use the same model as the CPU one but I have no idea how you would go about doing that.
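
For anyone who wants to run the same comparison, something like this would do it - a sketch assuming the CPU container answers on port 5000 and the Jetson one on 5050 (the host names and image file are placeholders for your own setup):

Code:
# post the same image to both instances and print each response for comparison
for url in http://cpu-host:5000 http://jetson-host:5050; do
  echo "== $url =="
  curl -s -X POST -F image=@same_test_image.jpg "$url/v1/vision/detection"
  echo
done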
 

AskNoOne

n3wb
Joined
Dec 20, 2020
Messages
7
Reaction score
5
Location
UK
As johannp0218 said, try running in high mode if you have not tried that already. The option is -e MODE=High. Here's what I use.

sudo docker run --runtime nvidia --restart=unless-stopped -e MODE=High -e VISION-DETECTION=True -p 5050:5000 deepquestai/deepstack:jetpack-2020.12

I haven't had the Nano running long enough or paid enough attention to know if it's less accurate than the CPU versions I'm running.
Hi guys,

Here's an example to show you what I mean with the same image used against the CPU and Jetson versions, on the same mode.

You can see that the level of confidence on the person (97% vs 41%) is wildly different and other objects were not picked up at all, such as the bowl.

Jetson

[attachment: deepstack_jetson.png]

CPU

[attachment: deepstack_cpu.png]
 