> Yes, I did hit the save button - the log didn't throw any errors. Here's a screen grab of the log result: [log screenshot]

Just to be clear, I mean this screen: [screenshot]
> Seems like the GPU goes to sleep - every new image after no new images for a few minutes takes much longer, ~400 ms. I can't see any low-power/sleep setting for the GPU. Win10? [screenshot]

I probably wouldn't sweat a lost 300 ms every now and then, but you could try looking in the NVIDIA Control Panel power management settings to make sure (check the tabs for both global settings and program-specific settings - DeepStack).
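If it is the card clocking down, another workaround is to keep it warm by poking DeepStack on a timer. A rough sketch, assuming DeepStack is exposed on port 5050 as in the docker commands later in this thread, and that snapshot.jpg is any small test image - adjust both for your setup:
Code:
# Hypothetical keep-alive loop: one dummy detection every 2 minutes so the
# GPU never idles long enough to drop into a low-power state.
while true; do
  curl -s -o /dev/null -X POST -F image=@snapshot.jpg http://127.0.0.1:5050/v1/vision/detection
  sleep 120
done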
Is there a variation that load-shares between the GPU and CPU, or is this a one-or-the-other thing? I'm using DeepStack GPU with CPU DeepStack in the background, and a Jetson.
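For what it's worth, one way to have both available at once is to run the CPU and GPU builds as separate containers on different ports and split your cameras between them - two independent servers rather than true load sharing. A sketch; the image tags are the standard ones from Docker Hub and the ports are arbitrary:
Code:
# CPU instance on host port 5000, GPU instance on host port 5001.
docker run -d -e VISION-DETECTION=True -p 5000:5000 deepquestai/deepstack
docker run -d --gpus all -e VISION-DETECTION=True -p 5001:5000 deepquestai/deepstack:gpu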
Yes, that's the one - I'm pretty sure all the entries are correct. Maybe it's a Telegram issue.
If you got no messages after putting the info in here, then I am not sure why it is not working - the assumption is that you have the correct info and Telegram set up correctly, of course.
I am seeing a similar thing. I have changed the NVIDIA Power Management Mode setting from "Optimal Power" to "Prefer Maximum Performance" - will monitor and see if it makes any difference.
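If anyone wants to confirm that the card really is downclocking between detections, nvidia-smi (installed with the driver, on Windows as well) can log the power state over time:
Code:
# Print SM clock, power draw and performance state every 5 seconds;
# watch for the P-state dropping (e.g. P0 -> P8) while DeepStack sits idle.
nvidia-smi --query-gpu=clocks.sm,power.draw,pstate --format=csv -l 5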
The problem is UniFi Protect. It causes the keyframe rate to be too low. Ideally it should be 1.00, as shown below: [screenshot]

Then change the three instances of idrInterval:5 to idrInterval:1 as shown below:
JavaScript:
a.DEFAULTS=[{idrInterval:1,minClientAdaptiveBitRate:0},{idrInterval:1,minClientAdaptiveBitRate:15e4},{idrInterval:1,minClientAdaptiveBitRate:0}]
Save the file, then use SSH to restart Protect:
Code:
systemctl restart unifi-protect
I only have two cameras in Protect, so I don't know what would happen with many cameras. Since doing this my keyframe rate is 1.00 and BI works the same as if my G3 cameras were in standalone mode. Also, you will have to edit the file every time you update Protect.
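Since the edit gets wiped by updates, it can be scripted so it's quick to reapply. A sketch - /path/to/protect-app.js here is a stand-in for the minified file you edited above, not the real path:
Code:
# Reapply the keyframe tweak after a Protect update, then restart Protect.
sed -i 's/idrInterval:5/idrInterval:1/g' /path/to/protect-app.js
systemctl restart unifi-protect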
> I'm not sure what is wrong with mine, but my version with these changes doesn't affect the keyframe rate the same way as with your version:
> JavaScript:
> DEFAULTS=[{fps:15,idrInterval:1,bitrate:3e6,minMotionAdaptiveBitRate:75e4,minClientAdaptiveBitRate:0},{fps:15,idrInterval:1,bitrate:12e5,minMotionAdaptiveBitRate:75e4,minClientAdaptiveBitRate:15e4},{fps:15,idrInterval:1,bitrate:2e5,minMotionAdaptiveBitRate:0,minClientAdaptiveBitRate:0}],
> Code:
> systemctl restart unifi-protect
> JavaScript:
> DEFAULTS[o.RESOLUTION_ID.LOW]};switch(e){case"UVC G4 PTZ":case"UVC G4 Pro":t={...o.DEFAULTS[o.RESOLUTION_ID.HIGH],fps:24,idrInterval:1,bitrate:6e6,minMotionAdaptiveBitRate:2e6,minClientAdaptiveBitRate:0}}return[t,r,s]},

I spoke too soon - I have this working with four Ubiquiti G3 cameras, and they are getting the 1.0 keyframe instead of 0.20. Now all four cameras are working; I will try adding more.
> Hi everyone, wondering if anyone can help.
> I switched to using the Jetson Nano for DeepStack detection the other day. I was super happy with the performance, but I have now realised that the accuracy is way down compared to the CPU version. People detected at 97-100% are now being detected as low as 40%, or not at all.
> I'm using "deepquestai/deepstack:jetpack".
> Does anyone know if there is anything that can be done about this? I'm really keen to get away from the CPU version.

I think there's an environment variable to set speed vs accuracy.
As johannp0218 said, try running in High mode if you have not tried that already. The option is -e MODE=High. Here's what I use:
Code:
sudo docker run --runtime nvidia --restart=unless-stopped -e MODE=High -e VISION-DETECTION=True -p 5050:5000 deepquestai/deepstack:jetpack-2020.12
I haven't had the Nano running long enough, or paid enough attention, to know if it's less accurate than the CPU versions I'm running.
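If you want to see exactly what each mode returns, you can hit the API directly and read the confidence values out of the JSON - a minimal example, assuming the container above and any test image with a person in it (person.jpg is a placeholder name):
Code:
# POST an image to the detection endpoint; the response lists each detected
# object with a label, a confidence value and a bounding box.
curl -s -X POST -F image=@person.jpg http://127.0.0.1:5050/v1/vision/detection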
I thought High was for speed, not accuracy - like High would yield fast times but poor detection, while Low would yield slower times but more accurate results.
"DeepStack offers three modes allowing you to trade off speed for performance. During startup, you can specify the performance mode to be 'High', 'Medium' or 'Low'."
> I think I was reading it wrong; after reading it again, it's performance mode, so like Robpur said, High should get you better accuracy.

I believe so. But note, the drop in % confidence may not mean a drop in accuracy (in fact, it is likely more accurate / more critical). DeepStack have changed their thresholding over the recent updates, so you'll have to adjust what an acceptable threshold is in your setup, regardless of the given % value.
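One way to experiment with what threshold fits your setup is to set it on the request itself - DeepStack's detection API accepts a min_confidence field (0-1 scale), so the server only returns detections above it. A sketch, with the same assumed container and port as above:
Code:
# Only return detections with confidence >= 0.4 for this request.
curl -s -X POST -F image=@person.jpg -F min_confidence=0.4 http://127.0.0.1:5050/v1/vision/detection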
Oh ok, that's good to know, thanks.
What times are you getting with the Jetson?
I just averaged the last 20 results from my Nano and came up with 468 ms. The sizes of the images being submitted are 1280x720, 1920x1080 and 2048x1536. I'm running in High mode.
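If anyone else wants to benchmark theirs the same way, a rough shell loop does it - a sketch, assuming DeepStack on 127.0.0.1:5050 and a local test.jpg:
Code:
# Time 20 detection requests and print the average latency in milliseconds.
total=0
for i in $(seq 1 20); do
  start=$(date +%s%3N)
  curl -s -o /dev/null -X POST -F image=@test.jpg http://127.0.0.1:5050/v1/vision/detection
  end=$(date +%s%3N)
  total=$((total + end - start))
done
echo "average ms: $((total / 20))"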
I've tried Medium and High modes and it's pretty much the same, unfortunately.
Hi guys,