For those that upgraded to 2.0.x, are you seeing better detection response or better accuracy? Is it stable at this point, or should we wait until some of the initially reported bugs are ironed out?
So I decided to compare detection times of an Intel HD 530 (i7-6700K) vs an Intel UHD 770 (i7-12700) using CPAI's YOLOv5 .NET module in GPU DirectML mode.
I have the same Intel HD 530 (i7-6700K), and using CPAI 2.0.6 and the .NET version (with ipcam-combined, running on the Intel built-in GPU) I'm getting the same ~250 ms responses. But I did notice much better behavior in the rain: previously, when it was foggy and raining, the cameras would go mad with triggers (which would be negated by AI), and that could pin the CPU a lot at night. Using .NET and the Intel GPU, the CPU is nowhere near as affected. Moving to a separate IR light, a few feet from the camera, also helps a lot with rain/snow triggers on IR cameras. Thanks for taking the time to do some testing.
I must admit I expected better performance from the GPU, but now that I have read up on the Intel UHD 630, I guess this is about what can be expected.
My CPU (i5-8500T) and the Intel UHD 630 are almost the same speed. I'm getting around 200-300 ms response times.
Have you tested DeepStack with these systems? Here are my times on an i5-8500 with Intel UHD 630, using DeepStack in medium mode with substreams and the default model. For the most part they are in the 200-400 ms range, though at times it gets bogged down to 6,000-8,000 ms, likely when many cams trigger simultaneously.
Yes, I have used DeepStack on both systems, but it has been too long to remember my exact timings. Does DeepStack use the embedded GPU like CP?
DeepStack only supports Nvidia GPUs with CUDA installed.
As mike noted, DS does not support Intel. I would have expected a bigger improvement going from the i7-6700 to the i7-12700, since it is more than 3x more powerful (unless it's topping out at 100% CPU usage), because you are actually only seeing a 2/3 reduction in processing time.
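As a quick sanity check on that arithmetic: a 3x speedup scales per-image time to a third of the baseline, which is the same thing as a 2/3 reduction. A minimal sketch (the 750 ms baseline is purely illustrative, not a measured figure from this thread):

```python
# A 3x speedup scales per-image time to 1/3 of the baseline,
# i.e. a 2/3 reduction in processing time.
speedup = 3.0
old_time_ms = 750.0  # illustrative baseline only
new_time_ms = old_time_ms / speedup
reduction = 1 - new_time_ms / old_time_ms

print(f"{new_time_ms:.0f} ms ({reduction:.0%} reduction)")
```

So a chip that is "more than 3x more powerful" should cut times by more than 2/3 if detection is CPU-bound.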
Are you talking about upgrading CodeProject or jumping from DeepStack?
Logging in the dashboard is less stable. It also no longer reports image processing times.

I can't seem to get the LPR module to install. I've read back about 10 pages on the forum here. I tried the full uninstall, deleted the two folders, then reinstalled, but no files ever show up in the C:\Program Files\CodeProject\AI\modules\ALPR\bin\windows\python37 folder, which explains the 'cannot find the file specified' error below. I get a similar error when I try to install the OCR module. I only have YOLOv5 6.2 running in CUDA GPU mode.
Also, C:\Program Files\CodeProject\AI\downloads\windows\python37 is empty, which is probably the source of the downstream issue.
Any ideas?
Here are the logs from the install attempt:
14:15:42: ALPR has left the building
14:15:42: Starting C:\Program Files...ws\python37\venv\Scripts\python "C:\Program Files...\modules\ALPR\ALPR_adapter.py"
14:15:42:
14:15:42: Module 'License Plate Reader' (ID: ALPR)
14:15:42: Active: True
14:15:42: GPU: Support enabled
!14:15:42: Parallelism: 0
!14:15:42: Platforms: windows,linux,macos,macos-arm64
!14:15:42: FilePath: ALPR_adapter.py
!14:15:42: ModulePath: ALPR
!14:15:42: Install: PostInstalled
!14:15:42: Runtime:
!14:15:42: Queue: ALPR_queue
!14:15:42: Start pause: 1 sec
!14:15:42: Valid: True
!14:15:42: Environment Variables
!14:15:42: PLATE_CONFIDENCE = 0.4
!14:15:42:
!14:15:42: Error trying to start License Plate Reader (ALPR_adapter.py)
!14:15:42: An error occurred trying to start process 'C:\Program Files\CodeProject\AI\modules\ALPR\bin\windows\python37\venv\Scripts\python' with working directory 'C:\Program Files\CodeProject\AI\modules\ALPR'. The system cannot find the file specified.
!14:15:42: at System.Diagnostics.Process.StartWithCreateProcess(ProcessStartInfo startInfo) at CodeProject.AI.API.Server.Frontend.ModuleRunner.StartProcess(ModuleConfig module)
!14:15:42: *** Please check the CodeProject.AI installation completed successfully
!14:16:29: Response received (id 1e4262b7-1540-41d8-93ad-68113c588399)
!14:17:00: Queued: 'custom' request, id c96ee7d0-e235-4862-a510-068ea5719602

What version of CP.AI are you on? It should be v2.0.6; there were some issues with previous 2.0.x versions.
Also, you may want to check your antivirus and firewall to see if either is blocking CP.AI.
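One quick way to confirm the installer actually populated what the server is trying to launch is to check the paths named in the error log on disk. A minimal sketch, assuming the default install root (adjust if CodeProject.AI lives elsewhere; the exact file list is taken from the log above, not from any official spec):

```python
from pathlib import Path

def report(paths):
    """Return the subset of expected paths that are missing on disk."""
    return [p for p in paths if not Path(p).exists()]

# Paths the server tries to use, per the 'cannot find the file specified'
# error in the log above.
base = r"C:\Program Files\CodeProject\AI"
expected = [
    base + r"\modules\ALPR\bin\windows\python37\venv\Scripts\python.exe",
    base + r"\modules\ALPR\ALPR_adapter.py",
    base + r"\downloads\windows\python37",
]

for missing in report(expected):
    print("MISSING:", missing)
```

If the venv python is missing but the downloads folder is empty too, the module download itself failed (antivirus and firewall blocking are common causes), so reinstalling the module alone won't help.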
Is anyone else getting significantly slower detection with 2.0.6 compared to 1.6.8.0?
I'm running:
An i5-11500 with 32 GB of RAM (no separate graphics card)
Windows and Blue Iris (program, database and alerts) running off a NVME SSD
5 x 4k cameras. Cameras 1-3 recording to a 6 TB WD Purple, and cameras 4 and 5 recording to a second 6 TB WD Purple
24/7 substream recording and 4k alerts
4 cameras use substream for AI and 1 uses 4k (also captures license plates and gets a much better hit rate with 4k)
I ran DeepStack for the best part of a year and never had an issue
Changed to SenseAI 1.6.8.0 very early Jan and saw a slight speed improvement but a huge consistency improvement (fastest to slowest detection massively reduced)
I upgraded to 2.0.6 recently and both speed and accuracy have become much worse
Below shows the average response times for the 2 versions for each camera:
While the number of samples on the new version is a lot smaller, there are still >100 for each camera.
Camera | 1.6.8.0 (ms) | 2.0.6 (ms) | Increase | Note
Garage | 265.9 | 507.4 | 91% | 4k
House | 124.3 | 503.4 | 305% | Substream
Porch | 147.5 | 531.2 | 260% | Substream
Shed | 117.9 | 296.2 | 151% | Substream
Whilst I can't quantify it in the same way, the number of missed objects also feels like it has gone up significantly (those where 'Nothing Found' is returned but there was a person / vehicle in frame), especially in low light.
Camera | 1.6.8.0 | 2.0.6
Garage | 906 | 166
House | 642 | 154
Porch | 420 | 107
Shed | 157 | 160
Total | 656 | 1838
Blue Iris was on version 5.6.7.3 for all samples in the above data (now updated to 5.6.8.4 to see if that made a difference, it didn't)
Blue Iris was set to 'Medium' object detection for both versions
The single YOLOv5 6.2 module is running on SenseAI; the old version was also running a single YOLO (I didn't note which version before I uninstalled it)
I upgraded by stopping Blue Iris and SenseAI, uninstalling SenseAI, restarting, installing the new SenseAI then rebooting again
Any thoughts or suggestions would be more than welcome. I still have the install package for the old version, so I can downgrade, but I thought I'd read that most users seemed to be getting quicker times, so I wonder why mine is different.
I think I know why the detection times increased. If you want, I can send you a file to replace and test until it gets fixed.
You can send it to me, I'd like to try it.