5.5.8 - June 13, 2022 - Code Project’s SenseAI Version 1 - See V2 here https://ipcamtalk.com/threads/codeproject-ai-version-2-0.68030/

For those that upgraded to 2.0.x, are you seeing better detection response or better accuracy? Is it stable at this point, or should we wait until some of the initially reported bugs are ironed out?
 
So I decided to compare detection times of an Intel HD 530 (i7-6700K) vs. an Intel UHD 770 (i7-12700) using CPAI's YOLOv5 .NET module in GPU DirectML mode.
Thanks for taking the time to do some testing.
I must admit I expected better performance from the GPU, but now that I have read up on the Intel UHD 630, I guess this is about what can be expected.

My CPU (i5-8500T) and the Intel UHD 630 are almost the same speed, getting around 200-300 ms response times.
 
I have the same Intel HD 530 (i7-6700K), and using CPAI 2.0.6 and the .NET version (with ipcam-combined, using the Intel built-in GPU) I'm getting the same ~250 ms responses. But I did notice much better behavior in the rain: previously, in fog and rain, the CPU would go mad with triggers (which would be negated by AI), and it could pin the CPU a lot at night. Using .NET and the Intel GPU, the CPU is nowhere near as affected. Moving to a separate IR light, a few feet from the camera, also helps a lot with rain/snow triggers on IR cameras.
 
Have you tested DeepStack with these systems? Here are my times on an i5-8500 (Intel 630) using DeepStack in medium mode with substreams and the default model. For the most part they are in the 200-400 ms range, while at times it gets bogged down to 6000-8000 ms, likely when many cams trigger simultaneously.
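Those 6000-8000 ms spikes are consistent with simple request queueing: if detection runs one image at a time, simultaneous triggers stack up and the last request waits for everything queued ahead of it. A minimal sketch (the single sequential worker is my assumption, not something DeepStack documents here):

```python
# Rough queueing model: with one sequential worker, the last of N
# simultaneous requests waits for all N images to be processed.
def worst_case_response_ms(per_image_ms, simultaneous_requests):
    return per_image_ms * simultaneous_requests

# ~300 ms per image with ~20 images queued would explain a 6000 ms spike
print(worst_case_response_ms(300, 20))
```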
 

Attachments: deepstack times 2.JPG, deepstack times.JPG

Yes, I have used DeepStack on both systems, but it has been too long to remember my exact timings. Does DeepStack use the embedded GPU like CPAI does?
 
As mike noted, DS does not support Intel. I would have expected a bigger improvement going from the i7-6700 to the i7-12700, as it is more than 3x more powerful (unless it's topping out at 100% CPU usage), because you are actually only seeing a 2/3 reduction in processing time.
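The "3x more powerful vs. 2/3 reduction" comparison follows from the relationship between speedup and time reduction; a quick arithmetic check (generic math, not measured data):

```python
def reduction_from_speedup(speedup):
    # a task that runs `speedup` times faster takes 1/speedup of the time,
    # so the fraction of time saved is 1 - 1/speedup
    return 1 - 1 / speedup

def speedup_from_reduction(reduction):
    return 1 / (1 - reduction)

print(reduction_from_speedup(3))      # a 3x speedup is exactly a 2/3 reduction
print(speedup_from_reduction(2 / 3))  # and a 2/3 reduction is a 3x speedup
```

So a chip that is "more than 3x" faster should, in the ideal case, cut times by more than two thirds; matching 2/3 exactly suggests something else is the bottleneck.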
 

I did not leave my main machine in GPU mode for long; it was pushing the GPU to 100% when 3 cams would trigger. I was just curious how it compared to the newer 770. My times are way better just using CPU mode.

Here was the GPU graph while testing:

Screenshot 2023-01-20 101100.png

And now my times back on CPU mode:

Screenshot 2023-01-21 103418.png
 
Are you talking about upgrading CodeProject or jumping from DeepStack?

I was referring to upgrading from CodeProject AI v1.6.x to the latest 2.0.x version. I already switched over to CP AI from DeepStack over half a year ago. It's been working fine for me on the current 1.6.x version, so I was just curious if the latest version is any better. Right now I'm using the GPU version in Docker with the ipcam-general model and seeing around 50-85 ms response times in the Blue Iris logs, with some occasional spikes, so I wanted to see what improvement the new version brings.
 
When benchmarking the Docker 1.6.9 GPU version against 2.0.6 GPU, I am not seeing any noticeable improvement or degradation in terms of OPS performance as shown in the CPAI Explorer on my server.

What I am seeing since I switched over to the 2.0.6 GPU Docker version of CPAI is that logging in the dashboard is less stable. It also no longer reports image processing times, no matter what level of logging is selected or how many times you refresh the page. I thought it was best practice to reference the dashboard's indication of how fast it was processing images (which I believe cybernetics1d may have been doing in the post above). I find that the AI response times listed in the BI log are much more variable, dependent on what is happening during the event being processed and on how each camera has AI configured. So even though CPAI performance is the same for all these event images, the logged BI AI response times will vary. No doubt BI AI response times can tell us a lot about performance trends on any given BI instance, and that is useful, but it would be nice if the 2.x Docker CPAI dashboard still showed image response times. Perhaps dropping them was a performance optimization in how CPAI reports those processing times.
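With the dashboard no longer showing processing times, the BI log becomes the main data source, variable as it is. A sketch of pulling the millisecond values out of log lines and summarizing them (the example lines below are invented for illustration; real Blue Iris log formats vary by version):

```python
import re
import statistics

# Invented example lines; substitute lines read from your Blue Iris log file.
log_lines = [
    "Garage  AI: [Objects] person:91% 251ms",
    "House   AI: [Objects] nothing found 503ms",
    "Porch   AI: [Objects] car:87% 312ms",
]

# Grab the trailing "NNNms" token from each line that has one.
times_ms = [int(m.group(1)) for line in log_lines
            if (m := re.search(r"(\d+)ms\b", line))]

# Median is less distorted by occasional spikes than the mean.
print("mean:", round(statistics.mean(times_ms), 1))
print("median:", statistics.median(times_ms))
```

Tracking the median (or a high percentile) over time would separate a genuine slowdown from a few queue-induced spikes.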
 
I can't seem to get the LPR module to install. I've read back about 10 pages on the forum here. I tried the full uninstall, deleted the two folders, then reinstalled, but no files ever show up in the C:\Program Files\CodeProject\AI\modules\ALPR\bin\windows\python37 folder, which explains the 'cannot find the file specified' error below. I get a similar error when I try to install the OCR module. I only have YOLOv5 6.2 running in CUDA GPU mode.

Also C:\Program Files\CodeProject\AI\downloads\windows\python37 is empty, which is probably the source of the downstream issue.

Any ideas?


Here are the logs from the install attempt:


14:15:42: ALPR has left the building
14:15:42: Starting C:\Program Files...ws\python37\venv\Scripts\python "C:\Program Files...\modules\ALPR\ALPR_adapter.py"
14:15:42:
14:15:42: Module 'License Plate Reader' (ID: ALPR)
14:15:42: Active: True
14:15:42: GPU: Support enabled
!14:15:42: Parallelism: 0
!14:15:42: Platforms: windows,linux,macos,macos-arm64
!14:15:42: FilePath: ALPR_adapter.py
!14:15:42: ModulePath: ALPR
!14:15:42: Install: PostInstalled
!14:15:42: Runtime:
!14:15:42: Queue: ALPR_queue
!14:15:42: Start pause: 1 sec
!14:15:42: Valid: True
!14:15:42: Environment Variables
!14:15:42: PLATE_CONFIDENCE = 0.4
!14:15:42:
!14:15:42: Error trying to start License Plate Reader (ALPR_adapter.py)
!14:15:42: An error occurred trying to start process 'C:\Program Files\CodeProject\AI\modules\ALPR\bin\windows\python37\venv\Scripts\python' with working directory 'C:\Program Files\CodeProject\AI\modules\ALPR'. The system cannot find the file specified.
!14:15:42: at System.Diagnostics.Process.StartWithCreateProcess(ProcessStartInfo startInfo) at CodeProject.AI.API.Server.Frontend.ModuleRunner.StartProcess(ModuleConfig module)
!14:15:42: *** Please check the CodeProject.AI installation completed successfully
!14:16:29: Response received (id 1e4262b7-1540-41d8-93ad-68113c588399)
!14:17:00: Queued: 'custom' request, id c96ee7d0-e235-4862-a510-068ea5719602
 
What version of CP.AI are you on? It should be v2.0.6; there were some issues with previous 2.0.x versions.
Also, you may want to check your antivirus and firewall to see if they are blocking CP.AI.
 

Yes, it is 2.0.6. I am only running Windows Defender, and I don't see any alerts from it. I turned off the anti-ransomware folder protection (which also was not throwing any alerts) and that didn't help either. I double-checked that this machine isn't using my ad-blocking proxy. I'm running out of things to check.

I think I'll try a full install on my other Windows box and see what it does.

Edit: It installed on my other computer but didn't seem to be working. It did, however, download the Python 3.7 files. I copied that folder to my other PC, ran a repair, rebooted, and then installed the ALPR module. It looks like it installed successfully, but it isn't returning anything when I test it in the Explorer. At least that's some progress. Maybe I'll try it in Docker tomorrow.
 
Is anyone else getting significantly slower detection with 2.0.6 compared to 1.6.8.0?

I'm running:
An i5-11500 with 32 GB of RAM (no separate graphics card)
Windows and Blue Iris (program, database and alerts) running off an NVMe SSD
5 x 4K cameras. Cameras 1-3 recording to a 6 TB WD Purple and cameras 4 and 5 recording to a second 6 TB WD Purple
24/7 substream recording and 4K alerts
4 cameras use the substream for AI and 1 uses 4K (it also captures license plates and gets a much better hit rate with 4K)

I ran DeepStack for the best part of a year and never had an issue
Changed to SenseAI 1.6.8.0 in very early Jan and saw a slight speed improvement but a huge consistency improvement (the spread from fastest to slowest detection was massively reduced)
I upgraded to 2.0.6 recently and both speed and accuracy have become much worse

Below are the average response times for the two versions for each camera:
Camera   1.6.8.0 (ms)   2.0.6 (ms)   Increase   Note
Garage   265.9          507.4        91%        4K
House    124.3          503.4        305%       Substream
Porch    147.5          531.2        260%       Substream
Shed     117.9          296.2        151%       Substream

While the number of samples on the new version is a lot smaller they are still >100 for each camera.
Camera   1.6.8.0   2.0.6
Garage   906       166
House    642       154
Porch    420       107
Shed     157       160
Total    656       1838

Whilst I can't quantify it in the same way, the number of missed objects also feels like it has gone up significantly (those where 'Nothing Found' is returned but there was a person / vehicle in frame), especially in low light.

Blue Iris was on version 5.6.7.3 for all samples in the above data (since updated to 5.6.8.4 to see if that made a difference; it didn't)
Blue Iris was set to 'Medium' object detection for both versions
The single YOLOv5 6.2 module is running on SenseAI; a single YOLO module was running on the old version too (I didn't note the version before I uninstalled it)
I upgraded by stopping Blue Iris and SenseAI, uninstalling SenseAI, restarting, installing the new SenseAI, then rebooting again

Any thoughts or suggestions would be more than welcome. I still have the install package for the old version so I can downgrade, but I thought I'd read that most users seemed to be getting quicker times, so I wonder why mine is different.
 
I think I know why detection times increased. If you want, I can send you a file to replace and test until it gets fixed.