5.5.8 - June 13, 2022 - Code Project’s SenseAI Version 1 - See V2 here https://ipcamtalk.com/threads/codeproject-ai-version-2-0.68030/

This card only just meets the minimum compute capability of 3.5; anything lower will not work with the required CUDA version. Try installing CUDA 11.7.1 and make sure you have HW acceleration turned off.
If this does not work, they are going to release a new version by the end of this week that should help GPUs with low memory.
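If you want to double-check what compute capability your card actually reports before reinstalling CUDA, a quick diagnostic from Python is sketched below. This assumes a CUDA-enabled build of PyTorch is installed on the machine; it is only a check, not part of CP.AI itself.

Code:
import torch

# Report what PyTorch can see; cards below compute capability 3.5 won't work
# with the CUDA build discussed above.
if not torch.cuda.is_available():
    print("CUDA is not available to PyTorch (CPU-only build or driver/toolkit mismatch).")
else:
    for i in range(torch.cuda.device_count()):
        major, minor = torch.cuda.get_device_capability(i)
        print(f"GPU {i}: {torch.cuda.get_device_name(i)}, compute capability {major}.{minor}")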


View attachment 139811

Mike,

I reinstalled CUDA 11.7.1 but saw no real difference. In fact, it seemed to kill detections altogether. I had to run REPAIR in the CP.AI installer and set CUDA to false in the modulesettings.json file for it to even work again. I'll just hang tight, and perhaps the good fellows at CP.AI will enable better compatibility down the road. I do appreciate your insight and suggestions. And your models rock.
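For anyone else who ends up having to fall back to CPU the same way, the change is just flipping the GPU flag in modulesettings.json and restarting the service. Below is a rough sketch of doing that from Python; the file location and the "CUDA_MODE" key name are assumptions that may differ between SenseAI versions, so check your own modulesettings.json first.

Code:
import json
from pathlib import Path

# Assumed install location - adjust to wherever your modulesettings.json actually lives.
settings_path = Path(r"C:\Program Files\CodeProject\SenseAI\modulesettings.json")

def disable_cuda(node):
    """Recursively set any key named CUDA_MODE (an assumed name) to the string 'False'."""
    if isinstance(node, dict):
        for key, value in node.items():
            if key == "CUDA_MODE":  # assumed setting name - confirm against your own file
                node[key] = "False"
            else:
                disable_cuda(value)
    elif isinstance(node, list):
        for item in node:
            disable_cuda(item)

settings = json.loads(settings_path.read_text())
disable_cuda(settings)
settings_path.write_text(json.dumps(settings, indent=2))
print("Updated", settings_path, "- restart the SenseAI service for it to take effect.")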

--Dirk
 
Nice, but if I understand correctly from your screenshot, we would still have to manually turn off modules that we don't need to have running, right? Also, would that setting now be "sticky" across restarts/reboots?
 
The last version I received has some minor bugs; they should send me a new version to test shortly. From what I can tell so far, the settings do not stick, but that should be easy to add.
 
Implemented and fully replaced my two Deepstack servers with this yesterday. Installation was easy (Linux).

I run two dedicated Linux VMs purely for this, and then have a load balancer sitting in front (pfSense/HAProxy), which gives me a single IP and port to put into Blue Iris. This is exactly how I've run it for months with Deepstack, and now flawlessly with CodeProject.
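If anyone wants to sanity-check that the load-balanced address is answering before pointing Blue Iris at it, a quick test against the Deepstack-compatible detection endpoint looks roughly like this; the IP, port, and image name are placeholders for your own setup.

Code:
import requests

# Placeholders - substitute the HAProxy frontend address/port and any still image.
AI_URL = "http://192.168.1.50:5000/v1/vision/detection"

with open("test.jpg", "rb") as image_file:
    response = requests.post(AI_URL, files={"image": image_file}, timeout=30)

result = response.json()
print("success:", result.get("success"))
for prediction in result.get("predictions", []):
    print(prediction["label"], round(prediction["confidence"], 2))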

Everything seems to be running fine. Detection times are perhaps a little high currently, but acceptable for my use (CPU only). The only real noticeable change was that the Linux VMs' RAM usage increased by a couple of gigabytes; I have plenty to spare on their hosts, so I upped that.

Face processing seems to be throwing errors, but I've put no effort into looking at it as I don't require it, so for now I've switched it off.

Nothing else to really report. Good stuff!
 

Attachments

  • remote-viewer_2022-09-16_12-37-19.png
The bugs I found are fixed.
As for the settings being "sticky" across restarts/reboots, this is also a bug that will be fixed Friday morning.

View attachment 140025

So have you been uninstalling the old version and then installing the new one? For the last release I just ran the new install over the existing version and it worked OK, but I was curious about your method.
 
What I still want to see is a customise option during the CP.AI installer for selecting which modules to install and, if the custom objects module is chosen, a further option to select which custom models to install.
 
Is anyone willing to become part of the prerelease beta tester team for the CodeProject.AI project? If so, please PM me with your private email address so I can share it with Chris Maunder, who is leading the project. I will also send you a link to the latest prerelease version, which is 1.5.7.2.

What would be needed of you: someone from the CodeProject.AI team will email a link to the beta version that needs testing. After installing the beta version, you would run it and see if you have any issues with it. If you do have issues, email them with details on what you are seeing. They are fairly quick at addressing issues and sending a new version to test.

I recommend not testing the prerelease on your main Blue Iris system. How I am testing is with a second copy of Blue Iris, and only once I have vetted that the version works with no issues do I start using it on my main Blue Iris system. You can use a demo version of Blue Iris; if you have issues running the demo version, let me know and I will PM you how to address them.

1663645026503.png
 

Mike,

Has anyone tried testing in a VM to make it easier to undo issues and start again?
 
I do hope that you had a few volunteers for this Beta testing; I would be interested but I hesitate as I am only able to run it on my "production" system.

Really looking forward to the next stable release, hopefully with closer coordination and integration with BI via Ken.
 
I had 4 users ask to be beta testers.
They should be releasing a new version later today. Some of the changes are listed below.

  • Ability to disable/enable modules from the Dashboard
  • Ability to disable/enable GPU (CUDA) from the Dashboard
  • Temporary fix for the custom models path until Ken finishes the integration (during the install a script will run, read what custom model path is set in BI, and create a dummy folder with the custom model file names in it; a rough sketch of the idea is shown after this list)
  • Improved logging
  • Better GPU memory optimization.
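To illustrate the custom-models item above: the temporary fix is just a small script run at install time. The sketch below is only to show the idea; the Blue Iris registry value name and the destination folder are assumptions on my part, not the actual installer code.

Code:
import winreg
from pathlib import Path

# Assumption: Blue Iris keeps its custom model folder under this key/value name.
BI_KEY = r"SOFTWARE\Perspective Software\Blue Iris\Options\AI"
with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, BI_KEY) as key:
    custom_path, _ = winreg.QueryValueEx(key, "deepstack_custompath")  # assumed value name

# Create a dummy folder containing empty files with the same names as the custom models,
# so the model names BI expects are visible until the real integration is finished.
dummy_dir = Path(r"C:\Program Files\CodeProject\AI\CustomModels")  # assumed destination
dummy_dir.mkdir(parents=True, exist_ok=True)
for model_file in Path(custom_path).glob("*.pt"):
    (dummy_dir / model_file.name).touch()
    print("Created placeholder for", model_file.name)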
 
I have been running it for about 2 days with BI 5.6.1.1 and it is running just like 1.6.2 did before in CPU mode. The installation was perfect, and turning off the modules I did not want was a simple click. My timings are around 70-110 ms using the combined model. So I am giving it an A+.

Note: I did an uninstall of the previous CodeProject version before the new install.
 

Attachments

  • Screenshot 2022-09-21 134811.png
  • Screenshot 2022-09-20 113706.png
I've been running 1.5.7-Beta+0002 on my production system (16 cameras) for a little over 24 hours. I replaced all the custom models with the yolov5l.pt model. I have found that this model is considerably slower but highly accurate: I'm seeing process times between 1 and 4 seconds on my i7-4790 (CPU only) with extremely high accuracy.
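If you want a feel for that difference outside of CP.AI before committing a production box to it, you can time a model of the same size directly; a rough sketch using the public ultralytics/yolov5 hub is below. It assumes torch is installed and a test image is on hand, downloads the yolov5l weights on first run, and is not how CP.AI itself loads models.

Code:
import time
import torch

# Load the large YOLOv5 model from the public hub (downloads weights on first run).
model = torch.hub.load("ultralytics/yolov5", "yolov5l")
model.cpu()

image = "test.jpg"  # placeholder - any still exported from a camera will do

model(image)  # warm-up run
runs = 5
start = time.time()
for _ in range(runs):
    model(image)
print(f"Average CPU inference time: {(time.time() - start) / runs:.2f} s")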

Screen Shot 2022-09-21 at 2.30.32 PM.png
Screen Shot 2022-09-21 at 2.59.31 PM.png
 
Yikes, I am surprised that you find that to be an acceptable analysis time. With times that slow I would be concerned that only one or two images would be analyzed before the (moving) object would be out of frame.
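Just to put rough numbers on that concern (the figures below are purely illustrative placeholders, not measurements from this thread):

Code:
# Back-of-envelope: if each analysis blocks the next, how many images get analyzed
# while a moving object is still in frame? Both numbers are illustrative placeholders.
seconds_in_frame = 6.0   # e.g. a vehicle crossing the field of view
analysis_time = 2.5      # per-image process time toward the slow end reported above
images_analyzed = int(seconds_in_frame // analysis_time)
print(f"~{images_analyzed} images analyzed before the object leaves the frame")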

What settings for image quality/resolution are you passing to the AI?