5.5.8 - June 13, 2022 - Code Project’s SenseAI Version 1 - See V2 here https://ipcamtalk.com/threads/codeproject-ai-version-2-0.68030/

dirk6665

BIT Beta Team
Joined
Feb 13, 2015
Messages
36
Reaction score
18
Location
Pennsylvania
This card only just makes the compute capability cutoff of 3.5; anything lower will not work with the required CUDA version. Try installing CUDA 11.7.1 and make sure you have HW acceleration turned off.
If this does not work, they are going to release a new version by the end of this week that should help GPUs with low memory.


Mike,

I reinstalled CUDA 11.7.1 but saw no real difference. In fact, it seemed to kill detections altogether. I had to run REPAIR in the CP.AI installer and set CUDA to false in the modulesettings.json file for it to even work again. I'll just hang tight; perhaps the good fellows at CP.AI will enable better compatibility down the road. I do appreciate your insight and suggestions. And your models rock.
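
A minimal sketch of the kind of sanity check that helps here, assuming PyTorch is importable from the Python environment CP.AI uses; the modulesettings.json path is an example, not the actual install location. It reports the card's compute capability and lists any CUDA/GPU-related keys it finds rather than guessing the exact flag name:

```python
# Report what compute capability PyTorch sees, then list CUDA/GPU flags
# found in a modulesettings.json. Paths and environment are assumptions.
import json
from pathlib import Path

import torch

if torch.cuda.is_available():
    major, minor = torch.cuda.get_device_capability(0)
    print(f"{torch.cuda.get_device_name(0)}: compute capability {major}.{minor}")
else:
    print("PyTorch does not see a usable CUDA device")

settings = Path("modulesettings.json")  # example path only
if settings.exists():
    def walk(obj, prefix=""):
        # Recursively print any key that mentions CUDA or GPU.
        if isinstance(obj, dict):
            for key, value in obj.items():
                if "cuda" in key.lower() or "gpu" in key.lower():
                    print(f"{prefix}{key} = {value}")
                walk(value, prefix + key + ".")
        elif isinstance(obj, list):
            for i, item in enumerate(obj):
                walk(item, f"{prefix}[{i}].")

    walk(json.loads(settings.read_text()))
```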

--Dirk

jrbeddow

Getting comfortable
Joined
Oct 26, 2021
Messages
370
Reaction score
485
Location
USA
Nice, but if I understand correctly from your screenshot, we would still have to manually turn off modules that we don't need to have running, right? Also, would that setting now be "sticky" across restarts/reboots?

MikeLud1

IPCT Contributor
Joined
Apr 5, 2017
Messages
2,141
Reaction score
4,118
Location
Brooklyn, NY
Nice, but if I understand correctly from your screenshot, we would still have to manually turn off modules that we don't need to have running, right? Also, would that setting now be "sticky" across restarts/reboots?
The last version I received has some minor bugs; they should send me a new version to test shortly. From what I can tell so far the settings do not stick, but that should be an easy add.
Joined
Nov 11, 2021
Messages
1
Reaction score
0
Location
Knowhere
Implemented and fully replaced my two Deepstack servers with this yesterday. Installation was easy (Linux).

I run two dedicated Linux VMs purely for this, and then have a load balancer sitting in front (pfSense/HAProxy) which gives me a single IP and port to put into Blue Iris. This is exactly how I've run it for months with Deepstack, and now flawlessly with CodeProject.
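
For anyone curious, a fragment like the following is roughly what that HAProxy piece can look like; the port (5000) and backend addresses are placeholder assumptions, not the actual setup:

```
# Illustrative haproxy.cfg fragment: one listener for Blue Iris,
# two AI backends behind it. Port and IPs are example values only.
frontend ai_in
    bind *:5000
    mode http
    default_backend ai_servers

backend ai_servers
    mode http
    balance roundrobin
    server ai1 192.168.1.21:5000 check
    server ai2 192.168.1.22:5000 check
```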

Everything seems to be running fine. Detection times are perhaps a little high currently, but acceptable for my use (CPU only). The only real noticeable change was that the Linux VMs' RAM usage increased by a couple of gigabytes; I've plenty to spare on their hosts, so I upped that.

Face processing seems to be erroring but I've put no effort into looking at it as I don't require it, so for now I've switched it off.

Nothing else to really report. Good stuff!


Tinman

Known around here
Joined
Nov 2, 2015
Messages
1,208
Reaction score
1,472
Location
USA
The bugs I found are fixed.
As for the settings being "sticky" across restarts/reboots, this is also a bug that will be fixed Friday morning.

So have you been uninstalling the old version and then installing the new one? With the last release I just ran the new install over the existing version and it worked OK, but I was curious about your method.

CrazyAsYou

Getting comfortable
Joined
Mar 28, 2018
Messages
246
Reaction score
262
Location
England, Near Sheffield
What I still want to see is a customise option in the CP installer for selecting which modules to install and, if the custom objects module is chosen, a further option to select which custom models to install.

MikeLud1

IPCT Contributor
Joined
Apr 5, 2017
Messages
2,141
Reaction score
4,118
Location
Brooklyn, NY
Is anyone willing to become part of the prerelease beta tester team for the CodeProject.AI project? If so, please PM me your private email address so I can share it with Chris Maunder, who is leading the project, and I will also send you a link to the latest prerelease version, which is 1.5.7.2.

What would be needed of you: someone from the CodeProject.AI project team will email a link to the beta version that needs testing. After installing the beta version, you would run it and see if you are having any issues. If you do have issues, email them with details on what you are seeing. They are fairly quick at addressing issues and sending a new version to test.

I recommend not testing the prerelease on your main Blue Iris system. How I am testing is with a second copy of Blue Iris, and not until I have vetted that the version works with no issues do I start using it on my main Blue Iris system. You can use a demo version of Blue Iris; if you have issues running the demo version, let me know and I will PM you how to address it.


clk8

Young grasshopper
Joined
Jul 18, 2022
Messages
30
Reaction score
24
Location
NY
I recommend not testing the prerelease on your main Blue Iris system. How I am testing is with a second copy of Blue Iris, and not until I have vetted that the version works with no issues do I start using it on my main Blue Iris system. You can use a demo version of Blue Iris; if you have issues running the demo version, let me know and I will PM you how to address it.
Mike,

Has anyone tried testing in a VM to make it easier to undo issues and start again?

jrbeddow

Getting comfortable
Joined
Oct 26, 2021
Messages
370
Reaction score
485
Location
USA
Is anyone willing to become part of the prerelease beta tester team for the CodeProject.AI project? If so, please PM me your private email address so I can share it with Chris Maunder, who is leading the project, and I will also send you a link to the latest prerelease version, which is 1.5.7.2.

What would be needed of you: someone from the CodeProject.AI project team will email a link to the beta version that needs testing. After installing the beta version, you would run it and see if you are having any issues. If you do have issues, email them with details on what you are seeing. They are fairly quick at addressing issues and sending a new version to test.

I recommend not testing the prerelease on your main Blue Iris system. How I am testing is with a second copy of Blue Iris, and not until I have vetted that the version works with no issues do I start using it on my main Blue Iris system. You can use a demo version of Blue Iris; if you have issues running the demo version, let me know and I will PM you how to address it.

I do hope that you had a few volunteers for this Beta testing; I would be interested but I hesitate as I am only able to run it on my "production" system.

Really looking forward to the next stable release, hopefully with closer coordination and integration with BI via Ken.

MikeLud1

IPCT Contributor
Joined
Apr 5, 2017
Messages
2,141
Reaction score
4,118
Location
Brooklyn, NY
I do hope that you had a few volunteers for this Beta testing; I would be interested but I hesitate as I am only able to run it on my "production" system.

Really looking forward to the next stable release, hopefully with closer coordination and integration with BI via Ken.
I had 4 users ask to be beta testers.
They should be releasing a new version later today. Some of the changes are listed below.

  • Ability to disable/enable modules from the Dashboard
  • Ability to disable/enable GPU (CUDA) from the Dashboard
  • Temporary fix for the custom models path until Ken finishes the integration (during the install a script will run, read which custom model path is set in BI, and create a dummy folder with the custom model file names in it; see the sketch after this list)
  • Improved logging
  • Better GPU memory optimization.
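
Purely as an illustration of the dummy-folder idea (not the actual installer script), the workaround amounts to something like this; both paths are example values, and the real script reads the custom model path from Blue Iris's settings rather than hard-coding it:

```python
# Sketch of the "dummy folder" workaround: create an empty placeholder
# for every custom model Blue Iris points at, so the AI server lists the
# same model names. Paths below are examples only.
from pathlib import Path

bi_custom_models = Path(r"C:\BlueIris\custom_models")          # example source
dummy_folder = Path(r"C:\CodeProject\AI\custom_placeholders")  # example target

dummy_folder.mkdir(parents=True, exist_ok=True)

for model_file in bi_custom_models.glob("*.pt"):
    (dummy_folder / model_file.name).touch(exist_ok=True)
    print(f"created placeholder for {model_file.name}")
```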

Tinman

Known around here
Joined
Nov 2, 2015
Messages
1,208
Reaction score
1,472
Location
USA
I have been running it for about 2 days with BI 5.6.1.1 and it is running just like 1.6.2 did before in CPU mode. The installation was perfect, and turning off the modules I did not want was a simple click. My timings are around 70-110 ms using the combined model, so I am giving it an A+.

Note: I did an uninstall of the previous CodeProject version before the new install.


Vettester

Getting comfortable
Joined
Feb 5, 2017
Messages
740
Reaction score
693
My timings are around 70-110 ms using the combined model.
I've been running 1.5.7-Beta+0002 on my production system (16 cameras) for a little over 24 hours. I replaced all the custom models with the yolov5l.pt model. I have found that this model is considerably slower but highly accurate. I'm seeing process times between 1 and 4 seconds on my i7-4790 (CPU only), with extremely high accuracy.
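
For anyone who wants to measure those times outside of Blue Iris, below is a minimal sketch that times repeated requests against the DeepStack-compatible detection endpoint; the host, port, and image path are assumptions, not this particular setup:

```python
# Time a handful of detection requests against a local CodeProject.AI /
# SenseAI instance. Host, port, and test image are example values.
import time
import requests

URL = "http://127.0.0.1:5000/v1/vision/detection"  # assumed host/port

with open("test.jpg", "rb") as f:
    image_bytes = f.read()

times = []
for _ in range(10):
    start = time.perf_counter()
    resp = requests.post(URL, files={"image": image_bytes}, timeout=60)
    resp.raise_for_status()
    times.append(time.perf_counter() - start)

times.sort()
print(f"min {times[0]:.3f}s  median {times[len(times)//2]:.3f}s  max {times[-1]:.3f}s")
print(resp.json().get("predictions", []))
```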


jrbeddow

Getting comfortable
Joined
Oct 26, 2021
Messages
370
Reaction score
485
Location
USA
I've been running 1.5.7-Beta+0002 on my production system (16 cameras) for a little over 24 hours. I replaced all the custom models with the yolov5l.pt model. I have found that this model is considerably slower but highly accurate. I'm seeing process times between 1 and 4 seconds on my i7-4790 (CPU only), with extremely high accuracy.

Yikes, I am surprised that you find that to be an acceptable analysis time. With times that slow I would be concerned that only one or two images would be analyzed before the (moving) object would be out of frame.

What settings are you using for the image quality/resolution being passed to the AI?