5.5.8 - June 13, 2022 - Code Project’s SenseAI Version 1 - See V2 here https://ipcamtalk.com/threads/codeproject-ai-version-2-0.68030/

joshwah

Pulling my weight
Joined
Apr 25, 2019
Messages
298
Reaction score
146
Location
Australia
When checked, it uses the camera main stream. But as the AI resizes to 640px anyhow, it needs additional CPU power for resizing before analyzing. So in most cases feeding the sub stream is better.
So SenseAI also uses 640px (same as DeepStack)?
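For context, the resize described in the quote is just a downscale of the frame's longer side to 640px before inference. A rough sketch of the arithmetic (640 is the target size mentioned above; the stream resolutions are example values):

```python
def scaled_size(width, height, target=640):
    """Downscale so the longer side becomes `target`, keeping aspect ratio."""
    scale = target / max(width, height)
    if scale >= 1.0:
        return width, height  # already at or below target; nothing to resize
    return round(width * scale), round(height * scale)

# A 4MP main stream has far more pixels to shrink than a D1 sub stream:
print(scaled_size(2560, 1440))  # main stream -> (640, 360)
print(scaled_size(704, 480))    # sub stream  -> (640, 436)
```

This is why feeding the sub stream saves CPU: the closer the input already is to 640px, the less resizing work is done per frame.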
 

Dave Lonsdale

Pulling my weight
Joined
Dec 3, 2015
Messages
456
Reaction score
195
Location
Congleton Edge, UK
When checked, it uses the camera main stream. But as the AI resizes to 640px anyhow, it needs additional CPU power for resizing before analyzing. So in most cases feeding the sub stream is better.
“So in most cases feeding the substream is better” - are there any cases where using the main stream is better? I asked this question in post #92 but it went unanswered. I also asked what the minimum settable confidence percentage is when using SenseAI. Does anyone know? When set at 30%, I’m getting confirmations in the low 30s (all wrong, by the way). I remember Ken Pletzer telling me a long time ago that there’s no point in going below DeepStack’s 40% limit.
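For anyone wanting to experiment with thresholds directly, both DeepStack and CodeProject.AI return a per-object confidence in their detection JSON, so you can apply whatever cutoff you like on the client side. A minimal sketch (the sample response below is invented; the field names follow the DeepStack-style response format both servers use):

```python
# Filter detections by a minimum confidence, as Blue Iris does internally.
# The sample imitates a DeepStack-style JSON response; all values are made up.
sample_response = {
    "success": True,
    "predictions": [
        {"label": "person", "confidence": 0.82, "x_min": 10,  "y_min": 5,   "x_max": 120, "y_max": 300},
        {"label": "dog",    "confidence": 0.33, "x_min": 200, "y_min": 250, "x_max": 280, "y_max": 330},
        {"label": "car",    "confidence": 0.12, "x_min": 0,   "y_min": 0,   "x_max": 50,  "y_max": 40},
    ],
}

def confirmed(response, min_confidence=0.30):
    """Keep only predictions at or above the threshold (0.30 = the 30% setting above)."""
    return [p for p in response["predictions"] if p["confidence"] >= min_confidence]

for p in confirmed(sample_response):
    print(f"{p['label']}: {p['confidence']:.0%}")
```

At a 30% threshold the low-30s hits the post describes would get through; raising `min_confidence` to 0.40 drops them, which matches the advice about DeepStack's 40% floor.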
 

wittaj

IPCT Contributor
Joined
Apr 28, 2019
Messages
24,844
Reaction score
48,458
Location
USA
“So in most cases feeding the substream is better” - are there any cases where using the main stream is better? I asked this question in post #92 but it went unanswered. I also asked what the minimum settable confidence percentage is when using SenseAI. Does anyone know? When set at 30%, I’m getting confirmations in the low 30s (all wrong, by the way). I remember Ken Pletzer telling me a long time ago that there’s no point in going below DeepStack’s 40% limit.
No benefit - it will downrez it regardless and use up CPU and time in doing so.
 

gwithers

Getting the hang of it
Joined
May 18, 2016
Messages
49
Reaction score
38
“So in most cases feeding the substream is better” - are there any cases where using the main stream is better? I asked this question in post #92 but it went unanswered. I also asked what the minimum settable confidence percentage is when using SenseAI. Does anyone know? When set at 30%, I’m getting confirmations in the low 30s (all wrong, by the way). I remember Ken Pletzer telling me a long time ago that there’s no point in going below DeepStack’s 40% limit.
Well, I suppose there is one, but it is not about better AI detection. If you "burn label mark-up onto alert images" and want the mouse-over thumbnail visible in UI3 to be a higher resolution than your sub stream, then using the main stream is how you achieve that. This is how I use BI/UI3, and I find that a significantly lower-res thumbnail (say 640px) enlarged on mouse-over is blurrier than I would like when I scroll through motion alerts and quickly assess what is going on in the image. How much benefit the sub stream provides also depends on what resolution your camera sub streams are set to: presumably the image taken from the sub stream would still need to be resized before being sent to the AI if the sub stream is higher than 640px (say, 720px). If CPU usage is a concern for a given BI deployment, using the sub stream for AI certainly appears to be a best practice, with no impact on AI detection efficiency, but it is not without that minor caveat about thumbnail resolution in certain instances.
 

MikeLud1

IPCT Contributor
Joined
Apr 5, 2017
Messages
2,204
Reaction score
4,243
Location
Brooklyn, NY
CodeProject.AI update.
I have been working with the CodeProject.AI team to have my custom models included with the install; the next release should include them. They have also added the ability to benchmark the models, plus more logging detail. They are still working on GPU support; they had some Windows install issues that needed to be resolved first, and the version I tested this morning looks to resolve them.

 

MikeLud1

IPCT Contributor
Joined
Apr 5, 2017
Messages
2,204
Reaction score
4,243
Location
Brooklyn, NY
Have they given you any idea as to when this will be released?
I think they will release version 1.5.5 today. With this version I am seeing slower custom model detection times (about 500 ms); with version 1.5.3 they were about 100 ms. I did let them know about the slower detection times, so hopefully they can fix it before they release the next version.
 

sebastiantombs

Known around here
Joined
Dec 28, 2019
Messages
11,511
Reaction score
27,691
Location
New Jersey
I'm still waiting for the GPU version. Nice that they included your models, Mike, but do they include documentation for them? Those of us who follow this thread and a few others know what's in each one, but newbies won't have a clue.
 

MikeLud1

IPCT Contributor
Joined
Apr 5, 2017
Messages
2,204
Reaction score
4,243
Location
Brooklyn, NY
I'm still waiting for the GPU version. Nice that they included your models, Mike, but do they include documentation for them? Those of us that follow this thread and a few others, know what's in each one but newbies won't have a clue.
I am also waiting for the GPU version; I am still using DeepStack. I will work with them to include documentation. I have a GitHub repository from which they can pull the models.
 

105437

BIT Beta Team
Joined
Jun 8, 2015
Messages
2,026
Reaction score
919
Great news! As soon as GPU support is there, I'll migrate. Has a migration doc from DeepStack to CodeProject.AI been created?
 

pm3klb

n3wb
Joined
Jun 23, 2016
Messages
5
Reaction score
1
Good afternoon,
I might have missed it, but I see MikeLud1 said they might release version 1.5.5 today. Is that only for the Docker implementation, or also for the Windows installer? I'm using Windows installer version 1.5.0 and it seems to be working pretty well for me, though it does seem to eat up my RAM faster than DeepStack did. I was hoping a newer version might be better in that regard. The RAM issue isn't a show stopper; I just need to reboot my PC every week to correct it. Is the Windows installer on the same version, and can someone please share a link to grab the latest one?

This is the web site I used to get the version 1.5.0 (Installing CodeProject.AI Server on Windows - CodeProject.AI Server)
 

Vettester

Getting comfortable
Joined
Feb 5, 2017
Messages
740
Reaction score
693
I am also waiting for the GPU version; I am still using DeepStack.
I don't have a GPU so I switched over to SenseAI as soon as it was released. With the exception of the night profile for my LPR camera, it has been working really well with the default model. I've noticed a significant decrease in CPU usage with SenseAI vs DeepStack.
 

MikeLud1

IPCT Contributor
Joined
Apr 5, 2017
Messages
2,204
Reaction score
4,243
Location
Brooklyn, NY
New release today: version 1.5.5. This version adds Global Command Line Parameter Overrides, which will help Ken better integrate CodeProject.AI with Blue Iris.
With this version I am seeing slower custom model detection times (about 500 ms); with version 1.5.3 they were about 100 ms. I did let them know about the slower detection times, so hopefully they can fix it in the next version. Below is a link to the new version. The next feature they are going to work on is adding GPU support.
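If you want to compare detection latency yourself the way Mike is comparing 1.5.3 against 1.5.5, wrapping the request in a simple timer is enough. A minimal sketch (`fake_detect` is a hypothetical stand-in for your actual call to the detection endpoint; the sleep just simulates inference time):

```python
import time

def time_detection(detect, runs=5):
    """Call `detect` several times and return the average latency in milliseconds."""
    timings = []
    for _ in range(runs):
        start = time.perf_counter()
        detect()
        timings.append((time.perf_counter() - start) * 1000.0)
    return sum(timings) / len(timings)

# Stand-in for a real request to the detection server; here we just
# simulate a model that takes roughly 100 ms per image.
def fake_detect():
    time.sleep(0.1)

avg_ms = time_detection(fake_detect)
print(f"average detection time: {avg_ms:.0f} ms")
```

Averaging several runs matters because the first request after startup is often much slower than steady-state (model load, warm-up), which can make one-off comparisons between versions misleading.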


 

andycots

Getting the hang of it
Joined
Feb 21, 2015
Messages
172
Reaction score
81
Location
West Yorkshire, UK
New release today: version 1.5.5. This version adds Global Command Line Parameter Overrides, which will help Ken better integrate CodeProject.AI with Blue Iris.
With this version I am seeing slower custom model detection times (about 500 ms); with version 1.5.3 they were about 100 ms. I did let them know about the slower detection times, so hopefully they can fix it in the next version. Below is a link to the new version. The next feature they are going to work on is adding GPU support.


Are custom models included with this version? Thanks
 

gwminor48

Known around here
Joined
Jul 16, 2015
Messages
3,646
Reaction score
6,980
Location
Texas
In case Mike is really busy, I wanted to ask, is it still grayed out if you uncheck Default object detection?
 