5.5.8 - June 13, 2022 - CodeProject SenseAI Version 1 - See V2 here: https://ipcamtalk.com/threads/codeproject-ai-version-2-0.68030/

When checked, it uses the camera main stream. But as the AI resizes to 640px anyhow, it needs additional CPU power for resizing before analyzing. So in most cases feeding the sub stream is better.
So SenseAI also uses 640px (same as DeepStack)?
 
When checked, it uses the camera main stream. But as the AI resizes to 640px anyhow, it needs additional CPU power for resizing before analyzing. So in most cases feeding the sub stream is better.
“So in most cases feeding the substream is better” - are there any cases where using the main stream is better? I asked this question in post #92 but it went unanswered. I also asked what the minimum settable confidence percentage is when using SenseAI. Does anyone know? When set at 30%, I'm getting alerts confirmed in the low 30s (all wrong, by the way). I remember Ken Pletzer telling me a long time ago that there's no point in going below DeepStack's 40% limit.
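For anyone who wants to probe the minimum threshold directly rather than through the Blue Iris slider, you can post an image to the detection endpoint yourself. A minimal sketch, assuming SenseAI keeps DeepStack's documented `min_confidence` form field and that the server is on the early default port (both assumptions worth checking against your own install):

```python
import requests

SERVER = "http://localhost:5000"  # assumed SenseAI v1 default port

# min_confidence is a fraction, so Blue Iris's 30% maps to 0.30.
with open("alert-frame.jpg", "rb") as f:
    response = requests.post(
        f"{SERVER}/v1/vision/detection",
        files={"image": f},
        data={"min_confidence": 0.30},  # server drops anything below this
    ).json()

# Every prediction carries its own confidence, so you can see how low
# the server actually goes before filtering kicks in.
for p in response.get("predictions", []):
    print(f"{p['label']}: {p['confidence']:.2f}")
```

Stepping the threshold down across a few requests would show whether anything below DeepStack's old 40% floor ever comes back.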
 
“So in most cases feeding the substream is better” - are there any cases where using the main stream is better? I asked this question in post #92 but it went unanswered. I also asked what the minimum settable confidence percentage is when using SenseAI. Does anyone know? When set at 30%, I'm getting alerts confirmed in the low 30s (all wrong, by the way). I remember Ken Pletzer telling me a long time ago that there's no point in going below DeepStack's 40% limit.

No benefit - it will downrez it regardless and use up CPU and time in doing so.
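To make the "downrez regardless" point concrete, here is a rough sketch of doing that resize client-side before submitting the frame; the 640px figure comes from the discussion above, while the port and file names are illustrative assumptions:

```python
from io import BytesIO

import requests
from PIL import Image

SERVER = "http://localhost:5000"  # assumed SenseAI v1 default port

def detect(path: str, target: int = 640) -> dict:
    """Shrink a frame so its longest side is `target` px, then submit it.

    This mimics the downrez the AI server performs internally: sending a
    full 4K main-stream frame just spends extra CPU shrinking it to the
    same 640px that a sub-stream frame is already close to.
    """
    img = Image.open(path)
    img.thumbnail((target, target))  # in-place; preserves aspect ratio
    buf = BytesIO()
    img.save(buf, format="JPEG")
    buf.seek(0)
    return requests.post(
        f"{SERVER}/v1/vision/detection",
        files={"image": buf},
    ).json()

print(detect("mainstream-frame.jpg"))
```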
 
“So in most cases feeding the substream is better” - are there any cases where using the main stream is better? I asked this question in post #92 but it went unanswered. I also asked what the minimum settable confidence percentage is when using SenseAI. Does anyone know? When set at 30%, I'm getting alerts confirmed in the low 30s (all wrong, by the way). I remember Ken Pletzer telling me a long time ago that there's no point in going below DeepStack's 40% limit.
Well, I suppose there is one, but it is not about better AI detection. If you "burn label mark-up onto alert images" and want the mouse-over thumbnail visible in UI3 to be a higher resolution than your sub stream, then using the main stream is how you achieve that. This is how I use BI/UI3, and I find that a significantly lower-res thumbnail (say 640px) enlarged on mouse-over is blurrier than I would like when I scroll through motion alerts and quickly assess what is going on in the image. How much benefit the sub stream provides also depends on what resolution your camera sub streams are set to: presumably a frame taken from the sub stream would still need to be resized before being sent to the AI if the sub stream is higher than 640px (say, 720px). If CPU usage is a concern for a given BI deployment, using the sub stream for AI certainly appears to be a best practice, limiting CPU usage with no impact on detection efficiency, but it is not without this minor caveat about thumbnail resolution in certain setups.
 
CodeProject.AI update.
I have been working with the CodeProject.AI team to have my custom models included with the install. The next release should have my models included. They also added the ability to benchmark the models, along with more detailed logging. They are still working on GPU support; they had some Windows install issues that needed to be resolved first, and the version I tested this morning looks to resolve those issues.

 
Have they given you any idea as to when this will be released?
I think they will release version 1.5.5 today. With this version I am seeing slower custom model detection times (about 500 msec); with version 1.5.3, custom model detection times were about 100 msec. I did let them know about the slower detection times, so hopefully they can fix it before they release the next version.
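A quick way to reproduce this kind of before/after timing comparison is to fire the same image at the server repeatedly and average the round-trip time. A rough sketch; the port and the `ipcam-general` model name are assumptions, and the `/v1/vision/custom/<model>` route is the DeepStack-style one SenseAI appeared to inherit:

```python
import time

import requests

SERVER = "http://localhost:5000"  # assumed port
MODEL = "ipcam-general"           # hypothetical custom model name
RUNS = 20

with open("test-frame.jpg", "rb") as f:
    image_bytes = f.read()

timings = []
for _ in range(RUNS):
    start = time.perf_counter()
    requests.post(
        f"{SERVER}/v1/vision/custom/{MODEL}",
        files={"image": image_bytes},
    )
    timings.append((time.perf_counter() - start) * 1000)

# Discard the first run: it often includes one-time model load cost.
steady = timings[1:]
print(f"avg {sum(steady) / len(steady):.0f} msec over {len(steady)} runs")
```

Note this measures the full HTTP round trip, so run it on the same machine as the server if you want numbers comparable to the figures quoted above.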
 
I'm still waiting for the GPU version. Nice that they included your models, Mike, but do they include documentation for them? Those of us who follow this thread and a few others know what's in each one, but newbies won't have a clue.
 
I'm still waiting for the GPU version. Nice that they included your models, Mike, but do they include documentation for them? Those of us who follow this thread and a few others know what's in each one, but newbies won't have a clue.
I am also waiting for the GPU version; I am still using DeepStack. I will work with them to include documentation. I have a GitHub repository for them to pull the models from.
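Until that documentation lands, one stopgap for newcomers is to probe a model empirically: run a handful of varied sample frames through it and collect the labels it returns. A sketch, with the port, the route, and the `ipcam-general` name all being assumptions:

```python
import requests

SERVER = "http://localhost:5000"  # assumed port
MODEL = "ipcam-general"           # hypothetical custom model name

def labels_for(image_path: str) -> set:
    """Run one image through a custom model and collect the labels it emits."""
    with open(image_path, "rb") as f:
        resp = requests.post(
            f"{SERVER}/v1/vision/custom/{MODEL}",
            files={"image": f},
        ).json()
    return {p["label"] for p in resp.get("predictions", [])}

# Feed it a varied set of frames to build up a picture of the model's
# vocabulary until written documentation is available.
seen = set()
for path in ["person.jpg", "car.jpg", "dog.jpg"]:
    seen |= labels_for(path)
print(sorted(seen))
```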
 
Great news! As soon as GPU support is there, I'll migrate. Has a migration doc from DeepStack to CodeProject.AI been created?
 
Good afternoon,
I might have missed it, but I see MikeLud1 said they might release version 1.5.5 today. Do you know if that is only for the Docker implementation, or would it also be for the Windows installer? I'm using Windows installer version 1.5.0 and it seems to be working pretty well for me, though it does seem to eat up my RAM faster than DeepStack did. I was hoping a newer version might be better in that regard. The RAM issue isn't a show stopper; I just need to reboot my PC every week to correct it. Do you know if the Windows installer is on the same version, and can someone please share a link to grab the latest version?

This is the web site I used to get version 1.5.0 (Installing CodeProject.AI Server on Windows - CodeProject.AI Server)
 
I am also waiting for the GPU version; I am still using DeepStack.
I don't have a GPU, so I switched over to SenseAI as soon as it was released. With the exception of the night profile for my LPR camera, it has been working really well with the default model. I've noticed a significant decrease in CPU usage with SenseAI vs. DeepStack.
 
New release today: version 1.5.5. This version adds Global Command Line Parameter Overrides, which will help Ken better integrate CodeProject.AI with Blue Iris.
With this version I am seeing slower custom model detection times (about 500 msec); with version 1.5.3, custom model detection times were about 100 msec. I did let them know about the slower detection times, so hopefully they can fix it in the next version. Below is a link to the new version. The next feature they are going to work on is adding GPU support.


 
New release today: version 1.5.5. This version adds Global Command Line Parameter Overrides, which will help Ken better integrate CodeProject.AI with Blue Iris.
With this version I am seeing slower custom model detection times (about 500 msec); with version 1.5.3, custom model detection times were about 100 msec. I did let them know about the slower detection times, so hopefully they can fix it in the next version. Below is a link to the new version. The next feature they are going to work on is adding GPU support.


Are custom models included with this version? Thanks
 
In case Mike is really busy, I wanted to ask: is it still grayed out if you uncheck Default object detection?