Quoting a previous post: "When checked, it uses the camera main stream. But as the AI resizes to 640px anyhow, it needs additional CPU power for resizing before analyzing. So in most cases feeding the sub stream is better."

So SenseAI also uses 640px (same as DeepStack)?
As far as I see, yes.
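For anyone curious what that resize step actually looks like, here is a minimal sketch, assuming a DeepStack/CodeProject.AI-style /v1/vision/detection endpoint on localhost (the port, file name, and helper function are placeholders, not anything Blue Iris itself runs). It shrinks a frame so its longest side is 640px before uploading it; downscaling a 4K main-stream frame costs noticeably more CPU than downscaling a sub-stream frame that is already close to 640px, which is where the saving comes from.

```python
import io
import requests
from PIL import Image

# Placeholder endpoint - adjust host/port for your DeepStack or CodeProject.AI install.
DETECT_URL = "http://localhost:32168/v1/vision/detection"

def detect(frame_path: str, target_long_side: int = 640) -> dict:
    """Downscale a frame to ~640px on its longest side, then send it for detection.

    Shrinking a 3840x2160 main-stream frame is much more work than shrinking a
    1280x720 sub-stream frame, which is why feeding the sub-stream saves CPU.
    """
    img = Image.open(frame_path).convert("RGB")
    scale = target_long_side / max(img.size)
    if scale < 1.0:  # only shrink, never upscale
        img = img.resize((round(img.width * scale), round(img.height * scale)), Image.BILINEAR)

    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=85)
    buf.seek(0)

    response = requests.post(DETECT_URL, files={"image": buf}, timeout=30)
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    print(detect("snapshot_from_substream.jpg"))
```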
"So in most cases feeding the sub-stream is better" - are there any cases where using the main stream is better? I asked this question in post #92 but it went unanswered. I also asked what the minimum settable confidence percentage is when using SenseAI. Does anyone know? When set at 30%, I'm getting confirmed alerts in the low 30s (all wrong, by the way). I remember Ken Pletzer telling me a long time ago that there's no point in going below DeepStack's 40% limit.
Well, I suppose there is one, but it is not about better AI detection. If you "burn label mark-up onto alert images" and want the mouse-over thumbnail visible in UI3 to be a higher resolution than your sub-stream, then using the main stream is how you achieve that. This is how I use BI/UI3, and I find that a significantly lower-resolution thumbnail (say 640px) enlarged on mouse-over is blurrier than I would like when I scroll through motion alerts and quickly assess what is going on in the image.

How much benefit the sub-stream provides also depends on what your camera sub-streams are set at. Presumably an image taken from the sub-stream would still need to be resized before being sent to the AI if the sub-stream is higher resolution than 640px (720px, for example). If CPU usage is a concern for a given BI deployment, using the sub-stream for AI certainly appears to be the best practice: it limits CPU usage with no impact on AI detection accuracy, with only that minor caveat about thumbnail resolution in certain setups.
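On the minimum-confidence question a couple of posts up, I can only sketch what the underlying API accepts rather than what Blue Iris exposes: a DeepStack/CodeProject.AI-style server takes a min_confidence value with the detection request and returns a confidence between 0 and 1 for each prediction, so anything below your threshold can also be dropped client-side. The URL, file name, and threshold below are placeholder assumptions.

```python
import requests

# Placeholder endpoint for a DeepStack / CodeProject.AI style server.
DETECT_URL = "http://localhost:32168/v1/vision/detection"

def detect_with_threshold(image_path: str, min_confidence: float = 0.40) -> list:
    """Send an image with a minimum-confidence hint and drop anything below it.

    Predictions carry a 'confidence' in the 0-1 range; a 0.30 setting will
    surface detections in the low 0.30s, which tend to be false positives.
    """
    with open(image_path, "rb") as f:
        response = requests.post(
            DETECT_URL,
            files={"image": f},
            data={"min_confidence": str(min_confidence)},
            timeout=30,
        )
    response.raise_for_status()
    predictions = response.json().get("predictions", [])
    return [p for p in predictions if p.get("confidence", 0.0) >= min_confidence]

if __name__ == "__main__":
    for p in detect_with_threshold("alert_snapshot.jpg", min_confidence=0.40):
        print(p["label"], round(p["confidence"], 2))
```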
Quoting a previous post: "CodeProject.AI update. The next release should have my models included."

Have they given you any idea as to when this will be released?
I think they will release version 1.5.5 today. I am seeing slower custom model detection times with this version (about 500 msec); with version 1.5.3 they were about 100 msec. I did let them know about the slower detection times, so hopefully they can fix it before they release the next version.
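If you want to reproduce that timing comparison on your own box, a rough sketch is below. The server port and the "ipcam-general" model name are assumptions (substitute whichever custom model you actually have loaded), and this measures the full HTTP round trip, so it will read a little higher than the detection time the server itself reports.

```python
import statistics
import time
import requests

# Assumed values - point these at your own server and custom model name.
SERVER = "http://localhost:32168"
MODEL = "ipcam-general"
IMAGE = "test_frame.jpg"
RUNS = 10

def time_custom_model() -> None:
    """Measure round-trip time of the /v1/vision/custom/<model> endpoint."""
    url = f"{SERVER}/v1/vision/custom/{MODEL}"
    timings = []
    for _ in range(RUNS):
        with open(IMAGE, "rb") as f:
            start = time.perf_counter()
            r = requests.post(url, files={"image": f}, timeout=30)
            elapsed_ms = (time.perf_counter() - start) * 1000
        r.raise_for_status()
        timings.append(elapsed_ms)
    print(f"median {statistics.median(timings):.0f} ms over {RUNS} runs")

if __name__ == "__main__":
    time_custom_model()
```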
Quoting a previous post: "I'm still waiting for the GPU version. Nice that they included your models, Mike, but do they include documentation for them? Those of us that follow this thread and a few others know what's in each one, but newbies won't have a clue."

I am also waiting for the GPU version; I am still using DeepStack. I will work with them to include documentation. I have a GitHub repository for them to pull the models from.
I don't have a GPU, so I switched over to SenseAI as soon as it was released. With the exception of the night profile for my LPR camera, it has been working really well with the default model. I've noticed a significant decrease in CPU usage with SenseAI vs DeepStack.
Quoting a previous post: "New release today, version 1.5.5. This version adds Global Command Line Parameter Overrides, which will help Ken better integrate CodeProject.AI with Blue Iris. I am seeing slower custom model detection times with this version (about 500 msec); with version 1.5.3 they were about 100 msec. I did let them know about the slower detection times, so hopefully they can fix it in the next version. Below is a link to the new version. The next feature they are going to work on is adding GPU support.

CodeProject.AI Server: AI the easy way. Our fast, free, self-hosted Artificial Intelligence Server for any platform, any language - www.codeproject.com

View attachment 133589"

Are custom models included with this version? Thanks.
Yes.
Quoting a previous post: "In case Mike is really busy, I wanted to ask, is it still grayed out if you uncheck Default object detection?"

Yep.