5.5.8 - June 13, 2022 - Code Project’s SenseAI Version 1 - See V2 here https://ipcamtalk.com/threads/codeproject-ai-version-2-0.68030/

Can you send me the log file so I can look into the Radeon issue you are having? One change in the latest version that might slow down CPUs was the elimination of the Low, Medium, and High modes; everything now runs at High mode (640x640).

Hmm...now that you mention it, I am running CPU only (i5-8500) and was originally seeing an average processing time of 160ms for the first week after installing 1.6.0 up to 1.6.2, but now on 1.6.5 I am seeing an average processing time of 383ms.

I also noticed that the modulesettings.json file no longer includes a line to change the resolution (Low, Medium, High).
Can it be put back in?
I don't have a copy of the old line (or complete modulesettings.json file) to reference.
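A quick sanity check on those numbers: inference cost for this kind of detector scales roughly with input area, so the mode change alone could plausibly explain the slowdown. A rough back-of-the-envelope sketch (416 and 640 being the Medium and High mode input sizes mentioned above):

```python
# Rough sanity check: detector inference cost scales approximately with input area.
medium, high = 416, 640                  # Medium and High mode input sizes
area_ratio = (high / medium) ** 2        # how many more pixels High mode processes
observed = 383 / 160                     # slowdown reported above (383ms vs 160ms)
print(f"area ratio: {area_ratio:.2f}x")  # -> area ratio: 2.37x
print(f"observed:   {observed:.2f}x")    # -> observed:   2.39x
```

The two ratios line up closely, which is consistent with the resolution change being the cause of the slowdown.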
 
I will talk with the developer to see if this is definitely the issue.
 
Can you try replacing the two files with the ones in the attached zip file? This will set the mode back to Medium (416x416). Let me know if your processing times go back to 160ms.

 


@actran

I already updated to the latest BI version, and also updated to the latest CodeProject.AI version, v1.6.5 Beta.

The "use custom model folder" option is still greyed out.
What could be the reason?

 
You need to stop the CodeProject service. Go into Windows Services and stop it from there. Then close and reopen BI, and perhaps even reboot. I was able to enable the checkbox after doing all of these things.
 
I just replaced those files now, and will get back to you late today to let you know the results.

Thanks for this, although I do find it strange that even the modulesettings.json settings for this have seemingly been removed from end users' control: would that mean that the Global AI interface in BI that sets the mode to Low, Medium, or High was also effectively disabled?
 
Yes
 
I am also seeing performance degradation with 1.6.5-beta on GPU (CUDA) as well. I have two identical VMs running the CodeProject Docker container, and each VM has access to one of the two Quadro P600 Nvidia cards installed in the Proxmox host. Both VMs had the same performance with 1.6.2-beta, but when I pulled 1.6.5-beta on one VM, that VM now runs considerably slower. Typical results are below.

Here is 1.6.2-beta:
(screenshot)

Here is 1.6.5-beta:
(screenshot)
 
Sorry to change the subject here; just a quick question on config for BI and AI.

When we were using custom models with Deepstack and wanted to disable the default detection, we needed to add objects:0 to the Custom models section in BI. Is this the same for CodeProject?
 
Yes it is.

objects:0,"model of choice"
 
Okay, that is what I thought. I tried combing through this mega thread and could not find a definitive answer. I am trying to track down a problem with the USPS.pt and delivery.pt models not detecting objects of interest when BI runs a scan with the AI. If I manually scan the image through the CodeProject interface, however, it does find the object of interest.
 
For those using a docker image and not installing AI-Server on the same machine as BI, I am finding that when using the docker image and custom models, it will use all the custom models unless you specify a model to use. When you do specify a model to use, it becomes "explicit" in that it will not use any of the other custom models that are included. So if you say to use "ipcam-combined" for a camera, it does not require the usual "ipcam-general:0" or "ipcam-animal:0" to prevent AI-server from running images through those other available custom models.
 
Yes, this is all true regardless of whether it is a Docker or a conventional Windows installation. More importantly, these details are well documented in the BI help file, so it really pays to re-read that help information; even if you think you know it already, it does get updated fairly frequently.

In summary: if you aren't using the default object detection at all, simply uncheck it on the main Global AI tab; there is no need to add :0 after model names on each camera's AI tab.
Also, if you are using custom models and don't specify which ones on the camera AI tab, it will use ALL of the custom models BI is aware of. Otherwise, use a comma-separated list of model names.
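To illustrate that last point (using model names already mentioned in this thread as examples), with default object detection unchecked on the Global AI tab, a camera's Custom models field can simply list the models to run, comma separated, with no :0 entries needed:

```
ipcam-combined,ipcam-animal
```

Only the listed models will be run for that camera; the other custom models in the folder are ignored.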
 
You learn something every day. Thanks for clarifying that gentlemen. I forgot all about the checkbox for default object detection.
 
I can confirm that the 1.6.6-beta container has now restored GPU performance back to that of 1.6.2-beta (at least on my rig). I also tested CPU performance, and 1.6.6-beta is actually better than 1.6.2-beta was (~14.4 vs. ~11.7).

 
This is probably gonna be a dumb question, but I searched and can't find anything on this. I wanted to get a list of labels for the YOLOv5 L and X models. I can't seem to find anything, unless I'm just not looking in the right place; their own website doesn't flat out tell me. I know person is one of them, but it seems like vehicle is not working; I'm gonna assume it's probably split into car, truck, bus, etc.
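For what it's worth, the stock YOLOv5 checkpoints (including the L and X variants) are trained on the 80 COCO classes, and "vehicle" is not one of them, which would explain why it never fires: only the specific types (car, truck, bus, motorcycle, etc.) exist as labels. A quick sketch, assuming CodeProject ships the standard COCO-trained models (the list below is the standard COCO-80 label set, not read out of CodeProject's copy of the model):

```python
# Standard COCO-80 class labels used by the stock YOLOv5 models.
COCO_LABELS = [
    "person", "bicycle", "car", "motorcycle", "airplane", "bus", "train",
    "truck", "boat", "traffic light", "fire hydrant", "stop sign",
    "parking meter", "bench", "bird", "cat", "dog", "horse", "sheep", "cow",
    "elephant", "bear", "zebra", "giraffe", "backpack", "umbrella", "handbag",
    "tie", "suitcase", "frisbee", "skis", "snowboard", "sports ball", "kite",
    "baseball bat", "baseball glove", "skateboard", "surfboard",
    "tennis racket", "bottle", "wine glass", "cup", "fork", "knife", "spoon",
    "bowl", "banana", "apple", "sandwich", "orange", "broccoli", "carrot",
    "hot dog", "pizza", "donut", "cake", "chair", "couch", "potted plant",
    "bed", "dining table", "toilet", "tv", "laptop", "mouse", "remote",
    "keyboard", "cell phone", "microwave", "oven", "toaster", "sink",
    "refrigerator", "book", "clock", "vase", "scissors", "teddy bear",
    "hair drier", "toothbrush",
]

# "vehicle" is not a COCO class -- only the specific vehicle types are.
print("vehicle" in COCO_LABELS)  # False
print([c for c in COCO_LABELS if c in ("car", "truck", "bus", "motorcycle")])
```

So to catch vehicles with the stock models, trigger on car, truck, bus, and motorcycle individually rather than on "vehicle".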

Also, I put these two models in the custom models folder (C:\Program Files\CodeProject\AI\AnalysisLayer\ObjectDetectionYolo\custom-models) and still cannot see them in CodeProject Explorer when trying to pick them for testing sample images; here is a screenshot.