CodeProject.AI Version 2.0

From what I understand, only Nvidia cards are supported - give it a try anyway since you already have it. I chose Nvidia specifically for DeepStack and now use CP.



I have a GTX 1070; according to Wikipedia it has compute capability 6.1.
I don't know if that will be enough, or whether it's better to get a newer cheap card instead - it isn't going in my main PC and I don't want to waste a lot of money on it.
 
I have a GTX 970 (compute capability 5.2 by the looks of it).
Works perfectly and fast!
 
One setting you can change is to uncheck Use main stream if available. Using the main stream does not improve accuracy; it only slows down detection.

View attachment 166261


I'm wondering whether the .NET version, which uses DirectML, would be compatible.
 
Thought the same - it might well work with a non-Nvidia card.
I think the .NET version can work with various GPUs, because I just tried a fresh CP.AI install on an i7-6700 without BI present.
The install was missing PIL, so YOLOv5 6.2 didn't work. But the .NET version worked: it switched automatically to DirectML GPU mode and benchmarked 3 inferences/second at 100% HD 530 GPU load.
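For anyone comparing versions: both modules answer on the same detection endpoint, so the benchmark method is identical. Here's a minimal sketch of picking out the useful detections from a response - the field names (`success`, `predictions`, `label`, `confidence`) follow the documented `/v1/vision/detection` response shape, so verify them against the version you're running:

```python
# Sketch: filter a CodeProject.AI detection response by confidence.
# Field names follow the documented /v1/vision/detection response
# shape; double-check them against your server version.

def filter_predictions(response, min_confidence=0.4):
    """Return (label, confidence) pairs above a confidence threshold."""
    if not response.get("success"):
        return []
    return [
        (p["label"], p["confidence"])
        for p in response.get("predictions", [])
        if p["confidence"] >= min_confidence
    ]

# Hand-made example in the assumed shape:
sample = {
    "success": True,
    "predictions": [
        {"label": "person", "confidence": 0.91,
         "x_min": 10, "y_min": 20, "x_max": 110, "y_max": 220},
        {"label": "cat", "confidence": 0.22,
         "x_min": 5, "y_min": 5, "x_max": 50, "y_max": 60},
    ],
}
print(filter_predictions(sample))  # only 'person' clears the 0.4 cutoff
```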
 
Just toying with running CP and DeepStack together to see how they perform and compare side by side. I've got both running, and images will be sent to both servers at the same time (via AITool) - will see the outcome tonight (only for night images, as CP is great for the day).

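Sending the same snapshot to two servers at once is easy to script yourself, too. A sketch, assuming CP.AI on port 32168 and DeepStack on port 5000 (both URLs are examples only), with the HTTP call injectable so the logic can be exercised without a live server - substitute a thin wrapper over `requests.post` in real use:

```python
import concurrent.futures

# Example endpoints only - substitute your actual server addresses.
SERVERS = {
    "codeproject": "http://localhost:32168/v1/vision/detection",
    "deepstack": "http://localhost:5000/v1/vision/detection",
}

def send_to_all(image_bytes, servers, post):
    """POST the same image to every server in parallel.

    `post` is any callable with the shape post(url, files) -> result;
    injecting it keeps this sketch testable without a running server.
    """
    with concurrent.futures.ThreadPoolExecutor() as pool:
        futures = {
            name: pool.submit(post, url, {"image": image_bytes})
            for name, url in servers.items()
        }
        return {name: f.result() for name, f in futures.items()}
```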
 
They also use different models - or even if their models are similar, the training differs, I guess. So speed may be affected by their accuracy.
I was wondering about the 'To cancel' list. You'd need to know what objects the models can identify. I'd like to try an ONNX analyser tool on their models; at least the CP AI .NET models are in ONNX.
 
What I'd like to do with the AITool settings is just fire the first and last detected object (or just limit the number that go to my bot),
as I have the triggers firing fast, every 200 ms, so my bot gets spammed.

How do I limit the alerts to my bot without changing the Blue Iris trigger settings?

Hmm, there are some cooldown settings etc. in AITool that I don't quite get yet.
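One way to get "first and last only" without touching the BI trigger settings is to collapse each burst of detections yourself before anything is forwarded to the bot. A minimal sketch of that idea (the 2-second gap is just an illustrative value, not an AITool setting):

```python
class BurstCollapser:
    """Group events arriving within `gap` seconds of each other into
    one burst, then keep only the first and last event of the burst."""

    def __init__(self, gap=2.0):
        self.gap = gap
        self.burst = []  # list of (timestamp, event) pairs

    def add(self, timestamp, event):
        """Feed one detection; returns a finished burst or None."""
        if self.burst and timestamp - self.burst[-1][0] > self.gap:
            done = self.flush()          # previous burst is over
            self.burst = [(timestamp, event)]
            return done
        self.burst.append((timestamp, event))
        return None

    def flush(self):
        """Return [first, last] of the pending burst ([] if empty)."""
        if not self.burst:
            return []
        first, last = self.burst[0][1], self.burst[-1][1]
        out = [first] if first is last else [first, last]
        self.burst = []
        return out
```

So a run of triggers 200 ms apart becomes a single pair of messages once the burst goes quiet.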
 
I don't know exactly, but I had similar issues. Because of so many moving objects like tree leaves and spider webs, I set up quasi-continuous inference by setting the trigger end time to 3 seconds and sending multiple frames to AI. On the 'Alerts' tab there's a 'Minimum time between alerts' field; I set the same 3 seconds there, but it can be more in your case if that suits you better.
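That 'Minimum time between alerts' behaviour boils down to a simple cooldown. The field name comes from the AITool tab; the logic below is my own illustration of the idea, not AITool's actual code:

```python
class AlertCooldown:
    """Suppress alerts arriving sooner than `min_interval` seconds
    after the last alert that was allowed through."""

    def __init__(self, min_interval=3.0):
        self.min_interval = min_interval
        self.last_sent = None  # timestamp of last forwarded alert

    def allow(self, timestamp):
        """True if this alert should be forwarded, False to drop it."""
        if self.last_sent is None or timestamp - self.last_sent >= self.min_interval:
            self.last_sent = timestamp
            return True
        return False
```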
 
I think that does not work in my setup, as I use AITool and that Alerts tab is all disabled because AITool controls the alerts.
May have to try using Blue Iris alerts instead of AITool if I can't find a solution.
May write to the developer on GitHub.
 
First, you need to choose one - you have both the CPU and the GPU version running. I would try the GPU. Click on the 3 dots (...) to disable one.
Second, when using custom models you need to uncheck Default object detection.
Third, if you are using the GPU (.NET) version, you need the .onnx custom models:
View attachment 166359

If you use the CPU version, you will need the .pt custom models:
View attachment 166360

Click on the 3 dots (...) next to Use custom models: to see which models you have installed:
View attachment 166361

HTH
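If you want to double-check which model files each module will actually see, a quick scan of the custom-models folders does it. A sketch - the .NET path is the one quoted in this thread; the .pt module folder name is my guess, so adjust both for your install:

```python
from pathlib import Path

# The .NET path is quoted in this thread; the Python (.pt) module
# folder name is an assumption - check your own install.
MODULE_DIRS = {
    ".NET (.onnx)": r"C:\Program Files\CodeProject\AI\modules\ObjectDetectionNet\custom-models",
    "Python (.pt)": r"C:\Program Files\CodeProject\AI\modules\ObjectDetectionYolo\custom-models",
}

def list_custom_models(folder, suffix):
    """Return sorted model file names with the given suffix, or [] if the folder is missing."""
    path = Path(folder)
    if not path.is_dir():
        return []
    return sorted(p.name for p in path.glob(f"*{suffix}"))

# e.g. list_custom_models(MODULE_DIRS[".NET (.onnx)"], ".onnx")
```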


Gonna try.

My path for custom models is:

C:\Program Files\CodeProject\AI\modules\ObjectDetectionNet\custom-models

The .onnx models are in that folder.

I unchecked "Default object detection".
I deselected GPU in CodeProject, because at the moment I can't mount the GTX 1070.

I've attached new screenshots.
At the moment detection works fine, and the alerts do too,
so I think my problem is solved :)

Thank you so much for your help
 

Attachments

  • ipcam5.png
  • ipcam6.png
  • ipcam7.png


I've moved over to using AI in Blue Iris now and got it all set up, firing Telegram pics using curl in the actions list.

There were pros and cons to AITool, but also to the BI built-in options.
One thing I needed was alert options that weren't in AITool - or maybe they were, but I couldn't get them to work as I wanted.
Switched off AITool for now.

Will renew my subscription with BI to get the latest updates as well.
:cool:
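For anyone wanting to replicate the Telegram part: the Bot API's `sendPhoto` method takes a `chat_id` and a `photo` upload. A sketch with the HTTP call injectable (the token and chat id are placeholders; in real use pass a thin wrapper over `requests.post`):

```python
def send_photo(token, chat_id, photo_bytes, post):
    """Send a photo via the Telegram Bot API's sendPhoto method.

    `post` is any callable post(url, data, files) -> response;
    injected so this sketch can be exercised without a real bot token.
    """
    url = f"https://api.telegram.org/bot{token}/sendPhoto"
    return post(url, {"chat_id": chat_id}, {"photo": photo_bytes})
```

The curl equivalent for a BI action would look like `curl -s -F chat_id=<ID> -F photo=@alert.jpg https://api.telegram.org/bot<TOKEN>/sendPhoto` (placeholders, not real values).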
 
Hi,

This is what a startup looks like on my BI server.
Originally the video decoding of both cameras was done on the built-in GPU (8th-gen i7, HD 630), GPU 0. GPU 0 was then busy doing 3D operations according to Task Manager, at about 30% utilization.
CP AI was always set to GPU, but I think the AI inference was also running on the HD 630 or the CPU, because the Nvidia GPU was showing 0% in Task Manager.

Now I've set all cameras' decoding to Nvidia, GPU 1 (NVDEC). Since then system memory usage has nicely increased, and the Nvidia card shows more than 0% load in Task Manager.
However, I can't tell where the AI inference happens. Now neither GPU shows 3D operations in Task Manager, yet inference runs at the same rate.

When YOLOv5 6.2 starts, it doesn't say that computation will occur on the Nvidia card. After one earlier startup I did see that; then GPU 1's 3D load spiked once to 100% and CP AI stopped.
I haven't seen that note since.
Did anybody have the same issue, or know a way to find out or set which GPU CP AI uses? There are many other tasks that can load the GPUs, like video decoding, motion detection, etc., so it's pretty well hidden I think.
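One place the answer shows up is the CP.AI dashboard's module status, which reports a per-module inference device. A sketch of summarizing such a status payload - note the field names (`name`, `inferenceDevice`) are assumptions based on what the dashboard displays, so check the JSON your server actually returns:

```python
def summarize_devices(modules):
    """Map module name -> reported inference device.

    `modules` is a list of status dicts; 'name' and 'inferenceDevice'
    are assumed field names - verify against your server's real JSON.
    """
    return {
        m.get("name", "?"): m.get("inferenceDevice", "unknown")
        for m in modules
    }

# Hand-made example in the assumed shape:
status = [
    {"name": "ObjectDetectionYolo", "inferenceDevice": "GPU"},
    {"name": "ObjectDetectionNet", "inferenceDevice": "CPU"},
]
```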
 

Attachments

  • 2.png
I see that there is an Instances box for CodeProject in BI.
Can we have more than one running, like we could with DeepStack?
Mine is greyed out.

How can we enable more than one on a different port, if that's even possible?

Thanks


ChatGPT told me the following. It was easier with DeepStack - you just entered two port numbers.


To run two CodeProject AI instances on the same Windows computer, you will need to follow these general steps:

  1. Check system requirements: Ensure that your computer meets the necessary system requirements to run multiple instances of CodeProject AI simultaneously. Verify that you have enough processing power, memory, and disk space to accommodate the additional workload.
  2. Download and install CodeProject AI: Visit the CodeProject AI website and download the installation package for the AI software. Run the installer and follow the on-screen instructions to complete the installation process. Repeat this step to install a second instance of CodeProject AI.
  3. Set up separate directories: Create two separate directories to hold the installations of CodeProject AI. For example, you can create folders named "CodeProjectAI_Instance1" and "CodeProjectAI_Instance2" on your computer's hard drive.
  4. Install the first instance: During the installation process, specify the first directory you created ("CodeProjectAI_Instance1") as the installation location for the first instance of CodeProject AI. Complete the installation by following the provided instructions.
  5. Install the second instance: Start the installation process again and this time specify the second directory you created ("CodeProjectAI_Instance2") as the installation location for the second instance of CodeProject AI. Proceed with the installation until it is finished.
  6. Configure the instances: After both installations are complete, open each instance of CodeProject AI separately and configure them independently. This may involve setting up user accounts, specifying preferences, or adjusting any other necessary settings. Make sure to use different login credentials for each instance.
  7. Run the instances: Once the installations and configurations are done, you can run both instances of CodeProject AI simultaneously. Launch each instance separately from their respective installation directories or desktop shortcuts. They will operate as separate applications, allowing you to use them independently.
By following these steps, you should be able to run two instances of CodeProject AI on the same Windows computer without conflicts. Keep in mind that running multiple instances may increase the overall resource usage on your system, so ensure your hardware can handle the load effectively.
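Whether the Windows installer actually supports two side-by-side installs like that is doubtful - the steps above are generic ChatGPT advice, not CP.AI documentation. What you can verify either way is which ports have a server listening (DeepStack-style), with a quick socket probe:

```python
import socket

def is_port_open(host, port, timeout=1.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# e.g. is_port_open("127.0.0.1", 32168) for the default CP.AI port
```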
 