CodeProject.AI Version 2.0

Can you still access the internet as normal with other programs (e.g. browsers) after installing this? (I'm typing on my BI server at the moment to avoid having to swap LAN cables again. I don't know how many insertions the plugs are rated for, but I must be getting there!)

Also, this is something Mike might need to check on the .NET model - to find out why it's accessing the internet, if that is unexpected behaviour.

Yep it still works fine even when this is installed.

In my case it was trying to access the internet to look for an update so frequently that it was timing out. I thought it was like every 3 seconds lol.
 
Mike, something else for you and BI to have a look at - some incompatibilities continue:

1. Stopping the AI service still causes BI to stop responding for a considerable period of time (the window shows "not responding"). This has been happening ever since I started using it and persists even after a full Windows re-install.

2. Wittaj / Mike, thanks for the instructions, but I've had to uninstall .NET again and the MS Loopback Adapter because of issues:

- Alerts were occurring and recording, but AI wasn't recognising anything. In fact, I could no longer contact the console server either. I had to uninstall the MS Loopback Adapter AND reboot several times, then re-install 6.2, but then 6.2 wouldn't work either. Only after several reboots and several re-selections / selections of "Burn onto image" could I get it to work again. Even after this, BI was also acting very slowly - clicking the alerts window, it took seconds to switch between tabs on the alerts page, especially the storage tab and others, and it took several reboots to get it working again. In the meantime detections were showing at > 1,000 ms.

Screenshot here of 6.2 net not detecting objects:


I still suspect the issues are:

1. Some incompatibility between CPAI and BI, as seen in the non-response when stopping / starting the service, slow BI after re-installing 6.2, detections not being burned onto the image, and the slow detections initially.

2. I believe from what I'm seeing that CPAI and / or BI are not cleaning up properly after themselves when a module or even the whole program is uninstalled / re-installed.
 
Hi guys,
Many have mentioned that the .NET YOLOv5 is faster than the 6.2.
But I do not have this option available to install. Is this because I have the GPU card enabled and it is not available for my setup?



Update: it's a Chrome and Edge issue.
The option appears under Firefox!

Yep, it's fast - 40-50 ms (GTX 970).
 
Strange. I was finding that without an internet connection it timed out.

The only reason I can put this down to the internet is that at the moment I have a temporary setup where my server doesn't have its own LAN connection, so I have to pull the cable from the back of my PC to plug it into the server and vice versa, which happens at least 10 times a day. I found that the timeouts coincided with the times the internet wasn't connected. I don't get any timeouts with a LAN cable plugged in.
My BI box does not have any outbound internet access and I'm running 2.0.8 CPAI with no problems, FYI. I do get an error message when CPAI tries to check for updates. After a minute or so it will show Offline, but nothing changes - it still does its job well...

 
Which of the CP models do people find most effective and fast for person and cat/dog? (.NET ONNX)

I have just added ipcam-combined and ipcam-dark (sunset-sunrise).
The combined one appears fairly fast.

Before, I just had the default codeproject_AI.
The URL setting in AITool was:



IPcam-combined Labels: - person, bicycle, car, motorcycle, bus, truck, bird, cat, dog, horse, sheep, cow, bear, deer, rabbit, raccoon, fox, skunk, squirrel, pig
IPcam-general Labels (includes dark model images): - person, vehicle
IPcam-animal Labels: - bird, cat, dog, horse, sheep, cow, bear, deer, rabbit, raccoon, fox, skunk, squirrel, pig
IPcam-dark Labels: - Bicycle, Bus, Car, Cat, Dog, Motorcycle, Person


Now the two installed are ipcam-combined and ipcam-dark.
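
For anyone wanting to see exactly what a given custom model returns, a query along these lines can be sent to the server directly. This is only a minimal sketch: the server address, port 32168 and the /v1/vision/custom/<model> route are assumptions based on CodeProject.AI Server defaults, so adjust them to your install.

```python
# Minimal sketch: send a snapshot to a CodeProject.AI custom model and print
# the labels it returns. Server address, port and route are assumptions based
# on the CodeProject.AI Server defaults - adjust to your install.
import requests

SERVER = "http://192.168.1.10:32168"   # hypothetical BI/CPAI server address
MODEL = "ipcam-combined"               # or "ipcam-dark", "ipcam-animal", ...

with open("snapshot.jpg", "rb") as f:
    resp = requests.post(
        f"{SERVER}/v1/vision/custom/{MODEL}",
        files={"image": f},
        timeout=10,
    )

for pred in resp.json().get("predictions", []):
    print(f'{pred["label"]}: {pred["confidence"]:.2f}')
```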
 

Is there any difference in sensitivity between the two models for cat and dog? Would both pick up the two creatures equally?


IPcam-combined Labels: - person, bicycle, car, motorcycle, bus, truck, bird, cat, dog, horse, sheep, cow, bear, deer, rabbit, raccoon, fox, skunk, squirrel, pig

IPcam-animal Labels: - bird, cat, dog, horse, sheep, cow, bear, deer, rabbit, raccoon, fox, skunk, squirrel, pig
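
One rough way to compare them would be to send the same snapshot to both custom models and compare the confidences they return for cat/dog. Same assumptions as the earlier sketch about the server address and route; the file name is just a placeholder.

```python
# Rough comparison sketch: score one snapshot against two custom models and
# report the best cat/dog confidence from each. Server address and route are
# assumptions based on CodeProject.AI defaults.
import requests

SERVER = "http://192.168.1.10:32168"   # hypothetical server address

def detect(model, image_path):
    with open(image_path, "rb") as f:
        r = requests.post(f"{SERVER}/v1/vision/custom/{model}",
                          files={"image": f}, timeout=10)
    return r.json().get("predictions", [])

for model in ("ipcam-combined", "ipcam-animal"):
    hits = [p["confidence"] for p in detect(model, "cat_or_dog.jpg")
            if p["label"] in ("cat", "dog")]
    print(f"{model}: best cat/dog confidence = {max(hits, default=0.0):.2f}")
```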
 
My BI box does not have any outbound internet access and I'm running 2.0.8 CPAI with no problems, FYI. I do get an error message when CPAI tries to check for updates. After a minute or so it will show Offline, but nothing changes - it still does its job well...


I notice you are using a GPU with the .NET model. Maybe that is the difference.

Also the BI version could be a variable.
 
Pretty darn impressed with the latest version of CP using ipcam-combined and dark.
Picks up just what I need, from a distance, at low resolution, in low light.
My dog and spouse next to and at the trunk of the car.
 
Been having some really stupid issues getting my RTX 3090 to work with CodeProject. Right now, the only way I can get it to run is via YOLOv5 6.2 using CUDA instead of the .NET version. I was forced to revert back to 2.0.8 since I wasn't able to get CodeProject.AI 2.1.9 working either. I want to keep the integrated GPU (i9-11900K) running for QuickSync support, since the server is also being used to host Jellyfin and my homelab. I have tried looking through my BIOS, but gigas**t doesn't seem to have any way to specify which GPU loads first, and I'm not sure if there is any way to tell CodeProject directly which GPU to use for DirectML. I know 26 ms is not horrible, but I would love to drop to 16 or 7 ms like others have. I can confirm "use main stream" is unchecked and there is no dual detection running on the CodeProject dashboard.
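
As a quick sanity check on which GPUs the CUDA side can actually see, something like the sketch below can be run from Python. It assumes a CUDA-enabled PyTorch build is available (for example the one the YOLOv5 6.2 module installs).

```python
# List the GPUs visible to the CUDA runtime. Assumes a CUDA-enabled PyTorch
# build is on the Python path (for example the one bundled with the
# YOLOv5 6.2 module).
import torch

if not torch.cuda.is_available():
    print("CUDA is not available to this Python environment.")
else:
    for i in range(torch.cuda.device_count()):
        props = torch.cuda.get_device_properties(i)
        print(f"GPU {i}: {props.name}, {props.total_memory / 1024**3:.1f} GB")
```

If the 3090 shows up here, at least the CUDA path can see it; which adapter DirectML picks is a separate question.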
 
Hi,

I'm new to AI-based detection with Blue Iris. I've just tried CodeProject.AI Server 2.1.9 with CUDA 12.1 on a GTX 1660 Super, running the Large models for 'person' detection.
The settings are To confirm: person; To cancel: <field left empty>.
It seems to be running - the server log shows inference times of about 100-200 ms.
However, in BI each inference returns "nothing found".
I walked intentionally in front of each camera, stood for several seconds, then walked away.
There are about 20 cameras running at different mounting angles, but none of them returned anything other than 'nothing found'.
I repeatedly uninstalled CP AI, deleted its folder and reinstalled several times, but with the same result. It's running YOLOv5 6.2.
I couldn't find versions earlier than 2.1.9, however - only their source code on the net.

Has anybody experienced similar behaviour? If so, is there a simple solution? Maybe I have missed a setting.

Thanks,
Gyula
 
Disable Half Precision. (The GTX 16xx cards are known to return "nothing found" when half precision is enabled.)
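
To double-check the fix independently of BI, one option is to POST a snapshot straight to the server's detection endpoint and look at the raw response. A minimal sketch, with the port, route and min_confidence field assumed from CodeProject.AI Server defaults:

```python
# Send one test image directly to the CodeProject.AI detection endpoint and
# print the raw predictions, bypassing BI entirely. Port, route and the
# min_confidence field are assumptions based on CodeProject.AI defaults.
import requests

with open("person_test.jpg", "rb") as f:
    resp = requests.post(
        "http://localhost:32168/v1/vision/detection",
        files={"image": f},
        data={"min_confidence": "0.4"},
        timeout=10,
    )

print(resp.json())   # should include 'person' predictions once half precision is off
```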
 
With that Video Card, I would consider running the GPU version.
 
YOLOv5 6.2 is the GPU version. It does use the GPU - I checked in real time with an NVIDIA system monitor. It runs at about 50% peak and 15% average GPU utilization, with about 2% of the 6 GB memory used. As I remember, the other model returned percentage confidences for 'person', but V5 6.2 on the GPU always returns 'nothing found'.
 

A few things to check:
Check "Use GPU"; "Default object detection" needs to be unchecked if you use custom models...

Here are my settings (screenshots attached) - maybe this will help you.

On the camera AI settings it is up to you - for example, you can uncheck "Burn label" and "Use main stream" if you have a sub stream, etc.

I have another Clone CAM using the delivery custom model

HTH