CodeProject.AI Version 2.0

It looks like you tested with the default model; ipcam-general should look like the results below:

# Label Confidence
0 vehicle 87%
1 person 83%
2 vehicle 81%
3 vehicle 80%
4 vehicle 80%
5 vehicle 55%
6 person 52%
7 vehicle 48%
Processed by ObjectDetectionYOLOv5Net
Processed on PC-NVR
Analysis round trip 23 ms
Processing 19 ms
Inference 19 ms
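
If you want to compare the default detector and the ipcam-general custom model outside of Blue Iris, here is a minimal sketch against the CodeProject.AI Server HTTP API (assuming the default port 32168, a local test image named snapshot.jpg, and the standard predictions/label/confidence response shape; adjust for your install):

```python
# Minimal sketch: send the same image to CodeProject.AI Server's default
# detector and to the ipcam-general custom model, then print label/confidence.
# Assumes the server is at localhost:32168 and snapshot.jpg exists locally.
import requests

SERVER = "http://localhost:32168"
IMAGE = "snapshot.jpg"

def detect(endpoint: str):
    """POST the image to the given vision endpoint and return its predictions."""
    with open(IMAGE, "rb") as f:
        resp = requests.post(SERVER + endpoint,
                             files={"image": f},
                             data={"min_confidence": 0.4},
                             timeout=30)
    resp.raise_for_status()
    return resp.json().get("predictions", [])

for name, endpoint in [("default model", "/v1/vision/detection"),
                       ("ipcam-general", "/v1/vision/custom/ipcam-general")]:
    print(f"--- {name} ---")
    for i, pred in enumerate(detect(endpoint)):
        print(f"{i} {pred['label']} {pred['confidence']:.0%}")
```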

Here is how I selected everything for the test. Seems better, but my P400 GPU is obviously lacking.

[screenshots of the test settings attached]
# Label Confidence
0 car 89%
1 person 83%
2 traffic light 59%
3 truck 59%
4 truck 54%
5 car 53%
6 car 46%
7 truck 44%
8 traffic light 44%
9 car 42%
10 truck 41%
Processed by ObjectDetectionYOLOv8
Processed on localhost
Analysis round trip 231 ms
Processing 218 ms
Inference 212 ms
 
Custom Detect is slightly better
# Label Confidence
0 vehicle 85%
1 person 81%
2 vehicle 81%
3 vehicle 80%
4 vehicle 59%
5 person 57%
6 vehicle 54%
7 vehicle 53%
8 vehicle 40%
Processed by ObjectDetectionYOLOv8
Processed on localhost
Analysis round trip 117 ms
Processing 105 ms
Inference 101 ms
 
On previous versions, did you have CUDA working for the ALPR module? I am trying to figure out if the current ALPR version might be the issue.
No, I have never gotten it to work. It didn't work on 2.3.4, but I thought that was the issue with CP.AI not finding CUDA. It also never worked on 2.1.1; I'm not sure which ALPR module versions those shipped with at the time. We tried to troubleshoot this back in August but couldn't get it to go. I gave up at the point where it said 'NVIDIA_SLA_cuDNN_Support.txt' is not recognized.

Blue Iris and CodeProject.AI ALPR
 
YOLOv5 6.2 is faster and seemingly more accurate than YOLOv8 on my system
# Label Confidence
0 vehicle 86%
1 person 86%
2 vehicle 85%
3 vehicle 82%
4 vehicle 81%
5 vehicle 71%
6 vehicle 67%
7 person 57%
8 vehicle 55%
9 vehicle 52%
10 vehicle 44%
Processed by ObjectDetectionYOLOv5-6.2
Processed on localhost
Analysis round trip 72 ms
Processing 60 ms
Inference 54 ms
 
Real-time analysis with a GTX 1650 is about 10x slower with YOLOv8 in my real-world testing. I went back to 6.2 for now.
How are you liking that GPU? I'm wondering if my issues stem from my somewhat older GTX 1070 Ti. I was thinking of maybe upgrading to see if that helps.
 
I have been using the 1650 in this rig for over a year now and have had fantastic performance with it in real-time situations. I monitor 12 5MP cameras with it, and during the heaviest traffic times I am seeing detection times in the 26-30ms range with multiple cameras firing off at the same time. This is when using YOLOv5 6.2.

For comparison, my 12GB 2060 in my other BI rig only sees detection times drop to about 20ms average with multiple cameras firing at the same time. This is also with YOLOv5 6.2.
 
Regarding the mesh:

Here is what I notice when running three meshed systems, one of which has BI on it. It feels like it's not actually selecting the most efficient box, but rather just using whichever one was spun up online first. I mean, it sees them all, and if the first one went offline it would switch to another node on the mesh, but it doesn't appear to be choosing the fastest. Not complaining at all, just giving some feedback on what I see and have tested.

It could be that there's an interval at which it tests the nodes and then switches, like every 15 minutes or so. I'm not really sure how it's supposed to work, but it's great for backup AI for sure.

Although I could be missing something; just giving some info. I'm more than happy to test anything out.
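
One way to sanity-check which box actually answers fastest (independent of whichever node the mesh happens to pick) is to fire the same image at each server directly and compare round trips. A rough sketch, with placeholder IPs for the three machines and the usual /v1/vision/detection route assumed:

```python
# Rough sketch: time the same detection request against each mesh node directly
# to see which one has the lowest round trip. IPs below are placeholders.
import time
import requests

NODES = ["192.168.1.10", "192.168.1.11", "192.168.1.12"]  # hypothetical mesh members
IMAGE = "snapshot.jpg"

for node in NODES:
    with open(IMAGE, "rb") as f:
        start = time.perf_counter()
        resp = requests.post(f"http://{node}:32168/v1/vision/detection",
                             files={"image": f}, timeout=30)
    elapsed_ms = (time.perf_counter() - start) * 1000
    objects = len(resp.json().get("predictions", []))
    print(f"{node}: {elapsed_ms:.0f} ms round trip, {objects} objects detected")
```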
 
That's good to hear. Thanks for the reply. It looks like the RTX 3050 or 3060 is the newer model that replaces the 1650, but at twice the price for only a little more compute power, the 1650 looks like the right choice if I go forward.
 
Find a non-mining card on eBay; they can be had for around $100.

1650 for sale | eBay
 
Regarding the mesh: from what I'm seeing, it's only when the BI machine is slammed with several images that it dumps some off. Mine doesn't use the fastest node either unless I get several calls at once. In my case I only run 8 cams on AI on an i7-6700K in CPU mode, so most of the time it really needs no help at all.

 
Completely unscientific test results of my mailbox cam that captures passing cars using all of the same settings except for YOLOv5 6.2 vs YOLOv8.

YOLOv5 6.2 average: 90.3% confidence, 339.5 ms
0 12/17/2023 10:16:42.852 AM Mailbox_Cam AI: [ipcam-general] vehicle:92% [0,513 525,791] 298ms
0 12/17/2023 10:29:53.955 AM Mailbox_Cam AI: [ipcam-general] vehicle:92% [0,509 595,807] 286ms
0 12/17/2023 10:35:47.814 AM Mailbox_Cam AI: [ipcam-general] vehicle:92% [0,493 578,798] 405ms
0 12/17/2023 10:38:36.926 AM Mailbox_Cam AI: [ipcam-general] vehicle:92% [0,512 416,784] 433ms
0 12/17/2023 11:09:10.024 AM Mailbox_Cam AI: [ipcam-general] vehicle:90% [0,447 366,766] 290ms
0 12/17/2023 11:17:14.116 AM Mailbox_Cam AI: [ipcam-general] vehicle:83% [1106,691 1432,874] 288ms
0 12/17/2023 11:55:10.890 AM Mailbox_Cam AI: [ipcam-general] vehicle:93% [792,740 2688,1510] 222ms
0 12/17/2023 12:35:58.911 PM Mailbox_Cam AI: [ipcam-general] vehicle:86% [100,533 899,858] 532ms
0 12/17/2023 12:39:39.072 PM Mailbox_Cam AI: [ipcam-general] vehicle:93% [1,415 573,794] 389ms
0 12/17/2023 12:44:38.916 PM Mailbox_Cam AI: [ipcam-general] vehicle:91% [0,427 453,797] 383ms
0 12/17/2023 12:48:36.445 PM Mailbox_Cam AI: [ipcam-general] vehicle:90% [0,360 305,794] 208ms

YOLOv8 average: 87.4% confidence, 442 ms
0 12/17/2023 3:15:21.988 PM Mailbox_Cam AI: [ipcam-general] vehicle:92% [0,446 629,815] 700ms
0 12/17/2023 3:18:39.627 PM Mailbox_Cam AI: [ipcam-general] vehicle:91% [0,551 627,824] 305ms
0 12/17/2023 3:24:15.974 PM Mailbox_Cam AI: [ipcam-general] vehicle:90% [0,407 344,765] 359ms
0 12/17/2023 3:33:27.884 PM Mailbox_Cam AI: [ipcam-general] vehicle:89% [0,458 953,876] 317ms
0 12/17/2023 3:34:54.864 PM Mailbox_Cam AI: [ipcam-general] vehicle:89% [115,518 870,818] 520ms
0 12/17/2023 3:39:09.941 PM Mailbox_Cam AI: [ipcam-general] vehicle:82% [296,584 858,814] 331ms
0 12/17/2023 3:40:52.292 PM Mailbox_Cam AI: [ipcam-general] vehicle:80% [367,586 1056,836] 706ms
0 12/17/2023 3:44:38.471 PM Mailbox_Cam AI: [ipcam-general] vehicle:82% [239,574 894,803] 586ms
0 12/17/2023 3:47:40.586 PM Mailbox_Cam AI: [ipcam-general] vehicle:90% [0,439 296,805] 272ms
0 12/17/2023 3:53:05.724 PM Mailbox_Cam AI: [ipcam-general] vehicle:84% [1113,671 1443,874] 364ms
0 12/17/2023 3:54:59.935 PM Mailbox_Cam AI: [ipcam-general] vehicle:92% [0,489 522,800] 402ms
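
For anyone who wants to redo this comparison on their own logs, here is a small sketch that pulls the confidence and millisecond figures out of BI AI log lines in the format shown above and averages them (the file name is hypothetical; paste your own log lines into it):

```python
# Small sketch: parse Blue Iris AI log lines such as
#   "... AI: [ipcam-general] vehicle:92% [0,513 525,791] 298ms"
# and report the average confidence and detection time.
import re

LINE_RE = re.compile(r"\w+:(\d+)%.*?(\d+)ms")

def summarize(lines):
    confidences, times = [], []
    for line in lines:
        m = LINE_RE.search(line)
        if m:
            confidences.append(int(m.group(1)))
            times.append(int(m.group(2)))
    return sum(confidences) / len(confidences), sum(times) / len(times)

# "yolov5_log.txt" is a hypothetical file holding the pasted log lines.
with open("yolov5_log.txt") as f:
    avg_conf, avg_ms = summarize(f)
print(f"Average: {avg_conf:.1f}%, {avg_ms:.1f}ms")
```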
 
Stupid question... how do you know it's not a mining card?
For those looking for video cards, I would not get anything under 4GB of memory. There are posts here from people whose 2GB cards were struggling...
 
This may be why it does not work
That makes sense. I will wait a bit then move the i5-8500 system to 2.4.5 and see what happens.
I am glad you discovered this. Thanks Mike, y'all just saved me a lot of heartache, since I have an i7-4770 running Windows 10 Pro with DirectX 12 and CPAI v2.0.8. Glad I did not try to upgrade to 2.4.5 RC1.
Guess my BI PC will be getting a new motherboard/CPU/memory in its future. My 4GB GTX 970 video card is doing fine presently but may be retired along with the 4th-gen processor...
 
There are a couple of ways to tell: buy NIB, look for listings that specifically state the card was not used to mine crypto, or ask the seller. If there are any problems with the card, return it and get a refund. Check not only the compute functionality of the card but also the 3D functionality.

Typically, when I purchase a card off eBay, the first thing I do is strip it and repad/repaste it. Then it generally goes through a 24-48 hour stress test at full load. If it makes it through that, the card gets the green light. If it fails the 48-hour test, it gets shipped back as non-functional.
 
One thing about professional mining cards is that they are usually run in a climate-controlled, clean data center, and they are usually undervolted. Of course, video card manufacturers are going to tell you to stay away from mining cards. Gaming may wear a card out quicker than mining, since the card heats up and cools down a lot during gaming, whereas a mining card usually stays fixed at a fairly constant speed and temperature range. Mining mainly works the VRAM hard, but gaming does too...
 
The whole idea of dual NICs is that the main network shouldn't see the camera network, other than through BI itself, I think? BI is on the 192.168.1.xxx network and the cameras are on .50.xxx. Both PCs see each other in Explorer on the .1.xxx net. CPAI says the "active address" is the .50 one, but I have no idea why or how to change that... hopefully someone can chime in and advise. Thanks!
Replying to myself: thanks to Tinman, the problem is solved. :) Great work! The issue was that CPAI insisted on treating the camera-side IP (dual NIC) as the active IP, and the second PC can't see that network. Tinman pointed me to this:



This explained how to force priority onto the main network's IP, and it worked like a charm. :) Both PCs' mesh instances now see each other.
Thanks again, Tinman
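
For anyone hitting the same thing, here is a quick diagnostic sketch (not the actual fix, which was adjusting the adapter priority/metric as in the linked post) that prints which local IP the OS currently prefers for outbound traffic; on a dual-NIC box, this is the address a service is likely to report as its active one. The 8.8.8.8 target is only used to pick a route; no packets are sent:

```python
# Diagnostic sketch: print the local IP the OS prefers for outbound traffic.
# On a dual-NIC machine this follows adapter/route priority, which is what the
# interface-metric fix changes. Connecting a UDP socket sends no packets.
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.connect(("8.8.8.8", 80))
print("Preferred outbound IP:", s.getsockname()[0])
s.close()
```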
 
So when I last updated to 2.3.4, I stopped getting alerts. They showed up as cancelled alerts even when they were still 20%-30% over the confidence level. I managed to fix it by redoing all my zones, making literal checkerboards between zones A and B. When I updated to 2.4.5, those settings that had been working with 2.3.4 stopped working, even with the AI confirming detections well over the confidence level like before.
After going back to the drawing board again, I stopped using "object crossing zones" in triggers and now have the "object travels" box checked. I don't get why this keeps happening, but I wanted to share my experience.