5.5.8 - June 13, 2022 - CodeProject SenseAI Version 1 - see V2 here: https://ipcamtalk.com/threads/codeproject-ai-version-2-0.68030/

Been running CPAI for a few days and I'm not looking back at DeepStack. Everything is performing great and response times are better than DeepStack's.
 
This is probably going to be a dumb question, but I searched and can't find anything on this. I wanted to get a list of labels for the YOLOv5l and YOLOv5x models. I can't seem to find anything, unless I'm just not looking in the right place; their own website doesn't flat out list them. I know person is one of them, but it seems like vehicle is not working, so I'm going to assume it's probably car, truck, bus, etc.

I also put these two models in the custom models folder (C:\Program Files\CodeProject\AI\AnalysisLayer\ObjectDetectionYolo\custom-models) and still cannot see them in CodeProject Explorer when trying to pick a model for testing sample images; here is a screenshot.

@Dixit I did a quick test on my side, and adding models to the custom-models folder does work, as shown in the screenshot below.
Can I assume that your models have the extension *.pt?
[screenshot]

BTW, the benchmark UI is a speed test and does not provide a list of possible labels.

BTW #2: in addition to custom models, these three YOLO models are included with the CPAI v1.6.x release in the following folder:
[screenshot]
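For reference, a custom model dropped into that folder is addressed by its filename minus the .pt extension, on the DeepStack-style /v1/vision/custom/&lt;model&gt; route. A minimal sketch of building that URL, assuming a default localhost server (the port has varied across CPAI builds, so check your server dashboard):

```python
from pathlib import PureWindowsPath

def custom_model_endpoint(model_file, host="localhost", port=32168):
    """Build the URL that calls a custom .pt model by name.

    A file named delivery.pt in the custom-models folder is served as
    .../v1/vision/custom/delivery. The port here is an assumption;
    CPAI builds have used different defaults.
    """
    name = PureWindowsPath(model_file).stem  # filename without .pt
    return f"http://{host}:{port}/v1/vision/custom/{name}"

print(custom_model_endpoint(
    r"C:\Program Files\CodeProject\AI\AnalysisLayer\ObjectDetectionYolo\custom-models\delivery.pt"))
# → http://localhost:32168/v1/vision/custom/delivery
```

POSTing an image to that URL (the DeepStack-style API expects the file in an "image" form field) is handy for checking a model works outside of Blue Iris.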
 
Wanted to get a list of labels for the Yolov5L and X models.
Here you go:

person
bicycle
car
motorcycle
airplane
bus
train
truck
boat
traffic light
fire hydrant
stop sign
parking meter
bench
bird
cat
dog
horse
sheep
cow
elephant
bear
zebra
giraffe
backpack
umbrella
handbag
tie
suitcase
frisbee
skis
snowboard
sports ball
kite
baseball bat
baseball glove
skateboard
surfboard
tennis racket
bottle
wine glass
cup
fork
knife
spoon
bowl
banana
apple
sandwich
orange
broccoli
carrot
hot dog
pizza
donut
cake
chair
couch
potted plant
bed
dining table
toilet
tv
laptop
mouse
remote
keyboard
cell phone
microwave
oven
toaster
sink
refrigerator
book
clock
vase
scissors
teddy bear
hair drier
toothbrush
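Those are the 80 COCO classes the stock YOLOv5 models are trained on, which is why a "vehicle" label never fires: there is no such class, only the individual ones. A small sketch of matching detections against a vehicle set (the prediction dicts below are illustrative, modeled on the label/confidence pairs the detection endpoint returns, not real server output):

```python
# COCO has no "vehicle" class; match against the individual vehicle labels.
VEHICLE_LABELS = {"bicycle", "car", "motorcycle", "bus", "train", "truck", "boat"}

def vehicles_found(predictions, min_confidence=0.4):
    """Filter detections (dicts with 'label' and 'confidence') down to
    vehicles at or above the confidence floor."""
    return [p for p in predictions
            if p["label"] in VEHICLE_LABELS and p["confidence"] >= min_confidence]

# Illustrative detections only:
sample = [
    {"label": "car", "confidence": 0.82},
    {"label": "person", "confidence": 0.91},
    {"label": "truck", "confidence": 0.35},
]
print(vehicles_found(sample))  # only the car clears the 0.4 floor
```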
 
 
Can I assume that your models have extension *.pt?

Yeap, they have a .pt extension. I wasn't using Benchmark to get the labels; Vision would've given them to me as well, but @Vettester just provided the list (appreciate it).

I knew they were included, but I just couldn't figure out why they were not showing up in the list to test, even in custom models (when I was originally trying to find the labels by running tests from Vision). I tried both Edge and Chrome. For now, I guess it doesn't matter since I have the labels.
 
Also put these two in the models in the custom models folder... and still cannot see them in CodeProject Explorer...

I had a similar issue. Try fully refreshing the page (Shift+F5) and see if they appear.
 
Well, things move so fast in the CPAI development builds that I didn't even have time to report back on the performance change from installing the test files this morning before I noticed a new CPAI beta build (1.6.6.0) around noon my time. So it may be moot now, but yes, installing those test versions did restore my original CPU-only performance, and then some. My daytime results (about 10 hours today) have been averaging 112 ms inference times, compared to the longer-term 160 ms averages I was seeing on the first few CPAI versions (1.6.0 to 1.6.2); however, that could have been due to "easier" daylight-only testing. In any case, I am installing 1.6.6.0 now, even though it may be the equivalent of what I had with your test-file builds.

On an unrelated note: am I the only one slightly uncomfortable with the way this installs? By that I mean, yes, the initial download is just a tiny stub or script that then interactively downloads and installs the rest of the massive program from the web. At first that might seem advantageous, but what concerns me a bit is that it depends on the server-side download always being available in the long term. It doesn't allow us to save or cache a complete copy of the installation files on our computer(s) should things change with the funding or nature of the project. Oh well... I just hope this doesn't get gobbled up and become a "paid only" program later.
 
On an unrelated note: am I the only one slightly uncomfortable with the way this installs?
I agree. It also seems to need user input during the install. I have to press Enter and even manually close some of the console windows that pop up, as they seem to hang during the install run. It seems a bit bloated.

It is beta, though, and the pace of improvements is great.
 
On an unrelated note: am I the only one slightly uncomfortable with the way this installs? ... I just hope this doesn't get gobbled up and become a "paid only" program later.

You are not the only one! I have expressed the same concern here: they appear to have it positioned to get acquired by some firm wanting AI, at which point this becomes another OpenALPR and they charge a monthly fee for something that was a hobbyist venture, which DeepStack appears to be and will continue to be.
 
I am curious, as I am having some trouble getting delivery.pt and USPS.pt working with CodeProject AI. Does this have the same limitation as DeepStack, where there is a limit of 4 models that can run on a camera at a time?
 
If there is a limit on models per camera (and I have no idea if that's true), just create a clone of the camera(s) involved and use the additional models on the clone(s), creating more clones if needed. Make sure you assign the original camera as a "master", and you can also check the box for "group clone clips" on both the master and clone(s).
 
I am curious as I am having some trouble getting delivery.pt and USPS.pt working with codeproject ai,
I'm using USPS.pt with CodeProject AI. It works, but it isn't 100% reliable. Sometimes it gets triggered by FedEx and UPS trucks.

Here's how I have mine set up on a cloned camera.
[screenshot]
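One way to tame those FedEx/UPS false triggers is to act only when the model reports the USPS label at or above a confidence floor, rather than on any hit. A sketch, assuming the DeepStack-style prediction JSON (the 0.6 floor is a guess to tune against your own false positives):

```python
def should_alert(predictions, wanted_label="USPS", min_confidence=0.6):
    """True only if the wanted label appears at or above the floor.

    'predictions' mirrors the predictions array a /v1/vision/custom/<model>
    call returns (one label + confidence per hit); the shape is assumed
    from the DeepStack-compatible API.
    """
    return any(p["label"] == wanted_label and p["confidence"] >= min_confidence
               for p in predictions)

print(should_alert([{"label": "USPS", "confidence": 0.72}]))  # True
print(should_alert([{"label": "USPS", "confidence": 0.41}]))  # False
```

Inside Blue Iris itself, the rough equivalent is the "to confirm" label list plus the minimum confidence setting on the camera's AI tab.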
 
...just create a clone of the camera(s) involved and use the additional models on the clone(s)...
On the subject of clone cameras (yes, slightly off-topic here, my apologies): I haven't had great luck when setting these up with the "group clone clips" setting, as I don't see any difference one way or the other. Also, it seems like clone cameras do need to have recording enabled in some fashion (at least on trigger or on alert) in order to review them remotely (i.e., in UI3). I guess I'm still searching for the ultimate tutorial on the best ways to set up clone cameras, as the BI help file has limited coverage of the subject. Any suggestions for further study?
 
Does this have the same limitation as in DeepStack where there is a limitation to 4 models that it can run on a camera at a time?
Waaaaa we can run more than one model on a camera at a time. Mind=blown! Lol
 
I guess I'm still searching for an ultimate tutorial on the best ways to set up clone cameras... Any suggestions for further study?
According to the BI help file, "If you select to Group clone clips, all cloned camera clips will be included with the master's clips when the clips list is filtered by camera." I use cloned cameras to trigger various home automations. Most of my cloned cameras are not set to record, so I don't use the group clone clips option. When a cloned camera is triggered, it sets off an "on alert" action, which is configured in the Alerts tab. I also hide all of my cloned cameras.
 
Hmmm... Upper case is working for me.

[screenshot]
Like I stated in an earlier post, it works if I run it manually on a snapshot through the CodeProject interface, but for some reason if I configure BI to run USPS and Delivery, the .dat file shows that it is running but nothing is found. I can run the exact same snapshot manually and it will find USPS or other delivery logos, so it has me kind of stumped. It is not showing any errors either. The config is a direct transfer over from DeepStack, with the same camera and location that was working flawlessly before. It might be a config issue with BI; I am not sure.
 

Attachments

  • DrivewayE.20221001_104751741.8.jpg
  • USPS Not Detected.png
  • USPS Test.jpg
  • DrivewayE.20221001_104751741.9.jpg
  • Settings.png