Thank you for all the information. I will check it out in 1-2 days. The BI PC is away in the countryside right now and I haven't set up remote access yet.

A few things to check:
Check Use GPU; Default object detection needs to be unchecked if you use Custom Models...
Here are my settings; maybe this will help you.
[settings screenshots attached]
On the camera AI settings it is up to you; for example, you can uncheck Burn label, or uncheck Use main stream if you have a sub stream, etc.
I have another Clone CAM using the delivery custom model
HTH
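If you want to double-check that a custom model is actually reachable, you can also poke the AI server directly instead of going through BI. A minimal sketch, assuming the default CodeProject.AI port 32168, the standard /v1/vision/custom/<model> route, the "delivery" model mentioned above, and a made-up snapshot path; adjust all of those to your setup.

```python
# Minimal sketch: send one snapshot to the "delivery" custom model and print
# what comes back. Host, port (32168 is the CodeProject.AI default) and the
# image path are assumptions; swap in your own.
import requests

SERVER = "http://localhost:32168"        # adjust to your CP.AI machine
IMAGE = "driveway_snapshot.jpg"          # hypothetical test image

with open(IMAGE, "rb") as f:
    resp = requests.post(
        f"{SERVER}/v1/vision/custom/delivery",   # custom-model route: /v1/vision/custom/<model>
        files={"image": f},
        data={"min_confidence": 0.4},
    )

for p in resp.json().get("predictions", []):
    print(f'{p["label"]}: {p["confidence"]:.2f}')
```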
I have used the same model size. With DeepStack I experienced similar accuracy between the tiny and large models, hence I wanted to try CP AI. Is the MODE level setting for CodeProject the same as DeepStack's?
Also what is the recommended model size?
Thanks
I see from this post:

"I have used the same model size. With DeepStack I experienced similar accuracy between the tiny and large models, hence I wanted to try CP AI."
I don't know where they get the model from, BTW. It would make sense if BI owned the models and just ran them on the DS or CP servers. But I guess that's not the case and the AI servers have their own models, although I have no exact knowledge about it.
Great. At some point I also want to move to a custom model to increase the accuracy. I think it's a lot of work; model training might require a good training algorithm and thousands of annotated images.

BI does not own the AI or the models. The models are generated by others, including a member here.
If the training algorithm is good then yes, a larger model should be more accurate. If the training algorithm is the limiting factor, they may get stuck. The model is an operation graph plus a set of filter weights. The operation graph is fixed; that is the YOLOv5, etc. The filter weights are what must be set by training: they are loads of multidimensional tensors of data, and each value must be set as exactly as possible. The training algorithms only get closer and closer to that, which is why they must be good. There is no ideal way to train an NN; usually they are just tweaked error-gradient methods.
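To make the "tweaked error-gradient" idea concrete, here is a toy sketch (made-up numbers, nothing to do with the actual YOLO training code): the structure of the model stays fixed and only the weights are nudged, step by step, in whichever direction reduces the error.

```python
# Toy illustration of error-gradient training: the operation graph (here just
# w*x + b) stays fixed, only the weights move a little at a time in the
# direction that reduces the error. Real detectors do the same thing with
# millions of tensor weights instead of two numbers.
data = [(1.0, 3.1), (2.0, 4.9), (3.0, 7.2), (4.0, 8.8)]  # made-up (x, y) pairs

w, b = 0.0, 0.0            # the "filter weights" to be learned
lr = 0.01                  # learning rate: how big each tweak is

for step in range(5000):
    grad_w = grad_b = 0.0
    for x, y in data:
        err = (w * x + b) - y              # prediction error for this sample
        grad_w += 2 * err * x / len(data)
        grad_b += 2 * err / len(data)
    w -= lr * grad_w                       # nudge weights against the gradient
    b -= lr * grad_b

print(f"learned w={w:.2f}, b={b:.2f}")     # ends up near the best-fit line
```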
I see from this post (the CodeProject.AI Version 2.0 thread on ipcamtalk.com) that a smaller model size is less accurate but faster. Still not sure what MODE changes yet. Model size sounds like MODE in DeepStack.
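If you want to see the speed side of that trade-off on your own hardware, one rough approach is to time the standard detection route, change the Model Size setting (tiny/small/medium/large) in the CodeProject.AI dashboard, and run it again. A minimal sketch, assuming the default port 32168 and a made-up test image; accuracy you still have to judge from the predictions themselves.

```python
# Rough latency check: time a batch of requests against the standard
# detection route, change Model Size in the CP.AI dashboard, then rerun.
# Server URL and image path are assumptions.
import time
import requests

SERVER = "http://localhost:32168/v1/vision/detection"
IMAGE = "test_frame.jpg"     # hypothetical snapshot from one of your cameras

with open(IMAGE, "rb") as f:
    img_bytes = f.read()

times = []
for _ in range(20):
    t0 = time.perf_counter()
    requests.post(SERVER, files={"image": img_bytes}, data={"min_confidence": 0.4})
    times.append((time.perf_counter() - t0) * 1000)

times.sort()
print(f"median round trip: {times[len(times) // 2]:.0f} ms")
```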
Yes. YOLO is a small net. It is also a question whether BI resizes mainstreams to a certain resolution or not. I've read a post somewhere where somebody wrote that it resizes; I hope not. What I noticed is that when it calls the AI server with an 8 MP camera's mainstream images (for that camera I also set 10 additional images after the detection), the CPU load (i7-8700) peaks at 100% for approximately 1 second. That may be an effect of some resize, but I have no information about it either...

I've set the model size to large. My old GPU still processes them in under 100 ms.
Fast enough
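If you want to check whether resolution is what causes that spike, you can send the same mainstream frame to the server at full size and again downscaled, and compare the round-trip times. A sketch assuming the default detection route and a made-up snapshot path (needs Pillow); it says nothing about what BI itself does internally.

```python
# Compare round-trip times for the same frame at full resolution and
# downscaled, to see how much the image size alone matters.
import io
import time

import requests
from PIL import Image

SERVER = "http://localhost:32168/v1/vision/detection"
SNAPSHOT = "mainstream_8mp.jpg"   # hypothetical 8 MP frame

def detect(jpeg_bytes):
    t0 = time.perf_counter()
    requests.post(SERVER, files={"image": jpeg_bytes}, data={"min_confidence": 0.4})
    return (time.perf_counter() - t0) * 1000

full = open(SNAPSHOT, "rb").read()

img = Image.open(SNAPSHOT)
img.thumbnail((1280, 1280))       # downscale the longest side to 1280 px
buf = io.BytesIO()
img.save(buf, format="JPEG")
small = buf.getvalue()

print(f"full resolution: {detect(full):.0f} ms")
print(f"downscaled:      {detect(small):.0f} ms")
```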
Coming soon to CodeProject.AI: a new Object Detection Module that works with the Orange Pi 5/5B/5 Plus (Orange Pi 5 link). This is an all-in-one solution; it has a built-in NPU similar to the Coral, but faster.
Do all of the Orange Pis listed perform the same?

CPU and NPU, yes; the differences are RAM, Wi-Fi, and onboard storage.
I find Object Detection (YOLOv5 .NET) works the best. You can test it yourself by using the Explorer, like in the screenshots below. Just remember you can only have one Object Detection module enabled at a time.

I may be a bit late to the party, but I am having fun! I am running an i7-8700 with built-in Intel graphics. Which YOLO would be optimal, do you think? The new features sound pretty ace.
Hi,
I eventually got CP AI working on the GPU. I just stopped Face Detection and disabled half precision. It may work with face detection enabled; I haven't tried it since.
The AI evaluation loads the CPU significantly, however. There are many trees and spider webs around the cameras (currently 24 are connected), so I set a 3-second alert period and 2 additional images at 750 ms intervals after each trigger, using the main stream.
Now the CPU (i7-8700) is at around 90% average load, while the CUDA GPU load is not significant. In about 12 hours the CP AI server did 700k inferences with the YOLOv5 6.2 module and custom models added.
I will optimize it later somehow. Constantly moving objects like tree leaves and spider webs make it difficult to trigger when a person is there, so for now I have just set the 3-second period and 3 x 0.75-second images for many cameras.
However, recording seems to happen for all alerts, not only for alerts confirmed by the AI.
Does somebody know a way to record only when the AI "To confirm" criteria, with the percentage set, have been met for the alarm?
Thanks,
Gyula
Taking one aspect at a time: your CPU seems to be struggling. Have you considered enabling your GPU within BI to reduce the BI CPU load?
Are you recording continuously and capturing events or only wishing to record when an event happens that meets your criteria? Both are possible.