CodeProject.AI Version 2.0

Is the mode level setting for CodeProject the same as in DeepStack?

Also, what is the recommended model size?
Thanks
 
A few things to check:
Check Use GPU; Default object detection needs to be unchecked if you use Custom Models...

Here are my settings, maybe this will help you

The camera AI settings are up to you; for example, you can uncheck Burn label, or Use main stream if you have a sub stream, etc.

I have another Clone CAM using the delivery custom model

HTH
Thank you for all the information. I will check it out in 1-2 days. The BI PC is currently away in the countryside and I haven't set up remote access yet.
 
Is the mode level setting for CodeProject the same as in DeepStack?

Also, what is the recommended model size?
Thanks
I have used the same model size. With DeepStack I experienced similar accuracy between the tiny and large models, hence I wanted to try CP AI.
I don't know where they get the models from, BTW. It would be reasonable if BI owned the models and just ran them on the DS or CP servers.
But I guess that's not the case and the AI servers have their own models, although I have no exact knowledge about it.
 
I have used the same model size. With DeepStack I experienced similar accuracy between the tiny and large models, hence I wanted to try CP AI.
I don't know where they get the models from, BTW. It would be reasonable if BI owned the models and just ran them on the DS or CP servers.
But I guess that's not the case and the AI servers have their own models, although I have no exact knowledge about it.
I see from this post that a smaller model size is less accurate but faster.

Still not sure what MODE changes yet. Model size sounds like MODE in DeepStack.
 
BI does not own the AI or the models. The models are generated by others, including a member here.
 
BI does not own the AI or the models. The models are generated by others, including a member here.
Great. At some point I also want to switch to a custom model to increase the accuracy. I think it's a lot of work, though: model training requires a good training algorithm and thousands of annotated images.
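If I ever get that far, the training step itself in the YOLOv5 repo looks fairly mechanical; the annotated data really is the hard part. A rough sketch of what a custom run would look like, assuming the ultralytics/yolov5 repo is cloned with its requirements installed (the dataset paths and class names below are just placeholders):

```
# Rough sketch of a YOLOv5 custom training run; assumes the ultralytics/yolov5
# repo is cloned and its requirements installed. Paths and class names are
# placeholders for your own annotated dataset.
from pathlib import Path
import subprocess

# dataset.yaml tells YOLOv5 where the images/labels live and what the classes are
dataset_yaml = Path("dataset.yaml")
dataset_yaml.write_text(
    "path: ./my_dataset\n"   # root folder containing images/ and labels/
    "train: images/train\n"
    "val: images/val\n"
    "names:\n"
    "  0: person\n"
    "  1: vehicle\n"
)

# Fine-tune from a pretrained checkpoint; larger weights (yolov5m/l) train more
# slowly but usually end up more accurate.
subprocess.run(
    ["python", "train.py",
     "--img", "640",
     "--batch", "16",
     "--epochs", "100",
     "--data", str(dataset_yaml),
     "--weights", "yolov5s.pt"],
    check=True,
)
```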
 
I see from this post that a smaller model size is less accurate but faster.

Still not sure what MODE changes yet. Model size sounds like MODE in DeepStack.
If the training algorithm is good then yes, a larger model should be more accurate; if the training algorithm is the limiting factor, then they may get stuck. A model is an operation graph plus a set of filter weights. The operation graph is fixed: that's the YOLOv5 architecture, etc. The filter weights are what must be set by training; they are large multidimensional tensors of data, and each value must be set as exactly as possible. Training algorithms only get closer and closer to that, which is why they have to be good, because there is no ideal way to train a neural network; usually they are just tweaked error-gradient methods.
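To make that concrete, here is a toy sketch (PyTorch, nothing specific to BI or CP AI): the operation graph below never changes, the optimizer only nudges the weight tensors along the error gradient.

```
# Toy illustration of "fixed graph, trained weights": the architecture never
# changes; training only adjusts the weight tensors along the loss gradient.
import torch
import torch.nn as nn

model = nn.Sequential(          # a fixed operation graph (stand-in for YOLO's)
    nn.Linear(8, 16), nn.ReLU(),
    nn.Linear(16, 1),
)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()

x = torch.randn(256, 8)         # dummy inputs
y = torch.randn(256, 1)         # dummy targets

for epoch in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)  # how wrong the current weights are
    loss.backward()              # error gradient w.r.t. every weight tensor
    optimizer.step()             # nudge the weights a little in that direction
```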
 
I've set the model size to large. My old GPU still processes frames in under 100 ms.
Fast enough.
Yes, YOLO is a small net. It's also a question whether BI resizes main stream frames to a certain resolution or not. I've read a post somewhere where somebody wrote that it does resize; I hope not. What I have noticed is that when it calls the AI server with an 8 MP camera's main stream images (I also set 10 additional images after the detection), the CPU load (i7 8700) peaks at 100 % for approximately 1 second. That may be the effect of a resize, but I have no information about that either...
There should be a massive, well-trained model for these GPUs. I prefer accuracy and alert reliability to power consumption.
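One way to check whether resolution is really what costs the time would be to send the same snapshot at full size and downscaled and compare the round trips. A rough sketch, assuming CodeProject.AI is listening on its default port 32168 and a full-resolution grab is saved as snapshot.jpg (adjust both for your own setup):

```
# Rough check of how much frame resolution affects the detection round trip.
# Assumes CodeProject.AI Server on its default port (32168) and a saved
# full-resolution snapshot; adjust the URL and filename for your setup.
import time
import cv2
import requests

URL = "http://localhost:32168/v1/vision/detection"

def detect(jpeg_bytes):
    t0 = time.time()
    r = requests.post(URL, files={"image": ("frame.jpg", jpeg_bytes, "image/jpeg")})
    return (time.time() - t0) * 1000, r.json().get("predictions", [])

frame = cv2.imread("snapshot.jpg")        # e.g. an 8 MP main stream grab
small = cv2.resize(frame, (1280, 720))    # roughly sub stream sized

for name, img in [("full", frame), ("720p", small)]:
    ok, buf = cv2.imencode(".jpg", img)
    ms, preds = detect(buf.tobytes())
    print(f"{name}: {ms:.0f} ms, {len(preds)} objects")
```

If the two times come out close, most of the cost is probably in decoding and copying the frame rather than in the model itself.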
 
Coming soon to CodeProject.AI
  • Updated ALPR module v2.5
    • Optimize OCR to improve accuracy
    • Detects and reads multi-line license plates
  • New Object Detection Module that works with the Orange Pi 5/5B/5 Plus. This is an all-in-one solution; it has a built-in NPU similar to the Coral but faster.
 
New Object Detection Module that works with the Orange Pi 5/5B/5 Plus. This is an all-in-one solution; it has a built-in NPU similar to the Coral but faster.


Do all of the Orange Pis listed perform the same?
 
I may be a bit late to the party, but I am having fun! I am running an i7-8700 with built-in Intel graphics. Which YOLO module would be optimal, do you think? The new features sound pretty ace.
 
A few things to check:
Check Use GPU; Default object detection needs to be unchecked if you use Custom Models...

Here are my settings, maybe this will help you

The camera AI settings are up to you; for example, you can uncheck Burn label, or Use main stream if you have a sub stream, etc.

I have another Clone CAM using the delivery custom model

HTH
Hi,

I eventually got CP AI working on the GPU. I just stopped Face Detection and disabled half precision. It may work with face detection enabled; I haven't tried since.
The AI evaluation loads the CPU significantly, however. There are many trees and spider webs around the cameras (currently 24 are connected), so I set a 3 second alert period and 2 additional images 750 ms apart after each trigger, using the main stream.
Now the CPU (i7 8700) is at around 90 % average load, while the CUDA GPU load is not significant. In about 12 hours the CP AI server did 700k inferences with the YoloV5 6.2 custom models added.
I will optimize it later somehow. The constantly moving objects like tree leaves and spider webs make it difficult to trigger when a person is actually there, so for now I just set the 3 second period and 3*0.75 second images for many cameras.
However, recording seems to happen for all alerts, not only for alerts confirmed by the AI.
Does somebody know a way to record only when the AI "To confirm" criterion, with the percentage set, has been met for the alarm?

Thanks,
Gyula
 

What difference, if any, does selecting the 'Black and white' option under the BI motion trigger options make?

I have my cameras in colour mode during the night, with some white light illuminating the outside areas. The snapshots still have some darkish areas.
Thanks
 
I may be a bit late to the party, but I am having fun! I am running an i7-8700 with inbuilt Intel graphics. Which Yolo would be optimal do you think? New features sound pretty ace
I find Object Detection (YOLOv5 .NET) works the best. You can test it yourself using the Explorer, as in the screenshots below. Just remember you can only have one Object Detection module enabled at a time.

[Screenshots: CodeProject.AI Explorer detection test results]
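If you would rather script the test than click through the Explorer, the same check can be run against the server's HTTP API. A rough sketch; the default port and the ipcam-general model name are assumptions, so substitute whatever you actually have loaded:

```
# Scripted version of the Explorer test: list the loaded custom models, then run
# one against a test image. The port and the ipcam-general model name are
# assumptions; substitute your own.
import requests

BASE = "http://localhost:32168/v1/vision"

models = requests.post(f"{BASE}/custom/list").json()
print("Loaded custom models:", models.get("models", []))

with open("test.jpg", "rb") as f:
    resp = requests.post(
        f"{BASE}/custom/ipcam-general",
        files={"image": f},
        data={"min_confidence": 0.4},
    ).json()

for p in resp.get("predictions", []):
    print(p["label"], round(p["confidence"], 2))
```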
 
Hi,

I eventually got CP AI working on the GPU. I just stopped Face Detection and disabled half precision. It may work with face detection enabled; I haven't tried since.
The AI evaluation loads the CPU significantly, however. There are many trees and spider webs around the cameras (currently 24 are connected), so I set a 3 second alert period and 2 additional images 750 ms apart after each trigger, using the main stream.
Now the CPU (i7 8700) is at around 90 % average load, while the CUDA GPU load is not significant. In about 12 hours the CP AI server did 700k inferences with the YoloV5 6.2 custom models added.
I will optimize it later somehow. The constantly moving objects like tree leaves and spider webs make it difficult to trigger when a person is actually there, so for now I just set the 3 second period and 3*0.75 second images for many cameras.
However, recording seems to happen for all alerts, not only for alerts confirmed by the AI.
Does somebody know a way to record only when the AI "To confirm" criterion, with the percentage set, has been met for the alarm?

Thanks,
Gyula
Taking one aspect at a time. Your CPU seems to be struggling. Have you considered enabling your GPU within BI to reduce the BI CPU load?

Are you recording continuously and capturing events or only wishing to record when an event happens that meets your criteria? Both are possible.
 
Taking one aspect at a time. Your CPU seems to be struggling. Have you considered enabling your GPU within BI to reduce the BI CPU load?

Are you recording continuously and capturing events or only wishing to record when an event happens that meets your criteria? Both are possible.
Hi,

The AI inference is done on the Nvidia GPU. The processor's integrated GPU decodes the video stream, and the processor copies the frames, I think, and possibly resizes them. Unfortunately, if the trigger frame is not an I-frame then it requires decoding multiple frames. That might be a reason why the load is so high, but it would be nice to see step by step how BI exactly works. You can see the Nvidia GPU load in the bottom-left cmd window.
I use the Nvidia card only for CUDA; the display runs off the motherboard's graphics. I don't know if that is a disadvantageous setup. I may try running everything from the Nvidia card later.
I would like to record only when the alert is confirmed by the AI result, but I haven't found a setting for that. Now the recording starts on the alert, and that is roughly 3 seconds away from the event on the video stream, even though I set 12 seconds of pre-trigger recording. Things to sort out...
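On the I-frame point, at least the camera side can be checked: a rough sketch using ffprobe (assuming it is installed; replace the RTSP URL with your camera's) to see how far apart the keyframes actually are.

```
# Rough check of a camera's keyframe spacing: the further apart the I-frames,
# the more frames have to be decoded to reconstruct an arbitrary trigger frame.
# Assumes ffprobe is installed; the RTSP URL is a placeholder.
import subprocess

RTSP = "rtsp://user:pass@192.168.1.100:554/stream1"   # placeholder

out = subprocess.run(
    ["ffprobe", "-v", "quiet", "-select_streams", "v:0",
     "-show_entries", "frame=pict_type", "-of", "csv=p=0",
     "-read_intervals", "%+5",          # sample roughly the first 5 seconds
     RTSP],
    capture_output=True, text=True, check=True,
).stdout.split()

i_frames = out.count("I")
print(f"{len(out)} frames sampled, {i_frames} I-frames "
      f"(about 1 every {len(out) // max(i_frames, 1)} frames)")
```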
 
Coming soon to CodeProject.AI
  • New Object Detection Module that works with the Orange Pi 5/5B/5 Plus. This is an all-in-one solution; it has a built-in NPU similar to the Coral but faster.

Are we SOL for Coral support on the Windows platform? I would have thought more people are using Coral on Windows than a fairly niche platform like the Orange Pi.
 