CodeProject.AI Version 2.5

@MikeLud1 (or anyone who is in the know), is there a reference page that gives the recommended CPAI detect module <> hardware accelerator pairings?
What I mean is: is it spelled out anywhere for folks who have (asking for myself) a GeForce 1660 Ti what the recommended CPAI detection module is?
I am switching from an Intel NUC-style box to a VM with said 1660 Ti.
The detect modules mention the CUDA range, but not being CUDA savvy, I was curious if there was a page that said something like:
detect module YOLOv5 6.2 = if you have an NVIDIA model x through y.
detect module YOLOv8 = if you have an NVIDIA model x through z.

I suspect there is no such thing because
A) my Google-fu is not terrible.
B) someone would have to build said page and then maintain it.

And aside from your general recommendation I read in our forums, I do not really understand which of the detect modules is "better" suited to a given build-out. Like, what makes YOLOv8 better (or not) over YOLOv5 .NET?


Thank you to yourself and the others that make these projects successful. As end abusers we typically do not say it enough.
 
Hi

I have BI set up with approximately 15 cams running (Hikvisions); some of these are clone cams for AI alerts. I am detecting 'person' only, using ipcam-combined.

Is this the best model to use with YOLOv5 .NET 1.9.2, or is there a faster way to process?

I typically set it to detect 5 images at 500 ms.

I notice that when triggered and detecting, CPU spikes to 100% for a few seconds and then drops down to around 15%.

Is the spiking normal, or is there something I can do to help with this? I have a Lenovo AIO PC with 16 GB RAM:

CPU - i5-10400T @ 2.00GHz
6 cores / 12 threads
Intel UHD 630 GPU


Status logs in CPAI report around 250 ms to 1500 ms at times, or higher.

I also get random shutdowns.

Use Main Stream is unticked.

Any advice/help is appreciated.

thanks
You should be using ipcam-general, which only detects people and vehicles.
 
On the left is the jpeg from the BI Alerts directory, on the right is what it sent to CPAI. This explains non-detection of the truck.

View attachment 184850 View attachment 184851

The blacked-out area is what I have excluded from BI's motion detection, so it doesn't trigger on cars on the road. The alert says it was triggered by ONVIF, so I wouldn't expect BI to apply the motion detection mask. I'd also expect the jpeg in the Alerts folder to be exactly what it sends to CPAI. Maybe there's some logic here I don't understand? I'm thinking I can maybe get around this by cloning the camera; I haven't learned cloning yet, so I need to learn it tomorrow. It looks like the BI watermark isn't sent to CPAI either.
BI applies the detection mask regardless of the type of trigger. I guess the logic is that if you don't want BI to trigger on motion in that area, then it is not important to you. Why is it that you are using both ONVIF and BI motion detection, but excluding that area only in Blue Iris and not in the camera?
 
@MikeLud1 (or anyone who is in the know), is there a reference page that gives the recommended CPAI detect module <> hardware accelerator pairings?
The best model setup will really depend on your hardware setup: how many cameras you have, how often they trigger, their resolution, what accuracy you want, what you're trying to identify, what CPU you have, and how hard you want to push it. I would set up everything else first, and then dial in the model size based on how many spare CPU cycles you have/want.

Replace 'CPU' with 'CPU & GPU' if that works better for your setup, but regardless of your GPU, your CPU will always be spending a lot of cycles schlepping frames around and resizing them.

I've been working on this new multi-TPU code and have gotten it working with MobileNet, EfficientDet, YOLOv5, & YOLOv8. With that said, there's probably no reason to choose any model older than YOLOv8. (The custom models haven't been updated yet, though.) I would personally choose to run the small model if I only had one or two TPUs, and the medium or large models with more TPUs.

Edit: you can also do optimizations such as running inference on the sub-streams instead of the 4K images. YOLOv8 also allows you to export at arbitrary non-square sizes, so I've been playing with running inference on 640x416-pixel inputs to both use all of your tensors and keep a reasonable aspect ratio.

Edit 2: if you're interested in what YOLOv8 is, I'd start here:

You really don't need to know the details. Basically, it's better in every way than its predecessors. (In theory.)
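A quick way to see the aspect-ratio point: letterboxing a 16:9 frame into a square model input wastes a lot of the tensor on padding. A small sketch of the math (illustrative only, not CPAI's actual code):

```python
def letterbox_size(src_w, src_h, dst_w, dst_h):
    """Scale a src_w x src_h frame to fit a dst_w x dst_h model input,
    preserving aspect ratio; the remainder becomes dead padding."""
    scale = min(dst_w / src_w, dst_h / src_h)
    new_w, new_h = round(src_w * scale), round(src_h * scale)
    return new_w, new_h, dst_w - new_w, dst_h - new_h  # (w, h, pad_w, pad_h)

# A 1920x1080 frame in a square 640x640 input: 280 rows of padding
print(letterbox_size(1920, 1080, 640, 640))  # (640, 360, 0, 280)
# The same frame in a 640x416 input: only 56 rows of padding
print(letterbox_size(1920, 1080, 640, 416))  # (640, 360, 0, 56)
```

Roughly 44% of a 640x640 input is padding for a 16:9 frame, versus about 13% at 640x416 - that's the "use all of your tensors" gain.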
 
Why is it that you are using both ONVIF and BI motion detection, but excluding that area only in Blue Iris and not in the camera?
The area (the road next to the driveway) is excluded in the camera by how the IVS trigger lines are drawn. Anyway, I think I solved this by cloning the camera, using one clone for IVS detection and the other for BI motion detection. Now I'm "pulling my hair out" trying to dial in the BI motion detection to trigger on what I want (animals) without a lot of false triggers. It's just going to take some time.
 
I still don't understand why you need both ONVIF and BI motion detection. If you explain the reason, perhaps there is a better alternative than cloning.
Other than some CPU time, there is nothing wrong with false triggers if you use CPAI for alerting. This way nothing is missed in high-res recording, but you only get notifications on confirmed alerts.
 
I hope I can say this in an understandable way. This one camera covers the foot of my driveway, where I want to capture cars and people, and also covers a lot of open ground where I want to capture animals and people. The camera is a 4K-X with IVS that does only cars and people. In my very short time of comparing the IVS with CPAI, I have concluded that the camera is clearly superior at detecting people and vehicles. I therefore want to retain the IVS, which does zip for picking up the animals, and that's the reason for BI motion detection and CPAI. The camera's basic motion detection sucks to the point of being useless. I'm assuming that BI's motion detection is better, but I haven't been able to get it set up well yet. There's usually a fairly good stream of deer in the camera's view, but this week they have disappeared. I walked through the camera's coverage a bunch of times today to test the BI motion detection. Each test takes a half-mile round trip, so this is going to take me a while.

I'm finding CPAI to be hit-and-miss, sometimes working really well. With a different camera today I had a CPAI false confirmation of a horse. I brought up the clip, and there was a squirrel running through that had triggered the IVS. I don't care if the animal type is wrong. But digging deeper revealed that CPAI totally missed the squirrel and identified a shadow on a rock as the horse. Without that interesting shadow, BI+CPAI would have thrown out the legitimate hit.

The Dahua cameras with IVS that picks up any object are really good at triggering on animals. They're also really good at triggering on shadows, bugs, and moving tree branches, creating a lot of false positives. If CPAI can reliably throw out the false positives without throwing out legitimate hits, that's huge.

I do truly appreciate the help offered on the forum. I wouldn't get anywhere with CPAI on my own.
 
When you get BI/CPAI set up to properly detect animals, can you share your settings? I will be setting our cameras up the same way once I install them on our new property. The wife drools over seeing deer and rabbits. Our dog drools over squirrels and all animals her size or smaller. :) The bull and cows behind us, she freezes at, haha. For me, I want to capture hogs, coons, hopefully no coyotes, etc...
 
Also, any ideas: if I set my cameras to substream only for AI detection, how do I get the trigger event to record the mainstream?
Do I need to have main and sub on the AI camera and tick "Use Mainstream if available"?
The aim is to reduce the 100% CPU spikes from a few AI cameras being triggered at the same time.
Or is that just one of those things that can't be avoided?
thanks
 

No, you do not need to check the Use Mainstream box to get video in the mainstream.

Also keep in mind that if you use the mainstream, CPAI downsizes it before processing, so for most instances using the substream is fine.

Go into the record options and select Continuous sub + triggers, as this will record the substream until triggered and then switch to the mainstream while triggered.
 
OK, thanks - I have Hikvision recording 24/7, so I tend to just use BI for triggered events.
I have now set the substreams and CPU is lower, so that's good.

Is it possible with an AI event (i.e., person detected) to capture a period of time before the event? I find sometimes the recorded clip misses the actual event that triggered it.

I have other cams running that capture standard non-AI event recordings when triggered; these allow me to capture a period of time before the event.

Can the same be achieved when AI creates the alert to start recording?

thanks
 

Yes, under the RECORD tab, set the pre-trigger amount to however long you want.
 
Yes, I have that set already, but for some reason the recording either starts at the point of AI detecting the person (so no pre-trigger time) or after the event, and I capture them walking away.
Any ideas?

UPDATE - that's my bad. I hadn't spotted that video playback jumps to the event start, and you can rewind the clip to see the pre-alert time.

Is this also possible within the Android BI app? I can see there are a few seconds of pre-event time, but I can't rewind in the Android app for some reason. Anyone know if it's possible?

thanks
 
Thanks for the 411 @mailseth
I did read about YOLOv8 last night, was impressed, and installed it. I switched from the tried and true YOLOv5 (MLD or whatever it's called) to YOLOv8.
It did not trigger on any motions. After restarts and other misc efforts, I flipped back to the tried and true YOLOv5.

My host is an EPYC with plenty of CPU/RAM etc., which is far more than my 5 T5442T-Z's need. I have been quite happy with YOLOv5 and the NUC for over a year. All the AI, camera, and model sizes were dialed in handsomely and gave great results, even at night.
We use the cams to capture wildlife on our property.
I may give YOLOv8 another shot, but after several hours of it not doing what it should, I fell back to a more defensible position. :)

Again, appreciate the input!
 
Sounds like there's probably a bug in there. I'd wait for another release and see if it gets fixed. Or, if you're feeling energetic, you could work on debugging it yourself. To do that, you'd probably start by uploading some known-good images directly to the CPAI interface and seeing if that returns good results. But I'm not a good person for guiding you through the process.
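For that "known-good images" step, something like the sketch below will push a jpeg at the CPAI server's default detection endpoint and summarize what comes back. The URL, port, and response fields are CPAI's defaults as I understand them - treat them as assumptions and adjust for your install:

```python
import json
import urllib.request
import uuid

CPAI_URL = "http://localhost:32168/v1/vision/detection"  # assumed default CPAI server

def summarize(response_json, min_confidence=0.4):
    """Reduce a CPAI response to (label, confidence) pairs above a threshold."""
    return [(p["label"], round(p["confidence"], 2))
            for p in response_json.get("predictions", [])
            if p["confidence"] >= min_confidence]

def detect(image_path):
    """POST one image to CPAI as multipart form data, return interesting hits."""
    boundary = uuid.uuid4().hex
    with open(image_path, "rb") as f:
        img = f.read()
    head = (f"--{boundary}\r\n"
            'Content-Disposition: form-data; name="image"; filename="test.jpg"\r\n'
            "Content-Type: image/jpeg\r\n\r\n").encode()
    body = head + img + f"\r\n--{boundary}--\r\n".encode()
    req = urllib.request.Request(
        CPAI_URL, data=body,
        headers={"Content-Type": f"multipart/form-data; boundary={boundary}"})
    with urllib.request.urlopen(req, timeout=30) as resp:
        return summarize(json.loads(resp.read()))
```

If a frame that YOLOv5 handles fine comes back empty from the YOLOv8 module here, that points at the module rather than the BI side.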
 
When you get BI/CPAI set up to properly detect animals, can you share your settings?
I'm not particularly optimistic at the moment, but I need a lot more experience before drawing final conclusions. Yesterday CPAI missed a coyote in plain daylight, and a rabbit at night. The coyote hit was rejected, and the rabbit hit was confirmed as a vehicle - not because of the rabbit, but because a shadow was identified as a vehicle. This makes me start thinking that CPAI has a real handicap: the camera and/or BI knows where in the image it detects motion, but CPAI has to search the entire frame looking for something. I assume the technology will improve over time. Perhaps BI sending only the area of motion, instead of the whole frame, to CPAI is one way. This wouldn't help with IVS hits, however, because BI doesn't know where the motion is in the image.

As a long-term NVR user, the big surprise to me is how much faster I can comb through the false triggers with BI. I don't know where this BI+CPAI adventure will end up. It could possibly be BI without CPAI. Not everything about BI is better, but the faster clip reviewing might be the elephant in the room.
 

You are now seeing why some of us that started with an NVR praise the footage-review capability in BI compared to our experiences with an NVR!

Like everything, nothing is perfect, and while you and I and others have great experience with IVS rules, we see people come here whom it fails - usually as a result of their setup: trying to do too much with one field of view, or a camera mounted too high, etc.

CPAI is still in its "infancy" and has come a long way in a short amount of time. I have noticed there is more upfront work on our end to ensure CPAI is getting the image that we want it to see and analyze. Garbage in = garbage out certainly applies in this situation.

I am sure you are doing this already, but the .dat files are invaluable while you troubleshoot to figure out which images are going to CPAI. Then run a static image through CPAI to confirm it can trigger on the object you want when presented with the correct image. After that it is just a matter (and sometimes an overwhelming matter) of figuring out the right combination of things to improve its ability to send the right photo.

And sometimes the images that CPAI was trained with might be too different from the field-of-view images we generate. For that reason, some have gone to the effort of training their own model.

I am sure you will get there with enough playing with it!
 
I don't have the time to explore it myself, but I'd be very curious whether this is a size issue. For example, the cat picture you posted a while ago had the cat very small relative to the entire frame. Same with the car picture looking down on the road. The AI models only work at 300x300 to 640x640 resolution, and at that resolution it might be hard for anyone to pick out a cat within the frame. I'd like to know if you get better results by splitting the full frame into 2x2 or 3x3 tiles and then running each of those images through CPAI. You could experiment with this manually using an image editor that can crop images, then submit the 4 (or 9) sub-image tiles to the CPAI web interface. See if you start picking up the detections that you're looking for. Does that make sense?

It's not something that can really be done by default, because not only are you running 4x or 9x the number of images through inference, you're also limiting the objects that can be recognized at a large scale. For example, when I do this with my cameras, the accuracy drops fast because they're zoomed in too much. But if you have a concrete use case and concrete results, it's something we can think about supporting. Perhaps there's a user option that can be exposed.

I have a few cameras that I'd be interested in doing this on if it works for you.
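The 2x2/3x3 experiment can be scripted instead of done in an image editor. A sketch of the crop math (`tile_boxes` is a hypothetical helper, not part of BI or CPAI; the actual cropping assumes Pillow is installed):

```python
def tile_boxes(width, height, rows, cols):
    """Return (left, top, right, bottom) crop boxes for a rows x cols split."""
    return [(c * width // cols, r * height // rows,
             (c + 1) * width // cols, (r + 1) * height // rows)
            for r in range(rows) for c in range(cols)]

# With Pillow, each tile can then be submitted to CPAI separately:
#   from PIL import Image
#   img = Image.open("frame.jpg")
#   tiles = [img.crop(box) for box in tile_boxes(*img.size, 2, 2)]

print(tile_boxes(3840, 2160, 2, 2))
# [(0, 0, 1920, 1080), (1920, 0, 3840, 1080),
#  (0, 1080, 1920, 2160), (1920, 1080, 3840, 2160)]
```

At 4K, each 2x2 tile is still 1920x1080, so a small animal occupies roughly four times as many pixels after the model's downscale to its input size.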
 
I agree with both @wittaj and @mailseth.

My experience has been that Deepstack worked very well for my cameras, one being a doorbell cam and one 5442 being my front yard/driveway cam. I switched to CPAI when Deepstack support stopped, and after a month or so I got CPAI tweaked back to what I was getting in Deepstack. Detecting persons, vehicles, delivery vehicles - it all worked for me. Presently I am down to just my Reolink doorbell camera, which in its new location is doing exactly what I want it to do: it lets me know when a person approaches our house.

@tigerwillow1 I would give this a bit more time. Also, I now have a lot of property, not a 1/4 of an acre like our old house. My cameras there were easy to get proper detection on since everything was close. I know when I set up my cameras here, AI detection at 50-100 feet away will be much different. I will just have to move our deer feeders closer, lol, but not too close - don't need hogs and coons tearing up our yard ;) Hang in there; I had many posts in the beginning of my AI adventure. Think of how long it took us all to get our cams set up just right, but never perfect.

I am pretty sure I am going to need a cam close to our feeder :cool:
 