5.5.8 - June 13, 2022 - CodeProject's SenseAI Version 1 - See V2 here: https://ipcamtalk.com/threads/codeproject-ai-version-2-0.68030/

I'm really looking forward to implementing this feature; I think it's really cool. Should this image capture show that it's an Amazon delivery vehicle?
 

Attachment: 2022-12-21 18_23_54-.png
Thanks, done. Any suggestions for the post above? "I'm really looking forward to implementing this feature; I think it's really cool. Should this image capture show that it's an Amazon delivery vehicle?"
 
It might detect the Amazon delivery vehicle; I don't use that model myself. I have seen other people with a similar FOV where it did not work. The other models should detect it as a vehicle.
 
Maybe you could train it on the Amazon logo instead. I'm guessing a few close-ups of the Amazon logo might help. There's a risk of misdetection from training on a close-up rather than the whole vehicle, but then again, how many people have naked erect penises wandering in their garden?
 
I'm just being that annoying person checking whether there are any updates on the new version with the ALPR module :D Lots of snow and ice, and work is closed, so I've got free time. It's either gaming or tech'ing (made-up word, lol). Hope everyone is staying safe and warm.
 
At this point, if I were the developers, I would wait until after Christmas to release it. The last thing they would want is to have to put out fires with a new release over the Christmas holiday.
 
One thing I've wondered about: if BI can draw a box around the area in which it sees motion, why can't it send just what's in that box to the AI for analysis? I have a camera that looks down our driveway to a cul-de-sac. BI often triggers motion on vehicles in the cul-de-sac, but when CPAI or DeepStack processes the entire image, a vehicle is rarely identified. I just took a still from a cancelled motion capture (nothing found), cropped it down to just the cul-de-sac, and had the CPAI Explorer scan that cropped image; it came back with a 93% match as a vehicle.
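
A minimal sketch of the idea, for anyone who wants to try it outside of BI. It assumes a local CodeProject.AI server on its default port (32168) with the DeepStack-compatible /v1/vision/detection endpoint; the crop coordinates below are a hypothetical stand-in for the rectangle BI draws around motion:

```python
# Sketch: crop a frame to the motion box before sending it for detection.
# Assumes a local CodeProject.AI server on its default port (32168) with the
# DeepStack-compatible /v1/vision/detection endpoint; the crop coordinates
# below are a hypothetical stand-in for BI's motion rectangle.
import io

import requests
from PIL import Image

CPAI_URL = "http://localhost:32168/v1/vision/detection"

def detect(image: Image.Image, min_confidence: float = 0.4) -> list:
    """POST a PIL image to CodeProject.AI and return its predictions list."""
    buf = io.BytesIO()
    image.save(buf, format="JPEG")
    buf.seek(0)
    resp = requests.post(
        CPAI_URL,
        files={"image": ("frame.jpg", buf, "image/jpeg")},
        data={"min_confidence": min_confidence},
    )
    resp.raise_for_status()
    return resp.json().get("predictions", [])

frame = Image.open("alert_still.jpg")  # hypothetical exported BI still

# Full frame: the cul-de-sac vehicle is a tiny fraction of the pixels.
print("full frame:", detect(frame))

# Cropped to the (hypothetical) motion box: the vehicle dominates the image.
motion_box = (1800, 400, 2600, 900)  # left, top, right, bottom in pixels
print("cropped:  ", detect(frame.crop(motion_box)))
```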
Good point, digger11. I guess there may be two problems with doing this: firstly, at least on my system, the box drawn around the motion may not be synchronised well enough with the moving vehicle, and secondly, would doing this use too much CPU time?

I suppose the reason the cropped image does a much better job is that, so far as I know, CPAI works on a resized image. Assuming this is true, a vehicle in your full cul-de-sac image will comprise far fewer pixels than the vehicle in a similarly resized cropped image, most of which is the vehicle.
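
To put rough numbers on that (the 640-pixel detector input width is an assumption for illustration; substitute whatever the detector actually uses):

```python
# Back-of-the-envelope: how many pixels wide an object remains after the
# detector resizes the frame. The 640 px input width is an assumption.
def scaled_width(object_px: int, frame_px: int, detector_px: int = 640) -> float:
    return object_px * detector_px / frame_px

print(scaled_width(200, 3840))  # vehicle in a full 4K-wide frame -> ~33 px, marginal
print(scaled_width(200, 800))   # same vehicle in an 800 px crop  -> 160 px, easy
```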
 
"…then again, how many people have naked erect penises wandering in their garden?"

I'm not sure how many others there are, but I actually do have a lot of these stinkhorn mushrooms popping up near where I cut down an apricot tree years ago:

[image: stinkhorn mushrooms]


Now, thanks to this thread, I'll think of Amazon next time I see them. Fortunately that's in my backyard, and the deliveries AI model is only used on one of my front cameras. Anyhow, they don't move fast enough, so they would be labeled "occupied" anyway. ;)
 
I would love to see if there's a way for CP.AI to review the entire image and compare against the results, rather than only matching when the trained images are at the same scale as the object in the image, like you mentioned. For instance, my 180-degree camera that covers the entire front of my house isn't detecting a person in the side yard/driveway next to my house, and even my regular 4K camera covering the same area barely identifies a person there, and only in a select number of frames. Then none of them seem to identify people or kids walking on the sidewalk across the street, or playing in the yard across the street when my son goes over to the neighbor's house to play.
 
@XDRDX I believe the people in your images are simply too small, especially the ones across the street. I also believe the image processing doesn’t happen on a full 4k image, at least not by default, even if that’s what the camera supports. At least with the DeepStack implementation, I believe it explicitly reduced all images to 640x480. If you reduce all your sample images (posted here) to be only 640 or 800 pixels wide, or whatever the detection actually uses, I’m not sure even a human eye could pick out a head, legs, and arms. In this regard, a single 180-degree image is worse for recognition than two separate images that have not been stitched together.

Perhaps this kind of wide-angle, high-resolution image justifies using the original resolution, at the cost of a massive amount of CPU or GPU, but shouldn't that decision at least be an option left to the end user? Of course, then we'll have people complaining that it slows down the whole system.
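
One compromise between those two extremes would be tiling: split the wide frame into overlapping chunks, detect on each, and map the boxes back, so small subjects keep their pixels. A rough sketch, where the tile size and overlap are arbitrary and detect() is the hypothetical CodeProject.AI call from the earlier sketch:

```python
# Sketch: tile a wide, high-res frame so small subjects survive the resize.
# detect() is the hypothetical CodeProject.AI call from the earlier sketch;
# predictions are assumed to use DeepStack-style x_min/y_min/x_max/y_max keys.
from PIL import Image

def tiled_detect(frame: Image.Image, tile: int = 1280, overlap: int = 160) -> list:
    hits = []
    step = tile - overlap
    for top in range(0, max(frame.height - overlap, 1), step):
        for left in range(0, max(frame.width - overlap, 1), step):
            box = (left, top,
                   min(left + tile, frame.width),
                   min(top + tile, frame.height))
            for p in detect(frame.crop(box)):
                # Map tile-local coordinates back into full-frame coordinates.
                p["x_min"] += left; p["x_max"] += left
                p["y_min"] += top;  p["y_max"] += top
                hits.append(p)
    return hits  # duplicates in the overlap zones would still need merging (NMS)
```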
 
Most people disable high-definition processing for AI; however, I have it enabled on all of mine, since I only have 8x 4K cameras and an RTX 2060 8GB card dedicated to this processing on a test system with an i9-9900 CPU. On my primary system I'm using an i5-9500 doing CPU processing, also in high definition. Neither of them seems able to catch it. I did find through testing this afternoon, though, that leaving default object detection enabled and choosing the Tiny option rather than Medium lets it detect objects further away. Still not perfect, but better.
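
If anyone wants to reproduce that Tiny-vs-Medium comparison outside of BI: CP.AI's object detection module is YOLOv5-based, so pulling the yolov5n ("tiny") and yolov5m (medium) checkpoints via torch.hub is a reasonable stand-in, though not the exact weights CP.AI ships, and it downloads the models on first run:

```python
# Sketch: compare small vs. medium YOLOv5 checkpoints on the same exported still.
# These torch.hub models are stand-ins for CP.AI's YOLOv5-based module, not the
# exact weights it ships; "front_yard_still.jpg" is a hypothetical BI export.
import torch

for name in ("yolov5n", "yolov5m"):
    model = torch.hub.load("ultralytics/yolov5", name)
    results = model("front_yard_still.jpg")
    print(name)
    print(results.pandas().xyxy[0][["name", "confidence"]])
```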
 
Yep, classic case of trying to do too much with one field of view.

Even using high res, it downrezes before it is analyzed.
 
Ah, that's the part I didn't know (the downrezing). I've just been racking my brain trying to figure out why it wasn't being detected. I guess I need to start figuring out line crossing with BI rather than just relying on AI detection. Any idea why the regular single camera still only barely detects it sometimes? The last pic was from the Color4K-T, and it still only detected sometimes.

The 20 mph street sign and mailbox across the street to the left frequently get falsely triggered as a person, so I figured it should be able to identify a person who was actually in my yard, and bigger, without a problem…
 
Well, you've got a lot going on there: shooting through a window, lots of trees and other things, mounted up on the 2nd floor, and some distance you're trying to cover with 2.8mm or 3.6mm cams. Each one of those can contribute to failures.

Next you'll say you're on default settings LOL.

Does the 4K-T pick it up if you use a smart plan with IVS for human detection? If so, run with that and feed the triggers back into BI, unless you really need the rectangles and labels LOL.
 
The AI will identify smaller objects. Are you certain these images are being sent to the AI? Did you look at the AI logs to see the images actually sent to it?
 
I open the BI log box with the AI tab, and it identifies my car in my driveway on the left; just nothing on the right side seems to get identified. I believe the IVS/Smart Plan detection is running on the camera itself and feeding the events into BI, but I usually use the AI to verify events because I like knowing whether it's a person or a vehicle in the timeline/alerts filter. That makes it a lot easier to pick out the events I want to review from the day; I usually go through all the person events quickly with the mouse-over preview in UI3. We have a fairly busy street, though, so having to review all events rather than just person events makes a large difference. I'm not 100% certain those camera events are feeding into BI properly, though. The few Reolink cameras I have record to SD card, and I can review them easily from the mobile app; those seem to feed events to BI for recording properly, with very few false positives. The Dahua and Hikvision cameras trigger a lot more false positives, but since I don't have the lines, boxes, or event information from them where I can easily see what's triggering, I haven't tried to tune them yet. I have just been using the AI from BI/CP.AI to filter out and reduce the false positives. I guess the next step would be to set up iVMS-4200 and Dahua DSS Express (or the other Dahua desktop app, I can't remember the name…) and see what is causing the false positives.
When I was testing it before deciding on Blue Iris, I did find that Dahua DSS Express actually supports adding third-party cameras and gives a good NVR-type experience, including line and smart detection, if you're only using Dahua cameras. It felt very similar to having a regular NVR appliance, but running on Windows.
I attached a couple of UI3 mobile screenshots of two people walking on the sidewalk across the street tonight. The ONVIF event was verified as Person by my 5842 from Andy, but my Annke NCD800/Hik 180-degree camera showed nothing found. On the non-demo 180-degree one I zoomed in, but it still showed nothing found. The BI demo one is my in-progress setup with the GPU; still working through some heat issues and getting that box set up with surveillance drives, etc…
 