Email chris.maunder@codeproject.com and sean.ewington@codeproject.com, let them know Mike Lud sent you, that you have a GeForce GT 710 with compute capability 3.5, and that you want to help test the Object Detection (YOLOv5 3.1) module.

"I have a GeForce GT 710 and am willing to test."
It might detect the Amazon Delivery vehicle, but I do not use that model; I have seen other people with a similar FOV where it did not work. The other models should detect it as a vehicle.

"Thanks, did it. Any suggestions for the above post: 'I'm so looking forward to implementing this feature, I think it's really cool. Should this image capture show that it is an Amazon Delivery vehicle?'"
how many people have naked erect penises wandering in their garden?
Good point, digger11. I guess there may be two problems with doing this: firstly, at least on my system, the box drawn around the motion may not be synchronised well enough with the moving vehicle, and secondly, would doing this use too much CPU time?

"One thing I've wondered about is, if BI can draw a box around the area in which it sees motion, why can't it send just what is in that box to the AI for analysis? I have a camera that looks down our driveway to a cul-de-sac. BI often triggers motion on vehicles in the cul-de-sac, but when CPAI or DS processes the entire image, rarely is a vehicle identified. I just took a still from a cancelled motion capture (nothing found), cropped it down to just the cul-de-sac, and had the CPAI Explorer scan that cropped image; it came back with a 93% match as a vehicle."
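The crop-then-detect experiment described above is easy to reproduce outside of BI. Here is a rough sketch using Pillow and requests; the `/v1/vision/detection` path follows CodeProject.AI Server's HTTP API, but the port, the `min_confidence` value, and the crop coordinates in the usage comment are assumptions for illustration, not values from the poster's setup.

```python
import io
import requests
from PIL import Image

def crop_to_jpeg(frame, box):
    """Crop the frame to `box` (left, top, right, bottom) and return JPEG bytes."""
    buf = io.BytesIO()
    Image.open(frame).crop(box).save(buf, format="JPEG")
    return buf.getvalue()

def detect(jpeg_bytes, server="http://localhost:32168"):
    """Send the cropped image to a CodeProject.AI Server detection endpoint."""
    resp = requests.post(
        f"{server}/v1/vision/detection",
        files={"image": ("crop.jpg", jpeg_bytes, "image/jpeg")},
        data={"min_confidence": 0.4},  # assumed threshold; tune for your cameras
    )
    return resp.json().get("predictions", [])

# Hypothetical usage, with made-up coordinates for the cul-de-sac region:
# for p in detect(crop_to_jpeg("still.jpg", (1200, 300, 1900, 700))):
#     print(p["label"], p["confidence"])
```

Sending only the motion region this way gives the detector far more pixels-per-object than the full frame does, which matches the 93% result on the cropped still.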
I was hoping for a Christmas present in the morning.

"At this point, if I were the developers, I would wait until after Christmas to release it. The last thing I'd think they would want is to have to put out fires with the new release over the Christmas holiday."
I would love to see if there's a way for CP.AI to review the entire image and compare against the results, rather than only matching when the trained images are at the same scale as the object in the image, like you mentioned. For instance, my 180-degree camera that covers the entire front of my house isn't detecting a person in the side yard/driveway next to my house, and even my 4K normal-size camera covering the same area barely identifies a person there, and only in a select number of frames. None of them seem to identify people or kids walking on the sidewalk across the street, or playing in the yard across the street when my son goes over to the neighbor's house to play.

"Good point, digger11. I guess there may be two problems with doing this: firstly, at least on my system, the box drawn around the motion may not be synchronised well enough with the moving vehicle, and secondly, would doing this use too much CPU time?"
I suppose the reason the cropped image does a much better job is that, so far as I know, CPAI uses a resized image. Assuming this is true, a vehicle in your full cul-de-sac image will comprise far fewer pixels than the vehicle in a similarly resized cropped image, most of which is the vehicle.
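The pixel arithmetic behind this is worth making concrete. The 640px detector input width below is an assumption for illustration (it matches the 640x480 figure mentioned later in the thread, but is not a confirmed CPAI internal), as are the frame and vehicle sizes:

```python
# How wide does an object end up, in pixels, once the frame is resized
# for detection? All sizes here are illustrative assumptions.
DETECTOR_WIDTH = 640  # assumed detector input width

def scaled_width(object_px, frame_px, target_px=DETECTOR_WIDTH):
    """Object width in pixels after resizing the frame to target_px wide."""
    return round(object_px * target_px / frame_px)

# A 200px-wide vehicle in a 3840px-wide (4K) frame vs. the same vehicle
# in a 700px-wide crop of the cul-de-sac:
print(scaled_width(200, 3840))  # -> 33 pixels: barely a smudge
print(scaled_width(200, 700))   # -> 183 pixels: easily recognisable
```

A 33px-wide blob is near the floor of what any detector can classify, while 183px is plenty, which is consistent with the cropped image scoring 93%.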
Most people disable high-definition processing for AI, but I have it enabled on all of mine, since I only have 8x 4K cameras and an RTX 2060 8 GB card for processing this alone on a test system; that system has an i9-9900 CPU. On my current primary system I am using an i5-9500 doing CPU processing, also in high definition. Neither of them seems to be able to catch it. I did find through testing this afternoon, though, that leaving default object detection enabled as well, and choosing the Tiny option rather than Medium, makes it able to detect objects further away. Still not perfect, but better.

"@XDRDX I believe the people in your images are simply too small, especially the ones across the street. I also believe the image processing doesn't happen on a full 4K image, at least not by default, even if that's what the camera supports. At least with the DeepStack implementation, I believe it explicitly reduced all images to 640x480. If you reduce all your sample images (posted here) to be only 640 or 800 pixels wide, or whatever the detection actually uses, I'm not sure even a human eye could pick out a head, legs, and arms. In this regard, a single 180-degree image is worse for recognition than two separate images that have not been stitched together.

Perhaps this kind of wide-angle, high-resolution image justifies using the original resolution, at the cost of a massive amount of CPU or GPU, but shouldn't that decision at least be an option left to the end user? Of course, then we'll have people complaining that it slows down the whole system."
Ah, that's the part I didn't know (downrezing). I've just been racking my brain trying to figure out why it wasn't being detected. I guess I need to start figuring out line crossing with BI rather than just relying on AI detection. Any idea why the regular single camera still only barely detects it sometimes? The last pic was from the Color4K-T and it still only detected it sometimes.

"Yep, classic case of trying to do too much with one field of view."
Even using high res, it downrezes before it is analyzed.
The AI will identify smaller images. Are you certain these images are being sent to the AI? Did you look at the AI logs to see the images actually sent to the AI?

"I would love to see if there's a way for CP.AI to review the entire image and compare against the results, rather than only matching when the trained images are at the same scale as the object in the image, like you mentioned. For instance, my 180-degree camera that covers the entire front of my house isn't detecting a person in the side yard/driveway next to my house, and even my 4K normal-size camera covering the same area barely identifies a person there, and only in a select number of frames. None of them seem to identify people or kids walking on the sidewalk across the street, or playing in the yard across the street when my son goes over to the neighbor's house to play."
I open the BI log box with the AI tab, and it identifies my car in my driveway on the left; just nothing from the right seems to get identified. I believe the IVS/Smart Plan detection is happening on the camera itself and that it is feeding the events into BI, but I usually use the AI to verify events because I like knowing whether it's a person or vehicle on the timeline/alerts filter. That way it's a lot easier to identify events I want to review from the day; I usually like to go through all the person events quickly with the mouse-over preview in UI3. We have a fairly busy street, though, so having to review all events rather than just person events makes a large difference.

I'm not 100% certain those are feeding to BI properly, though. The few Reolink cameras I have record to SD card, and I can review them easily from the mobile app; those seem to feed events to BI for recording properly, with very few false positives. The Dahua and Hikvision cameras get a lot more false positives triggered, but since I don't have the lines, boxes, or event information from those where I can easily see what's triggering them, I haven't tried to tune them yet. I have just been trying to use the AI from BI/CP.AI to filter and reduce the false positives. I guess the next step would be to set up iVMS-4200 and Dahua DSS Express, or the other Dahua desktop app, I can't remember the name..., and see what is causing the false positives.

"The AI will identify smaller images. Are you certain these images are being sent to the AI? Did you look at the AI logs to see the images actually sent to the AI?"