Yes, if you have any issues when closing this dialog (spinning wheel, etc.), wait about 30-60 seconds, then check whether the AI server is responding (open the web browser interface to check). If in doubt, reboot your server; that should clear it up.
Yup. That is what I do.
This feature, if I'm not mistaken, is still a work in progress. Thanks, guys.
Does anyone use face detection? I've tried playing around with it by enabling facial recognition in the AI options, and saving the unknown faces to 'new', however I can't find any pictures of faces in that folder. Has anyone else had any luck?
Not to doubt your findings, but I can confirm that any word that represents an "unlikely" object in the "To cancel" box will result in CPAI going through all of the collected images, rather than just the first few. Have you tried that as well, and found differences with your suggested setup?

Having the up-to-date BI and CP.AI, I've been experimenting with the different custom model folders. This works best for me:-
View attachment 142068
For cameras including vehicles, I also "Mark as vehicle" with car,bus,motorcycle.
I have very sensitive (simple algorithm, short minimum duration) trigger sensitivities for the cameras and it seems they do not miss any wanted alerts in the confirmed clip list. I do get loads of mistaken labels, but I can see what the images are in the clips list and eventually, after the dust has settled with CP, I won't even need the labels. ipcam-dark doesn't include "truck" but reports a truck as a "bus" so that's fine.
Although I'm using two model folders instead of simply using ipcam-combined, the CP.AI analysis times are short. And, the number of AI layers in each 14MB model folder is fixed, so I assume the image database for each label is far bigger when using two separate folders, giving better results.
The segregation between Confirmed and Cancelled seems to be nearly perfect, relegating all the images of moving shadows, insects, spiders, shaking webs, rain, fog and snow to the cancelled clip list.
Using "Nothing found:0" in the "To cancel" box is essential: it eliminates (green) "Nothing found" entries from the Confirmed alerts list. It forces the AI to search through all the images in an alert to select the best one, which is then nicely centred in the clips list. In the example below, without "Nothing found:0" in "To cancel", the car's headlights would cause a confirmed "Nothing found" to be reported in the confirmed clips list well before the car arrives.
View attachment 142102
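The confirm/cancel behaviour described above can be sketched as a simple label filter. This is only a toy illustration of the logic being discussed, not Blue Iris's actual implementation; here an empty label list stands for an image where the AI found nothing, and "Nothing found:0" in "To cancel" is modelled as skipping those images instead of confirming them.

```python
def classify_alert(images, to_confirm, to_cancel):
    """Decide whether an alert is confirmed or cancelled.

    `images` is a list of label lists, one per analysed image.
    An empty list means the AI returned no objects for that image.
    Toy model of the behaviour discussed above, not BI's real code.
    """
    for labels in images:
        if not labels:                        # AI found nothing in this image
            if "nothing found" in to_cancel:
                continue                      # cancelled; keep searching
            return "nothing found"            # lands in the confirmed list
        if any(l in to_cancel for l in labels):
            continue                          # unwanted object; keep searching
        if any(l in to_confirm for l in labels):
            return "confirmed"                # best image found
    return "cancelled"                        # nothing confirmed the alert

# Headlights arrive before the car: the first frames contain nothing.
alert = [[], [], ["car"]]

# With "nothing found" cancelled, empty frames are skipped and the car
# itself is eventually confirmed.
print(classify_alert(alert, {"car", "person"}, {"nothing found"}))  # confirmed

# Without it, the first empty frame is reported as a confirmed
# "nothing found", well before the car arrives.
print(classify_alert(alert, {"car", "person"}, set()))  # nothing found
```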
Why not give it a whirl and report your results?
I don't know why "Nothing found" ends up in the confirmed alerts list.
??? Read #966.

Basically, if you don't want "nothing found" to show up in the confirmed alerts list, don't include it in the "To confirm:" box.
I get this far more frequently for some reason under CodeProjectAI than I did previously under DeepStack. It occurs most often when a person is walking in front of a stationary vehicle. Anyone else notice that trend?
If someone is walking in front of the static vehicle, the image is no longer static. The pixels have changed which makes the AI think the image has changed and the object is moving/moved. This is also why sometimes a static vehicle will trigger at night. If headlights point on the static vehicle, the pixels likely changed from a dark object to an illuminated object so the AI again thinks it has moved.
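The pixel-change idea described above is basically frame differencing, and can be sketched in a few lines. This is an illustration of the general technique, not Blue Iris's actual motion algorithm; the threshold and frame sizes are made-up values.

```python
import numpy as np

def motion_fraction(prev_frame: np.ndarray, curr_frame: np.ndarray,
                    threshold: int = 25) -> float:
    """Return the fraction of pixels whose brightness changed by more
    than `threshold` between two greyscale frames."""
    diff = np.abs(prev_frame.astype(np.int16) - curr_frame.astype(np.int16))
    return float(np.mean(diff > threshold))

# A truly static scene: identical frames, so no pixels changed.
static = np.full((10, 10), 100, dtype=np.uint8)
print(motion_fraction(static, static))   # 0.0

# Someone walks in front of the parked car: a block of pixels changes,
# so the detector reports motion even though the car never moved.
walker = static.copy()
walker[2:8, 2:5] = 200                   # person occludes part of the scene
print(motion_fraction(static, walker))   # 0.18 (18 of 100 pixels changed)

# Headlights at night: the whole scene brightens, so even a static
# vehicle looks like it "moved".
lit = np.clip(static.astype(np.int16) + 80, 0, 255).astype(np.uint8)
print(motion_fraction(static, lit))      # 1.0
```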
I do understand what you are saying, but I have to think that a certain amount of AI logic and/or latency (or persistence of vision) is applied as well on static objects. I know Ken mentioned not too long ago that some improvements were made to "static object detection"... it just seems that they could still use a bit more refinement. Sure, the pixels of the moving object crossing in front of the static object are changing, but hopefully the AI can keep track of what was there (statically) at least to a certain extent (perhaps for a fixed length of time).
You can resize the window by grabbing the lower right corner.

The photo in my AI analysis in the status window is very tiny.
See below image. Photo is in top right corner.
Please let me know how to make the photo larger in this analysis window.
Thank you for the assistance.
View attachment 142649