5.5.8 - June 13, 2022 - Code Project’s SenseAI Version 1 - See V2 here https://ipcamtalk.com/threads/codeproject-ai-version-2-0.68030/

The CodeProject team has indicated that while the current setup does allow for the use of port 5000, it is deprecated, as that port is shared with UPnP. As such, they are suggesting migrating to port 32168 when possible.

Is this all I have to do then?

[Attached screenshot: 1665240619961.png]
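One quick way to confirm the server is actually answering on 32168, before or after repointing Blue Iris, is to send a test image straight to the detection endpoint. A minimal sketch using Python's requests library, assuming a local install and the DeepStack-compatible /v1/vision/detection route (adjust the host and path if your setup differs):

```python
# Quick check that CodeProject.AI answers on the new port (32168)
# before moving Blue Iris off the deprecated port 5000.
import requests

with open("test.jpg", "rb") as f:  # any snapshot from one of your cameras
    resp = requests.post(
        "http://localhost:32168/v1/vision/detection",  # DeepStack-compatible route
        files={"image": f},
        data={"min_confidence": 0.4},
        timeout=30,
    )

resp.raise_for_status()
result = resp.json()
print("success:", result.get("success"))
for p in result.get("predictions", []):
    print(p["label"], round(p["confidence"], 2))
```

If this returns predictions, the port change itself is working and anything left to sort out is on the Blue Iris side.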
 
Thanks guys,

Does anyone use face detection? I've tried playing around with it by enabling facial recognition in the AI options, and saving the unknown faces to 'new', however I can't find any pictures of faces in that folder. Has anyone else had any luck?
 
If I'm not mistaken, this feature is still a work in progress.
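If you want to rule Blue Iris out and check whether the face module itself responds, you can call the face endpoint directly. A rough sketch, assuming the DeepStack-compatible /v1/vision/face/recognize route on the default 32168 port (route names and port may differ depending on your CP.AI version and settings):

```python
# Send one frame to the face-recognition endpoint and print what comes back.
# Empty predictions or an error here points at the module rather than at BI.
import requests

with open("face_test.jpg", "rb") as f:  # a snapshot that contains a face
    resp = requests.post(
        "http://localhost:32168/v1/vision/face/recognize",
        files={"image": f},
        timeout=30,
    )

result = resp.json()
print("success:", result.get("success"))
for p in result.get("predictions", []):
    print(p.get("userid", "unknown"), round(p.get("confidence", 0), 2))
```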
 
Running up-to-date BI and CP.AI, I've been experimenting with the different custom model folders. This works best for me:

[Attached screenshot: Screenshot 2022-10-09 161450.png]
For cameras that include vehicles, I also use "Mark as vehicle" with car, bus, motorcycle.

I use very sensitive trigger settings (simple algorithm, short minimum duration) on the cameras, and they don't seem to miss any wanted alerts in the confirmed clip list. I do get loads of mistaken labels, but I can see what the images are in the clips list, and eventually, once the dust has settled with CP, I won't even need the labels. ipcam-dark doesn't include "truck", but it reports a truck as a "bus", so that's fine.
Although I'm using two model folders instead of simply using ipcam-combined, the CP.AI analysis times are short. And since the number of AI layers in each 14 MB model folder is fixed, I assume the image database for each label is far bigger when using two separate folders, giving better results.
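If you want to sanity-check the analysis-time claim on your own hardware, you can time requests against the custom-model endpoints directly. A rough sketch, assuming DeepStack-style routes on port 32168; the model names (ipcam-combined, ipcam-general, ipcam-dark) are only examples and must match folders your server has loaded:

```python
# Rough timing comparison: one combined model vs. two separate custom models.
# Model names are examples; substitute whatever folders your CP.AI has loaded.
import time
import requests

BASE = "http://localhost:32168/v1/vision/custom"

def analyse(model: str, image_path: str) -> float:
    """Send one image to a custom model and return the elapsed seconds."""
    with open(image_path, "rb") as f:
        start = time.perf_counter()
        requests.post(f"{BASE}/{model}", files={"image": f}, timeout=60)
        return time.perf_counter() - start

image = "snapshot.jpg"
combined = analyse("ipcam-combined", image)
separate = analyse("ipcam-general", image) + analyse("ipcam-dark", image)
print(f"ipcam-combined: {combined:.2f}s  |  two folders: {separate:.2f}s")
```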

The segregation between Confirmed and Cancelled seems to be nearly perfect, relegating all the images of moving shadows, insects, spiders, shaking webs, rain, fog and snow to the cancelled clip list.

Using "Nothing found:0" in the "To cancel" box is essential: it eliminates (green) "Nothing found" entries from the Confirmed alerts list. It also forces the AI to search through all the images in an alert to select the best one, which is then nicely centred in the clips list. In the example below, without "Nothing found:0" in "To cancel", the car's headlights would cause a confirmed "nothing found" to be reported in the confirmed clips list well before the car arrives.

[Attached screenshot: Screenshot 2022-10-09 160633.png]

Why not give it a whirl and report your results?
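For anyone curious about the mechanics of the To confirm / To cancel boxes, here is a toy sketch of that kind of per-label filtering. This is not Blue Iris's actual code, just an illustration of the behaviour described above, where a "label:confidence" entry such as Nothing found:0 in To cancel sends matching results to the cancelled list instead of the confirmed one:

```python
# Toy illustration of per-label confirm/cancel filtering - NOT Blue Iris code.
# Each rule is "label:min_confidence" (percent); a missing threshold means 0.

def parse_rules(spec: str) -> dict:
    """Turn 'person:50,car:60,Nothing found:0' into {'person': 50.0, ...}."""
    rules = {}
    for item in spec.split(","):
        item = item.strip()
        if not item:
            continue
        label, _, conf = item.rpartition(":")
        if not label:                      # no ':' given, default threshold 0
            label, conf = conf, "0"
        rules[label.lower()] = float(conf or 0)
    return rules

def classify(detections, to_confirm: str, to_cancel: str) -> str:
    """detections: list of (label, confidence_percent) for one analysed image."""
    confirm = parse_rules(to_confirm)
    cancel = parse_rules(to_cancel)
    # Assumption: when the AI sees nothing, treat it as a "nothing found" result.
    labels = detections or [("nothing found", 100.0)]
    for label, conf in labels:
        key = label.lower()
        if key in cancel and conf >= cancel[key]:
            return "cancelled"
        if not confirm or (key in confirm and conf >= confirm[key]):
            return "confirmed"
    return "cancelled"

# With 'Nothing found:0' in To cancel, an empty detection result is cancelled
# instead of landing in the confirmed clips list.
print(classify([], to_confirm="", to_cancel="Nothing found:0"))       # cancelled
print(classify([("car", 95.0)], to_confirm="car:50", to_cancel=""))   # confirmed
```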
 
Not to doubt your findings, but I can confirm that any word that represents an "unlikely" object in the "To cancel" box will result in CPAI going through all of the collected images, rather than just the first few. Have you tried that as well, and found differences with your suggested setup?
 
Hi jrbeddow, others will know that I am definitely not an expert, and I welcome every time I'm proven wrong; it's all part of the learning process. In this case, my rationale is as follows:

1. Back in the DeepStack days, I think I was the first to use "banana" in To cancel, a trick others have since used. I chose banana from the inventory of object labels that were available, knowing that DS would continue to search for one through all the real-time images specified. However, I want to confirm which labels are available when using the ipcam-xxx family; I don't think CP.AI will search for a label that's not in its inventory (see the sketch below).

2. An example from one of my trials is attached below. Although To cancel is blank, ipcam-animal selects "Nothing found" at T+0 msec (green tick), perhaps because it has a very high confidence. This means that all the subsequent images are terminated, including a red cross for the car at 95%. I don't know why Nothing found ends up in the confirmed alerts list (perhaps Mike Ludd knows why), but putting "Nothing found:0" in the "To cancel" box definitely fixes it.



I ask again: surely somebody else is willing to try my simple settings and hopefully also get the same excellent results that I'm getting...
 

Attachments

  • Screenshot 2022-10-10 190056.png
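On point 1 above: one way to see what a given custom model will actually report is to send it a few test snapshots directly and inspect the labels that come back. A rough sketch, assuming the DeepStack-style custom-model route that CP.AI exposes; the model name (ipcam-general here) is only an example and must match a model folder your server has loaded:

```python
# Probe a custom model directly to see which labels it reports for a snapshot.
# 'ipcam-general' is an example; use whichever custom model folder you run.
import requests

MODEL = "ipcam-general"

with open("snapshot.jpg", "rb") as f:
    resp = requests.post(
        f"http://localhost:32168/v1/vision/custom/{MODEL}",
        files={"image": f},
        timeout=30,
    )

result = resp.json()
labels = sorted({p["label"] for p in result.get("predictions", [])})
print(f"{MODEL} returned:", ", ".join(labels) or "nothing found")
```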
Basically, if you don't want "nothing found" to show up in the confirmed alerts list, don't include it in the "To confirm:" box.
 
??? Read post #966.
Hi cferd5
You made me look again at my example in #968, and I noticed I had chosen one that was complete rubbish. Sorry, and thanks for that! Here's a better example showing that Nothing found appears in the confirmed alerts list when To confirm is empty.

[Attached screenshot: Screenshot 2022-10-13 221203.png]

Another error: looking again through my experimental results, it seems that the remark made by jrbeddow in #967 is also true. If I put an unlikely object that's not in the model's database in To cancel, CP.AI does search through all the images. Nevertheless, I stand by the need to enter Nothing found:0 in To cancel to send Nothing found to the cancelled alerts list, unless a valid object appears, as in the example below.

[Attached screenshot: Screenshot 2022-10-13 224322.png]
 
Hmm. I've never had "nothing found" show up in the Confirmed alerts list so I did a quick test just to try to somewhat replicate your findings.
I left "To Confirm:" blank, and added "banana" to "To Cancel:". BI started showing "nothing found" in the Confirmed alerts list when no valid objects were detected. But if you enter only valid label(s) in "To confirm", then "nothing found" will only show up in the Cancelled alerts list, even if you enter non-existing labels in "To cancel:".
 
I get this far more frequently for some reason under CodeProject.AI than I did previously under DeepStack. It occurs most often when a person is walking in front of a stationary vehicle. Has anyone else noticed that trend?

If someone is walking in front of the static vehicle, the image is no longer static. The pixels have changed, which makes the AI think the image has changed and the object has moved. This is also why a static vehicle will sometimes trigger at night: if headlights shine on the static vehicle, the pixels change from a dark object to an illuminated one, so the AI again thinks it has moved.
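To illustrate the pixel-change point, here is a toy frame-difference check of the kind motion detectors are built on. This is not Blue Iris's actual algorithm, just a sketch using OpenCV and NumPy: a pedestrian crossing in front of the parked car, or headlights sweeping over it, changes enough pixels in that region to register as movement.

```python
# Toy frame-difference check - a sketch of why a static car "moves" when the
# pixels covering it change (someone walks past, headlights sweep over it).
import cv2
import numpy as np

def changed_fraction(prev_path: str, curr_path: str, threshold: int = 25) -> float:
    """Fraction of pixels whose brightness changed by more than `threshold`."""
    prev = cv2.cvtColor(cv2.imread(prev_path), cv2.COLOR_BGR2GRAY)
    curr = cv2.cvtColor(cv2.imread(curr_path), cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(prev, curr)                 # per-pixel brightness change
    changed = np.count_nonzero(diff > threshold)   # pixels that "moved" or lit up
    return changed / diff.size

# The parked car's pixels are identical frame to frame (fraction near 0), but a
# person or a headlight beam over those same pixels pushes the fraction up, so
# the detector treats the region as motion even though the car never moved.
if __name__ == "__main__":
    print(f"{changed_fraction('frame1.jpg', 'frame2.jpg'):.3%} of pixels changed")
```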
 

Exactly. I notice this every time it rains and the car is parked outside the garage. The car will keep triggering the camera the whole time it's pouring rain.
 
I do understand what you are saying, but I have to think that a certain amount of AI logic and/or latency (or persistence of vision) is applied to static objects as well. I know Ken mentioned not too long ago that some improvements were made to "static object detection"; it just seems that it could still use a bit more refinement. Sure, the pixels of the moving object crossing in front of the static object are changing, but hopefully the AI can keep track of what was there (statically), at least to a certain extent (perhaps for a fixed length of time).
 

Sounds like you need to join the development team. Let us know once you get this implemented! ;)
 
The photo in my AI analysis in the status window is very tiny.
See the image below; the photo is in the top-right corner.

Please let me know how to adjust to a larger photo size in this analysis window.
Thank you for the assistance.


[Attached screenshot: 1665773992404.png]
 
Another question:

I was using motion from the camera to trigger SenseAI (custom models), using the camera's digital input as the trigger source in Blue Iris.

I have turned off (unchecked) all sources in the trigger tab in Blue Iris.

SenseAI is still processing object detection approximately every 30 to 90 seconds.
There should not be any triggers (all unchecked) to ask SenseAI for object confirmation.

I am trying to test for motion sensitivity.
I also restarted the Blue Iris PC.

Why is SenseAI still processing images when there are no triggers?

[Attached screenshot: 1665810547717.png]
 