I'm going to watch this for a while before I make the jump to SenseAI. DS is working well with detection times in the 50-100ms range.
As an aside, is there a way, besides scripting, to shut down face recognition and such? Unless SenseAI has made a quantum leap in facial recognition at the PC level, it's more of a waste of time and processing power than anything else. I don't see a use for Scene Classification or Portrait Filter either. Maybe someone can also explain what Background Removal is and if/why it might be needed.
If you have CUDA and cuDNN installed, all you need to do is install v1.5.6. The General & Animal models are included in the install; the new names for the models are ipcam-general & ipcam-animal.
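If you want to sanity-check that the bundled model is actually answering, a quick test along these lines works (the port and the /v1/vision/custom route here are assumptions; use whatever host/port your dashboard shows):

```python
import requests

# Quick sanity check that the new custom model responds. The port (32168) and the
# /v1/vision/custom/<model> route are assumptions based on the DeepStack-compatible
# API -- substitute whatever your CodeProject.AI dashboard reports.
with open("test.jpg", "rb") as f:
    resp = requests.post(
        "http://localhost:32168/v1/vision/custom/ipcam-general",
        files={"image": f},
        timeout=10,
    )
print(resp.json())  # expect something like {"success": true, "predictions": [...]}
```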
You can turn off each of the modules by editing the modulesettings.json in each module's folder: change the relevant setting to False and the module will be off. I think they are going to add the ability to turn off modules from the dashboard.
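For anyone who'd rather script it, something like this is the idea (the install path and the "Activate" key are assumptions; open the file in your own module folder to see the actual flag, and restart the service afterwards):

```python
import json
from pathlib import Path

# Rough sketch of flipping a module off by editing its modulesettings.json.
# The folder path and the "Activate" key are assumptions -- adjust to match
# what the file in your install actually contains.
path = Path(r"C:\Program Files\CodeProject\AI\AnalysisLayer\FaceProcessing\modulesettings.json")

settings = json.loads(path.read_text())
for module in settings.get("Modules", {}).values():
    module["Activate"] = "False"   # turn this module off

path.write_text(json.dumps(settings, indent=2))
```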
The background removal tool basically removes everything in an image that is not in one of the models you are using. It still seems kinda buggy and I haven't found it useful yet.
So I had a chance to play around with this today and found that it works surprisingly well. I have my front door camera setup to recognize my face and trigger an alert which unlocks my front door. However, until I'm confident that it is 100% accurate I'm only going to have this enabled while I'm home.
To set this up, I went into CodeProject.AI Explorer, registered my face using a selfie I took with my iPhone, and named it accordingly. I then went into my BI settings and, in the AI tab, enabled Facial recognition and added the same image I used in CodeProject.AI Explorer along with the same name.
I then went into the camera's settings and in the trigger tab under Artificial Intelligence I added my name to the "To confirm" field.
From there I added an on-alert action that sends an HTTP request to my home automation system to unlock my front door.
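For reference, the request that action fires is nothing fancy; a hypothetical equivalent (URL, token, and entity name are all made up, shaped like a Home Assistant service call) would be:

```python
import requests

# Stand-in for the web request the on-alert action sends. Point it at whatever
# unlock endpoint your own home automation system exposes -- everything below
# is a placeholder.
requests.post(
    "http://homeautomation.local:8123/api/services/lock/unlock",
    headers={"Authorization": "Bearer YOUR_LONG_LIVED_TOKEN"},
    json={"entity_id": "lock.front_door"},
    timeout=5,
)
```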
Excellent work, but which faces is it using? The ones we add to BI, or the ones we add directly to SenseAI? It seems redundant to do both. SenseAI's face training seems a bit more advanced than BI's current integration.
So I deleted the face that I added in BI, then went into CodeProject.AI Explorer, and the face I had previously added had been removed as well. I went back into BI and re-added the face, and it shows up in CodeProject.AI Explorer. So apparently you only need to add the face to BI for this to work.
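If you want to confirm what the server itself thinks is registered after adding or deleting faces in BI, the DeepStack-style face API should answer something like this (port and route are assumptions; match your dashboard):

```python
import requests

# List the faces the server currently has registered, to double-check the
# BI <-> CodeProject.AI sync described above.
resp = requests.post("http://localhost:32168/v1/vision/face/list", timeout=10)
print(resp.json())  # e.g. {"success": true, "faces": ["your_name", ...]}
```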
I also noticed some faces that were detected with DeepStack are not recognizable with SenseAI; I couldn't re-register about half of my face captures.
Edit: I also noticed SenseAI supports multiple faces under the same object tag (as opposed to dude_1, dude_2, dude_3, etc.), but BI does not support this since it uses simple file names in a folder.
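If anyone wants to try that multiple-faces-per-name behaviour directly against the API, a rough sketch (endpoint, port, and file names are assumptions) would be:

```python
import requests

# Register more than one photo under a single name, i.e. the "multiple faces
# per tag" behaviour mentioned above. Uses the DeepStack-compatible face API.
for photo in ("dude_front.jpg", "dude_side.jpg"):
    with open(photo, "rb") as f:
        r = requests.post(
            "http://localhost:32168/v1/vision/face/register",
            files={"image": f},
            data={"userid": "dude"},
            timeout=10,
        )
    print(photo, r.json())
```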
I'm also getting terrible processing times with this compared to DS. CUDA is working fine according to the dashboard and the default models are disabled (I'm using ipcam-combined). I went from 150ms with DS to 500ms with SenseAI.
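In case anyone wants to reproduce the comparison, a crude timing loop like this (URL and model name assumed) gives a rough per-request number to compare between servers:

```python
import time
import requests

# Crude latency comparison: send the same frame several times and average.
# Point URL at whichever server (DS or CodeProject.AI) and model you want to measure.
URL = "http://localhost:32168/v1/vision/custom/ipcam-combined"
image_bytes = open("frame.jpg", "rb").read()

samples = []
for _ in range(10):
    start = time.perf_counter()
    requests.post(URL, files={"image": image_bytes}, timeout=30)
    samples.append((time.perf_counter() - start) * 1000)

print(f"avg {sum(samples) / len(samples):.0f} ms over {len(samples)} requests")
```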