[tool] [tutorial] Free AI Person Detection for Blue Iris

Opened AI Tool as Admin, same logs.
Here is the docker log:
[GIN] 2020/11/30 - 03:12:02 | 403 | 3.578903ms | 192.168.0.100 | POST /v1/vision/detection
[GIN] 2020/11/30 - 03:12:07 | 403 | 67.691µs | 192.168.0.100 | POST /v1/vision/detection
[GIN] 2020/11/30 - 03:12:12 | 403 | 83.971µs | 192.168.0.100 | POST /v1/vision/detection

looks fine there, I think it posts the results back to AItool...

[29.11.2020, 20:00:35.579]: System.NullReferenceException | Object reference not set to an instance of an object. (code: -2147467261 )

not sure why the object error is thrown.... what does your object settings look like in AItools?
 
If you are referring to Camera Settings > Relevant Objects: I have everything selected just for testing, confidence limits are at their defaults, and cooldown is 0.
 
Try selecting only 2 or 3 relevant objects; if I remember right, there may be a limit on how many you can pick.

Otherwise, hopefully someone else can help you from here.
 
I hate to be the one to add to an already long thread, but I haven't found anything similar to this.

I installed BI v5, DeepQuestAI in Docker, and unzipped AI Tool 1.67.
Everything seems to be set up correctly, but I am getting error (code: -2147467261 ) in my AI Tool logs.

Where do I start to try and debug this?

Can you advise how you are starting DeepStack, i.e. the command line? If it is not set up correctly, it can cause the error you are experiencing.
 
Hello.
This program has been working very well for me, but now I would like to add a speaker that plays a message when a person is detected.
I know this could be done with Home Assistant and Node-RED, as I saw in "The Hook Up" YouTube video, but I would like to know if there is a simpler way to do it.

I'm not sure what my options are, because I understand that I need a URL that this program will call when a relevant alert occurs, so not every hub would work.
I'm almost sure Google Home can do it with something called Assistant links. I'm not sure if Alexa could do it directly; I understand it could be done through Home Assistant.

If someone could please point me in the right direction or give some advice for my project, that would be great.

Thanks
 
Can you advise how you are starting DeepStack, i.e. the command line? If it is not set up correctly, it can cause the error you are experiencing.

Village Guy, this is a test to hopefully move away from QVR Pro. I created a VM with Windows 10 Pro and ran Blue Iris and AI Tool on that. I originally had DeepStack running in a QNAP container via Docker.
This evening I stopped the container and did a complete Windows install of DeepStack on my Windows 10 VM alongside everything else. Not ideal, but it will give me a feel for the software during my trial period.

After all this I am still getting errors. Here is my current AI Tool log, followed by the DeepStack log:
[30.11.2020, 21:53:37.616]: Starting analysis of \\DetmerHomeQNAP\VMfolder\AIinput\FrontDoorSD.20201130_215337520.jpg
[30.11.2020, 21:53:37.645]: System.IO.IOException | The process cannot access the file '\\DetmerHomeQNAP\VMfolder\AIinput\FrontDoorSD.20201130_215337520.jpg' because it is being used by another process. (code: -2147024864 )
[30.11.2020, 21:53:37.694]: Could not access file - will retry after 10 ms delay
[30.11.2020, 21:53:37.743]: Retrying image processing - retry 1
[30.11.2020, 21:53:37.828]: (1/6) Uploading image to DeepQuestAI Server
[30.11.2020, 21:53:37.947]: (2/6) Waiting for results
[30.11.2020, 21:53:37.969]: (3/6) Processing results:
[30.11.2020, 21:53:37.998]: System.NullReferenceException | Object reference not set to an instance of an object. (code: -2147467261 )
[30.11.2020, 21:53:38.025]: ERROR: Processing the following image '\\DetmerHomeQNAP\VMfolder\AIinput\FrontDoorSD.20201130_215337520.jpg' failed. Failure in AI Tool processing the image.
[30.11.2020, 21:53:38.077]:
[30.11.2020, 21:53:38.105]: Starting analysis of \\DetmerHomeQNAP\VMfolder\AIinput\FrontDoorSD.20201130_215337520.jpg
[30.11.2020, 21:53:38.138]: (1/6) Uploading image to DeepQuestAI Server
[30.11.2020, 21:53:38.415]: (2/6) Waiting for results
[30.11.2020, 21:53:38.452]: (3/6) Processing results:
[30.11.2020, 21:53:38.482]: System.NullReferenceException | Object reference not set to an instance of an object. (code: -2147467261 )
[30.11.2020, 21:53:38.510]: ERROR: Processing the following image '\\DetmerHomeQNAP\VMfolder\AIinput\FrontDoorSD.20201130_215337520.jpg' failed. Failure in AI Tool processing the image.

Here is the Deepstack logs (attached)
 

He's asking what command was typed to start your DeepStack instance (originally, anyway, if it auto-restarts) in Docker, PowerShell, or wherever you call it (when your computer first powers up, for instance). If it is DeepStack for Windows, it might just be a setting within that program.

From your logs, you shouldn't need all those face-detection APIs, but you DO need the vision-detection API (which recognizes objects), and it isn't shown in your log. This API is enabled and configured in the DeepStack run command, as are other settings, including the DeepStack version (GPU/CPU, etc.). For instance, the command will need to contain the following somewhere:
VISION-DETECTION=True
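For reference, a typical Docker start command with the detection API enabled looks something like the sketch below; the host port (80) and volume name (localstorage) are just examples from my own setup, so adjust them to yours:

```shell
# enable the object-detection endpoint; expose container port 5000 on host port 80
docker run -e VISION-DETECTION=True -v localstorage:/datastore \
    -p 80:5000 deepquestai/deepstack
```

With that running, AI Tool should point at http://<docker-host>:80/v1/vision/detection.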
 
Reporting back that I got Telegram working; I was using the wrong chat ID.
Steps:
Message @BotFather to make a bot. Name it whatever.
Send /token and get the Token.
Then go to the Telegram menu, make a new group chat, and add your @botname_bot to it.
Then send at least one message. It doesn't matter what you send.
Now, if you're lazy, copy your Token and go here: Get Telegram Chat ID - CodeSandbox
Paste your key, press Go... it'll return the number; make sure it has a - in front.

On the AI Tool settings page, paste in your Token and chat ID; then, on the Actions page for each individual camera, make sure to check "send alert images to Telegram".

Fin!

Caveat: I'm getting a number of failed-to-send errors, but I think it's a frequency issue. All cameras are reliably sending images.
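If the CodeSandbox page ever goes away, the same chat ID can be read straight from the Bot API's getUpdates endpoint (https://api.telegram.org/bot&lt;TOKEN&gt;/getUpdates) after you send a message in the group. A rough sketch of pulling the ID out of that JSON; the sample payload below is made up for illustration, but the ok/result/message/chat fields follow the Bot API response shape:

```python
import json

def extract_chat_id(get_updates_body: str):
    """Pull the chat id out of a Telegram getUpdates response body.

    Group chat ids are negative, so the leading '-' is part of the id.
    Returns None if the bot has not received any messages yet.
    """
    data = json.loads(get_updates_body)
    if not data.get("ok"):
        return None
    # walk newest-first and take the first update that carries a message
    for update in reversed(data.get("result", [])):
        message = update.get("message")
        if message:
            return message["chat"]["id"]
    return None

# hypothetical sample response for illustration only
sample = json.dumps({"ok": True, "result": [
    {"update_id": 1,
     "message": {"message_id": 7,
                 "chat": {"id": -123456789, "type": "group"},
                 "text": "hello"}}]})
print(extract_chat_id(sample))  # -123456789
```

If this returns None, send another message to the group first; getUpdates only shows recent, unconsumed updates.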

I have set the Token and chat ID in AI Tool settings, but I do not receive a message in Telegram.

Do any of you have any clue to what can be missing or wrong?
 
He's asking what command was typed to start your DeepStack instance (originally, anyway, if it auto-restarts) in Docker, PowerShell, or wherever you call it (when your computer first powers up, for instance). If it is DeepStack for Windows, it might just be a setting within that program.

From your logs, you shouldn't need all those face-detection APIs, but you DO need the vision-detection API (which recognizes objects), and it isn't shown in your log. This API is enabled and configured in the DeepStack run command, as are other settings, including the DeepStack version (GPU/CPU, etc.). For instance, the command will need to contain the following somewhere:
VISION-DETECTION=True

I downloaded DeepStack via Windows, so no Docker or command-line inputs.
As far as the APIs go, the only one I selected during the install was vision detection.
I just did a shutdown and restart, and everything looks good and running.

DeepStack is showing a POST /v1/vision/detection and the IP is correct. I did check to make sure that DeepQuestAI was active; all good there. But when I check the full URL, I get a 404 error. Is this normal?

[01.12.2020, 13:07:52.268]: Starting analysis of C:\Users\ddetm\Documents\AIinput\FrontDoor.20201201_130750296.jpg
[01.12.2020, 13:07:52.775]: (1/6) Uploading image to DeepQuestAI Server
[01.12.2020, 13:07:53.400]: (2/6) Waiting for results
[01.12.2020, 13:07:53.589]: Newtonsoft.Json.JsonSerializationException | Error converting value 404 to type 'WindowsFormsApp2.Response'. Path '', line 1, position 3. (code: -2146233088 )
[01.12.2020, 13:07:53.786]: ERROR: Processing the following image 'C:\Users\ddetm\Documents\AIinput\FrontDoor.20201201_130750296.jpg' failed. Can't reach DeepQuestAI Server at .
[01.12.2020, 13:07:55.100]:
[01.12.2020, 13:07:55.127]: Starting analysis of C:\Users\ddetm\Documents\AIinput\FrontDoor.20201201_130754343.jpg
[01.12.2020, 13:07:55.265]: (1/6) Uploading image to DeepQuestAI Server
[01.12.2020, 13:07:55.762]: (2/6) Waiting for results
[01.12.2020, 13:07:56.257]: Newtonsoft.Json.JsonSerializationException | Error converting value 404 to type 'WindowsFormsApp2.Response'. Path '', line 1, position 3. (code: -2146233088 )
[01.12.2020, 13:07:56.519]: ERROR: Processing the following image 'C:\Users\ddetm\Documents\AIinput\FrontDoor.20201201_130754343.jpg' failed. Can't reach DeepQuestAI Server at .
[01.12.2020, 13:07:58.678]:
[01.12.2020, 13:07:58.828]: Starting analysis of C:\Users\ddetm\Documents\AIinput\FrontDoor.20201201_130758379.jpg
[01.12.2020, 13:07:58.856]: (1/6) Uploading image to DeepQuestAI Server
[01.12.2020, 13:07:59.491]: (2/6) Waiting for results
[01.12.2020, 13:07:59.762]: Newtonsoft.Json.JsonSerializationException | Error converting value 404 to type 'WindowsFormsApp2.Response'. Path '', line 1, position 3. (code: -2146233088 )
[01.12.2020, 13:07:59.981]: ERROR: Processing the following image 'C:\Users\ddetm\Documents\AIinput\FrontDoor.20201201_130758379.jpg' failed. Can't reach DeepQuestAI Server at .
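For what it's worth, that JsonSerializationException is AI Tool trying to parse the 404 error page as JSON. A sketch of the sanity check in Python; the sample body below is made up, but the success/predictions fields follow DeepStack's documented /v1/vision/detection response format:

```python
import json

def parse_detection_response(status_code: int, body: str):
    """Interpret an HTTP reply from DeepStack's /v1/vision/detection.

    A 404 usually means a mistyped URL or that VISION-DETECTION was not
    enabled; the body is then an HTML error page rather than JSON, which
    is exactly what trips up a JSON deserializer downstream.
    """
    if status_code != 200:
        raise RuntimeError(
            f"DeepStack returned HTTP {status_code}; check the endpoint "
            "URL and that the detection API is enabled")
    data = json.loads(body)
    if not data.get("success"):
        raise RuntimeError(f"DeepStack reported failure: {data.get('error')}")
    return data.get("predictions", [])

# hypothetical successful response for illustration
ok_body = json.dumps({"success": True, "predictions": [
    {"label": "person", "confidence": 0.92,
     "x_min": 10, "y_min": 20, "x_max": 120, "y_max": 300}]})
print(parse_detection_response(200, ok_body)[0]["label"])  # person
```

Checking the status code before decoding is the whole trick: a plain 200 with a JSON body means the endpoint is live; anything else points at the URL or the server configuration, not at AI Tool.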
 
So, after combing through this thread and the tutorials for a few days, I think I have the resources, and it makes sense for me, to simply record the 4K streams 24/7. (I currently have all my cameras cloned, recording 24/7 on the low-res stream and sending alerts to record the 4K stream.)

I have a few questions:

1. If I'm only using the 4K stream, recording 24/7 and then flagging events that AI Tool finds, why would I still need the first URL? Was I wrong to guess that it calls BI to start recording?
2. Is the "[summary]" supposed to be included in the URL verbatim? And, for that matter, does leaving it as [camera] pull in the camera name you set above in Settings? Currently I have them all hard-coded with the same Short Name I entered above, so a different URL for each camera.

Finally, I have one clarification on the process:

3. Will I be able to use these "flags" to define which clips I save going forward? For example, if I'm recording the 4K stream 24/7 but AI detects a motion/person on that camera for only 5 minutes in a 24-hour period, will I be able to separate out the 5-minute clip for longer-term storage and delete (or allow to be overwritten) the other 23 hours and 55 minutes?

Sorry to bump everyone, but I was just curious if anyone could help me answer these 3 questions, or at least the last one, about using the "flags" to programmatically whittle down what I store long term?
 

I can answer two of your questions.

  1. If you're recording 24/7, you only need the flag URL to externally trigger BI, i.e.: http://127.0.0.1:81/admin?trigger&camera=SecCam_1&user=AI&pw=Tool&flagalert=1 There is no need for two. I'm unsure under what circumstances you would need two trigger URLs, but I have seen examples where people use them.
  2. The flagged clips are already separated, in a way: the flagged alert will start playing in the hour-long clip where the trigger was given. If you want to export the alert to save it in another location, you can adjust the stop point to wherever you want from the options in the export function.
The only issue I've run into is if an alert is triggered a few seconds before an hour-long clip (or however long you set your clips) ends. It will cut off the alert when the clip ends.

If you want to delete a flagged alert, you can right-click and delete it. That will only delete the alert, or more appropriately the "bookmark", that is stored in the hour-long clip.

If you want to keep the alert for longer than the hour-long clip will be kept, you need to right-click the alert and protect it.

If you have the option "Auto Protect when flagged" checked in "Clips And Archiving", the alerts triggered and flagged by AI Tool will never be deleted and will eventually fill up your storage, which is bad.

Hope this helps
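For anyone scripting this outside AI Tool: the trigger URL quoted above is just BI's web-admin interface with a bare trigger flag plus ordinary query parameters, so it can be assembled like this (the host, camera name, and credentials are the placeholders from the example above, not real values):

```python
from urllib.parse import urlencode

def bi_trigger_url(host: str, camera: str, user: str, pw: str,
                   flag: bool = True) -> str:
    """Build a Blue Iris external-trigger URL.

    'trigger' takes no value in BI's admin interface, so it is appended
    bare; the remaining parameters are ordinary key=value pairs.
    """
    params = urlencode({"camera": camera, "user": user, "pw": pw,
                        "flagalert": 1 if flag else 0})
    return f"http://{host}/admin?trigger&{params}"

print(bi_trigger_url("127.0.0.1:81", "SecCam_1", "AI", "Tool"))
# http://127.0.0.1:81/admin?trigger&camera=SecCam_1&user=AI&pw=Tool&flagalert=1
```

Fetching that URL (with a user that has admin/web access in BI) triggers the named camera; flagalert=1 is what marks the resulting alert as flagged.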
 
If you're recording 24/7, you only need the flag URL to externally trigger BI, i.e.: http://127.0.0.1:81/admin?trigger&camera=SecCam_1&user=AI&pw=Tool&flagalert=1 There is no need for two. I'm unsure under what circumstances you would need two trigger URLs, but I have seen examples where people use them.

Ahh, OK, this does make more sense, but both the opening post of this thread and the example post I quoted had two URLs. I suspected, or rather hoped, that perhaps there was a third way of using BI/AI Tool: not recording 4K 24/7, but simply taking JPGs every 4 seconds when motion is detected, then having the URL start 4K recording when an alert is registered by AI Tool. But I also suspect this is unlikely to be possible.

  1. The flagged clips are already separated, in a way: the flagged alert will start playing in the hour-long clip where the trigger was given. If you want to export the alert to save it in another location, you can adjust the stop point to wherever you want from the options in the export function.

Thank you so much for the clear explanation. I was really hoping this wasn't the case, and that there was a more automatic way to "extract" the flagged clip from the hour-long (or whatever defined length) clip and move just those flagged clips to another folder before the ongoing recording folder is overwritten.

The only issue I've run into is if an alert is triggered a few seconds before an hour-long clip (or however long you set your clips) ends. It will cut off the alert when the clip ends.

This is not a contingency I had considered, but I can see how it would be an issue...

If you want to delete a flagged alert, you can right-click and delete it. That will only delete the alert, or more appropriately the "bookmark", that is stored in the hour-long clip.
If you want to keep the alert for longer than the hour-long clip will be kept, you need to right-click the alert and protect it.

Wow, this all seems to require quite a bit of manual interference.

If you have the option "Auto Protect when flagged" checked in "Clips And Archiving", the alerts triggered and flagged by AI Tool will never be deleted and will eventually fill up your storage, which is bad.

Now this is really good info, I absolutely would've seen that option and thought it was my salvation.

Hope this helps

This was insanely helpful, thank you so much. I'm unfortunately still not sure which method is best for me; I keep waffling. I want the most accurate alerts possible (I think AI Tool's JPGs should be 4K for that), and I have a fair amount of storage, but I'd also like more curated long-term storage and not to be spinning up 30 TB to 60 TB to save a month or two of (probably unused) surveillance video.

Ideally, I'd like a full recording of all my cams for 24 hours (or 72), with the relevant events flagged; then just the flagged event "clips" (as in not the whole hour-long, or however long, clip) moved to another folder on my BI box; then, after a month, moved again to unused space on the unRAID server for another month or so (all automatically, of course :)). I've never had a surveillance system before, so I guess I just don't know what my practical uses for it are. Maybe this is crazy and I should just get a big HDD for my BI box (it's currently 6 TB), record 24/7, then scrap it all after a month. I'm obviously open to any and all suggestions; you taking the time is definitely getting me closer to my goal, and I thank you.
 
Thanks to everyone that has contributed to this project, it's really fun to work with and see objects detected from my BlueIris Cameras and trigger alerts based on them.

Has anyone here tried to train DeepQuest AI? I see some mention of it, but nothing concrete. Someone mentioned training is done with the Coral model?
 
I have been running v1.67 with clone cameras and only moved over to the latest version yesterday! I assume you meant Clone and not simply dual stream?

Hello, Newb with my first post. I've pored over the questions posed by @seth-feinberg and graciously answered by @Village Guy . At the risk of claiming the prize for most dense newbie, I still don't understand cloned cameras and their relationship to dual stream. Here's my question:

My goal is to use the substream at SD resolutions for AI detection (and triggers) as well as recording the SD stream 24/7. I only want to keep HD clips that are triggered by Deepstack. I've seen the videos on this, but what is confusing is it's hard to tell how much is still relevant with the current BIv5. Do I want/need to create both HD and SD cameras? Is that "cloning" or something else entirely? Conversely, can a single logical BI camera use both streams to send the image to AItools->Deepstack and then be triggered by the AI engine?

One of the videos that used a single camera definition seems to suggest the latter is possible, but he wasn't keeping the SD stream, so that might be a dealbreaker?

Thanks for any help you can send my way. I promise to improve my self-sufficiency in time.
 
Hi!

AI Tool is a wonderful tool! I started evaluating it just a couple of days ago. Thank you for sharing it.

I would like to make a modification proposal. Taking my personal case as an example, I would need each Telegram chat ID to be per camera, rather than a single global chat ID. In my case, this is because certain cameras are meant to be seen by some users, and others by a different group of people, who do not necessarily share the same cameras.

Or maybe someone has a different idea of how to accomplish this?

I'll say it again: the tool is excellent, and the tutorial to put it into operation is very well explained. Thanks a lot.
 
My goal is to use the substream at SD resolutions for AI detection (and triggers) as well as recording the SD stream 24/7. I only want to keep HD clips that are triggered by Deepstack. I've seen the videos on this, but what is confusing is it's hard to tell how much is still relevant with the current BIv5. Do I want/need to create both HD and SD cameras? Is that "cloning" or something else entirely? Conversely, can a single logical BI camera use both streams to send the image to AItools->Deepstack and then be triggered by the AI engine?

So, obviously, I struggled with this a lot (maybe I still do!), but my current understanding is this:
  • Dual streams: when setting up your cameras IN Blue Iris, you define a second stream (the SD substream you defined IN the camera's own settings, e.g. Amcrest). I believe the attached screenshot from this link, danecreekphotography/node-deepstackai-trigger, shows this. This, I believe, lets BI show the substream in the Blue Iris interface at SD resolution (when that particular camera is not focused or in a large format), where HD resolution would be a waste of resources (since it's encoding on the fly, and HD would be pointless in a small, say 2" x 1", box in the Blue Iris interface).
  • Cloned cameras: when you add TWO "different" cameras to BI. I put "different" in quotes because it's the same camera, just the HD stream and its equivalent SD substream (again, set IN the camera's own settings), but you add both the HD stream AND the SD substream as their own cameras in BI (i.e. if you view all cameras in BI, there are two cameras showing the same picture, one HD and one SD).
You can do BOTH: set your HD camera with the dual-stream setting in the screenshot, and add the cloned camera with the SD substream. This is the method that The Hookup video and the above link both use, but only the link makes reference to dual streams. I hope this helps and that I didn't add more confusion (and I really hope I'm not wrong, or it's back to the drawing board for me).
 

Attachments

  • hd_ip_network_camera_configuration.png
[03.12.2020, 06:25:48.577]: ERROR: Can't write to cameras/history.csv!


Any idea why I'm getting this error?


I am FAR from an expert, so this might be less than helpful, but it seems like a permissions issue? I also know the latest AI Tool doesn't use the cameras folder any more, so which version of AI Tool are you using?