[tool] [tutorial] Free AI Person Detection for Blue Iris

Is there any way to add text to the Telegram alerts and not just the image? I.e. what was detected, what time, what confidence level, etc.?

Also, is there any way to modify the trigger URL so that it 'flags' the alert? Furthermore, it would be great if you could add a description in the alerts panel, where it normally says "Motion A" or "External", that includes the relevant triggers, i.e. "Person, cat", or "Dog", or whatever is relevant based on the settings.

Thoughts?
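(For anyone who wants to experiment before this lands in the tool: the Telegram Bot API's sendPhoto method accepts a caption alongside the image, so detection text could be attached there. Below is a minimal Python sketch; the bot token, chat ID and detection values are placeholders, not anything AI Tool produces today.)

import requests

# Minimal sketch: send an alert image to Telegram with a caption describing
# the detection. BOT_TOKEN and CHAT_ID are placeholders for your own bot.
BOT_TOKEN = "123456:REPLACE_ME"   # token from @BotFather
CHAT_ID = "987654321"

def send_alert(image_path, label, confidence, timestamp):
    caption = f"{label} detected at {timestamp} ({confidence:.0%} confidence)"
    with open(image_path, "rb") as photo:
        requests.post(
            f"https://api.telegram.org/bot{BOT_TOKEN}/sendPhoto",
            data={"chat_id": CHAT_ID, "caption": caption},
            files={"photo": photo},
        )

send_alert("FCSD.20200523_014031937.jpg", "person", 0.92, "01:40:33")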
 
Any help is greatly appreciated. I'm getting the same error as many others below. I have installed all the C++ packages that were suggested, restarted the PC, used the noavx version, etc. I have DeepStack installed via Portainer in Home Assistant, and I also tried installing the Windows version (which gave the "could not find server" error). Thanks in advance for any help.

[23.05.2020, 01:40:33.028]: Starting analysis of X:\AI Input/FCSD.20200523_014031937.jpg
[23.05.2020, 01:40:33.081]: (1/6) Uploading image to DeepQuestAI Server
[23.05.2020, 01:40:33.188]: (2/6) Waiting for results
[23.05.2020, 01:40:33.341]: (3/6) Processing results:
[23.05.2020, 01:40:33.347]: System.NullReferenceException | Object reference not set to an instance of an object. (code: -2147467261 )
[23.05.2020, 01:40:33.353]: ERROR: Processing the following image 'X:\AI Input/FCSD.20200523_014031937.jpg' failed. Failure in AI Tool processing the image.
 

When you set up the DeepStack docker, did you add "-e VISION-DETECTION=True" to the docker command when you created it? I had similar problems until I did that.
 
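(For reference, the flag goes in the container's creation command. The usual invocation from the DeepStack docs of that era looked roughly like docker run -e VISION-DETECTION=True -v localstorage:/datastore -p 80:5000 deepquestai/deepstack, with the volume and port mapping being the standard examples rather than requirements.)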

Yep, added that to the Environment tab.

I am running DeepStack and the AI Tool on a different PC than the one running Blue Iris. Would that be an issue? I tested that the triggers work, and I have access to the stills that are being analyzed, so I don't think that's the problem; I just figured I would mention it. I'm also seeing the requests come into DeepStack, so I don't think that is the issue.

[GIN] 2020/05/23 - 19:50:24 | 403 | 64.764µs | 192.168.XX.XX | POST /v1/vision/detection

[GIN] 2020/05/23 - 19:50:27 | 403 | 90.153µs | 192.168.XX.XX | POST /v1/vision/detection

[GIN] 2020/05/23 - 19:52:15 | 403 | 141.876µs | 192.168.XX.XX | POST /v1/vision/detection

[GIN] 2020/05/23 - 19:52:17 | 403 | 91.782µs | 192.168.XX.XX | POST /v1/vision/detection

[GIN] 2020/05/23 - 19:52:20 | 403 | 137.769µs | 192.168.XX.XX | POST /v1/vision/detection

[GIN] 2020/05/23 - 19:52:22 | 403 | 147.11µs | 192.168.XX.XX | POST /v1/vision/detection

[GIN] 2020/05/23 - 19:52:25 | 403 | 1.763197ms | 192.168.XX.XX | POST /v1/vision/detection

[GIN] 2020/05/23 - 19:52:27 | 403 | 675.513µs | 192.168.XX.XX | POST /v1/vision/detection

[GIN] 2020/05/23 - 19:54:54 | 403 | 64.74µs | 192.168.XX.XX | POST /v1/vision/detection
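(A 403 here means DeepStack is reachable but rejecting the request, which would be consistent with the detection endpoint not being activated. One way to check outside of AI Tool is to post a test image directly; a minimal Python sketch, with the server address and image path as placeholders:)

import requests

# Minimal smoke test against the DeepStack detection endpoint.
# Replace the host/port and image path with your own.
with open("snapshot.jpg", "rb") as img:
    resp = requests.post(
        "http://192.168.0.10:80/v1/vision/detection",
        files={"image": img},
    )
print(resp.status_code)
print(resp.text)
# A working server returns 200 with JSON along the lines of:
# {"success": true, "predictions": [{"label": "person", "confidence": 0.94, ...}]}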



 
Right. I started up an Ubuntu (19.10) VM. Then I used Google's "Get started with the USB Accelerator" guide to install the dependencies needed for Linux. I then downloaded the MobileNet SSD v2 (COCO) object detection Edge TPU model from the Models | Coral page. Then I installed coral-pi-rest-server on the server. This is basically the DeepStack equivalent, and you point the AI Tool to it instead of the DeepStack program. It will run the COCO object detection locally instead of sending off the images. Of course, you could also build your own object detection model to make the processing even better.
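(If anyone wants to replicate this: the post above implies coral-pi-rest-server exposes a DeepStack-compatible API, so in AI Tool you would simply change the server address to point at the VM, e.g. http://192.168.0.20:5000. The address and port here are placeholders; whatever the server reports on startup is authoritative.)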
Thanks a lot. I've ordered the USB Accelerator. For some reason I could not get it delivered in France, where I live, so I had to get it delivered to my father in Norway. It arrived yesterday, and he will mail it to me next week. Now I'm wondering if in the meantime I should spend the time and test TensorFlow Lite without the Edge TPU.

I'm guessing we'll soon see cameras with e.g. Edge TPUs built in. I'm also thinking it might benefit some of us (like myself) to improve the model by training on actual photos taken by our respective Blue Iris installations (especially photos at night). We live in exciting times. Thanks again for the feedback.
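(Testing without the Edge TPU is quite doable: the CPU variants of the same SSD models run under tflite_runtime. A minimal sketch, assuming the CPU version of the MobileNet SSD v2 (COCO) model saved as detect.tflite, and the standard SSD output tensor order of boxes/classes/scores; the file names are placeholders:)

import numpy as np
from PIL import Image
from tflite_runtime.interpreter import Interpreter

# Load the CPU (non-Edge-TPU) model and prepare its tensors.
interpreter = Interpreter(model_path="detect.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# SSD models take a fixed-size uint8 image, e.g. 300x300 for MobileNet SSD v2.
_, height, width, _ = input_details[0]["shape"]
img = Image.open("snapshot.jpg").convert("RGB").resize((width, height))
interpreter.set_tensor(input_details[0]["index"],
                       np.expand_dims(np.asarray(img, dtype=np.uint8), 0))
interpreter.invoke()

# Assumed SSD output order: boxes, class indices, scores.
boxes = interpreter.get_tensor(output_details[0]["index"])[0]
classes = interpreter.get_tensor(output_details[1]["index"])[0]
scores = interpreter.get_tensor(output_details[2]["index"])[0]
for box, cls, score in zip(boxes, classes, scores):
    if score > 0.5:
        print(int(cls), round(float(score), 2), box)  # COCO class, confidence, [ymin, xmin, ymax, xmax]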
 

I'm having the same issue. It's a problem with the tool; when you run it in debugging mode via Visual Studio it spits out errors in some of the code.

This is the first error:

I then commented out that bit of code, since logging isn't critical. I then got this error:

I'm not familiar with this language, so I'm unsure how to fix it. Hope the dev can assist.
 
Hi all,
I have been running this tool for about a month without any problem, but I added detection for dogs (now that I have a dog :)) and it's not identifying it as a dog (or a cat). I attached two images. I know that the dog is small, but even with dogs or cats selected, it doesn't detect them. Do you know what I could configure to make this work? It's strange, because it does detect it as an object. Regards

Yeah, it doesn't work overly well with things apart from humans and vehicles. I currently have no insight into DeepStack's internals, so...
 
I am not trying to monitor the road in the back; that's my issue. I am trying to mask the entire area in the back, but as shown in my pictures, when I mask the back, the front door area also gets masked, so the person at the door is cut off at the top because I am trying to block the street behind. Am I doing something wrong with the mask?

As per the pic, you can see that I have masked the whole street and sidewalk in the back. But now, if you notice, my front porch is masked as well. So if a person comes to the door, the top half of them will be cut off. How would I address this?

Thank you,
Sorry, I didn't want to attack you. I think kumar2020's advice about motion zones is indeed the best solution to that issue.
 

Regarding the Visual Studio errors above: my best bet would be that DeepStack returns a wrong answer, maybe because the detection API isn't enabled? Strange.
 
Could you please add more to the filter, not just "relevant" or "irrelevant"? I want to filter by what was detected, e.g. human or cats, etc.

Thanks.
 
Is there a way to pass the name of the image file that resulted in a positive identification to the trigger URL? A variable name, perhaps?
 
@GentlePumpkin sorry if this has been asked/explored; I was just told about your app today, so I have only read the first post. Have you contacted the BI team to see if there is a way to auto-create the duplicate cameras, set the appropriate settings on them, and hide them from the GUI? It would be great to have the AI be seamless and not have to manage multiple cams that are not 'directly used'. Just a thought. Thx!
 
@GentlePumpkin with the latest version of BI, Ken has made a change which allows a new URL trigger variable.

The new variable is &trigger&memo=text, which allows up to 35 characters to be stored. This is great because it allows you to specify keywords, i.e. "person" or "dog", or the actual relevant triggers.

Therefore, would it be possible to add some variables to the program, so we can do %trigger% or similar, which would let us store the exact relevant trigger word? Great for high-level overviews!

Thoughts?
 
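(To make the idea concrete, a trigger call using the new memo parameter might look like the sketch below. The &trigger&memo=text syntax is from the post above; the host, port, camera short name and credentials are placeholders, and the detected-objects string stands in for the hypothetical %trigger% substitution:)

import urllib.parse
import urllib.request

detected = "person, dog"  # stand-in for whatever AI Tool found relevant

# Blue Iris admin trigger URL; BI stores at most 35 characters in the memo.
url = ("http://192.168.0.5:81/admin?camera=FCSD"
       "&trigger&memo=" + urllib.parse.quote(detected[:35])
       + "&user=USERNAME&pw=PASSWORD")
urllib.request.urlopen(url)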
An update is in testing which includes the ability to flag alerts and to add the detected object to the trigger URL. I'm still testing stability. The source code is already on GitHub.
 
First, thanks for this; the potential is enormous.

My motion detection "needs" are mostly about wildlife. I started by attempting to trigger on my dogs, but it does not seem to pick them up. For instance, in the two attached images it picks up the human but not the dogs. My guess is that they are too far away? I have every possible animal selected in my settings; the only thing deselected is "boats", which is good because it occasionally thinks my greenhouse is a boat. :)

Any advice on this? I sure would love to make this work....

Thanks!

(Attachments: AI1.PNG, AI2.PNG)