[tool] [tutorial] Free AI Person Detection for Blue Iris

Yeah, the more I read, the more I worry about the maturity. I know it's free and all, but perhaps the TensorFlow AI is a bit more mature at this stage. I should have held off on the NCS2, but lesson learnt.

It might just be that things are in transition for DeepStack. They have discontinued their Premium (paid) plan and they are working to release their code as open source. The Alpha Pi version was released in August of 2019 and it doesn't seem that much has been done since then, but it's not their core product. They have likely been putting more effort into their primary Docker and Windows versions. I'm hopeful and optimistic that after the transition that they will work on the Jetson version. I'm not expecting to see a dependable Jetson version until the end of the year, but if it comes sooner then I'll be happy.

I run the Windows version (no Docker) on my Blue Iris computer that has an i7-4770 which dates back to 2013 and DS returns results in around 700ms. I also run DS on my personal computer which has an i7-4790 from 2014 which also returns results in around 700ms. I have AI Tool set to use the 4790 first and if the 4790 is busy then the 4770 is used. Three of my cameras cover overlapping areas so it's possible to trigger all three cameras at once and I don't have a problem with the two DeepStacks keeping up. I still get notifications pretty quickly on my phone.
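
For anyone curious how that "use the 4790 first, fall back to the 4770 if it's busy" idea looks in practice, here is a rough sketch (not AI Tool's actual code; the IP addresses and image path are made up, but /v1/vision/detection is DeepStack's standard detection endpoint):

Code:
# Rough failover sketch (illustrative only, not AI Tool's implementation).
# Try the primary DeepStack server first; if it times out or errors,
# send the same snapshot to the secondary server.
import requests

SERVERS = ["http://192.168.1.10:5000", "http://192.168.1.11:5000"]  # made-up addresses

def detect(image_path, timeout=2.0):
    with open(image_path, "rb") as f:
        image_data = f.read()
    for base in SERVERS:
        try:
            r = requests.post(f"{base}/v1/vision/detection",
                              files={"image": image_data}, timeout=timeout)
            r.raise_for_status()
            return r.json().get("predictions", [])
        except requests.RequestException:
            continue  # busy or unreachable -> try the next server
    return []

for p in detect("snapshot.jpg"):  # made-up file name
    print(p["label"], round(p["confidence"] * 100, 1))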

I understand that you currently have a Celeron system that's not working for you and you probably don't want to wait for the Pi or Jetson versions to be stable. So if you don't have another computer on your network that can pitch in to handle DeepStack processing then maybe your best route is to pick up a used computer. I don't know the computer prices in your area but in the US you can get a refurbished i7 computer similar to what I have with 16GB of RAM, a SSD, and Windows 10 Pro on Amazon for around $300. Much more than a Pi or a Jetson but it will do the job with the current version of DeepStack. If you look locally you might be able to find a similar system for much less.

I run a computer shop in a small town. We install and maintain networks of around 50 computers or less, and we do computer repair. It's common for customers to not want to repair a computer that's a few years old even if it just needs a software reload. They buy a new computer, we transfer their data, and then they leave their old computer behind for recycling. After erasing the drive I often give the computers away to needy people that want to reload Windows themselves, or I spend the time to run diagnostics and install a fresh copy of Windows and sell them for $200 to cover my labor and the fact that I have to give some kind of warranty and support.

If you want a bargain system then perhaps a computer shop in your area can help. If a computer is left behind that doesn't have a Win 10 entitlement then I don't sell it. I either give it away or take it to the recycler. So if you are willing to run Linux and Docker then you have a better chance of finding a cheap or free used computer.
 
I know a few of you are running DeepStack on a Raspberry Pi, any feedback on reliability? My Celeron NUC isn't cutting it. Running Docker on Windows 10, it does a 720p image in about 5 seconds. In the VM I originally had, it took 20-plus seconds. I see some feedback about the Pi and NCS2 running in under a second, so I'm looking for some feedback before I go down that route. It appears it's still in alpha.
Still in alpha but it works very well! I've got a Pi 4 & NCS2.
The beta should be delivered in September.
 
Hey, I want to use sub streams to save CPU.
Do I first need to delete my clone cam? Configure my main cam with a sub stream and create a new AI clone cam?

Do clone cams work with sub streams?
 
Hey, I want to use sub streams to save CPU.
Do I first need to delete my clone cam? Configure my main cam with a sub stream and create a new AI clone cam?

Do clone cams work with sub streams?

You can use clones with sub streams, but an issue I found is that the snapshot captured comes from the sub stream, so it's lower res, which for me caused issues with reliable recognition of objects in DQ. I've submitted a request to BI for an option to take the snapshots from the main stream when using sub streams; I think a few others may have also submitted the same request, so I'm hoping it will be implemented soon.

Right now my clone cams for AI aren't really clones as far as BI is concerned, since I've removed the sub stream to get the higher-res snapshots.
 
I run the Windows version (no Docker) on my Blue Iris computer that has an i7-4770 which dates back to 2013 and DS returns results in around 700ms

Those are some good times, and thanks for the advice. I have been offered an i5 NUC for a good price, hence my indecision.

Still in alpha but it works very well! I've got a Pi 4 & NCS2.

Good to hear, thanks. I'll have a look at how to get it all set up. Planning to run it headless.
 
It might just be that things are in transition for DeepStack. They have discontinued their Premium (paid) plan and they are working to release their code as open source. The Alpha Pi version was released in August of 2019 and it doesn't seem that much has been done since then, but it's not their core product. They have likely been putting more effort into their primary Docker and Windows versions. I'm hopeful and optimistic that after the transition that they will work on the Jetson version. I'm not expecting to see a dependable Jetson version until the end of the year, but if it comes sooner then I'll be happy.

I run the Windows version (no Docker) on my Blue Iris computer that has an i7-4770 which dates back to 2013 and DS returns results in around 700ms. I also run DS on my personal computer which has an i7-4790 from 2014 which also returns results in around 700ms. I have AI Tool set to use the 4790 first and if the 4790 is busy then the 4770 is used. Three of my cameras cover overlapping areas so it's possible to trigger all three cameras at once and I don't have a problem with the two DeepStacks keeping up. I still get notifications pretty quickly on my phone.

I understand that you currently have a Celeron system that's not working for you and you probably don't want to wait for the Pi or Jetson versions to be stable. So if you don't have another computer on your network that can pitch in to handle DeepStack processing then maybe your best route is to pick up a used computer. I don't know the computer prices in your area but in the US you can get a refurbished i7 computer similar to what I have with 16GB of RAM, a SSD, and Windows 10 Pro on Amazon for around $300. Much more than a Pi or a Jetson but it will do the job with the current version of DeepStack. If you look locally you might be able to find a similar system for much less.

I run a computer shop in a small town. We install and maintain networks of around 50 computers or less, and we do computer repair. It's common for customers to not want to repair a computer that's a few years old even if it just needs a software reload. They buy a new computer, we transfer their data, and then they leave their old computer behind for recycling. After erasing the drive I often give the computers away to needy people that want to reload Windows themselves, or I spend the time to run diagnostics and install a fresh copy of Windows and sell them for $200 to cover my labor and the fact that I have to give some kind of warranty and support.

If you want a bargain system then perhaps a computer shop in your area can help. If a computer is left behind that doesn't have a Win 10 entitlement then I don't sell it. I either give it away or take it to the recycler. So if you are willing to run Linux and Docker then you have a better chance of finding a cheap or free used computer.
How are you load balancing the two DeepStack servers? Are you using the VorlonCD fork to do this?
 
I am as well, and likewise, though I am getting some duplications, so once that's sorted it'll be even better.

I've had issues with duplicates being sent to DS since moving to the VorlonCD fork, before it supported multiple DS servers. At one time all images were being sent to DS two or three times. Chris Dodge suggested removing the default input path and only having the input path in the camera settings. After making the change everything worked correctly. However, some time after that while messing around with settings the duplicates came back. The way I worked around it was to remove the input path from each camera, save the settings, then enter the input path back into each camera, save the settings, close AI Tool and then reopen it.

After upgrading to the VorlonCD version that supports multiple DS servers I noticed some random duplicates so I went through my regular routine to fix it. I didn't monitor much after that so I don't know if I'm still getting random duplicates. I noticed in the log that the random duplicates were happening when both DS servers were being used during heavy load.
 
An update to my VorlonCD mod:

  • Adds ability to copy alert images to any folder
  • Can run any external script on alert
  • Can play sounds based on object detected
  • Allows you to graphically create a masked area similar to how BI does it (see the mask-check sketch after this list)
  • Maybe fix dupe image issues sigh
  • Misc bugs fixed AND added
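
For anyone wondering how a graphical mask can gate detections, here's a minimal sketch of the general idea (my assumption about the technique, not the mod's actual code; the mask file name and coordinates are made up):

Code:
# Minimal mask-check sketch (illustrative; not the mod's implementation).
# Paint the area to ignore white in a mask image the same size as the
# camera snapshot, then treat a detection as masked if its centre pixel
# falls inside the white area.
from PIL import Image

mask = Image.open("driveway_mask.png").convert("L")  # made-up mask file

def is_masked(det):
    # det uses DeepStack-style pixel coordinates: x_min/x_max/y_min/y_max
    cx = (det["x_min"] + det["x_max"]) // 2
    cy = (det["y_min"] + det["y_max"]) // 2
    return mask.getpixel((cx, cy)) > 127  # white = ignore

print(is_masked({"x_min": 100, "x_max": 200, "y_min": 50, "y_max": 300}))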

 

For comparison, with my i7-4790 running Windows DeepStack in High mode I get times of 660 to 707ms when feeding it images of 2688 x 1520, 2048 x 1536, and 1920 x 1080.

I have a Celeron J1900 and an old i5-4460 system lying around and this coming weekend I plan to put fresh copies of Windows on them and check DS times, just out of curiosity and to see how they do against the Pi. The Celeron is a dog and I don't expect it to do well, but I'm curious to see how the old i5 does.
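
If anyone wants to run the same comparison on their own hardware, a simple timing loop like this works (assumptions: DeepStack listening on localhost:5000 and a local test image; this is just a rough sketch, not an official benchmark):

Code:
# Quick-and-dirty DeepStack timing loop (assumes DS on localhost:5000).
import time
import requests

URL = "http://127.0.0.1:5000/v1/vision/detection"  # adjust to your DS server

def average_ms(image_path, runs=10):
    with open(image_path, "rb") as f:
        image_data = f.read()
    times = []
    for _ in range(runs):
        start = time.perf_counter()
        r = requests.post(URL, files={"image": image_data})
        r.raise_for_status()
        times.append((time.perf_counter() - start) * 1000)
    return sum(times) / len(times)

print(f"average: {average_ms('sample_2688x1520.jpg'):.0f} ms")  # made-up file name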
 
An update to my VorlonCD mod:
  • Maybe fix dupe image issues sigh

Yes, but we all appreciate it! :-)

We would not have this wonderful tool if not for GentlePumpkin, and it wouldn't be so awesome if not for your efforts as well as the work of others such as @classObject.

I had done all that I could to reduce false alerts but they were still annoying me. I used multiple cameras with 6mm lenses set up to view only important areas instead of wide angle lenses that capture trees and bushes that blow in the wind, along with AB>C unidirectional zone crossing and optimization of motion detection settings. But I was still getting false alerts from ground shadows as trees blew in the wind, from the movement of clouds overhead on sunny days, and a lot of false alerts at night due to bugs flying around the camera.

Most of the false alerts have been eliminated with AI. I rarely ever get a false alert from bugs flying around the camera at night, but one time AI thought that a wasp on the lens was a bear. It also detected a small bush in my yard as a person, but the bush needed trimming anyway. Since the trim there have been no false detections.

I believe that some of the previous work that I had done to eliminate false alerts, such as using narrower lenses than normal, placing the cameras closer to the subject, mounting the cameras relatively low, and using auxiliary IR illuminators instead of the built-in IR LEDs where appropriate, has helped with my AI success. AI acts on what it sees so feeding it a better image can lead to better results. AI isn't an instant fix for a sloppy camera setup.

Again, I would like to express my appreciation for all the work that has been put into AI Tool and for making my life a little easier! :-)
 
For comparison, with my i7-4790 running Windows DeepStack in High mode I get times of 660 to 707ms when feeding it images of 2688 x 1520, 2048 x 1536, and 1920 x 1080.

I have a Celeron J1900 and an old i5-4460 system lying around and this coming weekend I plan to put fresh copies of Windows on them and check DS times, just out of curiosity and to see how they do against the Pi. The Celeron is a dog and I don't expect it to do well, but I'm curious to see how the old i5 does.

My i7 and i9 were getting times like that for 1080p images, but for some reason they are now about 1 sec. I changed to the beta version of DeepStack and run it in Docker Desktop on Windows.

My interest in the Pi is for power consumption reasons. I'm trying to be self-sufficient, which is hard during the winter months.
 
Yes, but we all appreciate it! :)

We would not have this wonderful tool if not for GentlePumpkin, and it wouldn't be so awesome if not for your efforts as well as the work of others such as @classObject.

I had done all that I could to reduce false alerts but they were still annoying me. I used multiple cameras with 6mm lenses set up to view only important areas instead of wide angle lenses that capture trees and bushes that blow in the wind, along with AB>C unidirectional zone crossing and optimization of motion detection settings. But I was still getting false alerts from ground shadows as trees blew in the wind, from the movement of clouds overhead on sunny days, and a lot of false alerts at night due to bugs flying around the camera.

Most of the false alerts have been eliminated with AI. I rarely ever get a false alert from bugs flying around the camera at night, but one time AI thought that a wasp on the lens was a bear. It also detected a small bush in my yard as a person, but the bush needed trimming anyway. Since the trim there have been no false detections.

I believe that some of the previous work that I had done to eliminate false alerts, such as using narrower lenses than normal, placing the cameras closer to the subject, mounting the cameras relatively low, and using auxiliary IR illuminators instead of the built-in IR LEDs where appropriate, has helped with my AI success. AI acts on what it sees so feeding it a better image can lead to better results. AI isn't an instant fix for a sloppy camera setup.

Again, I would like to express my appreciation for all the work that has been put into AI Tool and for making my life a little easier! :)


Ditto and thanks for the dynamic to static mask ability.
 
Hmm, I get no Telegram message. The bot ID and message settings are the same as in AI Tool, but in the new VorlonCD mod 1.72 I get no message. Nothing in the log. I have activated Telegram alerts on my cams.


Edit: I see I'm getting the wrong flags... maybe the trigger URL changed?
AI Tool shows 1x cat (78%, irrelevant), but in BI I get the flag: person 99,21%.

Another time AI Tool shows irrelevant, 3x cars at 99-100%, and I get a trigger to BI with "summary".
AI Tool says irrelevant (not green) but the trigger URLs are sent... but no Telegram message.
My trigger URLs (each on a new line):


LOG:
Code:
[09:47:23.308]:           OnCreatedAsync> Adding new image to queue: C:\BlueIris\aiinput\aigarage2.20200904_094723291.jpg
[09:47:30.569]:            DetectObjects> 192.168.2.1:280 - (2/6) Posted in 10797ms, Received a 123 byte response.
[09:47:30.571]:            DetectObjects> 192.168.2.1:280 - (3/6) Processing results...
[09:47:30.573]:            DetectObjects> 192.168.2.1:280 -    Detected objects:cat (78,33%),
[09:47:30.574]:            DetectObjects> 192.168.2.1:280 - (4/6) Checking if detected object is relevant and within confidence limits:
[09:47:30.627]:            DetectObjects> 192.168.2.1:280 -    cat (78,33%) is irrelevant.
[09:47:30.630]:    CleanUpExpiredHistory> Removing expired history: key=2420546, name=person, xmin=1953, ymin=1192, xmax=2321, ymax=1435, counter=0, create date: 04.09.2020 08:16:00 for camera garage2 which existed for 31 minutes.
[09:47:30.629]:            DetectObjects> 192.168.2.1:280 - ### Masked objects summary for camera garage2 ###
[09:47:30.632]:            DetectObjects> 192.168.2.1:280 - (5/6) Performing alert CANCEL actions:
[09:47:30.673]:          CallTriggerURLs>    -> trigger URL called: http://192.168.2.5:81/admin?camera=garage2&trigger&user=iobroker&pw=123, response: 'signal=green profile=1 lock=0 clip=448933051 camera=garage2 '
[09:47:31.029]:          CallTriggerURLs>    -> trigger URL called: http://192.168.2.5:81/admin?camera=garage2&flagalert=1&trigger&memo=person%20(99,21%25)&user=iobroker&pw=123, response: 'signal=green profile=1 lock=0 clip=448933051 camera=garage2 '
[09:47:31.118]:                     Save> Settings saved to C:\Users\cam\Desktop\aitool\AITool.Settings.json
[09:47:31.118]:            DetectObjects> 192.168.2.1:280 - (6/6) Camera garage2 caused an irrelevant alert.
[09:47:31.121]:            DetectObjects> 192.168.2.1:280 - 1x irrelevant, so it's an irrelevant alert.
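
For reference, the memo parameter in the flagalert trigger URL in the log above is just the percent-encoded detection text, which is why "person (99,21%)" shows up as person%20(99,21%25). A hypothetical helper to build such a URL (host, camera name, and credentials copied from the log; the function itself is made up for illustration):

Code:
# Hypothetical helper for assembling a BI flagalert trigger URL like the
# one in the log above; only the space and "%" need percent-encoding here.
from urllib.parse import quote

def build_trigger_url(host, camera, memo, user, pw):
    return (f"http://{host}/admin?camera={camera}&flagalert=1&trigger"
            f"&memo={quote(memo, safe='(),')}&user={user}&pw={pw}")

print(build_trigger_url("192.168.2.5:81", "garage2", "person (99,21%)", "iobroker", "123"))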
 