[tool] [tutorial] Free AI Person Detection for Blue Iris

Tonight I thought I would install DeepQuestAI on another computer and have AI Tool point to this other computer. I entered the IP address in AI Tool, but the error I kept getting was “can’t reach localhost:81”.

I am running DeepQuestAI on a Win10 machine.

Has anyone else tried to run DeepQuestAI on another computer (not the BI computer)?
I think most people run it on another computer (or another VM, in my case). I would guess the vast majority of installs are Docker containers on some Linux variant, but even if you are in the minority running Docker on Windows, the DeepStack container would have its own IP address, not the host's.

EDIT: You are probably asking about running the Windows (non-Docker) version on a separate Windows machine. I could see the DeepStack people having a bug where they assumed any native Windows install would only be used by something on that same machine and hard-coded "localhost". The Windows version came out a year ago, though, so that would probably have been fixed by now.
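EDIT 2: A quick way to take AI Tool out of the equation is to hit the DeepStack endpoint directly from the BI machine. A minimal sketch, assuming DeepStack is exposed on port 80 as in the docker example later in this thread (substitute your server's IP and any test jpg):

Code:
# POST a test image straight to the detection endpoint;
# if the server is reachable you get back a JSON list of detected objects
curl -X POST -F image=@test.jpg http://192.168.86.x:80/v1/vision/detection

If that works but AI Tool still complains about localhost, the problem is in the AI Tool configuration rather than the network.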
 
Mostly it missed me completely (the triggers were cancelled as false alerts), although there were a couple where it identified me as a dog (40%). I had only set up my front door camera (DS-2CD2342WD-I) before, which is mounted above and to the left of the porch, so I guess there is not much contrast in the night view between people and the ground. I just now set up my garage camera (DS-2CD2343G0-I) with DeepStack and it seems to work great even in night mode. The IR image is much better on the garage camera, I think, and its substream feed is 640x480 instead of the front door's 640x360.
I'll probably keep the front door on the motion-sensor profile at night, but hopefully my other cameras can be full-time DeepStack triggered.

Last night I changed my iaxxxxxx cameras to save a jpg every second and to reset the trigger after 3 seconds. My theory was that if DeepStack got several pictures of each trigger, there was a higher chance that it got at least one right: if a single frame is recognized with probability p, at least one of n frames is recognized with probability 1 - (1 - p)^n, so at 50% per frame, three frames already give 87.5%. That worked out roughly as I hoped. I did a walk around the house at 23:30 and got at least one trigger on each camera. Of course this only works if the DeepStack server can keep up. For that test I used the latest Windows version on the i7. The log looks like this:

Code:
[14.05.2020, 23:21:04.815]: Starting analysis of C:\BlueIris\aiinput/aiporte.20200514_232104782.jpg
[14.05.2020, 23:21:04.815]: System.IO.IOException | The process cannot access the file 'C:\BlueIris\aiinput\aiporte.20200514_232104782.jpg' because it is being used by another process. (code: -2147024864 )
[14.05.2020, 23:21:04.815]: Could not access file - will retry after 10 ms delay
[14.05.2020, 23:21:04.829]: Retrying image processing - retry  1
[14.05.2020, 23:21:04.829]: System.IO.IOException | The process cannot access the file 'C:\BlueIris\aiinput\aiporte.20200514_232104782.jpg' because it is being used by another process. (code: -2147024864 )
[14.05.2020, 23:21:04.829]: Could not access file - will retry after 20 ms delay
[14.05.2020, 23:21:04.851]: Retrying image processing - retry  2
[14.05.2020, 23:21:04.857]: System.IO.IOException | The process cannot access the file 'C:\BlueIris\aiinput\aiporte.20200514_232104782.jpg' because it is being used by another process. (code: -2147024864 )
[14.05.2020, 23:21:04.865]: Could not access file - will retry after 30 ms delay
[14.05.2020, 23:21:04.901]: Retrying image processing - retry  3
[14.05.2020, 23:21:04.901]: (1/6) Uploading image to DeepQuestAI Server
[14.05.2020, 23:21:05.592]: (2/6) Waiting for results
[14.05.2020, 23:21:05.592]: (3/6) Processing results:
[14.05.2020, 23:21:05.592]:    Detected objects:person (92.58%),
[14.05.2020, 23:21:05.592]: (4/6) Checking if detected object is relevant and within confidence limits:
[14.05.2020, 23:21:05.594]:    person (92.58%):
[14.05.2020, 23:21:05.626]:       Checking if object is outside privacy mask of aiporte:
[14.05.2020, 23:21:05.626]:          Loading mask file...
[14.05.2020, 23:21:05.626]:      ->Camera has no mask, the object is OUTSIDE of the masked area.
[14.05.2020, 23:21:05.626]:    person (92.58%) confirmed.
[14.05.2020, 23:21:05.628]: (5/6) Performing alert actions:
[14.05.2020, 23:21:05.628]:    trigger url: http://192.168.86.153:81/admin?trigger&camera=porte&user=xxxxxxx&pw=xxxxxxxxx
[14.05.2020, 23:21:05.630]:    -> Trigger URL called.
[14.05.2020, 23:21:05.630]: (6/6) SUCCESS.
[14.05.2020, 23:21:05.632]: Adding detection to history list.

One question: does anyone else see the jpgs "being used by another process"? I get that 2 or 3 times on each trigger.
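Side note: that IOException code, -2147024864, is the Windows sharing-violation error, so Blue Iris is almost certainly still writing the jpg when AI Tool first opens it; the timed retries in the log are AI Tool coping with exactly that. A generic way to sidestep the race is to wait until the file size stops changing before processing. A minimal sketch of the idea (shell with GNU stat, just to show the pattern; AI Tool does the equivalent internally in C#):

Code:
# wait until the snapshot's size is stable across two reads,
# i.e. the writer (Blue Iris) has finished flushing it
f=/path/to/snapshot.jpg
while :; do
  s1=$(stat -c%s "$f"); sleep 0.05; s2=$(stat -c%s "$f")
  [ "$s1" = "$s2" ] && break
done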
 
I was seeing very poor detection performance at night, just like @morten67 above. I have my confidence limits set from 10% to 100%. I'd still say it misses over half my front door triggers, but it is almost perfect in daylight. Do you have any other pointers on getting night detection to work better, or are the contrast and overhead angle of my camera just bad for night detection?
I ended up setting up day and night profiles in Blue Iris for now. The day profile takes the screenshots that DeepStack processes, while after sunset the night profile falls back on my old motion-sensor trigger settings. This fixed most of my false alerts (shadows from trees), but I still get the occasional spiderweb in front of a camera at night causing it to record almost all night long.

Another thing that seems to improve response time from DeepStack is masking out zones that you don't want triggered:

BI_AI.PNG

To the right the camera has no mask. The log for that camera:

BI_AI2.PNG

First of all, AI Tool is checking the same jpg twice, with different results (!). I also see that DeepStack takes 2-3 times longer on jpgs without a mask. In addition, the masked ones are 2560x1920 and the unmasked one is 1920x1080 (I'm saving 100%-quality jpgs in Blue Iris for this testing).

Edit: I was too fast here. This log is from when DeepStack was running on another PC (I believe the noavx Docker image on the i5). Not sure how representative it is of my present setup.

Update: I finally got it running with AVX on the i5 and have set MODE to High. The i7 with the Windows version is about 50% faster:

Code:
morten@ubuntu2:~$ sudo docker run -e MODE=High -e VISION-DETECTION=True -v localstorage:/datastore -p 80:5000 deepquestai/deepstack
[sudo] password for morten:
/v1/vision/detection
---------------------------------------
v1/vision/addmodel
---------------------------------------
v1/vision/listmodels
---------------------------------------
v1/vision/deletemodel
---------------------------------------
---------------------------------------
v1/backup
---------------------------------------
v1/restore
[GIN] 2020/05/15 - 10:58:56 | 200 |  2.937931186s |  192.168.86.153 | POST     /v1/vision/detection
[GIN] 2020/05/15 - 10:58:58 | 200 |  1.166647037s |  192.168.86.153 | POST     /v1/vision/detection
[GIN] 2020/05/15 - 10:58:59 | 200 |  1.171422407s |  192.168.86.153 | POST     /v1/vision/detection
[GIN] 2020/05/15 - 10:59:00 | 200 |  1.173187622s |  192.168.86.153 | POST     /v1/vision/detection
[GIN] 2020/05/15 - 10:59:01 | 200 |  1.159681019s |  192.168.86.153 | POST     /v1/vision/detection
[GIN] 2020/05/15 - 10:59:03 | 200 |  1.164701584s |  192.168.86.153 | POST     /v1/vision/detection
[GIN] 2020/05/15 - 11:00:10 | 200 |  1.172835704s |  192.168.86.153 | POST     /v1/vision/detection
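For CPUs without AVX, the same command should work with the AVX-free build of the image instead; the exact tag name below is an assumption based on the noavx tag mentioned later in this thread:

Code:
# AVX-free build of the image, for older CPUs
sudo docker run -e MODE=High -e VISION-DETECTION=True -v localstorage:/datastore -p 80:5000 deepquestai/deepstack:noavx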
 



Same here (the "file is being used by another process" errors), before configuring a clone camera:
  • the clone is used only for motion capture and for writing pictures to the folder AI Tool scans
  • once AI Tool confirms a detection, it triggers the original camera, which does the recording and sends the mail

Check your BI setup and follow the AI Tool setup on page 1.
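If the chain isn't firing, you can also test the Blue Iris half by hand: calling the trigger URL yourself should make the recording camera fire. The URL format below is taken from the log earlier in the thread; substitute your own BI address, camera short name, and credentials:

Code:
# quote the URL so the shell doesn't treat the ampersands as job control
curl "http://192.168.86.153:81/admin?trigger&camera=porte&user=USER&pw=PASS"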
 
Is the front door camera running in day or IR mode? I've had a few false alerts on one of my cameras that runs in forced color mode in a low-light area, where DeepStack thought deer were a person. You may want to send a higher-resolution image to DeepStack for that camera. I noticed that when I was sending low-resolution images (e.g. 640x480), DeepStack missed people more frequently than when I sent native or higher-resolution images. This may not be the case for everyone, but in my testing the savings in CPU and response time from sending lower-resolution images instead of native-resolution ones were negligible.
 
Please see my post #338 in this thread, where I outlined the prerequisites for running DeepStack on Windows. One prerequisite is the Visual C++ redistributable. The other is that your processor HAS to support AVX for DeepStack to run on Windows. If the processor doesn't support AVX, you can run DeepStack in Docker on Ubuntu with the noavx tag.
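It is worth checking for AVX before installing anything. On Linux the flag shows up in /proc/cpuinfo; on Windows, a tool such as Sysinternals Coreinfo reports the same information:

Code:
# prints "AVX supported" if the CPU advertises the avx flag
grep -q -w avx /proc/cpuinfo && echo "AVX supported" || echo "no AVX support"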
 
I have it on Auto, so it is in IR mode. The front porch does have a porch light on a timer all night, so I could probably put that one on forced color mode and still get decent video. My other cameras aren't near lights so I need IR, but they seem to perform much better than the front porch cam. The low resolution comes from the new feature in Blue Iris 5 where you set up the substream for all your cameras and it uses that for motion detection and live views (but once triggered it records from the full-resolution stream). It dramatically cut my CPU usage when I switched my cameras over, from 30% down to 11%. I could switch just that camera back to full resolution if forcing color mode doesn't improve detection or doesn't give me acceptable night video.
 
I was unaware of this new feature in BI. When I did testing with lower resolution images I pulled the sub stream from the camera into a cloned camera in BI. Where is this feature located in BI?
 
The latest version of Blue Iris 5 added substream optimization. If you set up your camera with the main stream and also put in the URL for the substream, Blue Iris will use the substream for all the motion processing and live viewing. I still needed to clone those cameras to a hidden one to take the AI screenshots. For example, I have a camera named 'drivewayai' that only detects motion and takes the screenshots; DeepStack then triggers the 'driveway' camera to take the videos. If you do a straight clone of a camera with the same stream feeds and resolution and just change the trigger/recording settings, it doesn't use any extra CPU resources in Blue Iris.
Substream processing is definitely the top new feature in Blue Iris 5, and he has it at the top of the release notes for a reason. It is going to make it much easier to add all those 4K-and-up cameras without killing the CPU. Updates - Blue Iris Software
 
Thanks for the explanation. I have my cameras set up by cloning each camera and hiding the cloned AI cameras. The only things I changed on the clones were the overlay, record settings, etc., and then I hid them. I thought cloned cameras already didn't use additional CPU cycles and had only pulled one stream for some time now in BI; I was unaware that BI now uses the lower-resolution substream for motion detection and live view.
 
I am running AI Tool and DeepQuestAI on the same Windows machine (VM) as Blue Iris and I am getting the same error. They are both running with admin privileges. Any ideas what is going on?
Are you running VMware Workstation or ESXi? If you are running ESXi, it is because the AVX feature flag isn't passed to the VM from the host. VMware Workstation passes this flag natively.
 
I am running the VM in unRAID without an extra GPU, so there is no GPU passthrough. I was going to try to run this via CPU. I am picking up a Google Coral later today and was hoping to run the AI through that.
For information: I run DeepStack CPU in a Docker container in unRAID. It works fine.
Also in unRAID, I tested DeepStack GPU. It worked for a few minutes.
The last test I did was DeepStack GPU in an Ubuntu VM. It worked for 2 hours, then dropped the GPU session.
I really want to find a working solution for DeepStack GPU; it's faster and doesn't hammer the CPU every time it processes a picture.
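For anyone trying the same thing, the GPU image is normally started along these lines. The :gpu tag and the --gpus flag are assumptions here (check the DeepStack docs for your version), and --gpus requires the NVIDIA Container Toolkit on the host:

Code:
# GPU build of DeepStack; needs NVIDIA drivers + NVIDIA Container Toolkit
sudo docker run --gpus all -e MODE=High -e VISION-DETECTION=True -p 80:5000 deepquestai/deepstack:gpu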
 
I am not referring to the GPU being passed through from the host to the VM; I am talking about AVX from the host CPU being passed through. From my testing, ESXi doesn't natively pass the AVX feature through to the guest. I am not sure which hypervisor you're running.
 
Same here. I managed to run the GPU version for a couple of minutes on an i7 / NVIDIA / Ubuntu 20.04. Processing time was around 200-300 ms. On the same machine, the Windows version is around 750 ms.
 
Really wish I had noticed your post earlier in my tinkering with this. First I kept trying to run DeepStack on a machine alongside Blue Iris with an old i7 CPU that doesn't have AVX. Then I tested it on another Windows machine that didn't have the right Visual C++ redistributable. I tried it on my main desktop and it worked, which was confusing at first because that desktop has a slightly older CPU, but unbeknownst to me it had the right C++ runtime. After much gnashing of teeth I saw your posts.

I went back to getting DeepStack running on the second machine by loading the 2010 and 2015 Visual C++ redistributables. That didn't seem to work. Then came more frustrated installing of more Visual C++ redistributables, starting with the older ones first, since that was the only noticeable difference between the second machine and my desktop. Rebooting and testing after each year's x86 and x64 versions were installed (2005, 2008, on top of the 2010 and 2015-2019 I already had), I still had nothing working. Then I walked away for a bit to do something else, and came back to a working DeepStack.

For some reason, on my machines it has a long initial startup process. It will look like it has loaded the APIs, and I can reach it via a web browser, but when I send it pictures via AI Tool I get stonewalled and timed out unless I wait patiently. Then suddenly it decides to become unconstipated and work, reporting a horrible processing time that isn't even the full time since the first image was actually sent. After that it works as it should. If I stop DeepStack and start it back up, I have to let it go through that startup process and the initial slow image again. Odd, but it works. Hooray Windows and DeepStack.

Point is, thank you for posting about the C++ and AVX requirements in a bit of detail. It helped me get it all going.

With the DeepStack AI, is there a way to train it better or tweak its settings? I am sending it full-res, mostly 1080p jpgs. Most of the time it is great, but sometimes it is bad, and other times what it comes up with is funny. That cat is not a bird, that storage box in the garage is not a TV, and my truck is NOT a bear, no matter what it tries to tell you, DeepStack. I know the truck/bear one was in the rain, so I don't blame the AI for that. That said, I am still going to nickname the AI "Bobby" so I can yell "DAMMIT Bobby!" at it and mumble "that boy ain't right" while getting it set the way that works best for me. I am fine with a few false positives, since it has cut down the mass of false positives I needed to wade through just to catch the real instances of something going on. There are a few that it misses, though, including people and some animals in broad daylight, so any ideas on training or tweaking would be appreciated.

One type of camera DeepStack really seemed to struggle with is fisheye cameras. I have one inside and one on my doorbell. The doorbell camera does moderately well since its view is horizontal. The other fisheye is on a ceiling, and that perspective plus the fisheye distortion seemed to make DeepStack struggle to match people/cats/dogs. Is anyone using a fisheye successfully with DeepStack?
 

Attachments: Capture.JPG · whawhawaaaaa01.JPG · whawhawaaaaa02.JPG · 97405182_1281949598674984_6098437932039798784_n.png
Glad to hear that you were able to get DeepStack working on Windows; it can be a pain to get working initially. I had a similar experience when I first started running DeepStack on my BI machine, where it wouldn't accept images until some time had passed.

I had it pick up incorrect objects too, i.e. it thought that a cat was a dog, a bird was a cat, etc. I ended up setting AI Tool to detect only people, and I've had great success with that. I would like to detect dogs, but it was generating too many false positives. I also adjusted the confidence levels to better detect people rather than false alerts from cats, dogs, etc.
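Besides AI Tool's own confidence limits, you can experiment with the threshold on the server side when testing by hand. DeepStack's detection endpoint accepts a min_confidence form field as a fraction, though double-check the parameter name against the DeepStack docs for your version:

Code:
# only return objects the model is at least 60% sure about
curl -F image=@front_door.jpg -F min_confidence=0.6 http://192.168.86.x:80/v1/vision/detection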

Edit: you can also set up a mask to ignore the flower pots in the image where Deepstack thought they were a bird.

In regards to training DeepStack: @GentlePumpkin posted a week or so ago that DeepStack is going to be open source in the future, so once he resumes AI Tool development he can add the ability to train DeepStack.


 
Awesome to hear about DeepStack going open source in the future.


On false positives: there aren't too many of what I would call "true false positives", where it really isn't an object of note. The number of objects I'm getting alerted about is a small fraction of what I used to get with BI set to plain motion detection. With just BI motion there were simply too many variables in a scene to make a simple blanket rule. If I ratcheted it down too tight I would miss many things that needed an alert; if I opened up the motion sensing enough to catch small but still important motion, everything would set it off. I'm pretty happy letting the motion sensing run wide open to catch just about any motion and then letting the AI filter through the mess for objects that actually warrant an alert.

All that said, I did run into an issue today with AI Tool just randomly not sending new images to DeepStack. It was odd, and it took me a minute to realize that was what was happening and not some issue with DeepStack (I automatically pointed my finger at that program since I had a bit of hell getting it running). BI was still snapshotting motion. AI Tool was still functioning as far as the interface went: I could click around, see that it was "running", and review previously processed images. I ended up having to shut AI Tool down and restart it; after that it started sending newly created images off to DeepStack (no restart of DeepStack needed). I lost half a day's worth of motion analysis and alerts. Not the end of the world, since I keep continuous recordings to go back to if I find out about something. Has anyone run into that kind of issue, where AI Tool just kind of stops sending new images? Currently I am running AI Tool as a regular program, not a service.

Beyond that, it's time to look at each camera's motion settings and tweak those a bit, and to start the dreaded new-server search. My current BI computer is getting a bit bottlenecked by the CPU these days. If anyone has good threads/posts on VM setups and what to absolutely have in a new system for BI, DeepStack, etc., that would be great.
 
I am running HA through Portainer on ESXi. How would I run the noavx tag in Docker? This may be the issue that I am running into.
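For reference, the noavx build is selected purely by the image tag, so in Portainer the image field would just be deepquestai/deepstack:noavx (the exact tag name is an assumption; check Docker Hub). The docker CLI equivalent:

Code:
# CPU-only, AVX-free build; same port mapping as the earlier example
sudo docker run -e VISION-DETECTION=True -p 80:5000 deepquestai/deepstack:noavx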