AI motion detection with BlueIris: an alternate approach

Making progress (I think). I followed the steps to edit the files (only the trigger file, since the default time zone and AI input locations already matched in docker-compose). But when I try to run docker-compose up in CMD, I get this error:

C:\Users\mark2>docker-compose up
ERROR: yaml.parser.ParserError: expected '<document start>', but found '<block mapping start>'
in ".\..\..\docker-compose.yml", line 427, column 11

This means there's an error in your docker-compose.yml file. I just tested with the sample config that's available on GitHub and it worked fine. The path and line number in that error are highly suspicious and make me think you are not running docker-compose in the same directory as your downloaded yaml file.
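For example, assuming the compose file was downloaded to a folder such as C:\Users\mark2\deepstack-trigger (that path is just an illustration), change into that folder first and then start the services:

  cd C:\Users\mark2\deepstack-trigger
  docker-compose up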

If you need assistance figuring out why please open a support issue in GitHub and include the contents of your file.
 
Does this also support masks like AITools?

Got the week off this week so may have a play with both.
Masks are supported but work differently than in AITools. Here you need to configure the zone in triggers.json (e.g. "masks": [{"xMinimum": 530, "yMinimum": 230, "xMaximum": 630, "yMaximum": 290}]). I found that using Paint.NET to find the coordinates of the area I wanted to mask worked well, then defined them in the file.

Check out Defining masks here
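For reference, a full trigger entry with a mask might look roughly like this sketch. The masks block matches the format above; the other property names (name, watchPattern, watchObjects, handlers) and the Blue Iris trigger URI are written from memory as placeholders, so check them against the sample triggers.json and the wiki before copying:

  {
    "triggers": [
      {
        "name": "Driveway",
        "watchPattern": "/aiinput/Driveway*.jpg",
        "watchObjects": ["person", "car"],
        "masks": [
          { "xMinimum": 530, "yMinimum": 230, "xMaximum": 630, "yMaximum": 290 }
        ],
        "handlers": {
          "webRequest": {
            "triggerUris": ["http://localhost:81/admin?trigger&camera=Driveway&user=USERNAME&pw=PASSWORD"]
          }
        }
      }
    ]
  }

As I understand it, detections that fall inside one of the mask rectangles are ignored, which is why reading the rectangle's pixel coordinates out of Paint.NET and copying them into xMinimum/yMinimum/xMaximum/yMaximum works so well.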
 
Any Pushbullet fans out there? I've got everything written to add Pushbullet as one of the notification methods but wouldn't mind having someone take it for a spin before I promote the code to the dev image. If so let me know.
 
Great work. First-time Windows 10 (2004) Docker user here. I can't for the life of me get the image paths correct. What exactly should I be using in the config and trigger files if the images are at "P:\Security\images"?
 
Great work. First-time Windows 10 (2004) Docker user here. I can't for the life of me get the image paths correct. What exactly should I be using in the config and trigger files if the images are at "P:\Security\images"?

See this comment on a support issue in GitHub: Correctly Write Mount Point for Source Images · Issue #286 · danecreekphotography/node-deepstackai-trigger.

If that doesn't help feel free to open a support issue there for more assistance and be sure to include a copy of your docker compose and trigger files.
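As a rough sketch only (the linked issue has the full walkthrough): the volumes entry in docker-compose.yml maps the Windows folder to the path the container watches internally, and the trigger file then refers to that container-side path rather than the Windows one. The /aiinput mount point and forward-slash drive syntax below follow the sample files; double-check against the sample docker-compose.yml if anything doesn't line up.

In docker-compose.yml, under the trigger service:

  volumes:
    - "p:/Security/images:/aiinput"

Then in triggers.json the watch pattern uses the container path, not the Windows one:

  "watchPattern": "/aiinput/*.jpg"

Forward slashes in the Windows path tend to avoid YAML escaping headaches, and depending on the Docker Desktop backend you may also need to share the P: drive in Docker's settings.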
 
Hi,
I have a camera on top of my garage looking over my own two cars.
Is it possible to turn on car detection but ignore my cars? I know I can mask, but I'm wondering if there is a smarter (AI) way to train the tool to ignore my cars, maybe based on make/model/license plate?
 
Hi,
I have a camera on top of my garage looking over my own two cars.
Is it possible to turn on car detection but ignore my cars? I know I can mask, but I'm wondering if there is a smarter (AI) way to train the tool to ignore my cars, maybe based on make/model/license plate?

The system uses an off-the-shelf AI solution that doesn't offer a way to train it on custom objects. For your situation you can set up two triggers for the same camera: the first should mask where your cars are parked and trigger on cars; the second shouldn't have a mask and should trigger on people.
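Something along these lines, using a hypothetical camera named Garage. The mask coordinates, file pattern, and handler values are placeholders to adjust for your setup; the sample triggers.json in the repo has the full set of properties if anything here doesn't match your version:

  {
    "triggers": [
      {
        "name": "Garage - cars (own cars masked)",
        "watchPattern": "/aiinput/Garage*.jpg",
        "watchObjects": ["car", "truck"],
        "masks": [
          { "xMinimum": 100, "yMinimum": 400, "xMaximum": 900, "yMaximum": 700 }
        ],
        "handlers": {
          "webRequest": {
            "triggerUris": ["http://localhost:81/admin?trigger&camera=Garage&user=USERNAME&pw=PASSWORD"]
          }
        }
      },
      {
        "name": "Garage - people (no mask)",
        "watchPattern": "/aiinput/Garage*.jpg",
        "watchObjects": ["person"],
        "handlers": {
          "webRequest": {
            "triggerUris": ["http://localhost:81/admin?trigger&camera=Garage&user=USERNAME&pw=PASSWORD"]
          }
        }
      }
    ]
  }

Cars sitting in the masked parking spots never fire the first trigger, while a person anywhere in the frame still fires the second.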
 
What is the best way to mount my Windows aiinput folder to the Linux machine where Docker and the script are running? Also, has anyone done a Node-RED integration for /arm and /disarm MQTT notifications from Telegram?
 
What is the best way to mount my Windows aiinput folder to the Linux machine where Docker and the script are running? Also, has anyone done a Node-RED integration for /arm and /disarm MQTT notifications from Telegram?

Are you running Docker on the same Windows machine as BlueIris? If not, do. It's by far the simplest and easiest way to get this up and running. Once you have it on the same machine as BlueIris, see this comment in GitHub for an example of setting it up.

I have Home Assistant + Node-RED managing all my camera motion stuff for me, including notifications. MQTT events are sent if you configure MQTT in settings.json; see the wiki for how to enable it and how to define an MQTT handler, plus what the events look like. There's also a page on how to add MQTT binary sensors to Home Assistant, which you can then use directly in Node-RED.
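Roughly, there are two pieces: an mqtt section in settings.json pointing at the broker, and an mqtt handler on each trigger you want published. The values below are illustrative; the wiki pages above have the exact handler shape and the full list of settings.

settings.json:

  {
    "mqtt": {
      "uri": "mqtt://USERNAME:PASSWORD@192.168.1.10:1883"
    }
  }

triggers.json, inside a trigger's handlers block:

  "mqtt": {
    "topic": "aimotion/triggers/driveway"
  }

From there a Node-RED flow can subscribe to the topic (or to the Home Assistant binary sensor built from it) and layer on the Telegram /arm and /disarm logic.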
 
Version 4.0.0 is out with Pushbullet support!

Breaking changes

  • settings.json and triggers.json can no longer have unrecognized properties in them. While this is technically a breaking change it shouldn't impact anyone in practice. The addition of this requirement is to ensure new users get real-time notifications of typos/mistakes in their configuration files while editing in tools that support schema validation (such as Visual Studio Code).

Non-breaking changes

  • Pushbullet notifications are now supported as trigger handlers. Enable it in settings.json then add pushbullet handlers to your triggers in triggers.json.
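Enabling it looks something like the sketch below. The values and property names here are illustrative only; the JSON schemas and the wiki have the authoritative names, and editors with schema validation will flag anything that's off.

settings.json:

  {
    "pushbullet": {
      "accessToken": "o.YOUR_PUSHBULLET_TOKEN"
    }
  }

triggers.json, inside a trigger's handlers block:

  "pushbullet": {
    "caption": "Motion detected on the driveway camera"
  }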

To update, simply run docker-compose pull from the command line, then docker-compose up to restart the services.
 
Awesome project @neile!
It looks like you are running BI + dockerized Home Assistant, Node-RED, and your DeepStack integration on the same machine. How much memory do you have on it, and how CPU-intensive is your DeepStack project?
I've been running Home Assistant and Node-RED on an RPi 3 and it's more than sufficient for that, but I don't think an RPi will have the juice for the image analysis required.
Does DeepStack benefit from any hardware acceleration provided by 6th-gen and newer Intel CPUs?
 
Awesome project @neile!
It looks like you are running BI + dockerized Home Assistant, Node-RED, and your DeepStack integration on the same machine. How much memory do you have on it, and how CPU-intensive is your DeepStack project?
I've been running Home Assistant and Node-RED on an RPi 3 and it's more than sufficient for that, but I don't think an RPi will have the juice for the image analysis required.
Does DeepStack benefit from any hardware acceleration provided by 6th-gen and newer Intel CPUs?

I have Home Assistant and Node-RED on an RPi 4. BlueIris, DeepStack, and this trigger system are on a dedicated PC. It's a normal PC I built for this, nothing fancy: entry-level AMD CPU, 16GB RAM. I would not run DeepStack on an RPi.
 
Awesome project @neile!
It looks like you are running BI + dockerized Home Assistant, Node-RED, and your DeepStack integration on the same machine. How much memory do you have on it, and how CPU-intensive is your DeepStack project?
I've been running Home Assistant and Node-RED on an RPi 3 and it's more than sufficient for that, but I don't think an RPi will have the juice for the image analysis required.
Does DeepStack benefit from any hardware acceleration provided by 6th-gen and newer Intel CPUs?

Yes, I believe DeepStack takes advantage of hardware acceleration where it's available.
 
Are you running Docker on the same Windows machine as BlueIris? If not, do. It's by far the simplest and easiest way to get this up and running. Once you have it on the same machine as BlueIris, see this comment in GitHub for an example of setting it up.

I have Home Assistant + Node-RED managing all my camera motion stuff for me, including notifications. MQTT events are sent if you configure MQTT in settings.json; see the wiki for how to enable it and how to define an MQTT handler, plus what the events look like. There's also a page on how to add MQTT binary sensors to Home Assistant, which you can then use directly in Node-RED.

My HA and DeepStack are running on another machine (Linux). If I mount my Windows drive onto my Linux machine, will that work (