AI motion detection with BlueIris: an alternate approach

I have Home Assistant and NodeRed on an RPI4. BlueIris, DeepStack, and this trigger system are on a dedicated PC. It's a normal PC I built for this, nothing fancy: entry-level AMD CPU, 16GB RAM. I would not run DeepStack on an RPI.
Woah, a quick search revealed that DeepStack currently supports Intel's $75 Neural Compute Stick, and it does pretty well on an RPI4 + NCS: Using DeepStack on Raspberry PI4 with Coral USB Accelerator (USB3)
 
This is great. Thanks for pulling this together. I'm trying to get it up and running and can't seem to get it to trigger. Below is pulled from the logs, where the object is being detected but isn't recognized as being on the watch list. Anyone have any ideas?

2020-06-24T12:18:02-04:00 [Trigger DoorBellSD] /aiinput/DoorBellSD.20200624_121805374.jpg: Detected object person is not in the watch objects list [bird, person]
 
Open a support request on GitHub and make sure to include your complete triggers.json file and the full log from startup through the first detection that fails.
 
Has anyone set up notifications in NodeRed/HA? I am not too familiar with templates. Whenever there is a detection I want it to send the image to my iPhone.

How do I extract basename and predictions from the payload?

YAML:
platform: mqtt
device_class: motion
name: Front Door Motion
state_topic: "aimotion/frontdoor"
payload_on: "on"
payload_off: "off"

What would be the best way to extract the template data and split it up so it can be formatted and sent as a message?

Frigate makes this really easy: an MQTT signal, and then you just pull the image as last/person/snapshot, which has all the detection information.

Also for some reason I am not able to pull the images:
[screenshot attached]


Thanks
 
There are two different kinds of integrations worth doing. The first is setting up MQTT binary sensors to get motion history. I use this to show a history chart on my Lovelace page:

[screenshot: Lovelace motion history chart]

The second is getting detailed info for use in the sequence in NodeRed. For that, just register directly for the MQTT events using the MQTT In node. I register for "aimotion/triggers/+" as the topic and have the output set to "a parsed JSON object". The default messages sent contain all the info you need (scroll down a bit to see the sample standard message content).
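
The fields referenced later in this thread (name, state, basename, formattedPredictions, predictions) suggest each message looks roughly like the sketch below; the values and the exact shape of the predictions entries here are illustrative, not the authoritative schema:

JSON:
{
  "name": "FrontDoor",
  "state": "on",
  "basename": "FrontDoor.20200624_121805374.jpg",
  "formattedPredictions": "person (92%)",
  "predictions": [
    { "label": "person", "confidence": 0.92 }
  ]
}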

In my case I use HA + NodeRed to send a notification to my phone for any front door motion. The notification is done using the native Home Assistant notifications to my mobile app. formattedPredictions has a nicely formatted version of the predictions ready to send in a message caption/title/body/whatever.
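
As a minimal sketch, a function node feeding the HA call-service node could assemble the notification like this (the title/message shape is just one option; the notify target is configured on the call-service node itself):

JavaScript:
// Sketch: turn a trigger message into a notification payload.
// The right-hand side reads the incoming payload before replacing it.
msg.payload = {
    "title": `Motion: ${msg.payload.name}`,
    "message": msg.payload.formattedPredictions
};
return msg;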

For the image what I do is actually have Home Assistant request a dedicated snapshot from the camera. This gets the image stored locally on my HA web server so I can access it remotely. Alternatively you can reference it directly from the trigger Docker image with some additional configuration, but frankly I found that more hassle than it was worth... and I even wrote the feature specifically because I wanted it!
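
A minimal sketch of that snapshot request as a Home Assistant service call; the entity ID and file path are assumptions for your own setup:

YAML:
service: camera.snapshot
data:
  entity_id: camera.front_door                  # assumption: your camera entity
  filename: /config/www/front_door_motion.jpg   # lands under HA's www folder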

Picture of the flow. You'll note that I also have HA managing the triggering of the recording by BlueIris, and turning landscape lights on when garage or front door motion is detected:

[screenshot: NodeRed flow]


Here's the function node code, which extracts the camera name and only allows the sequence to proceed if the state is on:

JavaScript:
// The camera name is the last segment of the MQTT topic (e.g. aimotion/triggers/FrontDoor)
msg.cameraName = msg.topic.split("/").pop();

if (msg.payload.state === "off") {
    node.status({
        fill: "red",
        shape: "dot",
        text: msg.cameraName
    });
    return null;
}

node.status({
    fill: "green",
    shape: "dot",
    text: msg.cameraName
});

return msg;

Edit: I just realized the splitting of the camera name is unnecessary, since I now send name (the name of the trigger) as part of the default MQTT message. So the only thing this function really does is check whether the state is on or off, and that can be a simple switch node instead.
 
Thanks for sharing the flow. Can you also please share how you handled the parsed JSON object? Did you use a change node to extract basename so you can send <url>:4242/basename as the image along with formattedPredictions?
 
You don't need a change node for it. On the MQTT in node set the output dropdown to "a parsed JSON object". Then basename will be in msg.payload.basename.

Looking at the code it appears I never bothered to check in the changes to store the original image locally. Oops :D The only image available will be the annotated image, which you'll have to enable with enableAnnotations in settings.json.
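
Assuming settings.json takes top-level keys as implied here, that's something like:

JSON:
{
  "enableAnnotations": true
}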

Note that I recommend having HA capture the snapshot used in the notification. I believe you have to be able to access the URL from your mobile device, and chances are your install of this trigger system is not visible on the public Internet (and it really shouldn't be).
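
If the snapshot is saved under HA's www folder it's served at a /local/ path, so a sketch of the notification might look like this (the service name and paths are assumptions, and the exact data shape varies by companion-app platform and version):

YAML:
service: notify.mobile_app_my_phone       # assumption: your companion-app notify service
data:
  message: "Front door motion"
  data:
    image: /local/front_door_motion.jpg   # assumption: matches the snapshot filename above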

If you have more questions about this or need more assistance please open a support request over on GitHub. Thanks!
 
Neile, I know this is going to shock you, but it ended up being user error. I didn't have the quotation marks around each item in the watchObjects list. So when there was more than one item it would fail to recognize the object.
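
For anyone else who hits this, the fix in triggers.json is quoting each entry so the list is valid JSON (surrounding structure abbreviated):

JSON:
"watchObjects": ["bird", "person"]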
 
Happens to the best of us :) This is why I recommend using Visual Studio Code to edit the settings files. It'll detect those errors for you as you type.
 
Thanks for the help. NodeRed is configured. I am currently running DeepStack and Frigate (Coral) side by side. Both systems can be armed and disarmed via a Telegram flow. Here is the function node script for sending to iOS or Telegram.

JavaScript:
// Build the notification payload; <url> is the host running the trigger system.
// The right-hand side reads the incoming payload before replacing it.
msg.payload = {
    "data": {
        "message": msg.payload.name,
        "data": {
            "photo": [
                {
                    "url": "http://<url>:4242/" + msg.payload.basename,
                    "caption": msg.payload.name
                }
            ]
        }
    }
};
return msg;

Here is the flow:

[screenshot: NodeRed flow]
 
Great! I really think the function node can be omitted. A simple switch node that looks at the msg.payload.state for "on" would replace it entirely.

Just to confirm, you're running with enableAnnotations set to true, correct?
 
Yes, running with enableAnnotations set to true. I had the function node because of other integrations with the Coral/Frigate binary switch. It seems like there's an off delay of 30 seconds; is that configurable in settings?
 
Had an issue this morning. I used BlueIris UI3 to view my cameras and noticed that my CPU was at 80%+, when it should only be around 25%. I determined that the trigger Docker container was showing 85-95% CPU usage. Restarting the container(s) and Docker altogether didn't help.

What did help was deleting all the photos in the aiinput folder (almost 1,000 of them), and that's for only one camera and trigger. After that, the CPU went back to 1-3% usage.

Any idea why it would use so much when it was doing nothing?
 
It's because it's watching for new files in that directory and that's a lot of files to watch for. I suggest making the following changes on the BlueIris side:

1. Change the retention policy on your aiinput folder to delete above 1GB and after 1 hour.
2. Change the motion sensitivity settings on the cameras in BlueIris to trigger less often. In Rob's video he has them set super sensitive but it's unnecessary and results in a ton of additional files to process.
 
Thanks for the quick suggestion. Yeah, I followed his video and set it to 1GB but left the days alone. I've set it to an hour now and we'll see.

Is there any way to have it delete the images after they are processed by DeepStack?