AI motion detection with BlueIris: an alternate approach

Sounds like you don't have virtualization enabled on your computer's CPU. I hit this pretty regularly too on new desktop installs. You'll need to find out how to turn that on in your computer's BIOS. For Intel CPUs it's usually pretty obvious, labeled as "Virtualization" or "Intel VT-x". For AMD CPUs it's named something completely obnoxious (I can't recall it off the top of my head) but it is there. I know because I went through this for my BlueIris install a few weeks ago on a new AMD-based machine.
No, it's because that's already a virtual machine; I'm trying to run a virtual machine inside a virtual machine. I guess I could run this on the host machine and use Samba shares like someone said? But wow, that's a long way to go. I may have to give this up. I've tried everything now.
 
Is there any AI to help figure out the best percentage for detection with the least false positives? Lol
 
So far it sends a message to my google home mini announcing the detection based upon the prediction results and the match percentage. My aim is to log it and use some reporting on "traffic flow" on the road I live on.

If you use the MQTT on/off status feature that released in 1.7.0 you can have Home Assistant do this tracking for you with an MQTT Binary Sensor. See the wiki for an example of how to set one of those up.

I'm looking to see if Telegram can support using the detected object and percentage in the image caption, as well as highlighting the detection area in the alert. So many opportunities! Thanks for the start.

Sending the detected object and percentage in the Telegram message is easy: there's a "caption" property on the sendPhoto command, so just put whatever text you want in there. To convert the list of prediction objects in the MQTT message to a nice string for display, you can do something like this in a function node:

JavaScript:
// In a Node-RED function node; assumes the incoming MQTT payload has
// already been parsed into an object with a predictions array.
msg.payload = msg.payload.predictions
    .map(prediction => `${prediction.label} (${(prediction.confidence * 100).toFixed(0)}%)`)
    .join(", ");
return msg;
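To see what that produces, here's the same formatting applied to a hypothetical predictions array. The label and confidence field names match the snippet above; the sample values are made up.

```javascript
// Hypothetical sample matching the shape the snippet above expects
const predictions = [
  { label: "person", confidence: 0.93 },
  { label: "dog", confidence: 0.78 },
];

const formatted = predictions
  .map(p => `${p.label} (${(p.confidence * 100).toFixed(0)}%)`)
  .join(", ");

console.log(formatted); // "person (93%), dog (78%)"
```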

Alternatively I'll have an official release out today or tomorrow that supports mustache templates in the trigger configuration so you'll be able to just include {{formattedPredictions}} in your MQTT payload directly if you wish. To play with this support now you can install the :dev tag image from Docker Hub, although there's no documentation on the mustache properties yet!
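The template substitution itself is simple. Here's a rough sketch of what expanding something like {{formattedPredictions}} amounts to; the project uses a real mustache library, so this regex version is purely for illustration.

```javascript
// Toy mustache-style expansion, for illustration only; the project
// itself uses an actual template library. Unknown keys are left intact.
function renderTemplate(template, vars) {
  return template.replace(/\{\{(\w+)\}\}/g, (match, key) =>
    key in vars ? String(vars[key]) : match
  );
}

const payload = renderTemplate("Motion detected: {{formattedPredictions}}", {
  formattedPredictions: "person (93%)",
});

console.log(payload); // "Motion detected: person (93%)"
```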
 

Virtual machines in virtual machines are a nightmare. Keep it simple: run it natively on the base OS of the computer BlueIris is on.
 

Ha!

What I did was run with the thresholds at 0 and 100% for a while on all the cameras, then check every so often in UI3 to see how many triggers fired incorrectly. If something's firing wrong I boost the minimum threshold quite a bit, to 50% say, and see how that works out.
 
My Blue Iris runs on a VM; nothing to be done about it now.

That VM is running on something. Install the Docker stuff on that something and you'll be up and running. There's no reason to put Docker containers inside a VM when they already give you the isolation you'd want from one.
 
Yeah, but how do I handle folder access? Samba?
 
Thanks I'll give it a go.
 
The documentation for mustache templates is now live. You'll need to use the new messages property of the mqtt trigger handler to take advantage of it.

This is all available in the :dev tag Docker image for the moment.
Works like a treat! My Google mini is chirping away the predictions now and you've saved me 7 nodes in node-red. I simplified the MQTT payload to just the {{formattedPredictions}} and so far so good. You're a much better programmer than I am :).
 

Awesome! That's why I added the formattedPredictions variable. Sometimes it's nice to have someone already do the formatting for you. I always groan when I have to pull out the function node in Node-RED.


Lol. I just play a programmer on TV!
 
OK, done some testing. MQTT works like a treat, but Telegram is still reporting the name of the trigger.

FYI, this is my Telegram configuration:
JSON:
"telegram": {
  "chatIds": [1234567890],
  "cooldownTime": 5,
  "caption": "{{formattedPredictions}}"
}

It shows as below:
[screenshot of the Telegram alert]


I'm just having a look through the code now to see if it's something simple that I've missed.
 
I haven't hit merge yet, oops :D Give it about 10 minutes; I just merged, and Docker Hub is slow to build.
 
Is it possible to incorporate Coral and live processing? A little different approach than GentlePumpkin's method?
 
I just published version 1.9.0 which includes a number of improvements to MQTT, Telegram, and WebRequest messages. Most notably mustache templates are now supported, providing more flexibility in the content of messages sent to the various services.

Here's the full list of what's new from the changelog:

  • MQTT status messages with statistics are now sent on every received file. The total number of files received and the number of triggers actually fired are included in the message payload.
  • MQTT trigger handlers now support an array of messages to send instead of a single message, allowing different message formats for different services. For example, one message could be formatted for Home Assistant use and another could be formatted to trigger BlueIris recording. This is an optional, more advanced way to specify MQTT triggers. The previous, simple, single-topic method still works and is recommended for most use cases. See the wiki for an example of the new format.
  • The MQTT overall configuration now supports specifying a topic for status messages. Right now the only status message sent is a LWT message for when the system goes offline.
  • A payload property is now supported on MQTT handler message configuration, along with support for mustache templates in the payload. This makes it possible to send a precisely formatted message to BlueIris that will trigger recording for a specific camera instead of having to use webRequest handlers.
  • Mustache templates are now supported in the webRequest handler URIs. One way to use this is to send additional data to BlueIris with the details of predictions that caused the trigger to fire, for example {{formattedPredictions}}.
    See the wiki for details on available mustache variables.
  • Telegram trigger handlers now support an optional caption property to specify the text sent as the caption for the photo that fired the trigger. This supports mustache templates so the caption can be something like {{name}}: {{formattedPredictions}}.
  • Logging level is now controlled by a VERBOSE environment variable. When set to true additional logging is shown in the console. When false or omitted only startup and successful detection messages are shown.
  • The system no longer exits when configuration errors prevent startup. This leaves the container in a running state so it is possible to open a terminal window to the container to inspect things like volume mount points for missing configuration files.
  • A clear message is output to the log after initialization indicating whether startup was successful. If it wasn't, there's now a link to a troubleshooting wiki page for assistance.
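Put together, a messages-style MQTT handler could look something like this rough sketch. The messages, topic, and payload property names come from the changelog entries above, but the surrounding structure and the topic values here are my guesses; see the wiki for the authoritative schema.

```json
"mqtt": {
  "messages": [
    {
      "topic": "homeassistant/deepstack/frontdoor",
      "payload": "{{formattedPredictions}}"
    },
    {
      "topic": "blueiris/trigger/frontdoor",
      "payload": "on"
    }
  ]
}
```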
There are no breaking changes in this release. To update simply run docker-compose pull from the directory your docker-compose.yaml is in and then docker-compose up to start things running again.
 
Version 2.0.0 just released. This update has two breaking changes, but only if you rely on MQTT LWT messages or previously applied a change to webRequest URIs to work around a double-encoding bug (I think this only impacted one person).

Breaking changes
  • MQTT online and offline status messages are now sent when the service starts or fails to start. This, combined with the LWT message, makes it easy to set up MQTT binary sensors in Home Assistant to track the status of the system and send notifications to people if the system goes down or isn't running. This is a breaking change if you rely on the LWT message. The format of the offline message sent for the LWT changed to align with the online and processing status messages. See the wiki for documentation on the status message format.
  • webRequest URIs are no longer double-encoded. Instead only the text replaced with a mustache template is encoded. This is a breaking change if you had previously modified your webRequest URIs to work around issue 176. If you previously worked around the bug by removing encoding from the URIs in the trigger configuration file you will need to put the encoding in again.
Other changes
  • MQTT detection messages now include a friendly formatted version of the predictions, for example: "formattedPredictions": "dog (98%)".
  • Failed calls to the Deepstack server no longer throw an unhandled promise rejection exception.
To update simply run docker-compose pull from the directory your docker-compose.yaml is in and then docker-compose up to start things running again.
 