Are the ObjectTypes specified for MQTT the same as what DeepStack passes? i.e. Person, Dog, Car, etc.?

Yes. They are passed directly from the value returned by DeepStack. But to be extra clear, not everything DeepStack recognizes is passed. On Guard does filter some things out (potted plants, etc. are not particularly interesting for a security camera).
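That filtering step can be sketched as a simple allow-list over DeepStack's labels. On Guard's real filter list is internal to the program, so the labels below are only illustrative examples:

```python
# Hypothetical allow-list over DeepStack detection labels. On Guard's actual
# list is not published; these entries are just plausible examples.
INTERESTING = {"person", "dog", "cat", "car", "truck", "bicycle"}

def filter_detections(detections):
    """Keep only detections whose label is on the allow-list."""
    return [d for d in detections if d["label"] in INTERESTING]

# A potted plant detection is dropped; a person is kept.
kept = filter_detections([{"label": "person"}, {"label": "pottedplant"}])
```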
The payload of the MQTT message is the full file name and path. In theory I could pass the binary picture data, but I don't know what format people would like it in. It could be just the raw data, it could be Base64, it could be Base64 within an XML document, etc.

Perfect. Testing it out again; I might have what I need to be 80% operational. I might have to figure out on my own, with the help of Home Assistant, how to send a snapshot to my phone (not email or SMS).

Something I am kind of thinking is that the MQTT topic should be something like OnGuard/nameofcamera/AreaName, and then the MQTT payload should be the thing detected. That way you can do things where multiple object types can set off one MQTT sensor.
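The suggested topic scheme can be sketched in a few lines. The camera and area names here are placeholders, and this is only one possible layout, not On Guard's actual implementation:

```python
# Build an MQTT topic of the form OnGuard/<camera>/<area>, with the detected
# object type as the payload. A subscriber can then watch one topic (or a
# wildcard like OnGuard/FrontDoor/#) and react to several object types.
def build_message(camera: str, area: str, detected_object: str):
    topic = f"OnGuard/{camera}/{area}"
    payload = detected_object  # e.g. "person", "dog", "car"
    return topic, payload

topic, payload = build_message("FrontDoor", "Driveway", "person")
```

With a real broker you would hand `topic` and `payload` to an MQTT client's publish call (for example paho-mqtt's `client.publish(topic, payload)`).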
My only experience is AITool, and the newest version lets you set your payload to any word or set of words, including variables like object types, time of day, etc. I'm definitely not trying to say do everything they're doing, just giving a reference point.

I haven't used AI Tool in about 6+ months, so I don't know what he is offering now. I'll take a look. I'm not attempting to compete with him, but I do want to offer the best experience possible. I personally don't use MQTT; I just don't have any devices that support it natively. Since this project at least started out as something for personal use, MQTT just hasn't been a priority. That said, it isn't difficult to provide MQTT, and I have no problem providing it. I would just prefer to do it in a format that is useful to the most people possible.
Right now it seems like a payload with the following in JSON format: full picture path, object type name(s) with confidence level, and binary picture data in Base64 format. I could also provide the object location(s) and sizes, but I don't see that as particularly useful.
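A sketch of what such a payload might look like. The field names (`picture`, `objects`, `image_b64`) are my own guesses for illustration, not On Guard's actual schema:

```python
import base64
import json

def make_payload(picture_path: str, detections, image_bytes: bytes) -> str:
    """Pack path, (type, confidence) pairs, and Base64 image data into JSON."""
    return json.dumps({
        "picture": picture_path,
        "objects": [{"type": t, "confidence": c} for t, c in detections],
        "image_b64": base64.b64encode(image_bytes).decode("ascii"),
    })

msg = make_payload(r"C:\OnGuard\cam1\20201224_120000.jpg",
                   [("person", 0.92), ("dog", 0.71)],
                   b"\xff\xd8\xff")  # stand-in bytes, not a real JPEG
```

A subscriber would `json.loads` the message and `base64.b64decode` the image field to recover the picture bytes.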
Someone here also asked for an MQTT notification when motion has stopped. That is something I see as generally useful, and I am seriously considering it. However, that brings up the question of what "stopped" means. It could mean that there are no longer any objects recognized by DeepStack, that there have been no objects in pictures for X time, that there is no movement of specific types in specified areas, etc.
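The "no objects in pictures for X time" interpretation amounts to a debounce timer. A minimal sketch of that idea (my own illustration of one possible definition of "stopped", not how On Guard decides):

```python
import time

class MotionStoppedDetector:
    """Report 'stopped' only after quiet_secs with no detections."""

    def __init__(self, quiet_secs: float, clock=time.monotonic):
        self.quiet_secs = quiet_secs
        self.clock = clock  # injectable clock makes the example deterministic
        self.last_detection = None

    def detection(self):
        """Call whenever DeepStack reports an object."""
        self.last_detection = self.clock()

    def stopped(self) -> bool:
        if self.last_detection is None:
            return True  # nothing ever detected
        return self.clock() - self.last_detection >= self.quiet_secs

# Simulated clock so the behavior is easy to follow.
now = [0.0]
det = MotionStoppedDetector(quiet_secs=30, clock=lambda: now[0])
det.detection()              # object seen at t=0
now[0] = 10.0                # 10 s later: still considered "in motion"
still_moving = not det.stopped()
now[0] = 31.0                # 31 s of quiet: the "stopped" MQTT message could fire
```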
I think the JSON payload is a good idea; I'd love to see it. Got other thoughts about "cancel" URLs, but I really need to be at a computer to elaborate. Not sure how many people have this use case.

By cancel URLs do you mean something like the ability to enable/disable areas of interest or objects of interest? I am planning on ignoring object types in my home automation system based on conditions like home/away, dark/daylight, and awake/asleep.
OK, I've installed the new DeepStack Windows native application with GPU. The performance with my mid-range NVidia card is very impressive. Frame processing for me was about 200 ms or less. This is twice as fast as the WSL version I installed last week and 5 to 10 times faster than the CPU version of the old Windows application. I'd be interested in hearing the performance of the CPU version.

To be clear on "with GPU"... does it matter what GPU to consider purchasing if I do not already have one, other than the one embedded on the motherboard? Example: using Intel's latest-generation CPUs works with QuickSync. What mid-range NVidia card did you use to test? I am thinking of the one pictured ($59), mostly because of the 4 HDMI outputs, which would be handy in my case:

I got the GeForce 1660 Ti. That card is $300. The latest and greatest cards are > $1000 right now due to the pandemic. Any NVidia card that supports CUDA should work, but I assume that the more CUDA cores you have, the better your results. I doubt that a $60 card would help much, but I could very well be wrong.
View attachment 77346
I can't seem to find the same link to download DeepStack that I used a week ago. The one in the On Guard ReadMe eventually takes me to a GitHub link for Deepstack-Leduc, last updated in 2017. Where do you get the latest Windows version?
Anything has to be better than the embedded motherboard video. I'll wait till after the holidays and order one. Granted, I still have to get DeepStack working. One problem at a time....
I forgot to come back and explain what I meant by cancel URLs. For example, with AITool you specify a URL to trigger a recording, but also a trigger to cancel marking a recording as an alert. If I understand it correctly, what happens is: if an object you want to detect is found, the program calls the alert URL to flag the clip. If the program gets some pictures to analyze but they don't contain the object you want to detect, it sends a cancel URL which has &flagalert=0 in it, and Blue Iris will mark any clip it was recording for that camera as cancelled.
The outcome is that I filter all my alerts in the mobile app by flagged, and it only shows me clips where the AI program saw an object I wanted detected. Any other clips, where the AI program sent the cancel trigger instead, are marked as cancelled, and you can view them under cancelled alerts.
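The trigger/cancel flow described above boils down to choosing between two URL forms. A sketch of building them (only the &flagalert=0 parameter comes from the post above; the base URL shape and other parameter names here are illustrative, so check Blue Iris's own documentation for the real admin URL syntax):

```python
from urllib.parse import urlencode

def alert_url(base: str, camera: str, found_object: bool) -> str:
    """Build the flag-or-cancel URL for a camera's current clip.

    found_object=True  -> flag the clip as a real alert
    found_object=False -> the "cancel" form with flagalert=0
    """
    params = {"camera": camera, "flagalert": 1 if found_object else 0}
    return f"{base}/admin?{urlencode(params)}"

flag_it = alert_url("http://127.0.0.1:81", "cam1", True)
cancel_it = alert_url("http://127.0.0.1:81", "cam1", False)
```

A detection pipeline would then issue an HTTP GET to whichever URL applies (e.g. with `urllib.request.urlopen`), once per batch of analyzed pictures.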
I think I understand. I'll look into it.