Chris Dodge
Pulling my weight
@WildDoktor - It depends. I still mask out the street in BI so I don't get alerts all the time for passing cars. The dynamic masking is only useful when you get repeated detections in the same area.
> Is it going to be possible to make BI get the screenshot from the main stream when detecting motion on the substream? Right now, since I use substreams to reduce CPU usage, the screenshots I get are very low res. I would prefer to get the full main-stream screenshot. Is it possible to request the feature?

You need to email BI support, as I've done, to request this. The more people that do it, the more likely it is to be implemented, unless there's a technical reason why it can't be.
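In the meantime, one way to script around this is to pull a one-off JPEG from Blue Iris's own web server when the substream triggers. A minimal sketch of building that request, assuming the `/image/{cam-short-name}` endpoint and a `q` quality parameter; the host, port, and camera name below are placeholders for your own setup:

```python
# Sketch: build the URL for a single full-resolution snapshot from the
# Blue Iris web server, to use instead of a low-res substream frame.
# Host/port ("192.168.1.10:81") and camera short name are placeholders.
from urllib.parse import urlencode


def snapshot_url(host: str, cam: str, quality: int = 100) -> str:
    """Build the URL for a one-off main-stream JPEG from Blue Iris."""
    return f"http://{host}/image/{cam}?" + urlencode({"q": quality})


# Fetch it with e.g.:
#   urllib.request.urlopen(snapshot_url("192.168.1.10:81", "FrontDoor"))
print(snapshot_url("192.168.1.10:81", "FrontDoor"))
```

Note this still forces BI to decode one main-stream frame per request, which is the CPU trade-off discussed later in the thread.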
I do and would. I find Vmmem very CPU intensive. You can also still use the normal mask feature in AITool.
> You need to email BI support, as I've done, to request this. The more people that do it, the more likely it is to be implemented, unless there's a technical reason why it can't be.

I just sent them a support email explaining why it would be a nice addition.
So you're close, though I've not tested with just your method, so wanted to share this anyway.

Here is an update to my fork of @GentlePumpkin's awesome tool.
Change log:
- Camera option for 'Trigger cancels' - basically, rather than the URL triggering an event in BI, it will only be called when the detection is CANCELED. Note that you MUST change &flagalert=1 to &flagalert=0. As I understand it, this is how Sentry AI works - it just cancels a detection sent by BI. I haven't tested the 'Trigger cancels' camera checkbox yet since I don't have that configuration right now - let me know if it works.
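To make the &flagalert change concrete, here is a sketch of the kind of cancel URL this option would call against BI's admin endpoint. The user, password, host, and camera name are placeholders, and the exact parameter set should be checked against your own BI trigger URL:

```python
# Sketch of the 'Trigger cancels' call: instead of triggering an event,
# hit BI's admin URL with flagalert=0 so the alert is un-flagged when
# the AI cancels the detection. Credentials and names are placeholders.
from urllib.parse import urlencode


def cancel_url(host: str, cam: str, user: str, pw: str) -> str:
    """Build a BI admin trigger URL that cancels (un-flags) an alert."""
    params = {"trigger": "", "camera": cam, "user": user, "pw": pw, "flagalert": 0}
    return f"http://{host}/admin?" + urlencode(params)


print(cancel_url("192.168.1.10:81", "FrontDoor", "aiuser", "secret"))
```

The key point is simply that the URL carries `flagalert=0` rather than `flagalert=1`.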
> You need to email BI support, as I've done, to request this. The more people that do it, the more likely it is to be implemented, unless there's a technical reason why it can't be.

Ken and I spoke about this in some detail last week, spitballing ideas. The issue is that getting a high-quality image means decoding must happen on the main stream. If decoding is happening, that costs CPU time, and then what's the point of the sub-stream if you have to decode the main stream anyway? We had a few ideas which may work. It would still cost CPU time, but as big spikes rather than constant decoding. I suspect he'll keep tinkering with the idea because he seemed interested in it, but it's a little complex. He's a smart guy though, so even if it's not perfect, the fact he's exploring it is great!
Ok, getting this set up I have hit a roadblock and need some help.
I've followed the hookup tutorial, and I'm at the point of inputting the DeepStack server address. When I trigger, it gets stuck processing, then the error log says the server is unreachable. But when I copy and paste the address into Chrome, the DeepStack server window comes up. Any clues as to what I've done wrong?
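One way to narrow this down is to hit DeepStack's detection endpoint directly, outside AI Tool: AI Tool posts images to `/v1/vision/detection`, while the page Chrome shows is just the status page, so the two can behave differently. A small sketch, with the host/port being a placeholder for your own DeepStack address:

```python
# Sketch: the URL AI Tool actually posts images to is the detection
# endpoint, not the status page the browser shows. If a POST to this
# endpoint works but AI Tool still says "server unreachable", compare
# the exact URL string you entered in AI Tool (scheme, port, no extra
# path) against this one. Host/port is a placeholder.
def detection_endpoint(host: str) -> str:
    """DeepStack's object-detection endpoint for a given host:port."""
    return f"http://{host}/v1/vision/detection"


# A manual test would POST an image, e.g. with the 'requests' package:
#   requests.post(detection_endpoint("192.168.1.12:82"),
#                 files={"image": open("test.jpg", "rb")})
print(detection_endpoint("192.168.1.12:82"))
```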
I've had no issues so far with images being sent. When I started I had it send the full-res camera image (2688x1520) until I switched to the 640x360 substream, which still works.

I was wondering, though: the Telegram support sounded interesting, but the image transfers get denied. Looking at the Telegram API, it looks like it only supports 320-pixel max images. Is this why it's failing (AI Tool isn't reducing the images prior to sending)?
> I've had no issues so far with images being sent. When I started I had it send the full-res camera image (2688x1520) until I switched to the 640x360 substream, which still works.

Do I need anything underlying to get it working, or just make the bot and join the chat? I made one with BotFather and entered the token into AI Tool, but I get this in the logs:
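The Chat ID AI Tool wants is the numeric chat (or group) ID, and one way to find it is to message the bot (or post in its group) and read the ID out of the Bot API's `getUpdates` response at `https://api.telegram.org/bot<token>/getUpdates`. The sketch below only parses a response of that shape; the sample payload is illustrative, not real output:

```python
# Sketch: pull chat IDs out of a Telegram getUpdates response body.
# Group chat IDs are negative numbers. The sample JSON is made up to
# match the Bot API's response shape, not captured from a real bot.
import json


def chat_ids(updates_json: str) -> set[int]:
    """Collect every chat ID seen in a getUpdates response body."""
    data = json.loads(updates_json)
    return {u["message"]["chat"]["id"]
            for u in data.get("result", []) if "message" in u}


sample = ('{"ok": true, "result": [{"update_id": 1, '
          '"message": {"chat": {"id": -100123, "type": "group"}, "text": "hi"}}]}')
print(chat_ids(sample))  # -> {-100123}
```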
> OK, figured out you have to have a GROUP and invite the bot in there, and the group # is what you put in AI Tool for the Chat ID, not the bot name... Was a little confusing but makes sense now. How did you get the identifier boxes/names on yours? Mine just has images w/o any notations...

I modified and compiled the code to add the bounding boxes and include a caption on the image of what it detected.
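For anyone wanting to try the same, DeepStack's detection response carries a "predictions" list with a label, a confidence, and box coordinates, which is enough to draw the boxes (e.g. with Pillow's ImageDraw.rectangle) and build a caption. The drawing side is omitted here; this sketch only formats the caption text, and isn't the poster's actual code:

```python
# Sketch: turn DeepStack-style predictions into a caption string like
# "person (92%), car (67%)". Drawing the boxes themselves would use the
# y_min/x_min/y_max/x_max fields from the same predictions.
def caption(predictions: list[dict]) -> str:
    """Format detection labels and confidences as a one-line caption."""
    return ", ".join(f"{p['label']} ({p['confidence']:.0%})" for p in predictions)


preds = [{"label": "person", "confidence": 0.92},
         {"label": "car", "confidence": 0.67}]
print(caption(preds))  # -> person (92%), car (67%)
```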
> 1. Visit 192.168.1.12:82 (or your IP and the DeepStack port you set (I set mine to 82)).
> 2. Go to Sign In.
> 3. Copy the activation key (see attached image).
> 4. Paste it in your 192.168.1.12:82 DeepStack URL (see attached image).
> 5. Make sure the necessary checkboxes are marked in the DeepStack window.

Thank you so much; I'm pretty stupid for not spotting that. I had used the key when trialling the Windows version, but had totally forgotten to do it the second time around with the Docker version.
> Is the show mask supposed to display something (I have a dynamic mask showing as set) or is that broken?

The Show mask button is not related to the new Dynamic Masking. GP probably talks about that in the first post of this thread.
> Is the show mask supposed to display something (I have a dynamic mask showing as set) or is that broken?

It will show a mask, set up as per the instructions for AITool, as an image in the camera's folder.