Dunno if possible... but since OnGuard relies on DeepStack... is it possible to have a red/green indicator in OnGuard to show the DeepStack server status? Or even the seconds to process a DeepStack image in an OnGuard text box?
I gotta find out how to do masking. Keeps thinking a bush is a person. And that's a problem.
> I will have to test this out.

Yes, with the GPU version it is helpful to use multiple instances of DeepStack. However, DeepStack in GPU mode isn't entirely CPU neutral. In addition, if you are running Blue Iris on the same computer the BI processing also goes up.
I am using 6 instances of the Windows GPU version of DeepStack with the Vorlon version of AI Tools and that has been working very well, but I'm always keen to try something new.
The Windows GPU version, like the updated CPU version, just starts from the command line, so it's easy to automate the start-up.
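Since both versions start from the command line, the start-up can be scripted. A minimal sketch in Python, assuming deepstack.exe is on PATH and accepts the documented --VISION-DETECTION and --PORT flags; the port range and instance count are illustrative:

```python
# Hypothetical launcher: start several DeepStack instances, one per port.
# Assumes deepstack.exe is on PATH and takes --VISION-DETECTION/--PORT.
import subprocess

BASE_PORT = 5000   # illustrative; use whatever ports your AI tool expects
INSTANCES = 6      # matches the 6-instance setup described above

processes = []
for i in range(INSTANCES):
    port = BASE_PORT + i
    proc = subprocess.Popen(
        ["deepstack", "--VISION-DETECTION", "True", "--PORT", str(port)]
    )
    processes.append(proc)
    print(f"DeepStack instance started on port {port} (pid {proc.pid})")
```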
> Yes, with the GPU version it is helpful to use multiple instances of DeepStack. However, DeepStack in GPU mode isn't entirely CPU neutral. In addition, if you are running Blue Iris on the same computer the BI processing also goes up.

Thank-you for the reply, although it does not align with my experiences.
I've had another request for supporting multiple DeepStack instances. I'll work on that. However, the real solution is for DeepStack to improve their simultaneous processing for GPU. That should actually be quite easy given the complexity of the rest of their application. Having 6 DeepStack instances running on one machine has got to add more to memory use than them just fixing that.
For the CPU version, just sending 3 or 4 pictures close together would max out the CPU (unless you could process on multiple remote computers), so there wasn't a huge need for multiple AI server settings for most people.
I should have support for multiple instances in a week or so unless I get sidetracked by bugs and other more immediate work items.
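For what it's worth, the core of multi-instance support is just spreading requests across server URLs. A rough Python sketch of the idea; the URLs and helper are illustrative, not OnGuard's actual implementation:

```python
# Round-robin sketch: send each frame to the next DeepStack instance in turn.
import itertools
import requests

SERVERS = itertools.cycle([
    "http://127.0.0.1:5000",   # illustrative instance URLs
    "http://127.0.0.1:5001",
    "http://127.0.0.1:5002",
])

def detect(image_path: str) -> list:
    """Post one image to the next server in rotation; return its predictions."""
    server = next(SERVERS)
    with open(image_path, "rb") as f:
        resp = requests.post(f"{server}/v1/vision/detection",
                             files={"image": f}, timeout=30)
    resp.raise_for_status()
    return resp.json().get("predictions", [])
```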
> Thank-you for the reply, although it does not align with my experiences.

Yes, I am using the latest GPU Windows Beta, but I haven't really had the time to look in any detail at the CPU use of DeepStack with that. I do see a barely noticeable GPU bump when a picture is processed. For me with 4 cameras (2 high, 2 low resolution at 15fps) I see BI at about 35% CPU. I'll get the multiple DeepStack instance support in relatively soon.
I am running BI on quite an old i7 with multiple 4K H.265 cams and Intel Beta H/W decoding. It's headless, with a dongle on the on-board HDMI and the BIOS set to use integrated graphics.
All images are sent to DeepStack in 4K, every 2 seconds per camera (when using the CPU version I had BI downsize them, but now I don't downsize them).
CPU usage of each Windows PowerShell instance is peaking at 0.6% (0.3% Python, 0.3% redis-server; DeepStack itself seems to sit at zero), so I would have to say that in my deployment DeepStack is effectively using less than 1% CPU.
My GPU is a very old GTX 745 (not overclocked) and is averaging 60 ms per image; it was 100 ms but has come down further with the latest DeepStack update (see the timing sketch after this post).
Note:
The GPU version in the documentation and on GitHub actually uses the CPU. If you follow the bug reports, they link to the latest compiled GPU build.
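A quick way to reproduce the per-image timing quoted above is to average a batch of requests against DeepStack's detection endpoint. A minimal sketch; the URL and the 4K test frame are placeholders:

```python
# Measure average DeepStack latency over a batch of identical frames.
import time
import requests

URL = "http://127.0.0.1:5000/v1/vision/detection"  # placeholder instance

with open("snapshot_4k.jpg", "rb") as f:           # hypothetical test frame
    frame = f.read()

times = []
for _ in range(20):
    start = time.perf_counter()
    requests.post(URL, files={"image": frame}, timeout=30).raise_for_status()
    times.append((time.perf_counter() - start) * 1000)

print(f"avg {sum(times) / len(times):.0f} ms over {len(times)} images")
```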
> Yes, I am using the latest GPU Windows Beta, but I haven't really had the time to look in any detail at the CPU use of DeepStack with that. I do see a barely noticeable GPU bump when a picture is processed. For me with 4 cameras (2 high, 2 low resolution at 15fps) I see BI at about 35% CPU. I'll get the multiple DeepStack instance support in relatively soon.

By itself, CPU usage is a terrible metric as it doesn't take into account the clock speed; as we know, all modern CPUs are designed to run at higher utilisation but lower the core speed to save energy.
If I knew Python better I might just try to improve the DeepStack thread use, since it is now open source. Since I don't, I guess I'll just work around it.
> Put an "Ignore" area around the bush. I have had similar problems. Sometimes it depends entirely on lighting. One of the pictures in the manual has a "pig planter" in it. Sometimes it hits the animal definition, sometimes not. DeepStack isn't perfect, but it will probably get better over time.

How about that. I was unaware of this "Ignore".
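Conceptually, an "Ignore" area just discards detections whose bounding box falls mostly inside a masked region. A rough Python sketch of that idea, with illustrative coordinates and thresholds, not OnGuard's actual data model:

```python
# Drop detections that are mostly covered by an ignore zone (e.g., the bush).
IGNORE_ZONES = [(100, 400, 350, 700)]  # (x_min, y_min, x_max, y_max), illustrative

def overlap_fraction(box, zone):
    """Fraction of the detection box that the ignore zone covers."""
    bx1, by1, bx2, by2 = box
    zx1, zy1, zx2, zy2 = zone
    ix = max(0, min(bx2, zx2) - max(bx1, zx1))   # intersection width
    iy = max(0, min(by2, zy2) - max(by1, zy1))   # intersection height
    area = max(1, (bx2 - bx1) * (by2 - by1))
    return (ix * iy) / area

def keep(det):
    box = (det["x_min"], det["y_min"], det["x_max"], det["y_max"])
    return all(overlap_fraction(box, z) < 0.8 for z in IGNORE_ZONES)

detections = [   # sample DeepStack-style predictions
    {"label": "person", "x_min": 120, "y_min": 420, "x_max": 300, "y_max": 680},
    {"label": "person", "x_min": 900, "y_min": 200, "x_max": 1000, "y_max": 500},
]
print([d for d in detections if keep(d)])   # the "bush person" is dropped
```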
Yes, I can put some timing data on the screen and the connection status. In order to keep the connection status I'll need to hit it with a picture every minute or so. That shouldn't be a lot of work. DeepStack timing is from the time a picture hits the queue to the time it is processed. So, your last picture (of several close together) may show 5 seconds even though the "actual" time it takes to process an individual image may be 0.75 seconds. So, I think 5 seconds may be misleading, but so is 0.75 seconds. Probably the 5 seconds figure is more helpful?
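That keep-alive idea is easy to prototype outside OnGuard: post a small image every minute and time the round trip. A sketch against DeepStack's documented detection endpoint; the URL, probe image, and interval are illustrative:

```python
# Red/green probe: ping DeepStack once a minute and time the round trip.
import time
import requests

DEEPSTACK_URL = "http://127.0.0.1:5000/v1/vision/detection"  # placeholder

def check_deepstack(image_bytes):
    """Return (is_up, round_trip_ms) for one probe image."""
    start = time.perf_counter()
    try:
        resp = requests.post(DEEPSTACK_URL, files={"image": image_bytes},
                             timeout=10)
        ok = resp.ok and resp.json().get("success", False)
    except requests.RequestException:
        ok = False
    return ok, (time.perf_counter() - start) * 1000

with open("probe.jpg", "rb") as f:    # any small test image
    probe = f.read()

while True:
    up, ms = check_deepstack(probe)
    print("GREEN" if up else "RED", f"{ms:.0f} ms")
    time.sleep(60)                    # the "every minute or so" cadence
```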
> Was gonna try to give this a shot tonight, but when I try to set notifications, I set autofill, reduce the cooldown, and set the confirmed flag for Blue Iris and hit OK. But if I go back into that one, the only thing that saved is the autofill. The cooldown doesn't save, and the flags don't save.

I do not use autofill. I actually use the entire trigger URL link.
> I do not use autofill. I actually use the entire trigger URL link.

Sorry, I think that I saved off the wrong version of a file. I'll fix it (after doing a triple check).
> I do not use autofill. I actually use the entire trigger URL link.

Fixed it. Don't quite know how that happened.
Any way I can manually test MQTT? I have had multiple cars drive through an area of interest where I am looking specifically for cars, and Blue Iris is recording and marking confirmed as required, but not sending any MQTT messages. MQTT notify is green in that area's settings and the server is set up correctly in the app settings.
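One way to test MQTT by hand is a tiny subscriber that also loops a message back to itself, which separates broker problems from Blue Iris problems. A sketch using the paho-mqtt package (v1.x callback API); the broker address and topics are placeholders:

```python
# Manual MQTT test: subscribe, publish a test message, print whatever arrives.
import paho.mqtt.client as mqtt

BROKER = "192.168.1.10"      # placeholder broker address
TOPIC = "BlueIris/#"         # placeholder; match the topic configured in BI

def on_connect(client, userdata, flags, rc):
    print("connected, rc =", rc)
    client.subscribe(TOPIC)
    client.publish("BlueIris/test", "hello")   # should echo back below

def on_message(client, userdata, msg):
    print(msg.topic, len(msg.payload), "bytes")

client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message
client.connect(BROKER, 1883)
client.loop_forever()
```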
> Disregard. Appears I got it working. Only thing I can't seem to figure out is the image part. I am decoding what the MQTT server is receiving for an image payload and it is only about the top 5-10% of the image. I believe MQTT payloads are limited to 250ish MB, so I don't think image size is an issue.

I have to admit that I didn't test the image completely. I'll take a look soon.
> I have to admit that I didn't test the image completely. I'll take a look soon.

I could also be doing something very wrong. I am trying to ingest into Home Assistant for alerts, which isn't working at all. But if I pull the payload out of MQTT Explorer and try to decode it with a tool, it's only 5% of the image.
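If the image payload is base64 text (an assumption; adjust if your topic carries raw bytes), checking the JPEG start and end markers will show whether it is truncated in transit or just mis-decoded. A minimal sketch:

```python
# Check whether a base64 image payload decodes to a complete JPEG.
import base64

def check_jpeg(payload: bytes) -> None:
    data = base64.b64decode(payload)
    soi = data[:2] == b"\xff\xd8"    # JPEG start-of-image marker
    eoi = data[-2:] == b"\xff\xd9"   # JPEG end-of-image marker
    print(f"{len(data)} bytes, SOI={soi}, EOI={eoi}")
    # A missing end-of-image marker means the JPEG was cut off in transit.

with open("payload.b64", "rb") as f:   # payload saved from MQTT Explorer
    check_jpeg(f.read())
```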