> Yes, it's a small fire hydrant on the right-hand side, painted white. Here's the actual mask image -->

I don't see a mask in that second photo. How did you create your mask? Is it possible that the whole photo is a mask?
> Yes, it's a small fire hydrant on the right-hand side, painted white. Here's the actual mask image -->

Is this the masked image from AI Tool? The reason I ask is that the masked area should show as black, not white. I believe you have the mask inverted. When you open the actual .png image of the mask that AI Tool uses, you should only see the masked area, not the image/background.
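For reference, a mask image is just a grayscale bitmap, so inverting it means flipping every pixel value (255 minus the value). A stdlib-only sketch of that operation on raw pixel rows; in practice you'd use an image editor or Pillow's ImageOps.invert, and this helper is illustrative, not part of AI Tool:

```python
def invert_mask(rows):
    """Invert an 8-bit grayscale mask given as a list of pixel rows.

    Masked (black, 0) areas become white (255) and vice versa.
    Illustrative helper, not part of AI Tool.
    """
    return [[255 - value for value in row] for row in rows]

# A tiny 2x3 mask: 0 = masked (black), 255 = unmasked (white)
mask = [[0, 255, 255],
        [0, 0, 255]]
print(invert_mask(mask))  # [[255, 0, 0], [255, 255, 0]]
```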
No resizing in BI, and I can't understand it, as it's OK when analyzed by DeepStack on a desktop but not by the Pi.

@Tinbum - I would prop the Pi up a little, let it see outside the window. Poor little guy probably feels neglected sitting over there in the corner, half covered in the National Geographic magazines. Or is that just mine? (Only used as a 'PiHole'.)
Hmm, do you have the BI resize image option enabled? If so, try disabling it to see if anything changes.
The DPI scaling on your monitor could be a factor. See if it happens when you set it to 100% DPI. Or maybe play with the AITOOL shortcut > Compatibility tab > DPI settings.
Are you using VNC or Remote Desktop to view the image? If so, see if that is a factor.
I was wondering if it's a bug in the Pi version, but no one else on here has mentioned it. I guess it doesn't really matter that much when running the 1.67 AITool, but when you start running it in your version, the dynamic masks will be in two different positions for the same object depending on whether a desktop or a Pi is analysing the image.

@Tinbum - A bug in the Pi version? Are you using the beta? See what happens when you run in a different mode. I think the switch is something like -MODE=High, Medium, or Low. I seem to recall, looking into the Python code, that those switches may actually resize the image to a different resolution on the DeepStack side of things.
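If DeepStack is resizing the image internally, the boxes it returns would be in the resized image's coordinates, which would explain masks landing in different positions on different platforms. A minimal sketch of mapping a bounding box between two resolutions; this helper is hypothetical, not something AI Tool actually exposes:

```python
def scale_box(box, from_size, to_size):
    """Map (x_min, y_min, x_max, y_max) from one image size to another.

    box:       (x_min, y_min, x_max, y_max) in from_size coordinates
    from_size: (width, height) of the image the box was detected on
    to_size:   (width, height) of the image you want to draw on
    Hypothetical helper for illustration only.
    """
    sx = to_size[0] / from_size[0]
    sy = to_size[1] / from_size[1]
    x_min, y_min, x_max, y_max = box
    return (round(x_min * sx), round(y_min * sy),
            round(x_max * sx), round(y_max * sy))

# A box found on a 640x352 substream frame, drawn on the 1920x1080 main stream:
print(scale_box((100, 50, 200, 150), (640, 352), (1920, 1080)))  # (300, 153, 600, 460)
```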
> Are your detected images shown in the correct place?

Tried out my Pi4 2GB RAM with Intel NCS2 and I have to say I'm pretty impressed. Unfortunately there is no easy way to get the service to auto-start on boot or anything just yet, but the processing times are very quick.
> Tried out my Pi4 2GB RAM with Intel NCS2 and I have to say I'm pretty impressed.

When sending images to the Pi with the NCS2, are these times under heavy load, i.e. how many cameras/images are you sending to be analyzed? The reason I ask is that I have 11 outdoor cameras, about 3-6 of which overlap at any given time, so AI Tool can throw multiple images at DeepStack. Just wondering if the Pi and NCS2 are up to this task? I do have my images pulling from the sub stream with a resolution of 680x420 or 1280x720 depending on the cam.
Image processing times are between 66 ms (640x352 image from my Reolink substream) and 160 ms (1920x1080 image from my EZVIZ spotlight camera). Why are my times so much faster than others'? I have it plugged into the USB 3 port, if others haven't been.
> Are your detected images shown in the correct place?

Legend, that startup script worked. The boxes are in the correct places on the images when reviewing them in the AI Tool.
I used the USB 3 port. I couldn't get it to install on a Pi 4, so I copied an SD card from my 3B+.
To add a startup script at boot:
sudo crontab -e
Select nano if you are prompted to choose an editor.
Add a line at the end of the file that reads like this:
@reboot sudo deepstack start "VISION-DETECTION=True"
Save and exit. In nano, you do that by hitting CTRL + X, answering Y and hitting Enter when prompted.
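If you'd rather not run it from cron, a systemd unit is another common way to auto-start a service on boot. A sketch only: the unit name, description, and the /usr/local/bin path are assumptions, so point ExecStart at wherever your deepstack binary actually lives:

```
# /etc/systemd/system/deepstack.service  (hypothetical unit name and paths)
[Unit]
Description=DeepStack AI server
After=network.target

[Service]
ExecStart=/usr/local/bin/deepstack start "VISION-DETECTION=True"
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Then `sudo systemctl enable --now deepstack` should start it immediately and on every boot.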
> When sending images to the Pi with the NCS2, are these times under heavy load, i.e. how many cameras/images are you sending to be analyzed?

Those times were me manually triggering the streams, so no, not a heavy load. Something seems to be triggering multiple images from the 1080 camera, though, and I'm getting one every second during motion, but they are getting processed faster than they are being generated.
> Legend, that startup script worked. The boxes are in the correct places on the images when reviewing them in the AI Tool.

Thanks for this info. Can you report back after testing under a heavy load? I'm still deciding whether to buy another machine to run DeepStack on or a Pi with an NCS2. For most of my cameras I have BI taking a snapshot every 3-5 seconds depending on camera location. My response times are anywhere from 600 ms to 1.5 s. This is of course dependent on how many cameras are triggered, etc.
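One rough way to gauge heavy-load behaviour before buying hardware is to fire a burst of concurrent detection requests at the Pi and time them. A sketch assuming Python 3; `send_one` here is a stand-in for whatever actually posts a snapshot to your DeepStack endpoint, and the worker/request counts are arbitrary:

```python
import concurrent.futures
import time

def measure_throughput(send_one, n_requests=30, n_workers=6):
    """Send n_requests via n_workers concurrent threads and time the batch.

    send_one: a zero-argument callable performing one detection request
    (e.g. an HTTP POST of a camera snapshot to DeepStack).
    Returns (total seconds, requests per second).
    """
    start = time.perf_counter()
    with concurrent.futures.ThreadPoolExecutor(max_workers=n_workers) as pool:
        futures = [pool.submit(send_one) for _ in range(n_requests)]
        for f in futures:
            f.result()  # propagate any errors from the workers
    elapsed = time.perf_counter() - start
    return elapsed, n_requests / elapsed

# Example with a dummy "request" that just sleeps 100 ms:
elapsed, rps = measure_throughput(lambda: time.sleep(0.1))
print(f"{elapsed:.2f}s total, {rps:.1f} req/s")
```

Six workers roughly models 3-6 overlapping cameras each submitting images at once.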
> Legend, that startup script worked. The boxes are in the correct places on the images when reviewing them in the AI Tool.

Would you mind going through how you installed it on the Pi 4, as I couldn't get it installed even from a totally new install?
> Would you mind going through how you installed it on the Pi 4, as I couldn't get it installed even from a totally new install?

Fresh install of Buster Lite (with desktop). Then did the update and upgrade process.
> Fresh install of Buster Lite (with desktop). Then did the update and upgrade process.

Thanks, I used the full Raspbian, so I'll try Lite.
I followed the instructions here - Using DeepStack on Raspberry PI - ALPHA

Via SSH I ran:

wget

and then I had to run bash install-deepstack.sh and it worked.

Started it up, entered my license key, plugged in the NCS2. That's it.
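Once the service is running, you can sanity-check it outside AI Tool by posting an image to DeepStack's documented object-detection endpoint, /v1/vision/detection, and looking at the JSON it returns. A sketch of filtering that response down to the hits AI Tool would care about; the helper and the sample data are illustrative, though the response shape (a "predictions" list with label, confidence, and x_min/y_min/x_max/y_max) matches DeepStack's documented format:

```python
def people_and_cars(response, min_confidence=0.6):
    """Filter a DeepStack detection response to confident, relevant hits.

    `response` is the parsed JSON body from /v1/vision/detection.
    Illustrative helper; labels and threshold are arbitrary choices.
    """
    wanted = {"person", "car", "truck"}
    return [p for p in response.get("predictions", [])
            if p["label"] in wanted and p["confidence"] >= min_confidence]

# Sample response in DeepStack's documented shape:
sample = {"success": True,
          "predictions": [
              {"label": "person", "confidence": 0.94,
               "x_min": 100, "y_min": 50, "x_max": 200, "y_max": 300},
              {"label": "dog", "confidence": 0.80,
               "x_min": 10, "y_min": 10, "x_max": 40, "y_max": 40}]}
print(people_and_cars(sample))  # keeps only the person
```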
> Another question: has anyone got DeepStack GPU working on a Windows machine?

Don't think they have a Windows version out yet.
> Don't think they have a Windows version out yet.

I wasn't sure if people had done a workaround or not.
> Fresh install of Buster Lite (with desktop). Then did the update and upgrade process.

Thanks, that worked. The only difference to what I had been doing was the Lite version.