[tool] [tutorial] Free AI Person Detection for Blue Iris

Good info. I agree, I don't think I'm going to need both containers running two different versions. Since my original post I did shut each down and only had one version running at a time; I assume shutting a container down is as good as deleting it? They both were processing normally, minus putting the custom label on certain pictures. I'm not sure about the processing times; I will need to go back and check that.
I agree you could just stop a container, but you should also remove the port connection within AITool. You really need to ensure that you configure for one container and then enable/disable the different containers so that you are not accidentally misled. Does your custom container recognise anything?
 
I agree you could just stop a container, but you should also remove the port connection within AITool. You really need to ensure that you configure for one container and then enable/disable the different containers so that you are not accidentally misled. Does your custom container recognise anything?
Yes, what I did was stop the container and remove all references to it in AI Tool. Also, yes, both models, the normal one everyone is using and my custom model, are returning results as expected as far as "seeing" the objects. I am just not getting the custom label if it sees a picture of me, for example. I have run a POST command in Postman several times and I get the correct result, so I am really stumped on what the issue is. John O is going to look at my .pt file and see if it is the file or maybe an issue with DeepStack. There is another person on GitHub who is at about the same point and getting the same results, so my guess is either we are both jacking it up or there is an issue.
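In case it helps anyone trying the same test, the curl equivalent of that Postman POST is roughly the following. The IP, port, image name and model name are placeholders for whatever your container uses; custom models are served on DeepStack's /v1/vision/custom/<model-name> route, separate from the stock /v1/vision/detection route.

Code:
# Placeholder IP/port/model name - substitute your own values.
curl -s -X POST -F "image=@me.jpg" http://127.0.0.1:5000/v1/vision/custom/mymodel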
 
I agree, especially if another pioneer is having the same issue. Are you supposed to receive only your own model's labels, or a combination of the old ones and your new ones with the remodelled container?
 
@Ripper99
Sorry, I have found your post confusing. You keep referring to the RTSP address. What is the IP address for each of your cameras? Needless to say, you cannot use the same IP address for the cameras unless they are cloned.
 
I agree, especially if another pioneer is having the same issue. Are you supposed to receive only your own model's labels, or a combination of the old ones and your new ones with the remodelled container?
According to John we should be receiving both.
 
According to John we should be receiving both.
So in fact you only need one container!
I guess you are integrating your models with those that have already been developed in the earlier version, which begs this question: how does DeepStack prioritise what it sees? For example, it may recognise your face but indicate you are a person!
 
So in fact you only need one container!
I guess you are integrating your models with those that have already been developed in the earlier version, which begs this question: how does DeepStack prioritise what it sees? For example, it may recognise your face but indicate you are a person!
Exactly! That is the part I do not get either. I assume it does double duty, so to speak: does it run the recognition twice? But then I would think I would get double hits, one showing "person" and one showing "custom label" for the same image. I am not smart enough to look at the code and see if or how it would accomplish this.
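One way to check the double-duty question empirically (placeholder values again): send the same image to the stock route and to the custom route and compare the labels that come back. If the container really runs both passes, the stock route should say "person" while the custom route returns the custom label.

Code:
# Placeholder IP/port/model; jq just pulls out the labels and is optional.
curl -s -X POST -F "image=@me.jpg" http://127.0.0.1:5000/v1/vision/detection | jq '.predictions[].label'
curl -s -X POST -F "image=@me.jpg" http://127.0.0.1:5000/v1/vision/custom/mymodel | jq '.predictions[].label'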
 
I'd rather not have to put the Ubiquiti cameras in stand-alone mode, and I've heard of others using Ubiquiti cameras in Protect mode while also using BI. If anybody else has a similar configuration, please let me know: if you are using Protect mode and your cameras are NOT in stand-alone mode, are you seeing the same sort of problem I am?

Thanks in advance for any help.

The problem is Unifi Protect. It causes the keyframe rate to be too low. Ideally it should be 1.00 as shown below:

[Screenshot: BI camera status showing a keyframe rate of 1.00]


There is a post on the BI forum with possible fixes for the cameras themselves.
As for Protect, there is a variable that causes the issue, but it's not something you can edit from the UI.
However, you can change it by modifying the Protect system files and changing the default from 5 to 1, which will fix the issue if you are comfortable editing JavaScript code and using SSH.

I use a CloudKey Gen2+ with Protect 1.17.0 beta 6 currently.

WARNING - THE FOLLOWING MAY CAUSE ISSUES OR BREAK YOUR UNIFI PROTECT INSTALL

Using SSH, go to "/usr/share/unifi-protect/app" (this is for the Gen2+ with 2.0 firmware; it may differ for the Dream Machine and UNVR). Look for the file called service.js. Save a copy of this file in case you break something.

Open service.js (It's minified so it will be harder to read) and search for:

JavaScript:
;a.DEFAULTS=[{idrInterval:5,minClientAdaptiveBitRate:0},{idrInterval:5,minClientAdaptiveBitRate:15e4},{idrInterval:5,minClientAdaptiveBitRate:0}]

then change the three instances of idrInterval:5 to idrInterval:1 as shown below.

JavaScript:
;a.DEFAULTS=[{idrInterval:1,minClientAdaptiveBitRate:0},{idrInterval:1,minClientAdaptiveBitRate:15e4},{idrInterval:1,minClientAdaptiveBitRate:0}]

Save the file, then use SSH to restart Protect:

Code:
systemctl restart unifi-protect

I only have two cameras in Protect, so I don't know what would happen with many cameras.

Since doing this my keyframe rate is 1.00 and BI works the same as if my G3 cameras were in stand-alone mode. Also, you will have to edit the file every time you update your Protect install and/or controller (in my case).
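Since the edit has to be redone after every update, here is an untested sketch of scripting it, assuming sed is available on the device, and with the same warning as above about backing things up first:

Code:
# Back up, swap every idrInterval:5 for idrInterval:1, then restart Protect.
cp /usr/share/unifi-protect/app/service.js /usr/share/unifi-protect/app/service.js.bak
sed -i 's/idrInterval:5/idrInterval:1/g' /usr/share/unifi-protect/app/service.js
systemctl restart unifi-protect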

I made a post about it on ui.com but no one replied.
 
Running the beta GPU DeepStack.

Works nicely, ~90 ms.

But now when I look at the history in AI Tool, the trigger happens 3-7 seconds after the snapshot photo's timestamp. It used to be ~1 second every time.

Anyone else notice this? Anything I can check to decrease that time?

The snapshots go on an SSD, so the drive isn't slow.

Everything else is exactly the same; just the GPU DeepStack beta changed.
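One rough way to narrow it down, with placeholder IP/port and image name: time a request straight against DeepStack. If it comes back in roughly the ~90 ms the interface reports, the extra 3-7 seconds is being spent in the snapshot/AI Tool pipeline rather than in DeepStack itself.

Code:
# Placeholder IP/port - point this at your GPU DeepStack instance.
time curl -s -X POST -F "image=@snapshot.jpg" http://127.0.0.1:5000/v1/vision/detection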
 
I'm trying to get DS running on a Jetson Nano and I'm having a bit of trouble. Here's what I've done so far: I installed the latest JetPack on an SD card, set the Nano up in headless mode with a USB cable, assigned a static IP address, and then accessed the Nano with PuTTY. I then ran the following commands.

Code:
sudo apt-get update
sudo apt-get upgrade

sudo docker pull deepquestai/deepstack:jetpack-2020.12

sudo docker run --runtime nvidia --restart=unless-stopped -e VISION-DETECTION=True -p 80:5000 deepquestai/deepstack:jetpack-2020.12

sudo docker volume create portainer_data

sudo docker run -d -p 8000:8000 -p 9000:9000 --name=portainer --restart=always -v /var/run/docker.sock:/var/run/docker.sock -v portainer_data:/data portainer/portainer-ce

sudo systemctl enable docker.service

sudo reboot now

After the reboot both containers started and I could access Portainer as well as the DS web interface on port 80, but AI Tool cannot communicate with DS.

Code:
12/19/2020 6:35:46 PM DetectObjects Unable to connect to the remote server [WebException] Mod: <DetectObjects>d__31 Line:999:48 Error AITOOLS.EXE 10.1.31.30:5000 GarageMotion GarageMotion.20201219_183544288.jpg 60 1 9 False aitool.[2020-12-19].log

The DS URL in AI Tool is set to 10.1.31.30:5000, which is the IP address of the Nano, and I believe the port number is correct based on the docker run line.

This is my first time working with a Nano and my first experience with Docker, so I'm not familiar with troubleshooting techniques on these platforms. From what I see in Portainer and the output of a few Docker commands, everything looks fine to me, but then I don't have an experienced eye.

Now to complicate things: I have installed JetPack on the Nano twice, once using the GUI and once headless. I was able to get everything working with the GUI install, but I wanted to try out the headless install, so I wiped the card, started over, and now I can't get DS to work again.
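For anyone debugging the same setup, the usual first checks look something like this (container name/ID is whatever docker ps reports):

Code:
sudo docker ps                    # confirm both containers are Up and see the port mappings
sudo docker logs <container-id>   # check DeepStack's startup output for errors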

Any help will be appreciated.

As far as I can tell from your explanation you have done everything correctly, with the exception of the URL in AI Tool. You need to talk to port 80, not 5000; 5000 is the internal Docker container port, which is mapped to the host port 80.

I tend to define a non-standard port rather than 80, just to avoid any potential conflicts with web servers you may have or plan to set up. (But 80 should be fine too.)
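A sketch of what that looks like; 5050 is just an illustrative choice, and the container side stays 5000:

Code:
# Host port 5050 -> container port 5000; AI Tool then points at http://<nano-ip>:5050
sudo docker run --runtime nvidia --restart=unless-stopped -e VISION-DETECTION=True -p 5050:5000 deepquestai/deepstack:jetpack-2020.12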

Hope this solves your issue!
 
In reference to the custom models I need some help. Can anybody help me? I am trying to do the following:
1 - Clone the DeepStack trainer: git clone git@github.com:johnolafenwa/deepstack-trainer.git
2 - CD to the repo root: cd deepstack-trainer
3 - Put the images you want to test in a folder
4 - From the repo root run: python detect.py --weights "C:/path-to-your-model.pt" --source "C:/path-to-your-test-images-folder"

Steps 1-3: no problem.
When I run the command in step 4 I am getting this:

Code:
PS C:\Users\user\Documents\GitHub\deepstack-trainer> python detect.py --weights "C:\Users\user\Documents\my-models" --source " \testimages"
** On entry to DGEBAL parameter number 3 had an illegal value
** On entry to DGEHRD parameter number 2 had an illegal value
** On entry to DORGHR DORGQR parameter number 2 had an illegal value
** On entry to DHSEQR parameter number 4 had an illegal value
Traceback (most recent call last):
  File "C:\Users\user\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.9_qbz5n2kfra8p0\LocalCache\local-packages\Python39\site-packages\numpy\__init__.py", line 305, in <module>
    _win_os_check()
  File "C:\Users\user\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.9_qbz5n2kfra8p0\LocalCache\local-packages\Python39\site-packages\numpy\__init__.py", line 302, in _win_os_check
    raise RuntimeError(msg.format(__file__)) from None
RuntimeError: The current Numpy installation ('C:\\Users\\user\\AppData\\Local\\Packages\\PythonSoftwareFoundation.Python.3.9_qbz5n2kfra8p0\\LocalCache\\local-packages\\Python39\\site-packages\\numpy\\__init__.py') fails to pass a sanity check due to a bug in the windows runtime. See this issue for more information:
Traceback (most recent call last):
  File "C:\Users\user\Documents\GitHub\deepstack-trainer\detect.py", line 5, in <module>
    import cv2
  File "C:\Users\user\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.9_qbz5n2kfra8p0\LocalCache\local-packages\Python39\site-packages\cv2\__init__.py", line 5, in <module>
    from .cv2 import *
ImportError: numpy.core.multiarray failed to import

Any help is appreciated. Please Barney-style the answer...
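For anyone hitting the same traceback, two likely culprits, offered as guesses rather than a confirmed fix: the RuntimeError matches a known numpy 1.19.4 sanity-check failure on recent Windows builds (pinning the previous release is the usual workaround), and the --weights argument above points at a folder rather than at the .pt file itself (the --source path also has a stray leading space).

Code:
# Known workaround for the numpy 1.19.4 Windows sanity-check failure:
pip install numpy==1.19.3
# And point --weights at the model file itself (paths/file name below are placeholders):
python detect.py --weights "C:\Users\user\Documents\my-models\your-model.pt" --source "C:\Users\user\Documents\testimages"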
 
As far as I can tell from your explanation you have done everything correctly, with the exception of the URL in AI Tool. You need to talk to port 80, not 5000; 5000 is the internal Docker container port, which is mapped to the host port 80.

Well, duh, that was it. I've been running the native Windows version of DS with the web server on port 80 and the pic submission port on 5050. I didn't understand how the Docker version works and assumed that it used two different ports like the Windows version. After setting -p 5050:5000 it works as desired.

I'm currently running MODE=High, using full-size images of 1280x720, 1920x1080 and 2048x1536, and the average DS time is around half a second. This time is fine for me since I'm running DS on three different machines on the network and my current camera setup cannot saturate the collective DSs, but I saw a post on the DeepStack forum from sickidolderivative saying that he processes images on the Nano in around 130 ms, though he's submitting images between 300x300 and 500x500. I wonder if accuracy suffers with such small images. I prefer accuracy over speed, but it's probably not necessary to submit full-resolution images. I'll have to hunt for the sweet spot.
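A rough way to hunt for that sweet spot, assuming ImageMagick and curl are installed; the IP, port and snapshot name are placeholders:

Code:
# Downscale one representative snapshot to several widths and time
# DeepStack's response at each; compare the confidence values too.
for W in 640 800 1280 1920; do
  convert snapshot.jpg -resize ${W}x test_${W}.jpg
  time curl -s -X POST -F "image=@test_${W}.jpg" http://127.0.0.1:5000/v1/vision/detection
done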

Thanks for helping me get up and running with the Nano!
 
For those using PTZ cameras in your setup with BI / AI Tool / DeepStack, I am curious whether it works for detection and what you have done setup-wise.
 
For those using PTZ cameras in your setup with BI / AI Tool / DeepStack, I am curious whether it works for detection and what you have done setup-wise.
All my cameras are PTZ and they work just the same as non-PTZ.
PTZ simply means you have more control over them one way or another.
It stands for Pan, Tilt, Zoom, but it does not mean that your cameras necessarily support all of those functions.
 
All my cameras are PTZ and they work just the same as non-PTZ.

PTZ simply means you have more control over them one way or another.
Yes, that is correct; I should have posed my question better. Assuming you have them set up to auto-scan, or to move between presets at night, do you have any issues with detection? How do you handle missed/false detections caused by the movement? Also, did you (or anyone) do anything different with your triggers or settings in BI to adjust for issues caused by movement vs. a static camera?
 
@Ripper99
Sorry, I have found your post confusing. You keep referring to the RTSP address. What is the IP address for each of your cameras? Needless to say, you cannot use the same IP address for the cameras unless they are cloned.

I've shown this in my post?

Camera Office = 192.168.1.120:7447/JJnG64KrxTHEzSCP

Camera Theatre= 192.168.1.120:7447/MMtG64LryuKKzTTY

I'm very aware cameras cannot use the same address; that's why I mentioned this is a Unifi Protect system, where every stream comes from the NVR's address. The RTSP URL is unique, as shown in my example. A camera may not be able to use the same IP, but if you refer to the addresses I provided you can see the Ubiquiti system does just this: it gives a unique URL for each camera that is connected to its CloudKey/Protect NVR.
 
Yes, that is correct; I should have posed my question better. Assuming you have them set up to auto-scan, or to move between presets at night, do you have any issues with detection? How do you handle missed/false detections caused by the movement? Also, did you (or anyone) do anything different with your triggers or settings in BI to adjust for issues caused by movement vs. a static camera?
I suspect that you are referring to what is sometimes called patrol mode.
BI has no way to know when the camera will move, and it will trigger as the lens moves. I guess AI Tool will simply do its best to capture the events you are looking for.
 
I've shown this in my post?

Camera Office = 192.168.1.120:7447/JJnG64KrxTHEzSCP

Camera Theatre= 192.168.1.120:7447/MMtG64LryuKKzTTY

I'm very aware cameras cannot use the same address; that's why I mentioned this is a Unifi Protect system, where every stream comes from the NVR's address. The RTSP URL is unique, as shown in my example. A camera may not be able to use the same IP, but if you refer to the addresses I provided you can see the Ubiquiti system does just this: it gives a unique URL for each camera that is connected to its CloudKey/Protect NVR.
Your question appears to be related specifically to Ubiquiti, so I clearly misunderstood what you were asking, and to be honest I still don't understand.
 