Run custom model on Ubuntu Docker Deepstack for GPU

durnovtsev

n3wb
Joined
Nov 9, 2021
Messages
12
Reaction score
2
Location
Perm
I want to run a custom model via DeepStack for GPU in Docker on Ubuntu. What command should I write in the terminal?
To launch the built-in DeepStack models, I use: docker run --gpus all -e VISION-DETECTION=True -v localstorage:/datastore -p 80:5000 deepquestai/deepstack:gpu
This works great in Blue Iris.

I want to run a custom model and the built-in models (person, truck, car...) at the same time.

What commands do I need to write in the terminal?
 

digger11

Getting comfortable
Joined
Mar 26, 2014
Messages
355
Reaction score
373
I launched the custom OpenLogo model and DeepStack appears to have started, but BI does not detect it.
In the screenshot, a green vertical line is marked at the top left.
I managed to get MikeLud1's custom model from the IP Cam Talk Custom Community DeepStack Model thread working on an Nvidia Jetson Nano using a docker command similar to yours.
The command I used was
Code:
sudo docker run -v ~/Documents/DeepStack-Models:/modelstore/detection  --restart unless-stopped -d --gpus all --runtime nvidia -e VISION-DETECTION=True -e MODE=High -e TIMEOUT=30 -p 82:5000 deepquestai/deepstack:jetpack-2022.01.1
Initially BI wasn't using the custom model until I made sure the "Use custom model folder" checkbox was checked on the AI tab. The folder name didn't seem to matter, as I left it pointing to the Windows directory it had been pointed to when I was running DeepStack on the BI server itself.
[Attached screenshot: AI.jpg]

Adding "combined" to a camera's Artificial Intelligence>Custom models setting, and then triggering the camera resulted in the docker container's log showing both the built in objects and the custom models being used:

user@nano:~$ sudo docker container logs boring_kalam | tail
[GIN] 2022/01/24 - 22:39:31 | 200 | 271.130453ms | 192.168.1.240 | POST "/v1/vision/custom/combined"
[GIN] 2022/01/24 - 22:39:31 | 200 | 390.641304ms | 192.168.1.240 | POST "/v1/vision/custom/combined"
[GIN] 2022/01/24 - 22:39:31 | 200 | 128.60058ms | 192.168.1.240 | POST "/v1/vision/detection"
[GIN] 2022/01/24 - 22:39:31 | 200 | 371.229361ms | 192.168.1.240 | POST "/v1/vision/custom/combined"
[GIN] 2022/01/24 - 22:39:32 | 200 | 368.701372ms | 192.168.1.240 | POST "/v1/vision/detection"
[GIN] 2022/01/24 - 22:39:32 | 200 | 307.351815ms | 192.168.1.240 | POST "/v1/vision/custom/combined"
[GIN] 2022/01/24 - 22:39:33 | 200 | 118.107581ms | 192.168.1.240 | POST "/v1/vision/detection"
[GIN] 2022/01/24 - 22:39:33 | 200 | 120.855053ms | 192.168.1.240 | POST "/v1/vision/custom/combined"
[GIN] 2022/01/24 - 22:39:34 | 200 | 214.133355ms | 192.168.1.240 | POST "/v1/vision/detection"
[GIN] 2022/01/24 - 22:39:34 | 200 | 143.520209ms | 192.168.1.240 | POST "/v1/vision/custom/combined"
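If you want to watch that happen live while you trigger a camera, something like this should work to follow the log and filter for the DeepStack endpoints (boring_kalam is just the name Docker auto-generated for my container; sudo docker ps will show yours):
Code:
sudo docker container logs -f boring_kalam 2>&1 | grep '/v1/vision'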
 

durnovtsev

n3wb
Joined
Nov 9, 2021
Messages
12
Reaction score
2
Location
Perm
Please help me, I'm confused. I checked the custom models checkbox, but it still doesn't work. Could you please write out the command I need?
 

digger11

Getting comfortable
Joined
Mar 26, 2014
Messages
355
Reaction score
373
If I understand correctly, you currently have DeepStack working with the included objects model, launching DeepStack in Docker with the command:
docker run --gpus all -e VISION-DETECTION=True -v localstorage:/datastore -p 80:5000 deepquestai/deepstack:gpu
Now you want to add the combined.pt custom model.
Correct?

If so, you need to place the combined.pt model into a directory and make DeepStack aware of where that model resides.

On my Nano, running Ubuntu, I did the following:
1) Create a directory to hold custom models. I made a DeepStack-Models directory under my user's Documents directory with the command
mkdir ~/Documents/DeepStack-Models
2) Copy combined.pt into the DeepStack-Models directory. I used WinSCP to transfer the file from my Blue Iris machine over to the Nano, but you can use whatever method you want.
3) Modify the docker run command to make the DeepStack-Models directory available to DeepStack
docker run --gpus all -e VISION-DETECTION=True -v ~/Documents/DeepStack-Models:/modelstore/detection -v localstorage:/datastore -p 80:5000 deepquestai/deepstack:gpu
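Put together, steps 1 through 3 come out looking something like this on the Ubuntu box (the scp line is only one possible way to copy the file over, and the Windows-side path is a placeholder; WinSCP or any other method works just as well):
Code:
# 1) create a directory to hold custom models
mkdir -p ~/Documents/DeepStack-Models
# 2) copy combined.pt into it (placeholder source path; use WinSCP or any other method you prefer)
scp user@blueiris-pc:"C:/DeepStack-Models/combined.pt" ~/Documents/DeepStack-Models/
# 3) launch DeepStack with the built-in objects model enabled and the custom model folder mounted
docker run --gpus all -e VISION-DETECTION=True \
  -v ~/Documents/DeepStack-Models:/modelstore/detection \
  -v localstorage:/datastore -p 80:5000 deepquestai/deepstack:gpu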

BTW, if you want to test whether the custom model is working, you can issue a curl command from a command window on your Blue Iris machine. My Nano is at IP address 192.168.1.250 and DeepStack is listening on port 82, so modify the IP address, port, and file location in the example commands below to match your environment.
curl --location --request POST 192.168.1.250:82/v1/vision/custom/combined --form image=@"C:\BlueIris\Alerts\LivingRoom.20220128_100000.732066.5123-1.30471.33470.jpg"
Testing the default object detection can be done with the command:
curl --location --request POST 192.168.1.250:82/v1/vision/detection --form image=@"C:\BlueIris\Alerts\LivingRoom.20220128_100000.732066.5123-1.30471.33470.jpg"
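If the endpoints are up, both curl calls should come back with a JSON response along these lines (the values below are made-up examples, but the fields are what DeepStack's detection responses normally contain):
Code:
{
  "success": true,
  "predictions": [
    { "label": "person", "confidence": 0.87, "x_min": 120, "y_min": 80, "x_max": 310, "y_max": 420 }
  ]
}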
[Attached screenshot: combined.jpg]
 

durnovtsev

n3wb
Joined
Nov 9, 2021
Messages
12
Reaction score
2
Location
Perm
Hello, digger11. Unfortunately, the custom model is still not visible in Blue Iris, but everything works when I run the curl commands from the Windows 10 command line; I've attached photos.

So it turns out DeepStack launches the custom model correctly, and the problem lies in Blue Iris itself.

Which version of Blue Iris are you using?
 


digger11

Getting comfortable
Joined
Mar 26, 2014
Messages
355
Reaction score
373
I think I've just about decided that the openlogo.pt model has issues. I downloaded it today and have been unsuccessful getting it to work on my Nano or running DeepStack GPU natively on my Windows machine. Just to make sure it wasn't something I was doing wrong I also pulled down the ExDark model and had no trouble getting it working on both platforms.
 

digger11

Getting comfortable
Joined
Mar 26, 2014
Messages
355
Reaction score
373
@durnovtsev since it looks like you have got the openlogo endpoint to process an image successfully with a curl command, I would be curious to know what the processing time was from the docker container log. My theory is that the processing times are exceeding the timeout that Blue Iris is imposing. If you look at the docker container log output for lines that have a response code of 200 and the endpoint "/v1/vision/custom/openlogo", what sort of processing times are you seeing?
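One quick way to pull just those lines out of the log is something like this (substitute your container's name, which sudo docker ps will show):
Code:
sudo docker container logs <container_name> 2>&1 | grep '/v1/vision/custom/openlogo' | grep '| 200 |'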

Which version of blue iris are you using?
I'm on the latest version, 5.5.4.5 x64 (1/20/2022), but if you search the forum for openlogo you'll see posts (most of which are about not being able to get openlogo to work) going back to version 5.4.7.11.
 

chewbucka

n3wb
Joined
Nov 26, 2018
Messages
10
Reaction score
9
Location
Texas
To get custom models to work in Blue Iris 5.59.3 x64 with deepstack running in a docker on an Ubuntu server, I had to make a few adjustments to the general settings AI tab.

I have the "Use AI server on IP/port" checked with the IP and port that I have deepstack running.
I have the "Use custom model folder" checked and specify a local folder on the windows machine. I then copied the custom model files into this directory. In my case, I copied the general.pt and dark.pt files into this windows folder.
I unchecked the "Default object detection" checkbox. Unchecking this prevents Blue Iris from trying to use the default objects. With this un-checked, I don't have to specify objects:0,general or objects:0,dark in the AI configuration for each camera. I can simply put in general or dark for the custom models. Also, when analyzing clips using the "Testing & Tuning/Analyze with AI", Blue Iris doesn't use the default objects, but only the custom models.

Like above, I launch the docker image using a similar CLI.

sudo docker run -d --restart unless-stopped -e THREADCOUNT=10 -e VISION-DETECTION=True -v /home/deepstack/my-models:/modelstore/detection -p 5000:5000 --name deepstack deepquestai/deepstack

I copied the same custom model files from the Blue Iris custom model folder over to the /home/deepstack/my-models folder on the Ubuntu server.

Now when viewing the logs (-n 50 means start with the last 50 log lines, -f means follow the live log, and deepstack is the name of my container as specified in the CLI above)

sudo docker logs -n 50 -f deepstack

I only see Blue Iris making calls to:

/v1/vision/custom/general

and

/v1/vision/custom/dark

My processing times went from 1500ms avg to about 700ms avg during the day using the general.pt custom model, and 550ms at night using the dark.pt custom model.
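Since those [GIN] log lines include the response time, you can get a rough average for an endpoint straight from the log with something like this (deepstack is my container name; swap in whichever endpoint you want to measure):
Code:
sudo docker logs deepstack 2>&1 | grep 'custom/general' | awk -F'|' '{gsub(/ms| /, "", $3); sum += $3; n++} END {if (n) printf "%.0fms average over %d requests\n", sum/n, n}'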
 