Hi friend, did you find an answer? I'm having the same problem here. How should one configure BlueIris to call the "general" custom model when DeepStack is run from Docker on a Jetson Nano?
I tried (in the camera's "Trigger/Artificial Intelligence" form):
- "objects:0,general" or "objects:0,custom/general": DeepStack does not get called at all (confirmed with Wireshark)
- "general": the default detection endpoint (/v1/vision/detection) gets called instead

The general model itself runs fine when I call it directly (using /v1/vision/custom/general instead of /v1/vision/detection), so there is no problem on the DeepStack side.
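For reference, this is roughly how I check the two endpoints by hand (a sketch; the image file name and host/port are from my setup, adjust to yours):

```shell
# Call the custom "general" model directly -- this works for me.
# DeepStack listens on port 80 on the host (mapped from 5000 in the container).
curl -X POST -F image=@test.jpg http://localhost:80/v1/vision/custom/general

# Call the built-in detector for comparison -- this is what BI falls back to.
curl -X POST -F image=@test.jpg http://localhost:80/v1/vision/detection
```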
This is how I load DeepStack:
I put general.pt in ~/aimodels/
and started deepstack with:
sudo docker run -d \
  --log-driver syslog \
  --runtime nvidia \
  --name deepstack \
  --restart unless-stopped \
  -e VISION-DETECTION=True \
  -e MODE=High \
  -v /home/myuser/aimodels:/modelstore/detection \
  -p 80:5000 \
  deepquestai/deepstack:jetpack-2021.09.1
I cannot run the combined model: loading it causes an out-of-memory error on my 4 GB Jetson Nano.
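One memory-saving idea (untested on my Nano, so treat it as a sketch): since custom models mounted under /modelstore/detection are loaded independently of the built-in detector, starting the container without VISION-DETECTION=True should skip loading the default model and free some memory for a larger custom model:

```shell
# Hypothetical variant: VISION-DETECTION is left unset, so only the
# custom model(s) found in /modelstore/detection are loaded.
# Note: /v1/vision/detection will no longer be available.
sudo docker run -d --log-driver syslog --runtime nvidia \
  --name deepstack --restart unless-stopped \
  -e MODE=High \
  -v /home/myuser/aimodels:/modelstore/detection \
  -p 80:5000 deepquestai/deepstack:jetpack-2021.09.1
```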
BI: 184.108.40.206 on Win10
Again: running in Docker, calling the general API directly works.