[tool] [tutorial] Free AI Person Detection for Blue Iris

To the best of my knowledge, you must use a different port for each version. Try using 8384 for the second instance, but you will still have an issue with AI Tool handling more than one port. Port 80 is probably already being used by some other app.
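If you do go that route, the second instance would just publish a different host port in its docker run command; something roughly like this (the stock image name is only an example, use whatever image you are actually running):
Code:
sudo docker run -e VISION-DETECTION=True -p 8384:5000 deepquestai/deepstack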

Why can't you incorporate everything into one module?
You may very well be able to do that... but I wouldn't know! LMAO!! I did some more searching and found where someone else was asking a similar question to mine, and the response they were given was: "you need to add a new volume mapping to map your model directory to the /modelstore/detection directory in docker, you can enable both your custom model and the vision detection in DeepStack". Now I need to figure out how to try that. Don't suppose I can just R-click someplace and create a new folder, huh? Pretty sure it involves some arcane string of characters I have no idea about. :eek:
 
Now mapping sounds like the way forward as you will presumably only need the one port address. Once you get this all working be ready to answer a lot of questions :)
 
That you think I am going to get that far is nice... not sure about the validity... but nice. I am now trying to figure out how exactly to do said mapping... MTF.
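From what I can tell from that answer, the mapping is just an extra -v flag on the docker run command; a rough sketch, where the local folder path is only an example and not from the thread:
Code:
sudo docker run -e VISION-DETECTION=True -v /home/me/custom-models:/modelstore/detection -p 80:5000 deepquestai/deepstack
The part before the colon is an ordinary folder on the host (you really can create it with a right-click, or with mkdir); the part after the colon is where DeepStack looks for custom models inside the container.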
 
Hi Guys,

I'm new to this forum and IP cameras in general, but after reading a ton about how to set it up, I want to start with a system that won't overwhelm me with false notifications. The AI Tool, DeepStack and Blue Iris combination seems like the way to go for my situation.

I'm having a problem getting the AI tool to communicate with the Deepstack server. I've tried Deepstack on a Linux VM in docker and on the Windows 10 VM that's also running BI and the AI tool. Neither one seems to work. I think the AI tool can see the images that come from BI, but there doesn't seem to be a proper connection to Deepstack. The history tab is empty.

Below I've pasted the log for the current Windows setup. It looks like an error around line 126 in the traceback. I also usually get the following:

DateFuncDetailLevelSourceAIServerCameraImageIdxDepthColorThreadIDFromFileFilename
2020-12-18 11:32:05 AMGetDeepStackRun Deepstack partially running. You many need to manually kill server.exe, python.exe, redis-server.exeErrorAITOOLS.EXENoneDrivewaySDNone42911FalseAITool.[2020-12-18].log
Please let me know if anyone sees something that could lead to a solution.



Thanks.
DateFuncDetailLevelSourceAIServerCameraImageIdxDepthColorThreadIDFromFileFilename
2020-12-18 9:38:25 AMDSHandleRedisProcMSGDebug: DeepStack>> [6020] 18 Dec 09:38:25 # no config file specified, using the default config. In order to specify a config file use 'redis-server /path/to/redis.conf'WarnAITOOLS.EXENoneNoneREDIS-SERVER.EXE6814FalseAITool.[2020-12-18].log
2020-12-18 9:38:25 AMDSHandleRedisProcMSGDebug: DeepStack>> [6020] 18 Dec 09:38:25 # Opening port 6379: bind 10048ErrorAITOOLS.EXENoneNoneREDIS-SERVER.EXE6914FalseAITool.[2020-12-18].log
2020-12-18 9:38:30 AMDSHandlePythonProcERRORDeepStack>> Traceback (most recent call last):ErrorAITOOLS.EXENoneNonePYTHON.EXE89110FalseAITool.[2020-12-18].log
2020-12-18 9:38:30 AMDSHandlePythonProcERRORDeepStack>> File "C:\DeepStack\interpreter\packages\tensorflow\python\pywrap_tensorflow.py", line 58, in <module>ErrorAITOOLS.EXENoneNonePYTHON.EXE90110FalseAITool.[2020-12-18].log
2020-12-18 9:38:30 AMDSHandlePythonProcERRORDeepStack>> from tensorflow.python.pywrap_tensorflow_internal import *ErrorAITOOLS.EXENoneNonePYTHON.EXE91110FalseAITool.[2020-12-18].log
2020-12-18 9:38:30 AMDSHandlePythonProcERRORDeepStack>> File "C:\DeepStack\interpreter\packages\tensorflow\python\pywrap_tensorflow_internal.py", line 28, in <module>ErrorAITOOLS.EXENoneNonePYTHON.EXE92110FalseAITool.[2020-12-18].log
2020-12-18 9:38:30 AMDSHandlePythonProcERRORDeepStack>> _pywrap_tensorflow_internal = swig_import_helper()ErrorAITOOLS.EXENoneNonePYTHON.EXE93110FalseAITool.[2020-12-18].log
2020-12-18 9:38:30 AMDSHandlePythonProcERRORDeepStack>> File "C:\DeepStack\interpreter\packages\tensorflow\python\pywrap_tensorflow_internal.py", line 24, in swig_import_helperErrorAITOOLS.EXENoneNonePYTHON.EXE94110FalseAITool.[2020-12-18].log
2020-12-18 9:38:30 AMDSHandlePythonProcERRORDeepStack>> _mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description)ErrorAITOOLS.EXENoneNonePYTHON.EXE95110FalseAITool.[2020-12-18].log
2020-12-18 9:38:30 AMDSHandlePythonProcERRORDeepStack>> File "imp.py", line 242, in load_moduleErrorAITOOLS.EXENoneNonePYTHON.EXE96110FalseAITool.[2020-12-18].log
2020-12-18 9:38:30 AMDSHandlePythonProcERRORDeepStack>> File "imp.py", line 342, in load_dynamicErrorAITOOLS.EXENoneNonePYTHON.EXE97110FalseAITool.[2020-12-18].log
2020-12-18 9:38:30 AMDSHandlePythonProcERRORDeepStack>> Import DLL load failed: A dynamic link library (DLL) initialization routine failed.ErrorAITOOLS.EXENoneNonePYTHON.EXE98110FalseAITool.[2020-12-18].log
2020-12-18 9:38:30 AMDSHandlePythonProcERRORDeepStack>> During handling of the above exception, another exception occurred:ErrorAITOOLS.EXENoneNonePYTHON.EXE99110FalseAITool.[2020-12-18].log
2020-12-18 9:38:30 AMDSHandlePythonProcERRORDeepStack>> Traceback (most recent call last):ErrorAITOOLS.EXENoneNonePYTHON.EXE100110FalseAITool.[2020-12-18].log
2020-12-18 9:38:30 AMDSHandlePythonProcERRORDeepStack>> File "../intelligence.py", line 13, in <module>ErrorAITOOLS.EXENoneNonePYTHON.EXE101110FalseAITool.[2020-12-18].log
2020-12-18 9:38:30 AMDSHandlePythonProcERRORDeepStack>> from sharedintelligence.commons import preprocessErrorAITOOLS.EXENoneNonePYTHON.EXE102110FalseAITool.[2020-12-18].log
2020-12-18 9:38:30 AMDSHandlePythonProcERRORDeepStack>> File "C:\DeepStack\sharedintelligence\init.py", line 5, in <module>ErrorAITOOLS.EXENoneNonePYTHON.EXE103117FalseAITool.[2020-12-18].log
2020-12-18 9:38:30 AMDSHandlePythonProcERRORDeepStack>> from .detection3 import DetectModel3ErrorAITOOLS.EXENoneNonePYTHON.EXE104117FalseAITool.[2020-12-18].log
2020-12-18 9:38:30 AMDSHandlePythonProcERRORDeepStack>> File "C:\DeepStack\sharedintelligence\detection3\init.py", line 1, in <module>ErrorAITOOLS.EXENoneNonePYTHON.EXE105117FalseAITool.[2020-12-18].log
2020-12-18 9:38:30 AMDSHandlePythonProcERRORDeepStack>> from .process import DetectModel3ErrorAITOOLS.EXENoneNonePYTHON.EXE106117FalseAITool.[2020-12-18].log
2020-12-18 9:38:30 AMDSHandlePythonProcERRORDeepStack>> File "C:\DeepStack\sharedintelligence\detection3\process.py", line 1, in <module>ErrorAITOOLS.EXENoneNonePYTHON.EXE107117FalseAITool.[2020-12-18].log
2020-12-18 9:38:30 AMDSHandlePythonProcERRORDeepStack>> from .utils import read_pb_return_tensors,cpu_nmsErrorAITOOLS.EXENoneNonePYTHON.EXE108117FalseAITool.[2020-12-18].log
2020-12-18 9:38:30 AMDSHandlePythonProcERRORDeepStack>> File "C:\DeepStack\sharedintelligence\detection3\utils.py", line 1, in <module>ErrorAITOOLS.EXENoneNonePYTHON.EXE109117FalseAITool.[2020-12-18].log
2020-12-18 9:38:30 AMDSHandlePythonProcERRORDeepStack>> import tensorflow as tfErrorAITOOLS.EXENoneNonePYTHON.EXE110117FalseAITool.[2020-12-18].log
2020-12-18 9:38:30 AMDSHandlePythonProcERRORDeepStack>> File "C:\DeepStack\interpreter\packages\tensorflow\init.py", line 24, in <module>ErrorAITOOLS.EXENoneNonePYTHON.EXE111117FalseAITool.[2020-12-18].log
2020-12-18 9:38:30 AMDSHandlePythonProcERRORDeepStack>> from tensorflow.python import pywrap_tensorflow # pylint: disable=unused-importErrorAITOOLS.EXENoneNonePYTHON.EXE112117FalseAITool.[2020-12-18].log
2020-12-18 9:38:30 AMDSHandlePythonProcERRORDeepStack>> File "C:\DeepStack\interpreter\packages\tensorflow\python\init.py", line 49, in <module>ErrorAITOOLS.EXENoneNonePYTHON.EXE113117FalseAITool.[2020-12-18].log
2020-12-18 9:38:30 AMDSHandlePythonProcERRORDeepStack>> from tensorflow.python import pywrap_tensorflowErrorAITOOLS.EXENoneNonePYTHON.EXE114117FalseAITool.[2020-12-18].log
2020-12-18 9:38:30 AMDSHandlePythonProcERRORDeepStack>> File "C:\DeepStack\interpreter\packages\tensorflow\python\pywrap_tensorflow.py", line 74, in <module>ErrorAITOOLS.EXENoneNonePYTHON.EXE115117FalseAITool.[2020-12-18].log
2020-12-18 9:38:30 AMDSHandlePythonProcERRORDeepStack>> raise ImportError(msg)ErrorAITOOLS.EXENoneNonePYTHON.EXE116117FalseAITool.[2020-12-18].log
2020-12-18 9:38:30 AMDSHandlePythonProcERRORDeepStack>> Import Traceback (most recent call last):ErrorAITOOLS.EXENoneNonePYTHON.EXE117117FalseAITool.[2020-12-18].log
2020-12-18 9:38:30 AMDSHandlePythonProcERRORDeepStack>> File "C:\DeepStack\interpreter\packages\tensorflow\python\pywrap_tensorflow.py", line 58, in <module>ErrorAITOOLS.EXENoneNonePYTHON.EXE118117FalseAITool.[2020-12-18].log
2020-12-18 9:38:30 AMDSHandlePythonProcERRORDeepStack>> from tensorflow.python.pywrap_tensorflow_internal import *ErrorAITOOLS.EXENoneNonePYTHON.EXE119117FalseAITool.[2020-12-18].log
2020-12-18 9:38:30 AMDSHandlePythonProcERRORDeepStack>> File "C:\DeepStack\interpreter\packages\tensorflow\python\pywrap_tensorflow_internal.py", line 28, in <module>ErrorAITOOLS.EXENoneNonePYTHON.EXE120117FalseAITool.[2020-12-18].log
2020-12-18 9:38:30 AMDSHandlePythonProcERRORDeepStack>> _pywrap_tensorflow_internal = swig_import_helper()ErrorAITOOLS.EXENoneNonePYTHON.EXE121117FalseAITool.[2020-12-18].log
2020-12-18 9:38:30 AMDSHandlePythonProcERRORDeepStack>> File "C:\DeepStack\interpreter\packages\tensorflow\python\pywrap_tensorflow_internal.py", line 24, in swig_import_helperErrorAITOOLS.EXENoneNonePYTHON.EXE122117FalseAITool.[2020-12-18].log
2020-12-18 9:38:30 AMDSHandlePythonProcERRORDeepStack>> _mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description)ErrorAITOOLS.EXENoneNonePYTHON.EXE123117FalseAITool.[2020-12-18].log
2020-12-18 9:38:30 AMDSHandlePythonProcERRORDeepStack>> File "imp.py", line 242, in load_moduleErrorAITOOLS.EXENoneNonePYTHON.EXE124117FalseAITool.[2020-12-18].log
2020-12-18 9:38:30 AMDSHandlePythonProcERRORDeepStack>> File "imp.py", line 342, in load_dynamicErrorAITOOLS.EXENoneNonePYTHON.EXE125117FalseAITool.[2020-12-18].log
2020-12-18 9:38:30 AMDSHandlePythonProcERRORDeepStack>> Import DLL load failed: A dynamic link library (DLL) initialization routine failed.ErrorAITOOLS.EXENoneNonePYTHON.EXE126117FalseAITool.[2020-12-18].log
2020-12-18 9:38:30 AMDSHandlePythonProcERRORDeepStack>> Failed to load the native TensorFlow runtime.ErrorAITOOLS.EXENoneNonePYTHON.EXE127117FalseAITool.[2020-12-18].log
2020-12-18 9:38:30 AMDSHandlePythonProcERRORDeepStack>> See Build and install error messages | TensorFlowErrorAITOOLS.EXENoneNonePYTHON.EXE128117FalseAITool.[2020-12-18].log
2020-12-18 9:38:30 AMDSHandlePythonProcERRORDeepStack>> for some common reasons and solutions. Include the entire stack traceErrorAITOOLS.EXENoneNonePYTHON.EXE129117FalseAITool.[2020-12-18].log
2020-12-18 9:38:30 AMDSHandlePythonProcERRORDeepStack>> above this error message when asking for help.ErrorAITOOLS.EXENoneNonePYTHON.EXE130117FalseAITool.[2020-12-18].log
2020-12-18 9:38:32 AMGetDeepStackRun Deepstack partially running. You many need to manually kill server.exe, python.exe, redis-server.exeErrorAITOOLS.EXENoneNoneNone13221FalseAITool.[2020-12-18].log
2020-12-18 9:38:32 AMGetDeepStackRun Deepstack partially running. You many need to manually kill server.exe, python.exe, redis-server.exeErrorAITOOLS.EXENoneNoneNone13441FalseAITool.[2020-12-18].log
2020-12-18 9:38:35 AMStart 5 python.exe processes did not fully start in 10110msErrorAITOOLS.EXENoneNoneNone13614FalseAITool.[2020-12-18].log
2020-12-18 9:38:35 AMGetDeepStackRun Deepstack partially running. You many need to manually kill server.exe, python.exe, redis-server.exeErrorAITOOLS.EXENoneNoneNone13711FalseAITool.[2020-12-18].log
2020-12-18 9:46:01 AMDetectObjects Got http status code 'Forbidden' (403) in 86ms: ForbiddenErrorAITOOLS.EXE192.168.10.4:83DrivewaySDDrivewaySD.20201218_094601131.jpg148119FalseAITool.[2020-12-18].log
2020-12-18 9:46:01 AMDetectObjects Empty string returned from HTTP post.ErrorAITOOLS.EXE192.168.10.4:83DrivewaySDDrivewaySD.20201218_094601131.jpg149119FalseAITool.[2020-12-18].log
2020-12-18 9:46:31 AMDetectObjects Got http status code 'Forbidden' (403) in 9ms: ForbiddenErrorAITOOLS.EXE192.168.10.4:83DrivewaySDDrivewaySD.20201218_094601131.jpg166119FalseAITool.[2020-12-18].log
2020-12-18 9:46:31 AMDetectObjects Empty string returned from HTTP post.ErrorAITOOLS.EXE192.168.10.4:83DrivewaySDDrivewaySD.20201218_094601131.jpg167119FalseAITool.[2020-12-18].log
2020-12-18 9:47:01 AMDetectObjects Got http status code 'Forbidden' (403) in 18ms: ForbiddenErrorAITOOLS.EXE192.168.10.4:83DrivewaySDDrivewaySD.20201218_094606131.jpg182119FalseAITool.[2020-12-18].log
2020-12-18 9:47:01 AMDetectObjects Empty string returned from HTTP post.ErrorAITOOLS.EXE192.168.10.4:83DrivewaySDDrivewaySD.20201218_094606131.jpg183119FalseAITool.[2020-12-18].log
2020-12-18 9:47:01 AMImageQueueLoop... AI URL for 'DeepStack' failed '6' times. Disabling: ''ErrorAITOOLS.EXE192.168.10.4:83DrivewaySDNone189019FalseAITool.[2020-12-18].log
 
@BossHogg
I propose that you remove the Windows version of DeepStack and install Docker with DeepStack running in a Linux environment. After installation, test it before integrating with AI Tool. The Docker environment will allow you to run current versions of DeepStack that are not presently available for operation within a Windows environment.
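A quick way to test it on its own (assuming you publish the container on port 80 and have a sample jpg to hand) is to POST an image straight at the detection endpoint and check that JSON with predictions comes back:
Code:
curl -X POST -F image=@test.jpg http://<deepstack-host>:80/v1/vision/detection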
 

Hi Village Guy,

I previously had an Ubuntu VM running Docker and the :latest version of DeepStack. That didn't work either. I'll revert back to that setup and post the log.

Actually, are you proposing I install Docker in Windows instead?

Thanks for the help.
 
Yes:thumb:
 

I tried to install docker for windows and got the following error:

System.InvalidOperationException:
Failed to deploy distro docker-desktop to C:\Users\Graham\AppData\Local\Docker\wsl\distro: exit code: -1
stdout: Please enable the Virtual Machine Platform Windows feature and ensure virtualization is enabled in the BIOS.

I'm running this Win10 VM on Proxmox on a Dell R710. I know the CPUs support virtualization and it's turned on in the BIOS. Is there something I need to do to "pass through" the virtualization settings from the BIOS, through Proxmox, to the Win10 VM?

Perhaps I'll post that to r/homelab.

Thanks.
 
Google WSL requirements - it tells you the prerequisites, including checking/enabling virtualization. I started here - Install Windows Subsystem for Linux (WSL) on Windows 10
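If the blocker turns out to be on the Proxmox side, I believe nested virtualization has to be enabled on the host and the guest given the host CPU type so that VT-x is actually exposed to Windows. A rough sketch for an Intel box (VM ID 100 is just a placeholder):
Code:
# on the Proxmox host: check whether nesting is already on (Y = yes)
cat /sys/module/kvm_intel/parameters/nested
# enable it persistently, then reload the module with the VM shut down
echo "options kvm-intel nested=Y" > /etc/modprobe.d/kvm-intel.conf
modprobe -r kvm_intel && modprobe kvm_intel
# expose the host CPU (with VT-x) to the Win10 guest
qm set 100 --cpu host
After that, the WSL2 / Virtual Machine Platform features inside the Windows VM should at least be able to see virtualization support.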
 
Just a quick note that I got DeepStack up and running on my NVIDIA Jetson I had just sitting around. AI-Tool is on the BI server but all analysis is running on the little Jetson. I previously had AI-Tool run DeepStack on my BI box but it got a little crowded on the CPU since the current Windows version is not GPU accelerated.

A few thoughts:

  • Running Deepstack on Jetson with a fresh Jetpack microSD card
  • Updated all software (sudo apt update +upgrade)
  • Turned off desktop environment since I'm just going to access DeepStack and I can ssh into the box if I need to fix something (sudo systemctl set-default multi-user.target)
  • Installed latest deepstack in docker, in High mode, and asked it to restart after machine reboot:
sudo docker run --runtime nvidia --restart unless-stopped -e MODE=High -e VISION-DETECTION=True -p 80:5000 deepquestai/deepstack:jetpack-x3-beta

The DeepStack docker image defaults to using Medium as the MODE. This means that the default Jetson server is limited to processing images no larger than 320 pixels at Medium. I'm running 4K cameras, but that resolution is lost and even counterproductive for DeepStack. You should resize images close to the target processing size in BI, and not ask the little Jetson to do that resizing before running the image recognition.

I ended up starting my docker at HIGH mode since my cameras are already pretty tuned with motion zones in BI. At that setting I'm getting about 300ms processing time per frame:

[GIN] 2020/12/18 - 20:23:15 | 200 | 324.133383ms | 192.168.1.233 | POST /v1/vision/detection
[GIN] 2020/12/18 - 20:23:15 | 200 | 280.243879ms | 192.168.1.233 | POST /v1/vision/detection
[GIN] 2020/12/18 - 20:23:17 | 200 | 287.85692ms | 192.168.1.233 | POST /v1/vision/detection
[GIN] 2020/12/18 - 20:23:18 | 200 | 288.047127ms | 192.168.1.233 | POST /v1/vision/detection
[GIN] 2020/12/18 - 20:23:19 | 200 | 293.305335ms | 192.168.1.233 | POST /v1/vision/detection
[GIN] 2020/12/18 - 20:23:20 | 200 | 281.178667ms | 192.168.1.233 | POST /v1/vision/detection
[GIN] 2020/12/18 - 20:23:21 | 200 | 274.997808ms | 192.168.1.233 | POST /v1/vision/detection
[GIN] 2020/12/18 - 20:23:22 | 200 | 283.577667ms | 192.168.1.233 | POST /v1/vision/detection
[GIN] 2020/12/18 - 20:23:32 | 200 | 269.086322ms | 192.168.1.233 | POST /v1/vision/detection

With original-size 4K images I was getting close to 900ms, and with MEDIUM and scaled images I was getting around 200ms.

Also note that the Jetson is running a smaller object detection model than the desktop GPU and CPU builds, so accuracy will probably be a little worse on the Jetson. You can see the DeepStack settings code below for reference (deepstack/intelligencelayer/shared/shared.py, lines 61-90):

"desktop_cpu": Settings(
DETECTION_HIGH=640,
DETECTION_MEDIUM=416,
DETECTION_LOW=256,
DETECTION_MODEL="yolov5m.pt",
FACE_HIGH=416,
FACE_MEDIUM=320,
FACE_LOW=256,
FACE_MODEL="face.pt",
),
"desktop_gpu": Settings(
DETECTION_HIGH=640,
DETECTION_MEDIUM=416,
DETECTION_LOW=256,
DETECTION_MODEL="yolov5m.pt",
FACE_HIGH=416,
FACE_MEDIUM=320,
FACE_LOW=256,
FACE_MODEL="face.pt",
),
"jetson": Settings(
DETECTION_HIGH=416,
DETECTION_MEDIUM=320,
DETECTION_LOW=256,
DETECTION_MODEL="yolov5s.pt",
FACE_HIGH=384,
FACE_MEDIUM=256,
FACE_LOW=192,
FACE_MODEL="face_lite.pt",
),

The current code looks like it's using busy waiting on images, so the CPU usage is a little high on the Jetson when idling (~40%), but I'm guessing that will be fixed now that it's open source.

It would be simpler from a hardware standpoint to have DeepStack running on the same box as BI, but I'm a little worried that the GPU h265 4K decoding of my cams would get crowded out by DeepStack, so a separate Jetson that does the extra flagging looks like a good setup for now. If the Jetson blows up, I just get more false detections.
 
I tried to install docker for windows and got the following error:

System.InvalidOperationException:
Failed to deploy distro docker-desktop to C:\Users\Graham\AppData\Local\Docker\wsl\distro: exit code: -1
stdout: Please enable the Virtual Machine Platform Windows feature and ensure virtualization is enabled in the BIOS.

I'm running this Win10 VM on Proxmox on a Dell R710. I know the CPUs support virtualization and it's turned on in the BIOS. Is there something I need to do to "pass through" the virtualization settings from the BIOS, through Proxmox, to the Win10 VM?

Perhaps I'll post that to r/homelab.

Thanks.
All bets are off if you are not running within a windows 10 Native environment. There are just too many variables.
 
I have BI on a PC with 10 cameras and the CPU working under 12%. When I use the AI, the CPU goes to 40-80% to analyse the motion and after that drops back down to 8-12%. I don't have a GPU. The question is: if I use a GPU, will this help the AI not use the CPU? Is there any tip to reduce CPU usage?

-try DeepStack on 'low' mode
-try a lower frequency of images being analyzed
-try lower resolution images from a camera substream
-try fewer cameras triggering DeepStack
-try the latest DeepStack versions
-try a Jetson Nano

A GPU will definitely help, it's way more efficient at this type of computation (but you have to run the GPU DeepStack version); see the sketch below.
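For reference, a rough sketch of what the GPU version on 'low' mode looks like in docker (this assumes the NVIDIA container toolkit is already set up; the host port is just an example):
Code:
sudo docker run --gpus all -e VISION-DETECTION=True -e MODE=Low -p 80:5000 deepquestai/deepstack:gpu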
 

Are you reducing the size of your saved images? Make BI resize them to 640x480 or even 320x240 if you're running on the default Medium MODE.
 
sudo docker run --runtime nvidia --restart unless-stopped -e MODE=High -e VISION-DETECTION=True -p 80:5000 deepquestai/deepstack:jetpack-x3-beta

Apologies, don't use the Jetson beta image btw, their release image is newer:

Code:
sudo docker run --runtime nvidia --restart unless-stopped -e MODE=High -e VISION-DETECTION=True -p 80:5000 deepquestai/deepstack:jetpack
 
Have a quick question... I have had AI Tool running for a while now and would like to mask off a portion of the street and the houses across the street. I took a jpg, created a mask in GIMP and saved it as a png with the same Cam1-xxxxxxx.png name. Do I need to name it something special like Cam1-mask.png? I have restarted AI Tool a couple of times, but when I turn on "See Mask" it doesn't seem to load anything.

Can someone point me in the right direction please?

Thank you
 

Do a manual trigger in BI (right click on the cam, trigger) to feed at least one image into AI-Tool. Then you can apply a mask. Note that masks in AI-Tool are the opposite (ignore) of what they are in BI (include).
 

Okay, so I manually trigger BI and use that image to make the mask? I guess I am missing how I actually apply the mask (besides doing it inside photo editing software).
 
I do my masking inside BI; I find it easier and more powerful with multiple zones. Why even send the snapshot to the AI if the motion is in an area you don't want to monitor?