[tool] [tutorial] Free AI Person Detection for Blue Iris

Pentagano

Getting comfortable
Joined
Dec 11, 2020
Messages
575
Reaction score
269
Location
Uruguay
I have 2 questions:

1) I installed everything again (8 instances of DeepStack in Docker + settings in AI Tool), same computer and same settings as the previous installation. Somehow vmmem consumes 90% of the CPU on a Ryzen 5950X at 4.5 GHz. Even 5-10 images take a long time (resolution 1024x2048), at almost 100% CPU. I don't remember this being the case previously; it was much faster and also less CPU intensive.

2) How can I use docker with Nvidia GPU?
Did you use this to create the containers?

docker pull deepquestai/deepstack:gpu

Do you really have 8 instances, as in 8 ports for DeepStack? What is it that needs 8 DeepStack instances, may I ask?
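
Re the Nvidia GPU question: the usual way to launch the GPU image (just a sketch, assuming Docker with the NVIDIA Container Toolkit installed; the host port and MODE are placeholders to adjust) is something like:

docker run --gpus all -e VISION-DETECTION=True -e MODE=Medium -v localstorage:/datastore -p 5001:5000 deepquestai/deepstack:gpu

AI Tool then gets pointed at that host port (5001 here); the container listens on 5000 internally.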
 

whoami ™

Pulling my weight
Joined
Aug 4, 2019
Messages
230
Reaction score
224
Location
South Florida
I run 3 instances of the native Windows deepstack:gpu. In later versions of AI Tool there is an option to add more than one port, which automatically runs separate instances of DS on those ports, but it only works with the native Windows version of DS. As for Docker deepstack:gpu, I couldn't tell you off the top of my head.
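
(If you do want extra native Windows instances started by hand rather than letting AI Tool spawn them, it should just be the same command on different ports - a sketch assuming the standard Windows DeepStack CLI:

deepstack --VISION-DETECTION True --PORT 5011
deepstack --VISION-DETECTION True --PORT 5012
deepstack --VISION-DETECTION True --PORT 5013

and each port is then added as a separate server/URL in AI Tool.)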
 

Pentagano

Getting comfortable
Joined
Dec 11, 2020
Messages
575
Reaction score
269
Location
Uruguay
I run 3 instances of the native Windows deepstack:gpu. In later versions of AI Tool there is an option to add more than one port, which automatically runs separate instances of DS on those ports, but it only works with the native Windows version of DS. As for Docker deepstack:gpu, I couldn't tell you off the top of my head.
Did you have all 8 in Docker before? I can imagine the vmmem CPU usage being high with 8 containers running DeepStack.
I have just moved my instances to Windows DeepStack so I could shut down Docker Desktop because of the vmmem CPU usage.
That was with just 2 Docker instances, too. CPU now sits at around 5-7% at idle instead of 15% or so.

Docker did have more advantages though, via Portainer: CPU and memory limits, env settings for each instance, etc.

Update: WSL2 has no resource constraints by default and will use whatever is available - you may want to throttle it back.
You can adjust this with a .wslconfig file in your user folder.
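
A minimal sketch of that file (the values are only illustrative) - save it as %UserProfile%\.wslconfig and run wsl --shutdown afterwards so the limits take effect:

[wsl2]
memory=8GB
processors=4
swap=0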



 
Last edited:

OccultMonk

Young grasshopper
Joined
Jul 25, 2020
Messages
72
Reaction score
13
Location
A Mountain hilltop
It's not just CPU load, but also slow processing: it takes a few seconds per image for 1280x720 images. I have a 5950X 16-core Ryzen CPU OC'd at 4.5 GHz and 64 GB of memory. This is a really new and capable computer; I can't imagine DeepStack is that CPU intensive.

Yes, I had the 8 instances running before, and it resulted in much faster image recognition (it's a lot slower now for some reason).

Is DeepStack on Docker still the best option, or am I better off using the Windows CPU and GPU versions of DeepStack?
 
Last edited:

Pentagano

Getting comfortable
Joined
Dec 11, 2020
Messages
575
Reaction score
269
Location
Uruguay
It's not just CPU load, but also slow processing: it takes a few seconds per image for 1280x720 images. I have a 5950X 16-core Ryzen CPU OC'd at 4.5 GHz and 64 GB of memory. This is a really new and capable computer; I can't imagine DeepStack is that CPU intensive.

Yes, I had the 8 instances running before, and it resulted in much faster image recognition (it's a lot slower now for some reason).

Is DeepStack on Docker still the best option, or am I better off using the Windows CPU and GPU versions of DeepStack?
Did you have an Intel before? I only have 4 cameras, at low resolution on substreams.
I tried it out on a Ryzen 5 3400G with integrated graphics: 8 cores, 32 GB RAM.

Much slower than my i7 7700 (also 8 virtual cores). But your Ryzen is far superior to my 5 3400G, so I am surprised.
I moved my BI and DeepStack back to the older Intel. Intels are recommended over Ryzens.

I would try them on the Windows DeepStack just to compare.
Have you tried limiting the cores used by each DeepStack instance? Either on the command line, or spin up Portainer and just use the slider.
That helped mine on the Ryzen - well, it kept the CPU from hitting max and bottlenecking. (At that time I had been using mainstreams at full HD and MODE High.)
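
For example, capping a container from the command line (a sketch using standard Docker flags; the image tag, ports and limits are placeholders to adjust):

docker run --cpus=2 --memory=2g -e VISION-DETECTION=True -e MODE=Medium -v localstorage:/datastore -p 5001:5000 deepquestai/deepstack

or, for a container that's already running, docker update --cpus=2 <container-name>.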
 

Senor Pibb

Getting the hang of it
Joined
May 22, 2020
Messages
77
Reaction score
36
Location
Greer, SC
Have you figured this out yet? I have the same issue. You can use the TEST button in AI Tool and see the most recent alert under the "all alerts" option in BI mobile, but none of the other filters work (confirmed or flagged) until you hit TEST again.

Update:

What's actually happening is that the "flagged" event that shows up in Blue Iris is one flagged event older than the event that triggered it. I realize that's hard to follow.

AI hits on a human at 7 PM. I immediately get a correct telegram alert with the correct picture, but the event doesn't show up in Flagged clips in Blue Iris.

AI hits on another human at 8 PM. I once again get a correct telegram alert and picture. At this point, the 7 PM AI hit now shows up in flagged clips in Blue Iris.

It's like AI Tool is flagging the previous event with each hit instead of the current one. I'm not sure if the problem is with AI Tool or Blue Iris. Any ideas?
 

Schrodinger's Cat

Young grasshopper
Joined
Nov 17, 2020
Messages
42
Reaction score
16
Location
USA
Have you figured this out yet? I have the same issue. You can use the TEST button in AI Tool and see the most recent alert under the "all alerts" option in BI mobile, but none of the other filters work (confirmed or flagged) until you hit TEST again.
Problem is fixed, but I don't know how I fixed it.

I ditched Docker DeepStack for the Windows GPU version, and I also ditched AI Tool for a little while and spent a couple of days configuring Blue Iris to handle my detections. In the end I didn't like how the built-in Blue Iris DeepStack integration worked, so I went back to AI Tool with Windows GPU DeepStack and the problem somehow resolved itself.
 

Schrodinger's Cat

Young grasshopper
Joined
Nov 17, 2020
Messages
42
Reaction score
16
Location
USA
Also, I want to go on record and say I'd gladly pay for AI Tool if it meant a more refined process for installing and updating.
 

Pentagano

Getting comfortable
Joined
Dec 11, 2020
Messages
575
Reaction score
269
Location
Uruguay
Problem is fixed, but I don't know how I fixed it.

I ditched Docker DeepStack for the Windows GPU version, and I also ditched AI Tool for a little while and spent a couple of days configuring Blue Iris to handle my detections. In the end I didn't like how the built-in Blue Iris DeepStack integration worked, so I went back to AI Tool with Windows GPU DeepStack and the problem somehow resolved itself.
That is my preference now also. I tried DeepStack in Docker; it worked, but it consumed memory.
AI Tool has been working great with the Windows GPU DeepStack.

The only function I miss is the ability to have a different MODE level on different servers. It's one mode for all, as far as I can see.
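
If that limit is on the AI Tool side, one possible workaround (just a sketch - assuming the Windows CLI honours a --MODE flag per instance, which is worth double-checking) would be to launch each instance with its own mode and point different cameras at different ports:

deepstack --VISION-DETECTION True --MODE High --PORT 5011
deepstack --VISION-DETECTION True --MODE Low --PORT 5012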
 

Pentagano

Getting comfortable
Joined
Dec 11, 2020
Messages
575
Reaction score
269
Location
Uruguay
Hi guys,

Look at the PCIe info - shouldn't this show x16? Maybe the card is not seated properly?
I changed the PCI Express power management to Off so it's not trying to save energy.
Maybe it's a limitation of my motherboard?



1 X PCI Express X16 Gen 3.0 slot(s)
X PCI Express X 1 Gen 2.0 slot(s)
1 X M.2 slot for SSD
For 2242/2260/2280 PCIE X2 SSD

1635511726552.png

Max supported is showing 8? Maybe it's related to the motherboard then, or is it set to that because the Ryzen 3400G has integrated graphics?
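
A quick way to confirm what the card has actually negotiated (a sketch, assuming the NVIDIA driver's nvidia-smi is on the PATH):

nvidia-smi --query-gpu=pcie.link.gen.current,pcie.link.gen.max,pcie.link.width.current,pcie.link.width.max --format=csv

If width.current reports 8 while width.max reports 16, the slot really is running at x8.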

Going to try these suggestions


1635507884800.png
 
Last edited:

Pentagano

Getting comfortable
Joined
Dec 11, 2020
Messages
575
Reaction score
269
Location
Uruguay
Looks like I'm stuck at x8 for now. I reseated the card after cleaning the contacts carefully, turned off turbo boost on the Ryzen CPU in the BIOS as suggested in the article, and tried various Windows power settings. Nothing changed it.
How much difference x8 vs x16 will make for DeepStack, I do not know.
 

David L

IPCT Contributor
Joined
Aug 2, 2019
Messages
7,932
Reaction score
20,757
Location
USA
Looks like I'm stuck at x8 for now. I reseated the card after cleaning the contacts carefully, turned off turbo boost on the Ryzen CPU in the BIOS as suggested in the article, and tried various Windows power settings. Nothing changed it.
How much difference x8 vs x16 will make for DeepStack, I do not know.
Just a thought, is anything else sharing your PCI-E resources? I know with M.2s this would be an issue...
 

Pentagano

Getting comfortable
Joined
Dec 11, 2020
Messages
575
Reaction score
269
Location
Uruguay
Just a thought, is anything else sharing your PCI-E resources? I know with M.2s this would be an issue...
Ah, I did install a 1 TB M.2, yes.
Any way of unsharing this?

I have the 1 TB M.2 and 2 x spinning HDDs. Too many lanes in use? The board only has 1 M.2 slot (which holds the OS, etc.).

My ECS AMD4 board specs:

1 X PCI Express X16 Gen 3.0 slot(s)
X PCI Express X 1 Gen 2.0 slot(s)
1 X M.2 slot for SSD
For 2242/2260/2280 PCIE X2 SSD
 
Last edited:

David L

IPCT Contributor
Joined
Aug 2, 2019
Messages
7,932
Reaction score
20,757
Location
USA
Ah, I did install a 1 TB M.2, yes.
Any way of unsharing this?

I have the 1 TB M.2 and 2 x spinning HDDs. Too many lanes in use? The board only has 1 M.2 slot (which holds the OS, etc.).

My ECS AMD4 board specs:

1 X PCI Express X16 Gen 3.0 slot(s)
X PCI Express X 1 Gen 2.0 slot(s)
1 X M.2 slot for SSD
For 2242/2260/2280 PCIE X2 SSD
Do you have your O/S on the M.2? Boot from it?
 

Pentagano

Getting comfortable
Joined
Dec 11, 2020
Messages
575
Reaction score
269
Location
Uruguay
Do you have your O/S on the M.2? Boot from it?
Yes, unfortunately, so it's not easy to remove. I have a spare 2.5" SATA SSD I could image across to, take out the M.2 and try that. But is it worth it?
Is getting the full 16 lanes to the GTX 970 worth it just for DeepStack?
Will it make much of a difference?

I think I'm stuck with the sharing on the AMD® A320 chipset unless I remove the M.2.
Basically, because the SSD is PCIe-based it is sharing lanes, isn't it?
So either remove the M.2 SSD or buy a better motherboard, one that does not share.

It's all a learning experience. My first ATX board, and I had no idea.
 
Last edited:

David L

IPCT Contributor
Joined
Aug 2, 2019
Messages
7,932
Reaction score
20,757
Location
USA
Yes, unfortunately, so it's not easy to remove. I have a spare 2.5" SATA SSD I could image across to, take out the M.2 and try that. But is it worth it?
Is getting the full 16 lanes to the GTX 970 worth it just for DeepStack?
Will it make much of a difference?

I think I'm stuck with the sharing on the AMD® A320 chipset unless I remove the M.2.
So my Mammaboard only has one M.2 slot too, running my O/S. I have a 3080 in the x16 slot but have the same problem as you. Now, this is not my BI machine. My BI PC does not even have an x16 slot, and I do have a 970 card in it running the GPU version of DeepStack; so far no issues, but I do plan to add some cameras.
Since 12th Gen procs are coming, I will be building a new system where the 3080 will reside, mainly for gaming. Hopefully I can utilize the full bus on the new M.2s along with x16 for the 3080 at PCI-E 4.0.
Man, these new 980s are fast - a 7,000 MB/s read, wow - but I assume that's with no sharing of the slot...

1635529679209.png

1635530053695.png
 

David L

IPCT Contributor
Joined
Aug 2, 2019
Messages
7,932
Reaction score
20,757
Location
USA
Has anyone tried a Google Coral on DeepStack, or is that even possible?
 

Pentagano

Getting comfortable
Joined
Dec 11, 2020
Messages
575
Reaction score
269
Location
Uruguay
I'll probably go with a new motherboard like this one:
Asus Prime H470M-Plus (socket 1200)
It has 2 x16 PCIe slots
2 x M.2 (SATA or PCIe mode, so no sharing I believe)

My GPU is SLI-capable, so I could twin it with another in the two x16 slots.
Hmmm.
Edit: this one is Intel, so I need an AMD board.

Now the Asus TUF Gaming B550-Plus (AMD, AM4) seems to have the whole bundle. Tons of slots:
  • 1 x PCIe 4.0 x16 (x16 mode)
  • 1 x PCIe 3.0 x16 (x16 mode)
  • 1 x PCIe 3.0 x16 (x4 mode)
  • 3 x PCIe 3.0 x1
  • 1 x M.2 Socket 3 with M key, type 2242/2260/2280/22110 storage devices support(SATA & PCIe 4.0 x4 mode)
  • 1 x M.2 Socket 3 with M key, type 2242/2260/2280/22110 storage devices support (SATA & PCIE 3.0 x 4 mode)
  • 1 x M.2 with E key for Wi-Fi module
 
Last edited:

Pentagano

Getting comfortable
Joined
Dec 11, 2020
Messages
575
Reaction score
269
Location
Uruguay
What setting do people have Link State Power Management (Windows power settings) on for PCI Express?
Off
Moderate power savings
Maximum power savings

Will DeepStack respond faster, with no delay, with this set to Off so the link is never in an idle state?
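
If you want to flip it from an admin command prompt instead of the GUI, something like this should do it (a sketch - 0 = Off; verify the SUB_PCIEXPRESS and ASPM aliases on your system with powercfg /aliases):

powercfg /setacvalueindex SCHEME_CURRENT SUB_PCIEXPRESS ASPM 0
powercfg /setdcvalueindex SCHEME_CURRENT SUB_PCIEXPRESS ASPM 0
powercfg /setactive SCHEME_CURRENT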
 

Senor Pibb

Getting the hang of it
Joined
May 22, 2020
Messages
77
Reaction score
36
Location
Greer, SC
What speeds are you guys getting with DeepStack GPU? I am getting 550 to 750 ms from both of my Docker CPU instances running on Unraid.
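
If anyone wants to compare numbers outside of AI Tool, you can time the detection endpoint directly (a sketch - adjust the host, port and image path to your setup):

curl -s -o /dev/null -w "%{time_total}s\n" -F image=@snapshot.jpg http://localhost:5001/v1/vision/detection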
 