[tool] [tutorial] Free AI Person Detection for Blue Iris

arg07

n3wb
Joined
Jun 16, 2020
Messages
8
Reaction score
0
Location
Toronto
It could be that the car is obscured by the timestamp so Deepstack doesn't recognise it. Can you disable the time stamp or move the camera a bit so it's not over that section?
I think that is it. I removed the timestamp and things seem to be getting recognized properly.
 
Joined
Jun 23, 2020
Messages
1
Reaction score
1
Location
NZ
Hi everyone,
First of all thank you @GentlePumpkin for creating this amazing tool.

I was following Rob's (The Hook Up) video

and got it working on a Ubuntu 20.04 Desktop Laptop with Docker and Homeassistant.
I wanted to move it to my new server and no matter what I try I cannot get it to work.

1. Case: Proxmox Server - Ubuntu Server - Docker - Deepstack
2. Case: Proxmox Server - Homeassistant - Docker
3. Case: Windows 10 (newest), where Blue Iris is installed, as localhost

In all 3 cases, when I choose the wrong AI I see a connection from the tool to the server, but when I try the right detection API:

[24.06.2020, 00:51:13.299]: Starting analysis of D:\aiinput/drivewaysd.20200624_001903007.jpg ## local storage ##
[24.06.2020, 00:51:13.305]: (1/6) Uploading image to DeepQuestAI Server
[24.06.2020, 00:52:53.315]: System.Threading.Tasks.TaskCanceledException | A task was canceled. (code: -2146233029 )
[24.06.2020, 00:52:53.330]: ERROR: Processing the following image 'D:\aiinput/drivewaysd.20200624_001903007.jpg' failed. Can't reach DeepQuestAI Server at 192.168.0.81:5000

UPDATE: I found the error which might help others.

I tested a bit more again this morning, and I can install it on my Windows workstation and it works, so I assume it has something to do with the virtualisation and hardware settings the Deepstack server needs.

Solution:
For those who run Deepstack in a virtual environment like Proxmox:
when creating the machine (for example Ubuntu with Deepstack in Docker), you need to choose the CPU type you actually have, or want to emulate, and not just kvm64.
Deepstack seems to start the analysis with hardware instructions to the CPU and needs the right architecture. When I switched from kvm64 to Sandybridge (in my case, matching the Intel CPU), it worked.
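For anyone hitting the same wall, the CPU type can be changed from the Proxmox host shell. The VM ID 100 below is a placeholder, and Sandybridge is just the type that matched Arthur's hardware; pick the type that matches (or is older than) your own host CPU. A sketch of the fix, not verified on every Proxmox version:

```
# Set the CPU type for VM 100 to Sandybridge instead of the kvm64 default
# (the VM needs a restart for this to take effect):
qm set 100 --cpu Sandybridge

# Or "host" to pass through all of the host CPU's flags:
qm set 100 --cpu host

# Equivalently, edit /etc/pve/qemu-server/100.conf and set:
#   cpu: Sandybridge
```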


Thanks
Arthur
 
Last edited:

pmcross

Pulling my weight
Joined
Jan 16, 2017
Messages
371
Reaction score
185
Location
Pennsylvania
When I looked it was at around 35%, so much better, but I’ve not managed to get it to auto-start; once I do, I’ll start doing some proper testing and monitoring.
Just wanted to see if your CPU usage has stayed down since switching to Docker? I’m considering changing mine to the same but I was waiting to see your findings.


Sent from my iPhone using Tapatalk
 

IAmATeaf

Known around here
Joined
Jan 13, 2019
Messages
3,308
Reaction score
3,293
Location
United Kingdom
Just wanted to see if your CPU usage has stayed down since switching to Docker? I’m considering changing mine to the same but I was waiting to see your findings.


Sent from my iPhone using Tapatalk
Yes it is, it can spike to 40% occasionally but I still only have 3 of my cams configured at the mo. I also realised yesterday that when I cloned the cams I for some reason decided to remove the sub stream settings, so am planning on making my 3 cams live; they’ve been sort of running as a test up to now. Setting the sub stream will hopefully lower the CPU usage a bit more.
 

IAmATeaf

Known around here
Joined
Jan 13, 2019
Messages
3,308
Reaction score
3,293
Location
United Kingdom
Just configured 3 of my cams and switched them over to fully trigger on AI detection and switched all the clones to now use substreams.

Will monitor over the next few days to make sure that the cameras trigger as expected.

I have 2 queries. The first is that the file history.csv doesn’t appear to get written to, so if I restart the AITools service all previous history is lost. Secondly, on the Stats tab, how can I reset those stats back to zero so that I can monitor and reset over the next few days?
 

IAmATeaf

Known around here
Joined
Jan 13, 2019
Messages
3,308
Reaction score
3,293
Location
United Kingdom
I think I’ve sorted the history.csv problem. When playing around I removed the file and then recreated it as History.csv, so I suspect that using a capital H might have been the problem. So I stopped the AITools service, deleted the file and restarted the service; it has recreated the file and is now writing to it.

Another issue I’ve found is that my docker system is running an hour out. No idea if this will cause any issues; it looks like the daylight saving +1 hour isn’t set.
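A common cause of a container clock being an hour out is the container defaulting to UTC. Assuming the official deepquestai/deepstack image (the timezone name below is an example; substitute your own, and note the `...` stands for whatever other flags you already start the container with), two things worth trying:

```
# Pass a timezone into the container (works if the image includes tzdata):
docker run -e TZ=Europe/London ... deepquestai/deepstack

# Or mirror the host's clock and zone info read-only:
docker run -v /etc/localtime:/etc/localtime:ro ... deepquestai/deepstack
```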

Trying to find out how to reset the stats page data now.
 

pmcross

Pulling my weight
Joined
Jan 16, 2017
Messages
371
Reaction score
185
Location
Pennsylvania
Yes it is, it can spike to 40% occasionally but I still only have 3 of my cams configured at the mo. I’ve also realised yesterday that when I cloned the cams I for some reason decided to remove the sub stream settings so am planning on making my 3 cams live, they’ve been sort of running as a test up to now. So hopefully setting the sub stream will hopefully lower the CPU usage a bit more.
That's great to hear. I look forward to your additional findings. I changed my clones to use the sub stream of each camera as well, which significantly helped response time and CPU load of Deepstack. I'm hoping to be able to ditch the old Dell PowerEdge 2950 that Deepstack is running to run in Docker on my BI machine.
 

IAmATeaf

Known around here
Joined
Jan 13, 2019
Messages
3,308
Reaction score
3,293
Location
United Kingdom
Can somebody please explain to an idiot how to create the mask file? So far I’ve copied an existing image, created a second layer, then set the opacity of that entire layer to 150. Then do I mask off the areas I want excluded, or do I mask the area that I want scanned? And is the masking/painting just a case of picking up a brush, choosing a colour and painting?

I’ve also got another issue, namely that with BI using sub streams on the AI clone cams, the picture it saves randomly changes between the main and sub stream resolutions, so when I’ve been experimenting with masks I’ve had AITools complain that the resolution does not match.

Strange, as I would have expected the jpg to be captured at full resolution. Anybody else have or seen similar?
 
Joined
Jun 8, 2020
Messages
2
Reaction score
1
Location
USA
Can somebody please explain to an idiot how to create the mask file? So far I’ve copied an existing image, created a second layer, then set the opacity of that entire layer to 150. Then do I mask off the areas I want excluded, or do I mask the area that I want scanned? And is the masking/painting just a case of picking up a brush, choosing a colour and painting?

I’ve also got another issue, namely that with BI using sub streams on the AI clone cams, the picture it saves randomly changes between the main and sub stream resolutions, so when I’ve been experimenting with masks I’ve had AITools complain that the resolution does not match.

Strange, as I would have expected the jpg to be captured at full resolution. Anybody else have or seen similar?
To make the mask;
  • Open a snapshot taken from the camera in a photo editor (one that can use layers)
  • Add a layer to the existing image
  • In the new layer, "mask" or draw white space over the areas you do NOT want the AI tool to trigger from
  • Set the new layer with the white mask you just made to 50% alpha
  • Hide or delete the first layer, this is the layer with the original snapshot from the camera.
  • Now you're left with just the white space you made at 50% alpha
  • Export that image as a PNG (we do not want the original picture in there, just the mask)
  • Place that PNG file you just made inside of "AI Tool\cameras" directory with the EXACT name of the camera.
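If you'd rather script the mask than fiddle with photo-editor layers, the steps above (white at 50% alpha over excluded areas, transparent everywhere else, exported as PNG) can be sketched with only the Python standard library. The helper name `write_mask_png` and the rectangular-box interface are mine, not part of AI Tool; whether hand-drawn shapes behave differently is untested here.

```python
import struct
import zlib

def _chunk(tag, data):
    # Each PNG chunk: 4-byte length, tag, data, CRC32 over tag+data
    return (struct.pack(">I", len(data)) + tag + data
            + struct.pack(">I", zlib.crc32(tag + data) & 0xFFFFFFFF))

def write_mask_png(path, width, height, boxes):
    """Write an RGBA mask PNG: fully transparent everywhere except
    50%-alpha white inside each (x0, y0, x1, y1) rectangle in `boxes`."""
    rows = []
    for y in range(height):
        row = bytearray(b"\x00")  # filter type 0 for this scanline
        for x in range(width):
            inside = any(x0 <= x < x1 and y0 <= y < y1
                         for (x0, y0, x1, y1) in boxes)
            # White at alpha 0x80 (~50%) where masked, else transparent
            row += b"\xFF\xFF\xFF\x80" if inside else b"\x00\x00\x00\x00"
        rows.append(bytes(row))
    ihdr = struct.pack(">IIBBBBB", width, height, 8, 6, 0, 0, 0)  # 8-bit RGBA
    png = (b"\x89PNG\r\n\x1a\n"
           + _chunk(b"IHDR", ihdr)
           + _chunk(b"IDAT", zlib.compress(b"".join(rows)))
           + _chunk(b"IEND", b""))
    with open(path, "wb") as f:
        f.write(png)

# e.g. mask off the left half of a 1920x1080 view for a camera named "Patio":
# write_mask_png(r"AI Tool\cameras\Patio.png", 1920, 1080, [(0, 0, 960, 1080)])
```

The dimensions must match the snapshot resolution AI Tool receives, which is exactly why the main/sub stream resolution flapping discussed above breaks mask matching.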
 

IAmATeaf

Known around here
Joined
Jan 13, 2019
Messages
3,308
Reaction score
3,293
Location
United Kingdom
Ok cheers for that, similar to what I was doing except I made the entire new layer's opacity 150 before I drew any masked areas.

Still can’t figure out why the JPEGs are switching between the main and sub streams. It must be something that I’m doing, as I’d expect it to stay at one resolution or the other all the time, not randomly switch.
 

pmcross

Pulling my weight
Joined
Jan 16, 2017
Messages
371
Reaction score
185
Location
Pennsylvania
Ok cheers for that, similar to what I was doing except I made the entire new layer's opacity 150 before I drew any masked areas.

Still can’t figure out why the JPEGs are switching between the main and sub streams. It must be something that I’m doing, as I’d expect it to stay at one resolution or the other all the time, not randomly switch.
Check your record tab on both cameras. Only one camera should be recording images into the folder that is monitored by AI Tool.


Sent from my iPhone using Tapatalk
 

IAmATeaf

Known around here
Joined
Jan 13, 2019
Messages
3,308
Reaction score
3,293
Location
United Kingdom
I’ve checked and rechecked, and have also cleared out the images folder, hoping that as I played around with BI I could maybe spot a pattern as to when it switches; if I can establish a pattern then I can report it to support.

Would be good if somebody else here who has a cam with substreams enabled could clone a cam and test with saving JPEGs on motion to see if they get the same as me. Should be easy enough to set up: clone an existing cam, then for the trigger simply select save a JPEG, exactly as if configuring it for use with AITools. Set up simple motion detection and then monitor the output folder to see what resolution images are saved.
 

arg07

n3wb
Joined
Jun 16, 2020
Messages
8
Reaction score
0
Location
Toronto
I'm trying to figure out why the whole process, from the time the first trigger happens and a picture is taken and dropped in the folder, to the point where the AI tool determines that there is a person/car etc., takes so long.

From the log file I see that there is a 22 second gap from step 1 to step 2. Can I conclude from this that the DeepQuest server is taking 22 seconds to analyze the photo? If so, is that a function of the computer that I am running DeepQuest on, or a function of the DeepQuest service itself?

If it is my computer hardware I would not be surprised, as I'm running it as a proof of concept on an old machine (Acer Aspire Revo R3610) because I couldn't get it to work on my main computer that is running BI. I'm contemplating purchasing a used Dell Optiplex to run BI, AI Tool, DeepQuest and some other applications (Plex, SABnzbd, Radarr, Sonarr), but don't want to make that investment if this DeepQuest performance is the best that it gets. I'm intending on using this tool as a substitute/augmentation for some motion sensors in my house, with my home automation system, to manage lights, presence etc. For that to be feasible, I'd need the AI Tool/DeepQuest process to be much closer to instantaneous. Can anyone weigh in on how fast their AI Tool/DeepQuest is working? What hardware (chip, RAM etc.) is it running on?



[24.06.2020, 08:35:12.785]: Starting analysis of C:\Users\Adam\Desktop\AIPics\aiKitchen.20200624_083512751.jpg
[24.06.2020, 08:35:12.792]: (1/6) Uploading image to DeepQuestAI Server
[24.06.2020, 08:35:34.084]: (2/6) Waiting for results
[24.06.2020, 08:35:34.102]: (3/6) Processing results:
[24.06.2020, 08:35:34.113]: Detected objects:person (91.83%), cup (64.49%), chair (46.46%), oven (88.95%), refrigerator (98.04%),
[24.06.2020, 08:35:34.124]: (4/6) Checking if detected object is relevant and within confidence limits:
[24.06.2020, 08:35:34.131]: person (91.83%):
[24.06.2020, 08:35:34.146]: Checking if object is outside privacy mask of aiKitchen:
[24.06.2020, 08:35:34.154]: Loading mask file...
[24.06.2020, 08:35:34.162]: ->Camera has no mask, the object is OUTSIDE of the masked area.
[24.06.2020, 08:35:34.170]: person (91.83%) confirmed.
[24.06.2020, 08:35:34.178]: cup (64.49%):
[24.06.2020, 08:35:34.191]: cup (64.49%) is irrelevant.
[24.06.2020, 08:35:34.198]: chair (46.46%):
[24.06.2020, 08:35:34.212]: chair (46.46%) is irrelevant.
[24.06.2020, 08:35:34.223]: oven (88.95%):
[24.06.2020, 08:35:34.235]: oven (88.95%) is irrelevant.
[24.06.2020, 08:35:34.243]: refrigerator (98.04%):
[24.06.2020, 08:35:34.254]: refrigerator (98.04%) is irrelevant.
[24.06.2020, 08:35:34.262]: The summary:person (91.83%)
[24.06.2020, 08:35:34.268]: (5/6) Performing alert actions:
[24.06.2020, 08:35:34.275]: trigger url: !
[24.06.2020, 08:35:34.286]: -> Trigger URL called.
[24.06.2020, 08:35:34.306]: (6/6) SUCCESS.
[24.06.2020, 08:35:34.313]: Adding detection to history list.
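To put numbers on where the time goes, the gap between pipeline steps can be read straight out of the timestamps in these logs. A small sketch (the helper name `step_gap` is mine; it assumes the `[dd.MM.yyyy, HH:mm:ss.fff]:` prefix format shown above):

```python
from datetime import datetime

def step_gap(log_lines, start_tag="(1/6)", end_tag="(2/6)"):
    """Seconds elapsed between two AI Tool pipeline steps, parsed from
    the [dd.MM.yyyy, HH:mm:ss.fff] prefix on each log line."""
    def stamp(line):
        prefix = line.split("]", 1)[0].lstrip("[")
        return datetime.strptime(prefix, "%d.%m.%Y, %H:%M:%S.%f")
    start = next(stamp(l) for l in log_lines if start_tag in l)
    end = next(stamp(l) for l in log_lines if end_tag in l)
    return (end - start).total_seconds()

# The two lines from the log above:
log = [
    "[24.06.2020, 08:35:12.792]: (1/6) Uploading image to DeepQuestAI Server",
    "[24.06.2020, 08:35:34.084]: (2/6) Waiting for results",
]
print(step_gap(log))  # → 21.292
```

So nearly all of the 22 seconds sits between "Uploading image" and "Waiting for results", i.e. in the round trip to the DeepStack server rather than in AI Tool's post-processing, which all completes within a few hundred milliseconds.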
 

IAmATeaf

Known around here
Joined
Jan 13, 2019
Messages
3,308
Reaction score
3,293
Location
United Kingdom
This is a log from my system. I have an i5-6500 with 12GB RAM; it takes a total time of around 1.5 seconds, 1.3 of which is waiting for DeepQuestAI.

[24.06.2020, 20:03:27.194]: Starting analysis of D:\BlueIris\AI-Input/AI-Patio_C.20200624_200327169.jpg
[24.06.2020, 20:03:27.206]: (1/6) Uploading image to DeepQuestAI Server
[24.06.2020, 20:03:28.519]: (2/6) Waiting for results
[24.06.2020, 20:03:28.527]: (3/6) Processing results:
[24.06.2020, 20:03:28.531]: Detected objects:person (90.14%),
[24.06.2020, 20:03:28.536]: (4/6) Checking if detected object is relevant and within confidence limits:
[24.06.2020, 20:03:28.542]: person (90.14%):
[24.06.2020, 20:03:28.554]: Checking if object is outside privacy mask of Patio:
[24.06.2020, 20:03:28.560]: Loading mask file...
[24.06.2020, 20:03:28.564]: ->Camera has no mask, the object is OUTSIDE of the masked area.
[24.06.2020, 20:03:28.570]: person (90.14%) confirmed.
[24.06.2020, 20:03:28.576]: (5/6) Performing alert actions:
[24.06.2020, 20:03:28.584]: Camera Patio is still in cooldown. Trigger URL wasn't called and no image will be uploaded to Telegram.
[24.06.2020, 20:03:28.596]: (6/6) SUCCESS.
[24.06.2020, 20:03:28.602]: Adding detection to history list.
 

pmcross

Pulling my weight
Joined
Jan 16, 2017
Messages
371
Reaction score
185
Location
Pennsylvania
I'm trying to figure out why the whole process, from the time the first trigger happens and a picture is taken and dropped in the folder, to the point where the AI tool determines that there is a person/car etc., takes so long.

From the log file I see that there is a 22 second gap from step 1 to step 2. Can I conclude from this that the DeepQuest server is taking 22 seconds to analyze the photo? If so, is that a function of the computer that I am running DeepQuest on, or a function of the DeepQuest service itself?

If it is my computer hardware I would not be surprised, as I'm running it as a proof of concept on an old machine (Acer Aspire Revo R3610) because I couldn't get it to work on my main computer that is running BI. I'm contemplating purchasing a used Dell Optiplex to run BI, AI Tool, DeepQuest and some other applications (Plex, SABnzbd, Radarr, Sonarr), but don't want to make that investment if this DeepQuest performance is the best that it gets. I'm intending on using this tool as a substitute/augmentation for some motion sensors in my house, with my home automation system, to manage lights, presence etc. For that to be feasible, I'd need the AI Tool/DeepQuest process to be much closer to instantaneous. Can anyone weigh in on how fast their AI Tool/DeepQuest is working? What hardware (chip, RAM etc.) is it running on?



[24.06.2020, 08:35:12.785]: Starting analysis of C:\Users\Adam\Desktop\AIPics\aiKitchen.20200624_083512751.jpg
[24.06.2020, 08:35:12.792]: (1/6) Uploading image to DeepQuestAI Server
[24.06.2020, 08:35:34.084]: (2/6) Waiting for results
[24.06.2020, 08:35:34.102]: (3/6) Processing results:
[24.06.2020, 08:35:34.113]: Detected objects:person (91.83%), cup (64.49%), chair (46.46%), oven (88.95%), refrigerator (98.04%),
[24.06.2020, 08:35:34.124]: (4/6) Checking if detected object is relevant and within confidence limits:
[24.06.2020, 08:35:34.131]: person (91.83%):
[24.06.2020, 08:35:34.146]: Checking if object is outside privacy mask of aiKitchen:
[24.06.2020, 08:35:34.154]: Loading mask file...
[24.06.2020, 08:35:34.162]: ->Camera has no mask, the object is OUTSIDE of the masked area.
[24.06.2020, 08:35:34.170]: person (91.83%) confirmed.
[24.06.2020, 08:35:34.178]: cup (64.49%):
[24.06.2020, 08:35:34.191]: cup (64.49%) is irrelevant.
[24.06.2020, 08:35:34.198]: chair (46.46%):
[24.06.2020, 08:35:34.212]: chair (46.46%) is irrelevant.
[24.06.2020, 08:35:34.223]: oven (88.95%):
[24.06.2020, 08:35:34.235]: oven (88.95%) is irrelevant.
[24.06.2020, 08:35:34.243]: refrigerator (98.04%):
[24.06.2020, 08:35:34.254]: refrigerator (98.04%) is irrelevant.
[24.06.2020, 08:35:34.262]: The summary:person (91.83%)
[24.06.2020, 08:35:34.268]: (5/6) Performing alert actions:
[24.06.2020, 08:35:34.275]: trigger url: !
[24.06.2020, 08:35:34.286]: -> Trigger URL called.
[24.06.2020, 08:35:34.306]: (6/6) SUCCESS.
[24.06.2020, 08:35:34.313]: Adding detection to history list.
This sounds like the CPU on the machine that Deepstack is running on. Watch your CPU performance on the Deepstack machine to see if it spikes to 100% during motion events. This was what I was running into as well. It would take 20 to 30 seconds to process all of the images that were dropped into the folder, causing my alerts to be 45-60 seconds late AFTER motion had ceased. I have since moved Deepstack to another machine. I can't speak to how the Optiplex will run with all of the various roles that you are planning to run on it, but I would be cautious, and I am going to presume that machine won’t be able to handle all of those roles. This of course depends on how many cameras you're running in BI, as well as how many streams you're running simultaneously on Plex. To put it into perspective, I am running BI, Sighthound (2 cameras), Home Assistant (in a VM) and pfSense (in a VM) on my HP Z420, which has a 10-core E5-2690 v2 (20 virtual cores), and I was hitting max CPU thresholds while running Deepstack on it. My CPU usage on my Z420 hovers around 40% and spikes to 50% or so without Deepstack. With Deepstack it was hovering around 65% and was peaking around 100% for 30-45 seconds.
 
Last edited:

spammenotinoz

Getting comfortable
Joined
Apr 4, 2019
Messages
345
Reaction score
276
Location
Sydney
This is excellent. I do constant recording, but use this just to send Telegram alerts and a pic to the phone for 1 cam when people are detected. It has a very good hit rate!
I find this a good alternative to the iOS phone notifications.

Also, via the GUI on the AI tool, you can quickly select the filter for relevant alerts and scroll through all the match images.
 

IAmATeaf

Known around here
Joined
Jan 13, 2019
Messages
3,308
Reaction score
3,293
Location
United Kingdom
Found out how to reset the stats: needed to re-zero the counters within each cam text file. Maybe worthy of an option in the future?

I’ve also found out the hard way not to create any extra files in the Cameras folder. I put a text file there with the command I used to create the DeepQuest container, and that caused AITools to stop working.
 

arg07

n3wb
Joined
Jun 16, 2020
Messages
8
Reaction score
0
Location
Toronto
This sounds like the CPU on the machine that Deepstack is running on. Watch your CPU performance on the Deepstack machine to see if it spikes to 100% during motion events. This was what I was running into as well. It would take 20 to 30 seconds to process all of the images that were dropped into the folder, causing my alerts to be 45-60 seconds late AFTER motion had ceased. I have since moved Deepstack to another machine. I can't speak to how the Optiplex will run with all of the various roles that you are planning to run on it, but I would be cautious, and I am going to presume that machine won’t be able to handle all of those roles. This of course depends on how many cameras you're running in BI, as well as how many streams you're running simultaneously on Plex. To put it into perspective, I am running BI, Sighthound (2 cameras), Home Assistant (in a VM) and pfSense (in a VM) on my HP Z420, which has a 10-core E5-2690 v2 (20 virtual cores), and I was hitting max CPU thresholds while running Deepstack on it. My CPU usage on my Z420 hovers around 40% and spikes to 50% or so without Deepstack. With Deepstack it was hovering around 65% and was peaking around 100% for 30-45 seconds.
I'm currently running everything that I mentioned, except for Deepstack, on my home office computer, which is from 2009 and runs an Intel i7-860 with 8GB of RAM. CPU typically runs under 50% and I've never had an issue watching Plex (I don't generally have more than 1 stream running at a time). My current computer has a Passmark CPU score of 2900. The Optiplex I am looking at has a score of a little over 8000, and I'd be putting in 16GB of RAM, so I would have thought I'd be more than OK. Your CPU has a Passmark score of more than 13000, which is where I start to have a disconnect, because I would have thought you'd have much lower usage based on what you describe you have on your system.

Anyone have a POV on using Passmark CPU scores as a method to roughly figure out if they'll have enough computing power? I'm using my current CPU performance and figuring that a score that is almost 3x higher would give me more than enough room to add Deepstack and still have lots of capacity to spare.



[Attached screenshot: 1593097820188.png]
 

IAmATeaf

Known around here
Joined
Jan 13, 2019
Messages
3,308
Reaction score
3,293
Location
United Kingdom
I’ve got rid of substreams on my clones so that I can get high res pictures.

Have set it all up on 4 of my cams with masks and will monitor it over the next few days.

Had to disable this on my 2 driveway cameras, as they kept on missing a person walking up the driveway. I normally have 2 cars parked, and the person would walk between the cars, but DeepQuest was having trouble picking up that it was a person, so no recording. Not too sure what can be done to resolve this, but until then I’ve simply put the cams back to motion detection.
 

naidu

Young grasshopper
Joined
May 22, 2020
Messages
31
Reaction score
5
Location
USA
I’ve got rid of substreams on my clones so that I can get high res pictures.

Have set it all up on 4 of my cams with masks and will monitor it over the next few days.

Had to disable this on my 2 driveway cameras, as they kept on missing a person walking up the driveway. I normally have 2 cars parked, and the person would walk between the cars, but DeepQuest was having trouble picking up that it was a person, so no recording. Not too sure what can be done to resolve this, but until then I’ve simply put the cams back to motion detection.
From what I understand, these AI models use a set amount of resolution, so using 4K doesn't do much other than increase CPU consumption. You're better off staying at a lower resolution and maxing out the bitrate.
 