[tool] [tutorial] Free AI Person Detection for Blue Iris

I think I’ve sorted the history.csv problem. When playing around I removed the file and then recreated it as History.csv, so I suspect that using a capital H was the problem. I stopped the AITools service, deleted the file and restarted the service, and it has recreated the file and is now writing to it.

Another issue I’ve found is that my Docker system is running an hour out; no idea if this will cause any issues. Looks like the daylight saving +1 hour isn’t set.

Trying to find out how to reset the stats page data now.
 
Yes it is, it can spike to 40% occasionally, but I still only have 3 of my cams configured at the mo. I also realised yesterday that when I cloned the cams I for some reason decided to remove the sub stream settings, so I’m planning on making my 3 cams live; they’ve been sort of running as a test up to now. Hopefully setting the sub stream will lower the CPU usage a bit more.
That's great to hear. I look forward to your additional findings. I changed my clones to use the sub stream of each camera as well, which significantly helped response time and CPU load of Deepstack. I'm hoping to be able to ditch the old Dell PowerEdge 2950 that Deepstack is running on and run it in Docker on my BI machine.
 
Can somebody please explain to an idiot how to create the mask file? So I’ve copied an existing image, created a second layer, then set the opacity of that entire layer to 150. Then do I mask off the areas I want excluded, or do I mask the area that I want scanned? And is the masking/painting just a case of picking up a brush, choosing a colour and then painting?

I’ve also got another issue: with BI using sub streams on the AI clone cams, the picture it saves randomly changes resolution between the main and sub stream resolutions, so when I’ve been experimenting with masks I’ve had AITools complain that the resolution does not match.

Strange, as I would have expected the jpg to be captured at full resolution. Has anybody else seen anything similar?
 

To make the mask:
  • Open a snapshot taken from the camera in a photo editor (one that supports layers)
  • Add a layer on top of the existing image
  • In the new layer, "mask" or draw white space over the areas you do NOT want the AI tool to trigger on
  • Set the new layer with the white mask you just made to 50% alpha
  • Hide or delete the first layer (the one with the original snapshot from the camera)
  • You're now left with just the white space you made at 50% alpha
  • Export that image as a PNG (we do not want the original picture in there, just the mask)
  • Place the PNG file you just made inside the "AI Tool\cameras" directory with the EXACT name of the camera
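If you'd rather script the mask than paint it, the steps above can also be done programmatically. The sketch below is stdlib-only and hand-writes a minimal PNG: transparent everywhere except the exclusion rectangles, which are white at 50% alpha, matching the recipe above. The camera name, output path and box coordinates are placeholders you'd change; the image dimensions must match the snapshot the camera saves.

```python
import struct
import zlib

def _chunk(tag: bytes, data: bytes) -> bytes:
    """One PNG chunk: length, tag, payload, CRC32 over tag+payload."""
    return (struct.pack(">I", len(data)) + tag + data
            + struct.pack(">I", zlib.crc32(tag + data)))

def make_mask_png(width, height, exclusion_boxes):
    """Build an RGBA PNG: fully transparent, except white at 50% alpha
    inside each (x0, y0, x1, y1) exclusion box."""
    rows = []
    for y in range(height):
        row = bytearray(b"\x00")  # filter type 0 for this scanline
        for x in range(width):
            masked = any(x0 <= x < x1 and y0 <= y < y1
                         for x0, y0, x1, y1 in exclusion_boxes)
            row += b"\xff\xff\xff\x80" if masked else b"\x00\x00\x00\x00"
        rows.append(bytes(row))
    ihdr = struct.pack(">IIBBBBB", width, height, 8, 6, 0, 0, 0)  # 8-bit RGBA
    return (b"\x89PNG\r\n\x1a\n" + _chunk(b"IHDR", ihdr)
            + _chunk(b"IDAT", zlib.compress(b"".join(rows)))
            + _chunk(b"IEND", b""))

if __name__ == "__main__":
    # Example: a 1920x1080 cam, excluding a strip over a busy road.
    # The file name must EXACTLY match the camera name.
    with open(r"AI Tool\cameras\Patio.png", "wb") as f:
        f.write(make_mask_png(1920, 1080, [(0, 0, 1920, 300)]))
```

This is slow for large frames (pure-Python pixel loop), but it only has to run once per camera.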
 
Ok, cheers for that; similar to what I was doing, except I set the entire new layer's opacity to 150 before I drew any masked areas.

Still can’t figure out why the JPEGs are switching between the main and sub streams. It must be something that I’m doing, as I’d expect it to stay at one resolution or the other all the time, not randomly switch.
 

Check your record tab on both cameras. Only one camera should be recording images into the folder that is monitored by AI Tool.


Sent from my iPhone using Tapatalk
 
I’ve checked and rechecked, and have also cleared out the images folder, hoping that as I played around with BI I could maybe spot a pattern as to when it switches; if I can establish a pattern then I can report it to support.

Would be good if somebody else here who has a cam with sub streams enabled could clone a cam and test saving JPEGs on motion to see if they get the same as me. Should be easy enough to set up: clone an existing cam, then for the trigger simply select save a JPEG, exactly as if configuring it for use with AITools. Set up simple motion detection and then monitor the output folder to see what resolution images are saved.
 
I'm trying to figure out why the whole process from the time that the first trigger happens and a picture is taken and dropped in the folder to the point where the AI tool makes the determination that there is a person/car etc... takes so long.

From the log file I see that there is a 22-second gap from step 1 to step 2. Can I conclude from this that the DeepQuest server is taking 22 seconds to analyze the photo? If so, is that a function of the computer that I am running DeepQuest on or a function of the DeepQuest service itself?

If it is my computer hardware I would not be surprised, as I'm running it as a proof of concept on an old machine (Acer Aspire Revo R3610) because I couldn't get it to work on my main machine that is running BI. I'm contemplating purchasing a used Dell Optiplex to run BI, AI Tool, DeepQuest and some other applications (Plex, SABnzbd, Radarr, Sonarr), but don't want to make that investment if this DeepQuest performance is the best that it gets. I'm intending to use this tool as a substitute/augmentation for some motion sensors in my house with my home automation system to manage lights, presence etc. For that to be feasible, I'd need the AI Tool/DeepQuest process to be much closer to instantaneous. Can anyone weigh in on how fast their AI Tool/DeepQuest is working? What hardware (chip, RAM etc.) is it running on?



[24.06.2020, 08:35:12.785]: Starting analysis of C:\Users\Adam\Desktop\AIPics\aiKitchen.20200624_083512751.jpg
[24.06.2020, 08:35:12.792]: (1/6) Uploading image to DeepQuestAI Server
[24.06.2020, 08:35:34.084]: (2/6) Waiting for results
[24.06.2020, 08:35:34.102]: (3/6) Processing results:
[24.06.2020, 08:35:34.113]: Detected objects:person (91.83%), cup (64.49%), chair (46.46%), oven (88.95%), refrigerator (98.04%),
[24.06.2020, 08:35:34.124]: (4/6) Checking if detected object is relevant and within confidence limits:
[24.06.2020, 08:35:34.131]: person (91.83%):
[24.06.2020, 08:35:34.146]: Checking if object is outside privacy mask of aiKitchen:
[24.06.2020, 08:35:34.154]: Loading mask file...
[24.06.2020, 08:35:34.162]: ->Camera has no mask, the object is OUTSIDE of the masked area.
[24.06.2020, 08:35:34.170]: person (91.83%) confirmed.
[24.06.2020, 08:35:34.178]: cup (64.49%):
[24.06.2020, 08:35:34.191]: cup (64.49%) is irrelevant.
[24.06.2020, 08:35:34.198]: chair (46.46%):
[24.06.2020, 08:35:34.212]: chair (46.46%) is irrelevant.
[24.06.2020, 08:35:34.223]: oven (88.95%):
[24.06.2020, 08:35:34.235]: oven (88.95%) is irrelevant.
[24.06.2020, 08:35:34.243]: refrigerator (98.04%):
[24.06.2020, 08:35:34.254]: refrigerator (98.04%) is irrelevant.
[24.06.2020, 08:35:34.262]: The summary:person (91.83%)
[24.06.2020, 08:35:34.268]: (5/6) Performing alert actions:
[24.06.2020, 08:35:34.275]: trigger url: !
[24.06.2020, 08:35:34.286]: -> Trigger URL called.
[24.06.2020, 08:35:34.306]: (6/6) SUCCESS.
[24.06.2020, 08:35:34.313]: Adding detection to history list.
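One way to pin down where the delay goes is to measure the gap between the "(1/6) Uploading" and "(2/6) Waiting for results" lines, which in these logs brackets the round trip to the DeepQuest server. A small stdlib sketch, assuming the timestamp format shown above:

```python
from datetime import datetime

def log_time(line):
    """Parse the leading '[dd.mm.yyyy, HH:MM:SS.fff]' timestamp."""
    stamp = line.split("]:", 1)[0].lstrip("[")
    return datetime.strptime(stamp, "%d.%m.%Y, %H:%M:%S.%f")

def server_wait_seconds(lines):
    """Seconds between step (1/6) and step (2/6) of one analysis."""
    upload = None
    for line in lines:
        if "(1/6)" in line:
            upload = log_time(line)
        elif "(2/6)" in line and upload is not None:
            return (log_time(line) - upload).total_seconds()
    return None
```

Fed the two step lines from the log above, this returns about 21.3 seconds, i.e. nearly all of the delay is the DeepQuest server itself rather than AI Tool's post-processing.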
 
This is a log from my system. I have an i5-6500 with 12 GB RAM; the total time is around 1.5 seconds, 1.3 of which is waiting for DeepQuestAI.

[24.06.2020, 20:03:27.194]: Starting analysis of D:\BlueIris\AI-Input/AI-Patio_C.20200624_200327169.jpg
[24.06.2020, 20:03:27.206]: (1/6) Uploading image to DeepQuestAI Server
[24.06.2020, 20:03:28.519]: (2/6) Waiting for results
[24.06.2020, 20:03:28.527]: (3/6) Processing results:
[24.06.2020, 20:03:28.531]: Detected objects:person (90.14%),
[24.06.2020, 20:03:28.536]: (4/6) Checking if detected object is relevant and within confidence limits:
[24.06.2020, 20:03:28.542]: person (90.14%):
[24.06.2020, 20:03:28.554]: Checking if object is outside privacy mask of Patio:
[24.06.2020, 20:03:28.560]: Loading mask file...
[24.06.2020, 20:03:28.564]: ->Camera has no mask, the object is OUTSIDE of the masked area.
[24.06.2020, 20:03:28.570]: person (90.14%) confirmed.
[24.06.2020, 20:03:28.576]: (5/6) Performing alert actions:
[24.06.2020, 20:03:28.584]: Camera Patio is still in cooldown. Trigger URL wasn't called and no image will be uploaded to Telegram.
[24.06.2020, 20:03:28.596]: (6/6) SUCCESS.
[24.06.2020, 20:03:28.602]: Adding detection to history list.
 
This sounds like the CPU on the machine that Deepstack is running on. Watch your CPU performance on the Deepstack machine to see if it spikes to 100% during motion events. This was what I was running into as well: it would take 20 to 30 seconds to process all of the images that were dropped into the folder, causing my alerts to be 45-60 seconds late, AFTER motion had ceased. I have since moved Deepstack to another machine.

I can't speak to how the Optiplex will run with all of the various roles that you are planning to run on it, but I would be cautious, and I'm going to presume that machine won’t be able to handle all of those roles. This of course depends on how many cameras you're running in BI as well as how many streams you're running simultaneously on Plex.

To put it into perspective, I am running BI, Sighthound (2 cameras), Home Assistant (in a VM) and pfSense (in a VM) on my HP Z420, which has a 10-core E5-2690 V2 (20 virtual cores), and I was hitting max CPU thresholds while running Deepstack on it. My CPU usage on the Z420 hovers around 40% and spikes to 50% or so without Deepstack. With Deepstack it was hovering around 65% and peaking around 100% for 30-45 seconds.
 
This is excellent. I do constant recording, but use this just to send Telegram alerts with a pic to the phone for 1 cam when people are detected. It has a very good hit rate!!
I find this a good alternative to the iOS phone notifications.

Also, via the GUI on the AI tool, you can quickly select the filter for relevant alerts and scroll through all the matched images.
 
Found out how to reset the stats: you need to re-zero the counters within each cam text file. Maybe worthy of an option in the future?
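For anyone wanting to script that re-zeroing: the sketch below ASSUMES the counters in each camera file are stored as plain `key=value` lines with integer values (check your own files first, since the format isn't documented here), and the directory path is a placeholder. Stop the AITool service before editing, as above.

```python
from pathlib import Path

CAMERAS_DIR = r"C:\AI Tool\cameras"  # placeholder - your install path

def zero_counters(cameras_dir):
    """Rewrite every integer 'key=value' line in each cam .txt to zero."""
    for txt in Path(cameras_dir).glob("*.txt"):
        out = []
        for line in txt.read_text().splitlines():
            key, sep, val = line.partition("=")
            if sep and val.strip().isdigit():
                out.append(f"{key}=0")   # counter -> reset to 0
            else:
                out.append(line)         # leave everything else alone
        txt.write_text("\n".join(out) + "\n")

if __name__ == "__main__":
    zero_counters(CAMERAS_DIR)
```

Keep a backup of the folder before running it, since AITools also chokes on unexpected file contents (see below).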

I’ve also found out the hard way not to create any extra files in the Cameras folder. I put a text file there with the command I used to create the DeepQuest container, and that caused AITools to stop working.
 

I'm currently running everything that I mentioned except for Deepstack on my home office computer, which is from 2009 and runs an Intel i7-860 with 8GB of RAM. CPU typically runs under 50% and I've never had an issue watching Plex (I don't generally have more than 1 stream running at a time). My current computer has a Passmark CPU score of 2900. The Optiplex I am looking at has a score of a little over 8000, and I'd be putting in 16GB of RAM. I would have thought I'd be more than OK. Your CPU has a Passmark score of more than 13000, which is where I start to have a disconnect, because I would have thought you'd have much lower usage based on what you describe is on your system.

Anyone have a POV on using Passmark CPU scores as a method to roughly figure out whether you'll have enough computing power? I'm using my current CPU performance and figuring that a score that is almost 3x higher would give me more than enough room to add Deepstack and still have lots of capacity to spare.
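The back-of-envelope version of that reasoning, assuming CPU usage scales inversely with Passmark score (a rough assumption: it ignores single-thread performance, disk I/O and any hardware video acceleration):

```python
def projected_usage(current_pct, current_score, target_score):
    """Naive inverse-scaling estimate of CPU load on a faster box."""
    return current_pct * current_score / target_score

# ~50% load on a score-2900 i7-860, projected onto a score-8000 Optiplex:
print(projected_usage(50, 2900, 8000))  # -> 18.125, i.e. roughly 18%
```

Even on that optimistic estimate, note from the posts above that DeepStack's load is spiky (bursts to 100% during motion events) rather than steady, so headroom for simultaneous motion on several cameras matters more than the average.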



 
I’ve got rid of substreams on my clones so that I can get high res pictures.

Have set it all up on 4 of my cams with masks and will monitor it over the next few days.

Had to disable this on my 2 driveway cameras as they kept missing a person walking up the driveway. I normally have 2 cars parked, and the person would walk between the cars, but DeepQuest was having trouble picking up that it was a person, so no recording. Not too sure what can be done to resolve this, but until then I’ve simply put the cams back to motion detection.
 

From what I understand, these AI models use a set input resolution, so using 4K doesn't do much other than increase CPU consumption. You're better off staying at a lower resolution and maxing out the bitrate.
 
I am running 15 cameras at around 1110 MP. Not sure how many you are running.
 

Others have created two clones, one for cars and one for people. That way you get alerted if someone walks between your cars.


Sent from my iPhone using Tapatalk
 
As I posted earlier, I really love this software for sending "targeted alerts" to my Telegram app, but there are also other uses for people like myself who use constant recording, and that is:

I also use this software to flag clips with people in BlueIris (okay, so that is not the intent of flags, but it works for me), by adding this trigger:

http://localhost:[BI Port]/admin?camera=[short cam name]&flagalert=1&memo=[text to appear in BI]&user=[user]&pw=[password]

It's not the most secure, adding a user and password in the tool, but I did create a new specific user for this trigger and locked down the access in BI as much as possible: restrict user to local LAN, no viewing etc.

Hope this helps someone out, or maybe someone has a much better approach to what I am doing (i.e. a quick way to see only events with people).

Note: Something has changed. It's no longer "flagging the alerts"; you now need to "trigger" first and then flag. (I can't seem to get that working in one command anymore.)
Flagging made it easier to find the clips of interest.. :(
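A scripted version of that two-step workaround (trigger, then flag) might look like the sketch below. The host, port, camera name and credentials are placeholders, and the `trigger`/`flagalert=1` parameters follow the URL pattern quoted above rather than any official reference, so check the BI admin help for the exact parameter names on your version.

```python
from urllib.parse import urlencode
from urllib.request import urlopen

BI_BASE = "http://localhost:81"            # [BI Port] placeholder
AUTH = {"user": "aitool", "pw": "secret"}  # the locked-down BI user

def admin_url(base, **params):
    """Build a BlueIris /admin URL from keyword parameters + credentials."""
    return base + "/admin?" + urlencode({**params, **AUTH})

def trigger_and_flag(camera, memo):
    """Trigger the camera first, then flag the resulting alert."""
    urlopen(admin_url(BI_BASE, camera=camera, trigger=1))
    urlopen(admin_url(BI_BASE, camera=camera, flagalert=1, memo=memo))

if __name__ == "__main__":
    trigger_and_flag("DrivewaySD", "person detected")
```

Same security caveat as above: the credentials ride in the URL, so keep this on the local LAN with a restricted user.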
 