Need help understanding SenseAI.

wantafastz28

Getting comfortable
Joined
Nov 18, 2016
Messages
550
Reaction score
253
Location
Phoenix, az
I'm striking out on searching for these questions in the help file or here. Hopefully someone can line me out.

BI v.5.5.9.3 x64
SenseAI 1.4 beta

Under Blue Iris status, I get "Alert cancelled (occupied)". I saw in another thread that people need to look at the .dat file, but when I go to the AI tab, it has always been blank (even when I tried DeepStack). I can obviously drag a .dat file in, but how do I correlate the .dat file to the alert?

Performing a benchmark on SenseAI, I get 2 operations per second. Is that sufficient?
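If you want to sanity-check that 2 ops/sec figure yourself, here is a minimal sketch. The throughput math is just completed calls over elapsed time; the `benchmark` helper, the endpoint URL, and the port are my assumptions (SenseAI installs have used different ports), so adjust them to match your install.

```python
import time

# Assumed endpoint; check the port your SenseAI install actually listens on.
SENSEAI_URL = "http://localhost:5000/v1/vision/detection"

def ops_per_second(completed: int, elapsed: float) -> float:
    """Throughput is simply completed detection calls over wall-clock seconds."""
    return completed / elapsed

def benchmark(image_path: str, n: int = 20) -> float:
    """Send the same snapshot n times to the detection endpoint and time it.
    Only call this with the SenseAI server actually running."""
    import requests  # third-party: pip install requests
    with open(image_path, "rb") as f:
        img = f.read()
    start = time.time()
    for _ in range(n):
        requests.post(SENSEAI_URL, files={"image": img}, timeout=30)
    return ops_per_second(n, time.time() - start)
```

For example, 20 requests completing in 10 seconds works out to the 2 ops/sec reported above.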

In BI Settings > AI, there is an option for instances. Is there anything to gain from running more than one?

When setting up triggers, do I need to turn off the motion sensor altogether? It is currently checked, and set with object detection and zones/hot spots. Will the AI only work in those areas? Do the zones have priority over the AI, or are they nullified? From camera to camera I get drastically different detection accuracy, and I'm not sure if it's because of my various zone/hot spot setups.

Is there an A-Z guide for configuring SenseAI with BI that I'm not seeing, or am I stuck sifting through the various pages about it?

I appreciate anyone's time in replying. I used to be active here, but once I had this up and running years ago I walked away from it; apparently I missed quite a few events in BI's evolution.
 

IAmATeaf

Known around here
Joined
Jan 13, 2019
Messages
3,304
Reaction score
3,281
Location
United Kingdom
There’s a thread here already with lots of useful info

5.5.8 - June 13, 2022 - Code Project’s SenseAI,
 

wantafastz28

Getting comfortable
Joined
Nov 18, 2016
Messages
550
Reaction score
253
Location
Phoenix, az
I was checking whether someone had made a tutorial, and asked in that thread. No dice. I've got it running somewhat; I'm just stuck on these few bullet points.
 

wittaj

IPCT Contributor
Joined
Apr 28, 2019
Messages
24,869
Reaction score
48,505
Location
USA
It is too new to have a tutorial made. It is trial by fire and learning as we go.
 

Swampledge

Getting comfortable
Joined
Apr 9, 2021
Messages
210
Reaction score
469
Location
Connecticut
@wantafastz28 (I didn’t know there were any slow ones, but…), IIRC, this is your first venture into AI integration with Blue Iris, so you have no experience with DeepStack. I just started with DeepStack, and have no SenseAI experience; I'm waiting for it to get a bit more mature. But I'll explain the AI relationship with motion triggers and alerts as I understand it from my experience.

Assuming you are using a camera without its own AI, you set up motion detection just as you did before DeepStack/SenseAI. You can use object detection or not when setting up trigger sensitivity. Once the camera triggers, it sends images (as defined in the AI settings) to DeepStack or SenseAI for analysis. If that AI doesn't "see" an object you have targeted, the trigger is canceled and there is no alert. You can see these by viewing Cancelled Alerts. If you get a cancelled alert that says "Occupied", it means the AI found an object you had targeted, but it is not moving. This lets the AI differentiate between moving and parked cars, or people walking and statues. The image that is sent to DeepStack/SenseAI is based on the hotspots/zones you specified when you set up the motion detection. So, if a car is moving outside the zone of interest, DeepStack/SenseAI won't see it.
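The decision flow described above can be sketched as a small function. This is purely illustrative, not actual Blue Iris code: the function name, the tuple shape, and the `moving` flag (standing in for BI's static-object check) are all my inventions.

```python
def alert_decision(detections, targets):
    """Mimic the trigger outcome described above.

    detections: list of (label, confidence, moving) tuples from the AI.
    targets:    set of labels you configured, e.g. {"person", "car"}.
    """
    found = [d for d in detections if d[0] in targets]
    if not found:
        # AI saw nothing you care about: trigger is cancelled, no alert.
        return "cancelled (nothing found)"
    if not any(moving for _, _, moving in found):
        # Target present but static, e.g. a parked car or a statue.
        return "cancelled (occupied)"
    return "alert"
```

For example, a car detected but not moving yields "cancelled (occupied)", matching the status message asked about in the first post.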

If you want to see the .dat file for an alert, it is easiest to Ctrl-double-click that alert's icon in the Alerts view window. This opens the BI AI status window showing which object models were run and what was found.

Hope this is helpful to you.
 

wantafastz28

Getting comfortable
Joined
Nov 18, 2016
Messages
550
Reaction score
253
Location
Phoenix, az
Thank you! Yes, this is very helpful. Looks like I need to do some more fine-tuning. Is there any chance you know whether adding instances is beneficial? CPU usage is 20-27% with 8 cameras.
 

Swampledge

Getting comfortable
Joined
Apr 9, 2021
Messages
210
Reaction score
469
Location
Connecticut
Sorry, I haven't experimented with that. I'm only using DeepStack on 2 cameras, one during daylight and the other at night.
 

sebastiantombs

Known around here
Joined
Dec 28, 2019
Messages
11,511
Reaction score
27,691
Location
New Jersey
It's more involved than just the number of instances. I use two instances, but I'm running the GPU version of DeepStack so the CPU barely sees anything. How many images, and at what intervals, matter just as much. Incidentally, with DeepStack the images BI sends for analysis are reduced in resolution, to 720p. That makes using "main stream" in the AI settings counterproductive, since additional CPU is needed to process the snapshot before it is sent to DeepStack. I'm pretty sure the same thing happens with SenseAI.
 

wantafastz28

Getting comfortable
Joined
Nov 18, 2016
Messages
550
Reaction score
253
Location
Phoenix, az
Gotcha. I did get my substreams up and running when I gave DeepStack a try; I remember reading it's more efficient that way. I have mine set for 1-second analysis and 10 images so far.
 

sebastiantombs

Known around here
Joined
Dec 28, 2019
Messages
11,511
Reaction score
27,691
Location
New Jersey
When you say 1 second analyzing, that actually means analyze one frame/snapshot every second. With 10 images, that totals 10 seconds of analysis, which is kind of long. The right interval depends on how fast motion usually is on that particular camera, and the number of images can vary with the "level of difficulty" of the view. I typically use 6-10 images, but with analysis intervals of 500ms for foot-traffic areas and 250ms for street-traffic areas.
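The arithmetic behind those settings is just images times interval. A tiny helper to compare configurations (the function name is mine, not a BI setting):

```python
def analysis_window_seconds(num_images: int, interval_ms: int) -> float:
    """Total span of real time the AI examines per trigger:
    number of analyzed snapshots times the gap between them."""
    return num_images * interval_ms / 1000.0

# 10 images every 1000 ms -> 10.0 s window (long, as noted above)
#  8 images every  500 ms ->  4.0 s (foot traffic)
#  6 images every  250 ms ->  1.5 s (fast street traffic)
```

The faster the subject crosses the frame, the shorter the window needs to be to keep the snapshots on target.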
 

Swampledge

Getting comfortable
Joined
Apr 9, 2021
Messages
210
Reaction score
469
Location
Connecticut
Blue Iris sends DeepStack (and, I presume, SenseAI) images starting at the beginning of your pre-trigger buffer. So if you have a long pre-trigger, it can easily be looking at images where the target hasn't yet appeared and there's nothing for it to find. I record continuously, so I'm thinking of eliminating the pre-trigger since I have the full stream anyway. That would reduce the number of images to process and, consequently, processing time. I just use alerts to tell me what I should review.
 

sebastiantombs

Known around here
Joined
Dec 28, 2019
Messages
11,511
Reaction score
27,691
Location
New Jersey
I use clone cameras for AI detection. That way I keep the continuous/triggered recording on the original camera and set the AI clone to zero pre- and post-trigger. Yes, it does create some additional disk utilization, but it's well worth it to me, and disk space is actually fairly inexpensive. I write those alerts to an older 500GB drive I had hanging around doing nothing.
 

wantafastz28

Getting comfortable
Joined
Nov 18, 2016
Messages
550
Reaction score
253
Location
Phoenix, az
I'm running into this now. I'm getting close to just setting up a camera in the office so I don't have to keep walking outside, lol.
 

wantafastz28

Getting comfortable
Joined
Nov 18, 2016
Messages
550
Reaction score
253
Location
Phoenix, az
Yeah, I only have a 4TB drive; it looks like I might have to upgrade it. I'm only getting about a week or so of continuous recording, plus alerts.
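For drive sizing, retention is roughly capacity divided by aggregate write rate. A back-of-envelope helper; the 48 Mbit/s figure below is an example I picked (8 cameras at ~6 Mbit/s each), not a measurement from this setup:

```python
def retention_days(capacity_tb: float, total_mbps: float) -> float:
    """Days of continuous footage a drive holds:
    usable bytes divided by bytes written per day."""
    bytes_capacity = capacity_tb * 1e12            # TB -> bytes (decimal)
    bytes_per_day = total_mbps / 8 * 1e6 * 86400   # Mbit/s -> bytes/day
    return bytes_capacity / bytes_per_day

# Example: 8 cameras totaling 48 Mbit/s on a 4 TB drive:
# retention_days(4, 48) is about 7.7 days, consistent with
# "about a week" of continuous recording mentioned above.
```

Alert clips and overhead eat into that, so real-world retention runs a bit shorter.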
 