Blue Iris with AI but which one though?

toastie

Getting comfortable
Joined
Sep 30, 2018
Messages
254
Reaction score
82
Location
UK
With DeepStack apparently now deprecated in Blue Iris, leaving some DeepStack users reluctant to update to newer versions of BI, what is the received wisdom on IPCT about which AI software might reasonably replace DeepStack in Blue Iris?

It seems like there is a beauty contest going on: will it be SenseAI or one of the other contenders that eventually wins out? Perhaps it all depends on how committed the developers are, plus their ability to recruit a group of followers who will move the project forwards.
I make no excuses: I am a follower of those in the vanguard who, through their skill and devotion, produce AI solutions, and that includes many skillful folk on this highly helpful forum.

What do others think about future AI trends in Blue Iris?
 

sebastiantombs

Known around here
Joined
Dec 28, 2019
Messages
11,511
Reaction score
27,690
Location
New Jersey
My personal opinion, for whatever it's worth, is that Ken kind of jumped the gun with the switch to SenseAI. While DS isn't the most user-friendly install, it is fairly easy once you get the right installers. SenseAI, when I installed it on an i7-6700K, brought the machine to its knees. I had both installed with the idea of seeing which was better, and there was zero guidance regarding removal of DS prior to installing SenseAI.

Additionally, SenseAI comes with all the bells and whistles enabled, none of which are particularly important, or even needed, in a typical video surveillance situation. The only way to shut those features down is to edit a .json file. I have no problem doing that, but too many users do. As a result, the extra modules are just a big waste of CPU, memory and maybe GPU.
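For anyone curious what that .json edit amounts to: the idea is just flipping per-module enable flags. The file name and key names below are hypothetical (they vary by SenseAI/CodeProject.AI version, so check your own install), but a small Python sketch of the operation might look like:

```python
import json

# Hypothetical, SenseAI-style module settings. Real file and key names
# vary by version -- inspect the .json in your own install first.
sample_config = {
    "Modules": {
        "ObjectDetectionYolo": {"Activate": True},
        "FaceProcessing": {"Activate": True},
        "SceneClassification": {"Activate": True},
    }
}

def disable_modules(config, keep):
    """Deactivate every module except those named in `keep`."""
    for name, settings in config["Modules"].items():
        settings["Activate"] = name in keep
    return config

# For typical video surveillance, object detection is the only module needed.
trimmed = disable_modules(sample_config, keep={"ObjectDetectionYolo"})
print(json.dumps(trimmed, indent=2))
```

In practice you would edit the real file by hand with a text editor; the point is simply that everything except object detection can be switched off.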

I will say the SenseAI team has been working to improve things, but from my viewpoint it really isn't ready for primetime just yet. It is getting closer, though. For now, I'll stick with DS. I thought once SenseAI got a GPU version going I'd switch over, but the detection times don't seem much better whether using CPU or GPU, and there's still the lack of control over which modules get loaded. On top of that, there are too many anecdotal reports of all kinds of problems, both with the install and with running it successfully.

What I'd like to see is something that's reliable, fast and relatively light on resources. Another problem I've noticed is that the built-in models, both DS and SenseAI, are trained on very high resolution stills that a surveillance camera can't even approach. I feel that affects detection accuracy pretty severely at times, especially under bad contrast/brightness/lighting conditions. For my purposes, eliminating false triggers, DS is doing an excellent job, so if it ain't broke I'm not fixing it, yet.
 
Last edited:

wittaj

IPCT Contributor
Joined
Apr 28, 2019
Messages
24,428
Reaction score
47,549
Location
USA
^+1

We had some initial growing pains when DeepStack became "integrated" with BI, but I do not believe they were anywhere close to the problems people are experiencing with SenseAI.

I found DeepStack relatively easy to get going: download the .exe file, run it, and check the box in BI to say you are using DeepStack. Then go into the cameras you want to use it on and select it.
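Behind that checkbox, BI talks to DeepStack over a simple HTTP API: it POSTs a frame to the /v1/vision/detection endpoint and gets back JSON predictions. As a sketch of the documented response shape, here it is parsed from a canned example (not fetched from a live server; field values are illustrative):

```python
# Canned example of a DeepStack /v1/vision/detection response.
# Shape follows DeepStack's docs; the values are made up for illustration.
canned_response = {
    "success": True,
    "predictions": [
        {"label": "person", "confidence": 0.91,
         "x_min": 120, "y_min": 80, "x_max": 310, "y_max": 420},
        {"label": "car", "confidence": 0.64,
         "x_min": 400, "y_min": 200, "x_max": 620, "y_max": 360},
    ],
}

def labels_found(response, min_confidence=0.5):
    """Return the labels detected above a confidence floor."""
    if not response.get("success"):
        return []
    return [p["label"] for p in response["predictions"]
            if p["confidence"] >= min_confidence]

print(labels_found(canned_response))  # ['person', 'car']
```

BI does all of this for you once the checkbox is ticked; seeing the raw response is mainly useful when troubleshooting with DeepStack's logs or testing the server directly.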

Initially there were some issues with custom models and getting those to work, but they do now and the BI updates were frequent to address those.

I am firmly in the "If it ain't broke don't fix it" camp on this one. DeepStack works flawlessly for me with sufficient make times, and I see no reason to change. I hope that it will continue to work in future BI updates.

If I wanted the bells and whistles, I would have gone with one of the third-party AI tools that provide that granular level of adjustment, but also require a lot of manipulation.

Maybe SenseAI will get to where DeepStack was when the switch was made, but I felt that for any NOOB, getting DeepStack going was fairly painless.
 

Swampledge

Getting comfortable
Joined
Apr 9, 2021
Messages
210
Reaction score
469
Location
Connecticut
As a relative NOOB, who started with Blue Iris one year ago, and with DS about 2 months ago, I was pleased with how easy it was to get DS running to eliminate false triggers on my problematic scenes. I've been watching the SenseAI news carefully, and expect to switch once it appears more stable to me. Right now, DS does what I need, along with an earlier version of BI. I'm never anxious to fix something that isn't broke.
 

Fubduck

Getting the hang of it
Joined
Jul 10, 2018
Messages
109
Reaction score
72
Location
colorado
I run DS on a Nvidia Jetson Nano for two LPR cameras. I just upgraded to BI 5.5.9.6 and DS is still working just fine. I only use DS to remove the tree shadows that cause unwanted triggers.
Does anyone know if SenseAI is ever going to support the Jetson Nano?
 

wittaj

IPCT Contributor
Joined
Apr 28, 2019
Messages
24,428
Reaction score
47,549
Location
USA
Let's see if they can get the GPU working right first, LOL. Early reports are that response times are the same whether CPU or GPU. I can just imagine what it would be on a Jetson.
 

CAL7

Getting the hang of it
Joined
Nov 26, 2020
Messages
64
Reaction score
26
Location
Florida
I have the Quadro P400. Does this card meet the specs?

From the SenseAI web page:

Screenshot_20220908_181856_Opera.jpg
 

kaltertod

Getting the hang of it
Joined
Jul 30, 2022
Messages
65
Reaction score
46
Location
BFE
I have the Quadro P400. Does this card meet the specs?

From the SenseAI web page:

View attachment 139328
The P400 is a Pascal-based GPU (basically a GT 1030 equivalent), so it should be fine; however, the low CUDA core count and the small amount of onboard GPU memory would make that card somewhat limiting.

I am running a 2060 Ti with 11 5MP cameras using hardware decode, plus CodeProject.AI, and it will at times max out the 12GB of available memory on the card.

I believe the GPU memory is more of a bottleneck than the CUDA core count on the card. However, as they say, there is only one way to find out, and that is to give it a shot.
 
Last edited:

kaltertod

Getting the hang of it
Joined
Jul 30, 2022
Messages
65
Reaction score
46
Location
BFE
@kaltertod If you use sub streams on your cameras you can shut off hardware acceleration on the GPU and cut CPU utilization dramatically. That will also free up GPU memory for AI processing.
I am using the overlay for time on the cameras, so to my understanding I cannot use H.264 passthrough encoding, as it will not "burn" the overlay into the image. Therefore I am using the decode/encode functions of the GPU to minimize CPU usage that way. The substream solution will drop the GPU usage but increase the CPU usage. As the server I am running has more functions than just BI/CodeProject, I am trying to offload more of the CPU load to the GPU.

The way I look at it is that I can always add another GPU to make up for the GPU RAM usage and use multi-GPU to load balance between the two cards.

In my testing, decode/encode uses just under 6GB of VRAM and CodeProject uses the remaining memory. This is working fine for now, as I have not had the AI portion dump out on low memory. However, if I fire up Handbrake or another GPU-based application, Handbrake will throw an error stating that the GPU is out of memory.

Hopefully CodeProject will support multi-GPU at some point. For now I have put a 1650 in the machine along with the 2060 and force the other GPU-based apps to use the 1650 for all other GPU-related processing, and it seems to work just fine.

I am also toying with the idea of offloading decode/encode to the 1650 and using the 2060 for AI only, but I believe that would be overkill for BI.
 
Last edited:

wittaj

IPCT Contributor
Joined
Apr 28, 2019
Messages
24,428
Reaction score
47,549
Location
USA
Around the time AI was introduced in BI, many here had their systems become unstable with hardware acceleration on (even if not using DeepStack or CodeProject). Some have also been fine. I started to see that error when I was using hardware acceleration.

This hits everyone at a different point. Some had their system go wonky immediately, some after a specific update, and some still don't have a problem, yet the trend shows that running hardware acceleration will result in a problem at some point.

However, with substreams now available, the CPU% needed to offload video to a GPU is more than the CPU% savings gained by offloading. Especially past about 12 cameras, CPU usage actually goes up when using a GPU and hardware acceleration in BI.

It is best to just use the GPU now for AI.
 

sebastiantombs

Known around here
Joined
Dec 28, 2019
Messages
11,511
Reaction score
27,690
Location
New Jersey
I only use the time stamp in the cameras, with direct-to-disk BVR recording, and have no problems with time stamps on recordings. If you're using sub streams and hardware acceleration, shutting off hardware acceleration will actually save a little bit of CPU. It takes CPU cycles to pass the video on to the GPU, so shutting off hardware acceleration can help a little in that regard, as well as saving the GPU for AI use.

If you've got a 2060, use that for the AI. Between its speed, core count and memory capacity, it will hardly even notice it's being used for AI, IMHO. GPUs are pretty expensive compared to what they used to be, so unless you've got GPUs hanging around, and don't mind the power loads they add, stick with what you have. There's really no need to load balance with a 2060 and something else.
 

snakpak

n3wb
Joined
Nov 30, 2021
Messages
10
Reaction score
17
Location
rural cabin, Columbia Valley, BC, Canada
I like SenseAI for the wider selection of animals detected. Which AI is better for you might depend on where you live and what you want to get alerted about. I'm in the Rocky Mountains and don't have the elephants and giraffes that DS seems to be looking for.

Both DS and SenseAI do a very poor job detecting bears. SenseAI can't identify them very well and DS has way too many false positives.
 

CCTVCam

Known around here
Joined
Sep 25, 2017
Messages
2,660
Reaction score
3,480
TBH unless you're buying the camera for wildlife watching, you don't need to detect any animals.

Even then, I think most people would be satisfied with dog, cat, squirrel, fox/coyote. Unless you're running a zoo, I doubt you need anything more.

It's a bit like the whole object list. You basically need person, car, van, truck, bike. I doubt anyone is going to rob your home in a school bus or camper van, and you probably wouldn't even care if they did, as they'd likely be detected as a truck or van. Nor is a dog, coyote, fox, squirrel, rabbit etc. likely to steal your car or TV.

Sometimes it's easy to get caught up in what something can do instead of what you need it to do.

K.I.S.S.

The fewer items it's looking for, the quicker it will respond and the lower the load.
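Even if the model itself reports everything it was trained on, the short allow-list idea is easy to apply on top of whatever comes back. A hedged Python sketch (the detection format here is illustrative, not any particular tool's output):

```python
# K.I.S.S.: keep only the object classes that matter for home surveillance.
# Detection dicts below are an illustrative format, not a specific AI's output.
WANTED = {"person", "car", "van", "truck", "bicycle", "dog", "cat"}

def relevant_detections(detections, wanted=WANTED):
    """Drop detections whose label is not on the short allow-list."""
    return [d for d in detections if d["label"] in wanted]

detections = [
    {"label": "person", "confidence": 0.88},
    {"label": "elephant", "confidence": 0.71},  # unlikely visitor, ignored
    {"label": "car", "confidence": 0.93},
]
print(relevant_detections(detections))
```

In BI this same effect comes from the "to confirm" object list on the camera's AI settings, rather than any scripting; restricting the list there is the practical way to apply the advice above.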
 

weswitt

n3wb
Joined
Oct 17, 2022
Messages
2
Reaction score
5
Location
Greenacres, WA
Been using CP.AI for a while. I'm using CPU only on an i7-10700K with 20 cameras. My CPU usage sits at about 10%. The object detection seems to be working really well, but as you'd expect, the AI detection times are pretty slow. Even with the slow detection times the system is working fine. I'd like to improve the times, but I really don't want to buy an expensive GPU just for this purpose. I'm hoping that CP.AI gets support for the Coral TPU.

1666123854113.png
 