Look at Nx Witness / DW IPVMS in North America. $70 per camera with free upgrades for life. I'll have to see if I can find a calculator to estimate the CPU requirement for BI (a rough back-of-the-envelope pass is sketched at the end of this post). I currently have 8 8MP 4K cameras and have plans to add 4 more 12MP cameras and 2 of the 7K Avigilon cameras. I also record 24x7, not just when an event happens.

Not sure if my brain will let me do a desktop PC. It has to run vSphere so I can back it up to AWS for backup and disaster recovery. My other issue is that I also store camera feeds onto an iSCSI array hidden in another part of the house; that way, if they steal the cameras and the rack server, I still have footage of what happened. Another factor for me is that I also have several Axis cameras, and the Intel acceleration doesn't work on them, so I'd either need a more powerful CPU or go with an NVIDIA card. The server platform ones aren't limited in the number of streams like the desktop ones.

I first had this on a 2-year-old Dell i7 and it was hitting 98% CPU usage, which is why I moved it over to my server, where it dropped to 80% on 4 cores. The desktop also dropped the iSCSI a couple of times and stopped recording video, which is another reason. I've been up and running on the DL380 for about 2 months with zero issues. Just not sure about CPU once I start expanding to more cameras. I have a comparable server at the office with 6 Axis PTZs and the CPU usage is around 28%, but that is on Aimetis and those are running a couple of different analytics as well.
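In the meantime, here's the kind of back-of-the-envelope math I'd do: total up megapixels per second across all cameras, since the decode load the box has to handle scales roughly with that. A minimal sketch in Python; the frame rates are assumptions, and I'm treating the 7K Avigilon as roughly 30MP:

```python
# Back-of-the-envelope decode load for a Blue Iris box, in megapixels/second.
# Frame rates are assumptions; the Avigilon 7K is treated as ~30MP.

cameras = [
    # (count, megapixels, assumed fps)
    (8, 8, 15),    # existing 8MP / 4K cameras
    (4, 12, 15),   # planned 12MP cameras
    (2, 30, 10),   # planned Avigilon 7K cameras, assumed ~30MP at 10 fps
]

total_mp_s = sum(count * mp * fps for count, mp, fps in cameras)
print(f"Total decode load: {total_mp_s} MP/s")
# 960 + 720 + 600 = 2280 MP/s with these assumptions
```

From there you can compare the MP/s total against whatever a known-good box is handling today to gauge how much headroom is left.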
The more I talk, the more I'm thinking I may be outgrowing the use case for BI and may just need to go ahead and migrate to either Aimetis or Milestone, since the use case repeated on the forum seems to be an Intel desktop CPU only, with no Nvidia GPU. If I have to use more than one PC, I'm not sure the usage would be clean. I don't even know if the iOS app supports multiple servers.
How exactly?
You're a MacGyver kind of guy

Sent from my Etch-A-Sketch using a lemon, a nickel, and a penny for power and a gum wrapper antenna.
I've updated my post above several times, adding new information.
Some conclusions I am able to draw from this are:
1) Both types of hardware acceleration (Intel / Nvidia) reduce CPU usage by a similar amount.
2) The GT 1030 (2GB GDDR5) card could only handle about half of my cameras.
3) Nvidia CUDA acceleration raised memory usage more than Intel Quick Sync.
Maybe a faster graphics card would be able to handle more video before maxing out. However, faster GPUs also consume a lot of power, so it could end up costing more to run the GPU than to run without it. Modern GPUs are not especially cheap either, so based on what I've seen today, it is not a good option for a low-budget Blue Iris build.
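Just to put a rough number on the power side of that: the extra wattage a discrete GPU pulls 24/7 adds up over a year. A quick sketch, where the extra draw and the electricity rate are purely assumed figures:

```python
# Rough annual cost of the extra power a discrete GPU draws running 24/7.
# The wattage and electricity rate below are example assumptions.

extra_watts = 75                    # assumed extra draw under decode load
hours_per_year = 24 * 365           # running around the clock
rate_per_kwh = 0.12                 # assumed $/kWh

annual_kwh = extra_watts / 1000 * hours_per_year
annual_cost = annual_kwh * rate_per_kwh
print(f"~{annual_kwh:.0f} kWh/year, about ${annual_cost:.2f}/year")
# 0.075 kW * 8760 h = 657 kWh -> ~$78.84/year at $0.12/kWh
```

Depending on the card, a few years of that can approach the purchase price of the GPU itself.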
I have multiple CUDA cards and do the load balancing manually very easily. All my cameras run the same frame rates, resolution, and bit rates, which makes it very easy to do.
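Since every camera presents the same load, the balancing is really just splitting the camera list evenly between the cards. A trivial sketch of that split; the camera and GPU names here are made-up placeholders, and in Blue Iris the actual decoder choice is set per camera in its settings:

```python
# Round-robin split of identical-load cameras across two CUDA cards.
# Camera and GPU names are placeholders; this just shows the even split
# that the per-camera decoder setting ends up mirroring.

cameras = [f"cam{i}" for i in range(1, 13)]   # 12 hypothetical cameras
gpus = ["GPU0", "GPU1"]                       # two hypothetical CUDA cards

assignment = {gpu: [] for gpu in gpus}
for i, cam in enumerate(cameras):
    assignment[gpus[i % len(gpus)]].append(cam)

for gpu, cams in assignment.items():
    print(gpu, cams)                          # 6 cameras per card
```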
I hadn't thought of that. So it makes use of both, or it can if configured?