SSD drive vs conventional hard drive

A stripe is a bad idea for video.

A striped array increases the chance of failure. If a single drive fails on average once every 4 years, then a stripe of two of these drives fails on average every 2 years.
Also, if the stripe dies, all the video on that stripe is lost. Put the cameras on separate drives; then when a drive fails you lose only part of your cameras' video for that time period. If I had 40 cameras I would use 4 drives, 10 cameras per drive.
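
In case the arithmetic helps, here is a minimal sketch of that reasoning, assuming independent drive failures with a constant failure rate; the 4-year figure and the 40-camera / 4-drive split are just the numbers from this post:

```python
# A minimal sketch of the failure arithmetic above, assuming independent drive
# failures with a constant failure rate. The 4-year figure and the
# 40-camera / 4-drive split are the numbers from this post.

MTTF_SINGLE_DRIVE_YEARS = 4.0   # one failure every ~4 years per drive
NUM_CAMERAS = 40
NUM_DRIVES = 4

def stripe_mttf(num_drives, mttf_single):
    """RAID 0 is down as soon as ANY member drive fails, so the combined
    failure rate is the sum of the individual rates."""
    return mttf_single / num_drives

def cameras_lost_per_failure(num_cameras, num_drives, striped):
    """How many cameras' footage disappears when one drive dies."""
    return num_cameras if striped else num_cameras // num_drives

print(f"2-drive stripe: one failure every {stripe_mttf(2, MTTF_SINGLE_DRIVE_YEARS):.1f} years")
print(f"4-drive stripe: one failure every {stripe_mttf(4, MTTF_SINGLE_DRIVE_YEARS):.1f} years")
print("Cameras lost per drive failure, striped:",
      cameras_lost_per_failure(NUM_CAMERAS, NUM_DRIVES, striped=True))
print("Cameras lost per drive failure, separate drives:",
      cameras_lost_per_failure(NUM_CAMERAS, NUM_DRIVES, striped=False))
```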
While I would not recommend striping, it does NOT increase drive wear at all. Striping (RAID 0) distributes writes evenly across all disks in the stripe, in many ways extending the life span of individual disks. Lose one drive, lose all the data.
RAID 1 (mirroring) doubles writes, but that isn't striping and it's wasteful. Other forms of RAID, e.g. 5 and 6, do increase writes, but it's nowhere near double, and depending on the size of the array the overhead is quite small. The downside with RAID 5/6 is the wasted space and poor performance when a drive fails, but at least you can tolerate a failure. There are many other blends of RAID, but I don't see the use case for those for an NVR either.
Your stats may have come from the tremendous load placed on disks during the rebuild of a failed drive in a RAID 5 (which is often when a second drive will fail), which is why RAID 6 is becoming more popular: it can tolerate 2 failures without the delay of a hot spare rebuilding.
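
To put rough numbers on the "nowhere near double" point, here's a quick sketch. It assumes large sequential full-stripe writes, which is roughly what continuous video recording produces; small random writes behave much worse because of the read-modify-write penalty.

```python
# Rough sketch of the write-overhead claim above, assuming large sequential
# full-stripe writes (roughly what continuous video recording produces).
# Small random writes are much worse because of the read-modify-write penalty.

def parity_write_overhead(total_drives, parity_drives):
    """Extra bytes written per byte of user data in a full-stripe write.
    RAID 5 carries 1 parity drive's worth per stripe, RAID 6 carries 2."""
    data_drives = total_drives - parity_drives
    return parity_drives / data_drives

for n in (4, 6, 8, 12):
    print(f"{n:2d} drives: RAID 5 ~{parity_write_overhead(n, 1):.0%} extra writes, "
          f"RAID 6 ~{parity_write_overhead(n, 2):.0%} extra writes")
# e.g. 8 drives: RAID 5 ~14%, RAID 6 ~33% -- well short of RAID 1's 100%
# write doubling, and the overhead shrinks as the array gets wider.
```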

So what do you all recommend instead of striping? I have a few new drives coming in to expand storage and can use them to offload / set up the new way...
 
My standard way is to select cameras from different locations and place them on a drive, then some more cameras on a different drive. Balance the load on each drive so they fill up at about the same time. Put cameras for the same area on different drives. For example, if you have 3 cameras for the front door, they go on three different drives. So if a drive fails you still have some video of the front door.
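
Purely to illustrate that layout rule, here is a small sketch; the camera names and per-camera write rates are made up for the example, not taken from this thread:

```python
# Hypothetical example of the layout described above: spread same-area cameras
# across different drives and keep per-drive write load roughly even so the
# drives fill at about the same rate. Camera names and GB/day are made up.

CAMERAS = {  # name: (area, approx GB written per day)
    "front_door_1": ("front door", 20), "front_door_2": ("front door", 20),
    "front_door_3": ("front door", 15), "driveway_1":   ("driveway",   25),
    "driveway_2":   ("driveway",   25), "backyard_1":   ("backyard",   18),
    "backyard_2":   ("backyard",   18), "garage_1":     ("garage",     12),
}
NUM_DRIVES = 4

def assign_cameras(cameras, num_drives):
    drives = [{"cams": [], "gb_per_day": 0, "areas": set()} for _ in range(num_drives)]
    # Heaviest writers first; prefer the least-loaded drive that doesn't
    # already hold a camera covering the same area.
    for name, (area, gb) in sorted(cameras.items(), key=lambda kv: -kv[1][1]):
        candidates = [d for d in drives if area not in d["areas"]] or drives
        target = min(candidates, key=lambda d: d["gb_per_day"])
        target["cams"].append(name)
        target["gb_per_day"] += gb
        target["areas"].add(area)
    return drives

for i, d in enumerate(assign_cameras(CAMERAS, NUM_DRIVES), start=1):
    print(f"drive {i}: {d['gb_per_day']:3d} GB/day  {d['cams']}")
```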
 
My standard way is to select cameras from different locations and place them on a drive, then some more cameras on a different drive. Balance the load on each drive so they fill up at about the same time. Put cameras for the same area on different drives. For example, if you have 3 cameras for the front door, they go on three different drives. So if a drive fails you still have some video of the front door.
That makes sense. It just seems like a lot of writing to all these drives with no breaks, but I gotcha. I will give it a try.
 
What you're missing is that recording video requires constant writing somewhere if you want to preserve the footage. It's much less CPU intensive to write it once to the actual storage location rather than move it from A to B to C.
 
My standard way is to select cameras from different locations and place them on a drive, then some more cameras on a different drive. Balance the load on each drive so they fill up at about the same time. Put cameras for the same area on different drives. For example, if you have 3 cameras for the front door, they go on three different drives. So if a drive fails you still have some video of the front door.
For general use cases I absolutely agree with this approach; it is the approach I use, and I believe it is applicable to most people.
If you do have a high resiliency requirement, RAID 5 is suitable for CCTV provided you use a dedicated hardware RAID controller. No way would I rely on the OS for RAID 5.
I re-read SouthernYankee's post and yes, what he was saying is correct: a stripe (RAID 0) does increase your chance of failure as you add each drive.
 
Last edited:
If the data is that important I'd rather go for 2 recording devices. I have an NVR5216 as a second recorder alongside my BI machine, but I only switch it on if I need to power down BI.
 
As a side note, I had a 4-drive RAID 5 with 4TB drives set up for video recording. One of the drives failed; I replaced it and started a recovery. The recovery ran for about 3 days before I shut it down. Remember that during recovery the system is still writing video. I no longer use a RAID.

If you use a RAID, test it. After the drives are more than 80% full, shut the system down, pull one of the drives out, and do a full format (I would use a USB docking station). Stick it back in and start a recovery while still recording. If it works, you are good to go. If not, you learned something.

Test, do not guess.
 
Storing data from that far in the past isn't that big of a deal... which now gets me thinking of even better ways to optimize storage.
 
That's the rub with parity RAID on mechanical drives. Slow as crap on rebuilds and writes. If another drive takes a shit during a RAID 5 rebuild, then the whole array is toast.

I've used primarily RAID 1 and RAID 10 for the last several years. Mostly RAID 1 now, since everything has gone to SSD (flash), as it's the simplest and probably the safest since it uses the fewest drives.
 
Curious about my setup after reading about SSD concerns...

I'm running on a Dell R730 with 8 Western Digital SA500 1TB drives in a RAID 6 configuration on the hardware RAID controller. I run Proxmox and have a Windows Server 2019 VM running with 1.5 TB allocated to it from the Proxmox LVM, which pulls from the 6TB available in the RAID 6 array above.

Blue Iris is running on that VM and runs great. I only have 4 cameras right now, running in continuous mode; it's about 150 GB - 175 GB a day. I let it go up to 900 GB and then it will delete. I'm not moving anything to Stored right now; I just Delete on New when it reaches the 900 GB.
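
Quick back-of-the-envelope on those numbers just to see the retention window (the daily rate below is the midpoint of the 150 GB - 175 GB range; Blue Iris clip/database overhead is ignored):

```python
# Back-of-the-envelope retention check (midpoint of the 150-175 GB/day range;
# Blue Iris clip/database overhead ignored).

GB_PER_DAY = (150 + 175) / 2   # ~162.5 GB/day across 4 continuous cameras
NEW_FOLDER_CAP_GB = 900        # Delete-on-New limit

print(f"~{NEW_FOLDER_CAP_GB / GB_PER_DAY:.1f} days of footage before deletes kick in")
# -> roughly 5-6 days of retention at that write rate
```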

I don't want to get another machine or change my setup right now. Should I be concerned and switch over to record only on alert and not continuous?

I think the endurance is decent on the SA500, not the best enterprise drive, but it's not consumer. I also don't write a ton with only 4 cameras.

I do have some concern because I run a lot of other things on that Proxmox instance. It has VMs for all my containers, Home Assistant, Plex, Unifi Controller, Sonarr, Radarr, etc... All the other things running on Proxmox are mainly OS-only, so Plex, for example, does all its work over iSCSI.

I have a second machine on the rack, an R720; it is all platter drives, WD Red Pro NAS drives, 36 TB. It runs TrueNAS, also in a RAID 6-style configuration but with ZFS. It has the iSCSI target over to the Proxmox cluster and it's where I do storage for Plex, NextCloud, etc...

Thanks for any feedback
 
Last edited:
It will be fine
 
Was that in English? :D Or Nerdese? That's a lot of jargon to keep up with.
 
Curious about my setup after reading about SSD concerns...

I'm running on a Dell R730 with 8 Western Digital SA500 1TB drives in a RAID 10 configuration on the hardware RAID controller. I run Proxmox and have a Windows Server 2019 VM running with 1.5 TB allocated to it from the Proxmox LVM, which pulls from the 6TB available from the RAID 10 configuration above.

Blue Iris is running on that VM and runs great. I only have 4 cameras right now, running in continuous mode; it's about 150 GB - 175 GB a day. I let it go up to 900 GB and then it will delete. I'm not moving anything to Stored right now; I just Delete on New when it reaches the 900 GB.

I don't want to get another machine or change my setup right now. Should I be concerned and switch over to record only on alert and not continuous?

I think the endurance is decent on the SA500, not the best enterprise drive, but it's not consumer. I also don't write a ton with only 4 cameras.

I do have some concern because I run a lot of other things on that Proxmox instance. It has VMs for all my containers, Home Assistant, Plex, Unifi Controller, Sonarr, Radarr, etc... All the other things running on Proxmox are mainly OS-only, so Plex, for example, does all its work over iSCSI.

I have a second machine on the rack, an R720; it is all platter drives, WD Red Pro NAS drives, 36 TB. It runs TrueNAS, also in a RAID 10-style configuration but with ZFS. It has the iSCSI target over to the Proxmox cluster and it's where I do storage for Plex, NextCloud, etc...

Thanks for any feedback
The endurance of those drives is 600 TBW. So 600 TB / 0.2 TB (200 GB) = 3,000 days = 8.2 years of writing 200 GB a day until you hit the 600 TB limit.

They aren't Enterprise drives and aren't really designed to be written to daily like that, but I wouldn't replace them unless they start failing.
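
Written out as a quick sketch (same figures as above; note this is per drive and ignores RAID parity overhead and the other VMs writing to the array):

```python
# The endurance arithmetic above, written out. 600 TBW is the rated endurance
# quoted for the 1TB SA500; 200 GB/day is the write rate assumed above.

RATED_ENDURANCE_TBW = 600   # total TB written the drive is rated for
WRITES_TB_PER_DAY = 0.2     # ~200 GB of video per day

days = RATED_ENDURANCE_TBW / WRITES_TB_PER_DAY
print(f"{days:.0f} days = {days / 365:.1f} years to reach the rated TBW")
# -> 3000 days, ~8.2 years per drive. RAID 6 parity writes and the other VMs
#    on the array add some overhead on top of the camera footage.
```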
 
The endurance of those drives is 600 TBW. So 600 TB / 0.2 TB (200 GB) = 3,000 days = 8.2 years of writing 200 GB a day until you hit the 600 TB limit.

They aren't Enterprise drives and aren't really designed to be written to daily like that, but I wouldn't replace them unless they start failing.

Not bad if that holds up. Even factoring in other writes to the drives from the Proxmox cluster, they should last through my 5-year warranty, at which point I would start to replace them anyway...

Next time I will go more enterprise...

thanks!
 
I will most likely just run continuous for long enough to tweak my motion settings and trust the DeepStack settings, then switch it to a buffer with record on alert.
 
Not bad if that holds up. Even factoring in other writes to the drives from the Proxmox cluster, they should last through my 5-year warranty, at which point I would start to replace them anyway...

Next time I will go more enterprise...

thanks!
Here is what I use for whitebox enterprise builds for heavy write-intensive applications when I want SATA: the Intel® SSD D3-S4610 Series. I have two in my Proxmox host (ZFS RAID 1) sitting right beside me. It's where I run my BI VM in my house.

The 1TB version of that drive has a write endurance of 5.8 PBW, or about 10x the write endurance of that WD drive.
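
For reference, the ratio works out like this, using the two figures quoted in this thread:

```python
# Ratio of the two endurance ratings mentioned in this thread.
D3_S4610_1TB_PBW = 5.8   # figure quoted above for the Intel D3-S4610 1TB
SA500_1TB_TBW = 600      # figure quoted earlier for the WD Red SA500 1TB

print(f"~{D3_S4610_1TB_PBW * 1000 / SA500_1TB_TBW:.1f}x the rated write endurance")
# -> ~9.7x, i.e. roughly 10x
```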
 
Here is what I use for whitebox enterprise builds for heavy write-intensive applications when I want SATA: the Intel® SSD D3-S4610 Series. I have two in my Proxmox host (ZFS RAID 1) sitting right beside me. It's where I run my BI VM in my house.

The 1TB version of that drive has a write endurance of 5.8 PBW, or about 10x the write endurance of that WD drive.
I use the same Intel D3-S4610. Endurance is fine; I use it to host VMs and now Chia plotting, but it's still not designed for the constant continuous writes that a cheap HDD can handle all day, every day.
 
I use the same Intel D3-S4610. Endurance is fine; I use it to host VMs and now Chia plotting, but it's still not designed for the constant continuous writes that a cheap HDD can handle all day, every day.
You’re saying a cheap HDD can handle constant writing better than the D3? I don’t agree with that. That drive is made for mixed Enterprise workloads in a 24x7 environment.