SSD drive vs conventional hard drive

SLC has mostly been supplanted by MLC even at the Enterprise level because of advancements in technology. Other than the write cache being SLC, as mentioned above, MLC is now used mostly in Enterprise while TLC is used for consumer drives. Eventually TLC will be used in Enterprise as the technology advances. QLC is the next “big thing” for consumer loads because of the number of terabytes those drives will be capable of holding.
 
One thing about SSDs is that they are SILENT. I have several surveillance HDDs both WD and Seagate and they can get pretty noisy when seeking.

Generally they are not seeking during normal operation because video tends to be written sequentially. But I also run SightHound on the same rig, and with that writing to a surveillance HDD my machine was NOISY. It’s on the floor of my home office, so the noise was an issue.

YMMV.


 
Surveillance videos are not written sequentially. Do an analysis of the file locations on a surveillance drive with more than a few cameras after a year of use. The files are fragmented all over the place, and so is the free space.
 
Surveillance videos are not written sequentially. Do an analysis of the file locations on a surveillance drive with more than a few cameras after a year of use. The files are fragmented all over the place, and so is the free space.
I think the stream itself is sequential in nature. The issue is that you have numerous streams (different cameras) all writing at the same time to different locations on the disk. I think that, in turn, creates the randomness. I'm not an expert on block devices though.
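A toy sketch of that interleaving effect (purely illustrative; real filesystems allocate far more cleverly than this): each stream appends in order, yet the interleaving scatters every file across the disk.

```python
# Toy model: several cameras appending one block at a time to a shared "disk".
def allocate(n_cameras, blocks_per_camera):
    """Interleave one-block appends from each camera; return each
    camera's list of block addresses on the shared disk."""
    disk = []
    files = {cam: [] for cam in range(n_cameras)}
    for _ in range(blocks_per_camera):
        for cam in range(n_cameras):
            files[cam].append(len(disk))  # next free block
            disk.append(cam)
    return files

layout = allocate(n_cameras=3, blocks_per_camera=4)
print(layout[0])  # [0, 3, 6, 9] -- camera 0's stream is sequential, but its
                  # blocks land far apart because the other cameras write in between
```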
 
One of the considerations for me, depending on the make/model of SSD, was the measurable savings in energy consumption and heat. These are benefits to seriously consider for anything that must operate 24/7/365 where electricity and cooling costs are high. Hard to justify milk for your kids just for the sake of video storage.

Ever see a kid's teeth after eating an HDD?!? :lmao:
 
Do SSDs actually save power, though? Idle power on an SSD beats a hard drive any day of the week, but the consumption while active is often more than a hard drive's.

Given it would be writing all the time, I have a feeling you may use more power.
 
Do SSDs actually save power, though? Idle power on an SSD beats a hard drive any day of the week, but the consumption while active is often more than a hard drive's.

Given it would be writing all the time, I have a feeling you may use more power.
Yes, but it is negligible, depending on the HDD brand. There are much bigger power savings elsewhere,
not to mention the initial $ savings would offset any potential power savings down the track.
 
Do SSDs actually save power, though? Idle power on an SSD beats a hard drive any day of the week, but the consumption while active is often more than a hard drive's.

Given it would be writing all the time, I have a feeling you may use more power.

It most definitely comes down to brand, make, model, and use case. A quick glance shows a WD 1TB HDD and a 1TB SSD are very close in terms of maximum power consumption during writes. But if one looks at the bigger picture across idle, standby, and read, the SSD's consumption is lower still.

The average power consumption during reads and writes for the same 1TB SSD is also lower.

For the average consumer the power consumption may be too small to matter. This obviously depends on their local ToU rates, the number of components in use, and personal goals. Speaking for myself only, I always try to balance performance, energy, and value over the long term. There are always other key areas to save energy, but this is just one of the components to get a person closer to a set target/goal.

Can't be buying a low-power SSD/HDD only to be using a ten-year-old CPU, memory, and PSU that consume 20 times the power! :facepalm:
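As a sketch of how one could estimate that trade-off (every wattage, the duty cycle, and the $/kWh rate below are made-up placeholders; substitute the figures from your own drive's datasheet and power bill):

```python
# Back-of-the-envelope yearly energy cost for a drive.
HOURS_PER_YEAR = 24 * 365

def annual_cost(active_w, idle_w, duty_cycle, rate_per_kwh):
    # Weighted average power in watts, converted to kWh/year, times the rate.
    avg_w = active_w * duty_cycle + idle_w * (1 - duty_cycle)
    return avg_w / 1000 * HOURS_PER_YEAR * rate_per_kwh

# An NVR writes most of the time, so assume an 80% active duty cycle at $0.25/kWh.
hdd = annual_cost(active_w=6.0, idle_w=4.0, duty_cycle=0.8, rate_per_kwh=0.25)
ssd = annual_cost(active_w=3.5, idle_w=0.05, duty_cycle=0.8, rate_per_kwh=0.25)
print(f"HDD ~${hdd:.2f}/yr, SSD ~${ssd:.2f}/yr")
```

On those placeholder numbers the gap is only a few dollars per drive per year, which is why it mostly matters when multiplied across many drives, high ToU rates, and cooling load.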
 


Good information here. I did some research on a solution for replacing the 2TB BI clips drive in my QNAP NAS with a 4TB Seagate SkyHawk surveillance HDD. I noticed in the BI recommendations (that bp2008 wrote up) that RAID isn't generally recommended, but the article does mention that if you use RAID, use RAID 1, 5, 6, or 10. I am replacing a 2TB drive with a 4TB drive, so should I use RAID or not bother?
FWIW, I would be happy with just 2TB of clips storage if that meant using RAID would be best. Also, my clips database is on my BI computer's SSD.
Thoughts or opinions on the clips drive?
 
I can't see any way RAID would negatively impact BI; it should only have positives.
 
I can't see any way RAID would negatively impact BI; it should only have positives.
I read in the BI suggested hardware that, because most of the data on a surveillance drive is useless, RAID isn't necessary. I agree with that, I guess, because it's only when you need those 2-3 clips that they become really important. Still, I'm curious as to whether some type of RAID is practical outside of what I just mentioned.
 
Randomly started thinking about this... I have a Blue Iris setup with close to 40 cams now, but it only stores the past hour on the SSD and then moves it to other locations.
Now I’m worried about that SSD dying. I just wonder if mechanical drives can handle being directly written to by 40 cameras at the same time.
 
What model SSD? Chances are it will be fine
 
Randomly started thinking about this... I have a Blue Iris setup with close to 40 cams now, but it only stores the past hour on the SSD and then moves it to other locations.
Now I’m worried about that SSD dying. I just wonder if mechanical drives can handle being directly written to by 40 cameras at the same time.

If you are using a drive made for surveillance, then it should be fine.

But you are using up resources moving video after an hour. Simply write it to its final destination and save CPU and drive life (unless you're using a NAS; then have it go there at a scheduled time). But with 40 cameras you really should have two or more HDDs, so that if one goes down you don't lose everything.
 
====================================
My Standard allocation post.

1) Do not use time (limit clip age) to determine when BI video files are moved or deleted; only use space. Using time wastes disk space.
2) If New and Stored are on the same disk drive, do not use Stored: set the Stored size to zero and set the New folder to delete, not move. Moving just wastes CPU time and increases the number of disk writes. You can leave the Stored folder on the drive; just do not use it.
3) Never allocate over 90% of the total disk drive to BI.
4) If using continuous recording, on the BI camera settings Record tab set the combine and cut video to 1 hour or 3 GB. Really big files are difficult to transfer.
5) It is recommended to NOT store video on an SSD (the C: drive).
6) Do not run the disk defragmenter on the video storage disk drives.
7) Do not run virus scanners on BI folders.
8) An alternate way to allocate space on multiple drives is to assign different cameras to different drives, so there is no file movement between New and Stored.
9) Never use an external USB drive for the NEW folder. Never use a network drive for the NEW folder.

Advanced storage:
If you are using a complete disk for large video file (BVR) storage with continuous recording, I recommend formatting the disk with a Windows cluster size of 1024K (1 megabyte). This is an increase from the 4K default. It will reduce the physical number of disk writes, decrease disk fragmentation, and speed up access.
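As a rough illustration of the difference (assuming one 3 GB BVR file, per item 4 above):

```python
# How many allocation units one 3 GB recording occupies at each cluster size.
# Fewer, larger clusters mean fewer allocations to track and less
# opportunity for fragmentation.
def clusters_needed(file_bytes, cluster_bytes):
    return -(-file_bytes // cluster_bytes)  # ceiling division

GB = 1024 ** 3
bvr_file = 3 * GB
print(clusters_needed(bvr_file, 4 * 1024))     # 786432 clusters at the 4K default
print(clusters_needed(bvr_file, 1024 * 1024))  # 3072 clusters at 1 MB
```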

Hint:
On the Blue Iris status (lightning bolt) clip storage tab, if there is any red on the bars you have an allocation problem. If there is no green, you have no free space; this is bad.
 
If you are using a drive made for surveillance, then it should be fine.

But you are using up resources moving video after an hour. Simply write it to its final destination and save CPU and drive life (unless you're using a NAS; then have it go there at a scheduled time). But with 40 cameras you really should have two or more HDDs, so that if one goes down you don't lose everything.
It’s not made for surveillance storage. I believe it’s a Samsung EVO, bought maybe 14 months ago.
My storage use continually expanded. I have a total of 8 NAS drives: 3 in my Synology and 5 in the computer itself, striped. I'm currently able to store data for about 7 weeks. It writes to my SSD, which allows for quick playback when needed. After about an hour it moves it over to the striped portion. Once that gets low on space, I have it moved to the Synology NAS. Now that I read this out loud, I can see how inefficient this is.
 
That is what I mean: there is no need to put it on the SSD for an hour and then move it. That wastes CPU on moving and uses the SSD more than you need to. Simply put it on the striped portion and then let it move to the Synology as those drives fill.
 
A stripe is a bad idea for video.

A striped array increases the chance of a failure. If a drive has a failure rate of one every 4 years, then a stripe of two of those drives has a failure rate of one every 2 years.
Also, if the stripe dies, then all the video on that stripe is lost. Put the cameras on separate drives; then when a drive fails you lose only part of your cameras' video for that time period. If I had 40 cameras I would use 4 drives, 10 cameras per drive.
 
A stripe is a bad idea for video.

A striped array increases the chance of a failure. If a drive has a failure rate of one every 4 years, then a stripe of two of those drives has a failure rate of one every 2 years.
Also, if the stripe dies, then all the video on that stripe is lost. Put the cameras on separate drives; then when a drive fails you lose only part of your cameras' video for that time period. If I had 40 cameras I would use 4 drives, 10 cameras per drive.
While I would not recommend striping, it does NOT increase drive wear at all. Striping (RAID 0) distributes writes evenly across all disks in the stripe, in many ways extending the life span of the individual disks. Lose 1, lose all the data.
RAID 1 mirroring doubles writes, but that isn't striping, and it's wasteful. Other forms of RAID, e.g. 5 and 6, do increase writes, but it's nowhere near double, and depending on the size of the array the overhead is quite small. The downside with RAID 5/6 is the wasted space and poor performance when a drive fails, but at least you can tolerate a failure. There are many other blends of RAID, but I don't see the use case for those for an NVR either.
Your stats may have come from the tremendous load placed on disks during the rebuild of a failed drive in RAID 5 (which is often when a second drive fails), hence why RAID 6 is becoming more popular, as it can tolerate 2 failures without the delay of a hot spare rebuilding.
 
I did not say it increases drive wear. I said it increases the risk of failure. Those are two completely different things. If the failure rate of one drive is one every 4 years, the failure rate of two drives is one every 2 years. Simple failure math.
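That math falls out of the standard independent-failure model (assuming each drive fails independently at a constant rate, i.e. exponential lifetimes):

```python
import math

def stripe_mtbf(single_mtbf_years, n_drives):
    # Independent drives each fail at rate 1/MTBF; a RAID 0 stripe dies
    # when ANY member fails, so the rates add and the MTBF divides.
    return single_mtbf_years / n_drives

def prob_stripe_failure(single_mtbf_years, n_drives, years):
    # P(at least one of n drives fails within t years) = 1 - exp(-n * t / MTBF)
    return 1 - math.exp(-n_drives * years / single_mtbf_years)

print(stripe_mtbf(4, 2))                       # 2.0 -- the "4 years becomes 2" above
print(round(prob_stripe_failure(4, 2, 1), 3))  # 0.393 chance of losing the stripe in year 1
```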