SSD Life

ZeeOSix

$100 site donor 2022
Joined
Jul 22, 2010
Messages
40,408
Location
PNW
I downloaded and installed "Crystal Disk Info" a few days ago to monitor my SSD. The SSD showed 100% life until today, when it clicked down to 99% with the measured parameters shown below.

Anyone know what kind of algorithm is used to determine the remaining SSD life? If an SSD was used heavily on a nearly daily basis, it seems the life could decrease around 5~6% per year (?). If it was 5% a year, it would still take many years to hit, say, 25% life left. But do SSDs actually run reliably down to or very near 0% life remaining? Or should they be replaced at some level like 20~25% life left?

So what do people do when their SSD is getting near the "end of life" ... or goes bad prematurely and needs to be replaced? Make a disk image of their old SSD on an external drive, then re-image that onto the newly installed SSD?

Not sure if the "Power On Count" includes restarts (reboots without a full power off) or not. It must, since I haven't had this thing long enough to do that many true power-off-and-on cycles.

[Linked Image]
 
The life is based on writes. Most drives have a life cycle of several hundred terabytes written; you've done about 1.5 TB of writes, which would be between 0.5-1% of the maximum rated writes for most consumer 500 GB drives that I know of. If I bought a new drive I'd just image the old drive directly to the new drive, either with an external enclosure or using another SATA port if available. Many computers will spin down disks to save power, which increases the power on count; in an SSD's case they simply shut off the power, since there's no platter to spin down.
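
For anyone who wants to sanity-check that, here's a quick back-of-the-envelope sketch (my own arithmetic, not anything CrystalDiskInfo computes), assuming a hypothetical 300 TBW endurance rating, typical for a consumer 500 GB drive:

```python
RATED_TBW = 300.0      # terabytes written, from the drive's spec sheet (assumed)
host_writes_tb = 1.5   # total host writes so far, from the SMART data

used_pct = host_writes_tb / RATED_TBW * 100
print(f"{used_pct:.2f}% of rated write endurance used")  # -> 0.50%
```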
 
Originally Posted by blufeb95
Many computers will spin down disks to save power, which increases the power on count; in an SSD's case they simply shut off the power, since there's no platter to spin down.


My laptop goes to sleep many times a day, so that must be where the high "Power On Count" is coming from.
 
To continue what blufeb95 said, most drives implement a method of wear leveling across the entire disk, so particular cells in the flash memory are not overstressed. Although a drive is made up of billions or trillions of cells, each memory cell has a lifespan of a few thousand writes, depending on the memory technology.

Also, some vendors offer multiple levels of SSD, from entry level consumer grade to data center quality drives. Intel tends to sell higher end solutions that require extreme reliability and downplays the consumer end. Samsung has their low end consumer grade QVO, mid level EVO and prosumer Pro lines, which differ in cost, performance and reliability. I think the Pro are 2-3x the cost of the QVO line per GB.

If you want to learn about the methods of determining drive lifespan, look into the SMART predictive analysis feature, which was originally for spinning rust drives but carried over to SSDs. I like to think of SMART as similar to the Oil Life Monitor in many cars: some vendors do it better than others.
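
If it helps to picture wear leveling, here is a toy sketch of the idea (a real flash translation layer is vastly more complicated): new writes always go to the least-worn free block, so no cell is overstressed.

```python
import heapq

class ToyWearLeveler:
    """Toy model: a min-heap of (erase_count, block_id) picks the least-worn block."""

    def __init__(self, num_blocks):
        self.free_blocks = [(0, b) for b in range(num_blocks)]
        heapq.heapify(self.free_blocks)

    def write(self, data):
        erases, block = heapq.heappop(self.free_blocks)
        # ... program `data` into `block` here ...
        heapq.heappush(self.free_blocks, (erases + 1, block))
        return block

wl = ToyWearLeveler(num_blocks=8)
print([wl.write(b"x") for _ in range(16)])  # 16 writes spread evenly over 8 blocks
```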

There are a few things you can do to increase the SSD lifespan: turn off disk optimization at the OS level, make sure TRIM is turned on, use a good quality synthetic 5W-30 rather than the dino electrons that come as the factory fill. Depending on the vendor, they supply software that can migrate from the old drive to the new drive for Windows, and there are third party vendors that supply this software cheaply. Myself, I would not put too much confidence in the SMART numbers reported by Crystal products: Seagate specifically has disclaimers about third party reporting of SMART numbers to determine problems and lifespan, and they only support their own tools. I am not sure where I fall on that position: what they state has some merit, but it might be like saying "Only use Mercedes brand oil filters in your Mercedes", ignoring the fact that Merc does not make filters and there may be better filters out there.

I worked for a company that made telecom equipment, and we were one of the first to implement SSDs in the control system. However, somebody forgot about the wear-leveling filesystem features needed to spread out the write load, so we started getting a huge number of SSD failures after about a year in service. It cost us a huge amount of money to fix, not for the hardware but for the downtime, software changes, labor, etc. At that time SSDs were rare and did not have the built-in features that are standard today, so the engineers were responsible for making sure the software would not mistreat the drives. Oopsy, a $25 million mistake.

Good article on SMART for SSD by Crucial- https://www.crucial.com/articles/about-ssd/smart-and-ssds
Nice video on SMART- https://www.youtube.com/watch?v=YNGUP1t8MYA
 
Adding to what rubberchicken said, enterprise SSDs can be purchased with "write", "balanced" or "read" configurations; the drive is engineered specifically for the workload.

These are regular SSDs in a server, not in a SAN or NAS array; those usually have dedicated controllers to handle the above mentioned conditions.

At my job we have about 20 TB of SSDs in a RAID 50 collecting realtime event data from 15,000 PCs. Kind of a write intensive operation lol
 
For regular desktop use, you really don't need to monitor them unless you're trying to get extreme lifespan out of them. SSDs just keep dropping in price per capacity, and anything old enough to worry about has since been surpassed in performance too, so on a heavily used system with a lot of data I/O it makes more sense to simply replace the drive.

There are a couple things you should still do.

1) Leave a fair amount of free space on the SSD. The wear leveling can only use what it has. Say you have a mere 2 GB of free space (I know, extreme example). That will be written to 10X as often as if you had 20 GB of free space, so those cells will have 1/10th the lifespan; once they wear out you get data loss, and the SSD should map those cells away and use spares. The drive is not done working at end of life except for those worn-out cells, and the monitoring software cannot figure this out. (Rough arithmetic in the sketch after this list.)

2) Make a full partition backup. It helps you recover from many different faults, including SSD failure, OS failure, data corruption, malware infestation, etc. Remember that SSDs don't necessarily fail based on a prediction made by software. They can fail all at once, POOF they are gone, and it has nothing to do with exhausting the write cycle limit.
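
As promised above, here's the rough arithmetic behind point 1 (a simplification, since real drives also shuffle static data around to level wear; the endurance and churn numbers are hypothetical):

```python
CELL_ENDURANCE = 3000   # hypothetical P/E cycles per cell (TLC ballpark)
daily_writes_gb = 20    # hypothetical daily churn

for free_gb in (2, 20):
    cycles_per_day = daily_writes_gb / free_gb  # how often each free cell is rewritten
    years = CELL_ENDURANCE / cycles_per_day / 365
    print(f"{free_gb} GB free -> those cells last ~{years:.1f} years")
```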

I'd just not bother using that software at all for a desktop application, until you suspect you have a problem. Odds are the SSD will outlast the current system and OS installation, or you'll just want a capacity upgrade since data keeps growing.
 
Life depends on a few factors:

1) The NAND cell type (most use TLC these days) and the cell size and structure (most are 3D NAND now).
2) How many spare blocks the drive has. A rule of thumb in the industry is that every 7% of extra spare area adds about one extra drive write per day of durability.
3) How strong an ECC engine the controller has. You don't throw away a block because of a few bits of errors; you throw it away when it is almost unrecoverable.
4) The temperature you use it at.
5) Whether you power it off for a month or a year at a time, so the background data refresh cannot run. Power it on at least once a month at room temperature; powering it on less than once a year will give you guaranteed data loss.

Typical enterprise drives are rated in drive writes per day (DWPD); consumers usually don't write the whole drive each day, so the demand is much lower. Enterprise drives also serve multiple users, so they focus more on worst-case performance, whereas desktop/laptop drives focus on peak burst performance.
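
For reference, the two endurance ratings convert with the widely used formula TBW = DWPD x capacity x 365 x warranty years (the example drive below is hypothetical):

```python
def tbw_from_dwpd(dwpd, capacity_tb, warranty_years):
    """Convert a drive-writes-per-day rating to total terabytes written."""
    return dwpd * capacity_tb * 365 * warranty_years

# e.g. a hypothetical 1 DWPD, 1.92 TB enterprise drive with a 5-year warranty:
print(tbw_from_dwpd(1, 1.92, 5), "TBW")  # -> 3504.0 TBW
```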

As for how long they last: the incoming NAND screening, the controller's ECC engine strength, the firmware stability (big reputable company or small no-name one), etc. all matter. Once you've bought the drive and are using it normally, you don't really have much under your control, so buy from a company/brand/model with a good reputation on the internet. OCZ suddenly went downhill because they ran into problems with NAND screening and in the end weren't screening at all; as a result their drives eventually all died prematurely, they couldn't replace them all under warranty, and people stopped buying them.

SSDs can also die suddenly from other failures, so you still need to back up instead of expecting a gradual slow death. If you have backups, that 99% life remaining is fine; use it till it dies, as long as you back up frequently.
 
Originally Posted by simple_gifts
Adding to what rubberchicken said, enterprise SSDs can be purchased with "write", "balanced" or "read" configurations; the drive is engineered specifically for the workload.

These are regular SSDs in a server, not in a SAN or NAS array; those usually have dedicated controllers to handle the above mentioned conditions.

At my job we have about 20 TB of SSDs in a RAID 50 collecting realtime event data from 15,000 PCs. Kind of a write intensive operation lol


Interesting, I did not know the drives themselves can be balanced for read vs write bias. I assume this is done by changing the onboard cache; a similar feature is available in most RAID cards and SAN/NAS solutions, but it happens above the drive level.
My team just finished building a large system for a Fortune 100 company with 6000 TB of pure SSD across 4 closely coupled SAN arrays.
 
You got me curious about my own system. I have about 8 drives in my monster PC, 3 SSDs and the rest spinning rust. Some of them run in a RAID config that also uses a 512 GB SSD as a cache. I constantly upgrade SSDs and just pulled a Samsung 840 Pro 256 GB SSD out of service, although it's perfectly fine. I wanted to check its power on hours, which could be close to 50,000 hrs. But here is a Seagate 2 TB drive with 49,609 hours!!

[Linked Image]
 
^^^ In Crystal Disk Info you can go into the menu path: Function > Advanced Features > Raw Values, then you can choose "10 [DEC]" so you can see the decimal values of all the SMART data.

That's a ton of hours ... interesting.
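
If you'd rather pull the raw decimal values programmatically instead of through the menus, here's a minimal sketch using smartctl from smartmontools (assumes it's installed; /dev/sda is a hypothetical device path, and the parsing is only a rough cut of the ATA attribute table):

```python
import re
import subprocess

def read_smart_attributes(device="/dev/sda"):
    """Return {attribute_name: raw_value} parsed from `smartctl -A` (may need root)."""
    out = subprocess.run(["smartctl", "-A", device],
                         capture_output=True, text=True, check=True).stdout
    attrs = {}
    for line in out.splitlines():
        # ATA attribute rows start with a numeric ID and end with the raw value.
        m = re.match(r"\s*(\d+)\s+(\S+)\s+.*\s(\S+)$", line)
        if m:
            attrs[m.group(2)] = m.group(3)
    return attrs

for name, raw in read_smart_attributes().items():
    print(f"{name}: {raw}")
```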
 
As others have suggested, for most consumer use, as long as the drive has ample free space, it will last a long time.

Just looked this AM at a CNET article https://www.cnet.com/how-to/find-how-how-much-longer-your-ssd-will-last/

It suggested that 100 GB of writes per day would give you about 27 years of life.
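
Working the article's numbers backwards (my arithmetic, not CNET's), 27 years at 100 GB/day implies a drive rated for roughly 1 PB written, so the article is assuming a fairly high-endurance model:

```python
daily_writes_gb = 100
years = 27
implied_tbw = daily_writes_gb * 365 * years / 1000  # GB -> TB
print(f"implied endurance rating: ~{implied_tbw:.0f} TBW")  # -> ~986 TBW
```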

I took a gander at mine:

----------------------------------------------------------------------------
(3) Samsung SSD 850 PRO 512GB
----------------------------------------------------------------------------
Model : Samsung SSD 850 PRO 512GB
Firmware : EXM04B6Q
Serial Number : ***************
Disk Size : 512.1 GB (8.4/137.4/512.1/512.1)
Buffer Size : Unknown
Queue Depth : 32
# of Sectors : 1000215216
Rotation Rate : ---- (SSD)
Interface : Serial ATA
Major Version : ACS-2
Minor Version : ATA8-ACS version 4c
Transfer Mode : SATA/600 | SATA/600
Power On Hours : 36875 hours
Power On Count : 9843 count
Host Writes : 107517 GB
Wear Level Count : 369
Temperature : 37 C (98 F)
Health Status : Good (100 %)
Features : S.M.A.R.T., 48bit LBA, NCQ, TRIM, DevSleep
APM Level : ----
AAM Level : ----
Drive Letter : C:


----------------------------------------------------------------------------

It's on its second computer: it was in my previous machine for about 18 months, and when I replaced that machine in 2017 I moved this drive over as the boot drive and re-installed the OS on it, since I was trying to use it a bit longer. I believe Samsung designed these for 300 TB written, and in 4.2 years of use I'm at what, 107 TB, so probably 1/3rd of the way through the drive.
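
Projecting from the SMART dump above, and assuming the 300 TBW figure is right (a sketch only; as noted earlier in the thread, drives can also die suddenly for unrelated reasons):

```python
RATED_TBW = 300.0                    # assumed rating for the 850 Pro 512GB
host_writes_tb = 107.517             # Host Writes : 107517 GB
power_on_years = 36875 / (24 * 365)  # Power On Hours : 36875 -> ~4.2 years

rate_tb_per_year = host_writes_tb / power_on_years
remaining_years = (RATED_TBW - host_writes_tb) / rate_tb_per_year
print(f"~{rate_tb_per_year:.0f} TB/yr, ~{remaining_years:.1f} powered-on years left")
```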

This drive probably won't make its way into a 3rd machine at this rate.

It's my boot drive. I still have the original disk from the machine stashed away, as well as monthly OS backups for recovery.

Data goes on one of the two 4TB spinning drives.
 
And I just looked at the Samsung 860 Pro: it has double the rated write life of the 850 I currently have, at 600 TBW for the 512 GB drive.

Technology has advanced in the past 4+ years.
 
I'm running an SSD built into my 2010 MacBook Air. It gets used regularly. No issues that I'm aware of (knocking on wood).
 
Just found this in a search. I would just add that endurance isn't really a matter of the number of writes; it's really about the number of erases. I remember back in the days of single-cell EPROMs and EEPROMs, all cells erased to a 1, but an erase wore the "floating gate" regardless of whether the cell had previously been written to. You could have part of a block that was never written, but if that block is erased, every cell in that block still receives wear.
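
A toy way to picture that, with made-up page counts (erases happen per block, writes per page, and wear follows the erases):

```python
class ToyBlock:
    """One flash block: wear is counted per erase, not per write."""
    PAGES = 64

    def __init__(self):
        self.erase_count = 0                 # the wear that matters
        self.written = [False] * self.PAGES

    def write_page(self, i):
        self.written[i] = True               # programming a single page

    def erase(self):
        self.erase_count += 1                # every cell wears, written or not
        self.written = [False] * self.PAGES

blk = ToyBlock()
blk.write_page(0)       # only one page was ever written...
blk.erase()
print(blk.erase_count)  # ...but all 64 pages just took an erase cycle: prints 1
```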

And you guys really don't want to know how the sausage is made regarding NAND flash. It's remarkable that it works at all.
 
I can only tell you from experience that name-brand SSDs pretty much outlast the upgrade cycle of the computer.

Platter-based hard disks also last a very long time when they are run 24/7/365, but over time they wear out: bearings and head-to-disk clearance issues.
 
If you install an SSD in a Mac,

Go to “Terminal”, and type in “sudo trimforce enable”.

The advantage of the TRIM command is that it enables the SSD’s GC (garbage collection) to skip the invalid data rather than moving it, saving the time of rewriting data that is no longer valid. This reduces the number of erase cycles on the flash memory and enables higher performance during writes. The SSD doesn’t need to immediately delete or garbage collect these locations; it just marks them as no longer valid. This helps ensure that all storage cells age uniformly and maximum lifetime is achieved.
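
A toy model of the GC benefit described above (real garbage collection is far more involved; the page counts are made up):

```python
def gc_copies(pages_in_block, pages_deleted_by_os, trim_enabled):
    """Pages the GC must rewrite when reclaiming one flash block."""
    if trim_enabled:
        # TRIM told the SSD these pages are invalid, so GC skips them.
        return pages_in_block - pages_deleted_by_os
    # Without TRIM the SSD still treats OS-deleted pages as valid data.
    return pages_in_block

print(gc_copies(128, 100, trim_enabled=False))  # 128 pages rewritten
print(gc_copies(128, 100, trim_enabled=True))   # only 28 pages rewritten
```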
 
I can only tell you from experience that name-brand SSDs pretty much outlast the upgrade cycle of the computer.

Platter-based hard disks also last a very long time when they are run 24/7/365, but over time they wear out: bearings and head-to-disk clearance issues.

I’ve had a hard drive that was still working, but the bearings were clearly grinding. It was in a Mac notebook where replacement is just insane because the drive is buried deep. Newer ones (at least back when they still used hard drives) are easy.

I’ve also had a hard drive fail randomly. Originally I thought it was just corrupted, and I kept it around in hopes that I could later recover a bit of the data I hadn’t kept in a backup. Later I figured I’d never get it back and reformatted. The reformat would complete, but eventually I kept getting the click of death.

SSDs should last ridiculously long. I’m looking at my WD Blue 1 TB SSD and thinking that at my current usage rate it would probably outlast me. I’ve had it for 3.5 years and reporting tools say it’s still at 100% health. But the biggest problem isn’t going to be wear; it’s going to be some kind of corruption. If the wear-leveling tables or the firmware bork, that’s pretty much impossible to recover from.
 