Steady state theory and hard drives

About 15 years ago I contacted two HD manufacturers and asked which would make a drive last longer:

1. turning your computer on and off daily
2. turning your computer on and leaving it on until the HD fails.

They both said #2 would be better: the HD spindle lasts longer running at steady state, and the drive suffers less mechanical wear from fewer cold starts.

It has now been 14 years since we put our Dell servers into service at work. When we deployed them we bought an additional set of identical hard drives, which we swapped in at the 12-year mark, using Norton Ghost to transfer all of the data and related programs. Both servers run like they did on day one and will be retired in 2017 when we move the office.

No components have failed. We reboot the Windows 2000 Server OS every three months and blow out the case once a year with compressed air.

Note: Both have commercial-grade APC battery backup, surge protection, and power monitoring.

Does anyone else do this?
 
My tower stays on all the time. My laptops which aren't used as often I turn off. The only hard drives I've lost were new DOA or died shortly after installation, approximately 2 weeks. I had one External HDD die.
 
I doubt many businesses keep computers for 14 years. In fact, in our office we do have a 12-year-old server that has become a footstool and a loud room heater.

As for HDDs, I doubt leaving them on vs. turning them on and off really matters much to their life. Realistically, computing is moving to SSDs, and the HDDs that fail were going to fail anyway; the density is now so high that radiation from the sun and cosmic rays can affect them.
 
My understanding is that consumer drives will tolerate more power cycles if it means a lot less uptime, whereas server drives prefer the kind of usage you describe.
 
With modern production, always-on operation is really only possible with enterprise-class or performance HDDs, such as the RAID-firmware and/or 10K RPM types. Desktop and laptop drives will self-park and spin down when not accessed in order to meet their Green/ECO targets. The only way to defeat that is to access them often, typically at intervals no longer than about 29.9 seconds.
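If you really want to keep one of those Green/ECO drives spun up, a dumb keep-alive loop is enough. Here's a rough sketch in Python; the mount path and the 25-second interval are just examples I made up, not anything published by the drive makers:

Code:
import os
import time

# Hypothetical path on the drive you want to keep spinning.
KEEPALIVE_FILE = "/mnt/green-drive/.keepalive"
INTERVAL_SECONDS = 25  # comfortably under the ~30 s idle timer mentioned above

while True:
    # Write a timestamp and force it out to the disk, so the drive sees
    # real I/O instead of the request being satisfied from cache.
    with open(KEEPALIVE_FILE, "w") as f:
        f.write(str(time.time()))
        f.flush()
        os.fsync(f.fileno())
    time.sleep(INTERVAL_SECONDS)

Of course, keeping the platters spinning around the clock is exactly the trade-off the rest of this thread is arguing about.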
 
Someone who claimed insider knowledge told me that modern HDs in consumer PCs last, on average, at least 300,000 on/off cycles. I was also told that newer HDs "ramp up" the head more gently, greatly reducing wear and tear. Heat cycling may still be an issue. I used to be in the camp favoring equilibrium, with my PC running all the time. For the past few years, I have generally put the HD to sleep overnight and any time I know I won't be using the computer for more than a few hours. I have never had a hard drive failure in a PC, only in external drives. On the PC, I have had motherboards and power supplies go bad.

hotwheels
 
Originally Posted By: PhillipM
Running on SSD's, don't care any more

Too bad SSDs are still rather costly. How much is a 2TB SSD now? $4,000?

hotwheels
 
I left my last build on continuously for eleven years. I replaced the hard drive at seven years as a preventive measure. The component that ultimately failed was the power supply; not a difficult fix. I built a new system in 2013 using the same case and power supply. I went with an SSD and, because Windows 7 starts so quickly, I now turn it off when I'm not using it to save electricity.

But to address the OP: I asked a lot of different IT people for their opinions; most were split, and some said it made little difference either way.
 
Turn them on and leave them on. I've been repairing computers and print servers since Novell server ran on IBM DOS. The drives that get replaced most often are the ones without UPS protection that suffer abrupt power loss (the customer should supply a UPS, but the company I work for does not require one). Next are those that are power-cycled regularly. Least frequent are those that are left alone.
 
I work for a major amusement park. We have several 15-year-old PCs that have never been turned off, clunking along on Windows 95.
 
We have many test stands at work that have been running nearly continuously for 20+ years, some still on Windows 3.1. We get nervous when the power does get interrupted.
 
Originally Posted By: Doog
About 15 years ago I contacted two HD manufacturers and asked which would make a drive last longer:

1. turning your computer on and off daily
2. turning your computer on and leaving it on until the HD fails.

They both said #2 would be better: the HD spindle lasts longer running at steady state, and the drive suffers less mechanical wear from fewer cold starts.

It has now been 14 years since we put our Dell servers into service at work. When we deployed them we bought an additional set of identical hard drives, which we swapped in at the 12-year mark, using Norton Ghost to transfer all of the data and related programs. Both servers run like they did on day one and will be retired in 2017 when we move the office.

No components have failed. We reboot the Windows 2000 Server OS every three months and blow out the case once a year with compressed air.

Note: Both have commercial-grade APC battery backup, surge protection, and power monitoring.

Does anyone else do this?


why in the world would you need to ghost hard drives in a server?
 
Originally Posted By: razel
As for HDDs, I doubt leaving them on vs. turning them on and off really matters much to their life.


Almost all our server hard drive failures happen after they're power-cycled. Otherwise they run 24/7 for years without any problems; they're usually replaced because we need more space, or because we replace the server, not because they fail from old age.

At home, I've got over 40,000 hours (roughly 4.5 years of continuous running) on some of the disks in my 24/7 server, and they're just cheap Western Digital Greenies.
 
Originally Posted By: PhillipM
Running on SSD's, don't care any more

Note that a power cycle is the most likely time for many SSDs to fail, due to some manufacturers cheaping out and not including capacitors so the SSD can write data from RAM back to Flash when the power is cut.
 
Originally Posted By: emg
Note that a power cycle is the most likely time for many SSDs to fail, due to some manufacturers cheaping out and not including capacitors so the SSD can write data from RAM back to Flash when the power is cut.

Won't that just cause data corruption? Doesn't sound like something that'd lead to drive failure.
 
Originally Posted By: d00df00d
Won't that just cause data corruption? Doesn't sound like something that'd lead to drive failure.


Same thing, from the user's viewpoint. When the drive loses all your data because it didn't write the block mapping tables back to Flash, you don't much care whether it still works afterwards.

Intel, for example, had a bug a few years ago where the drive would sometimes claim to be 8MB when you rebooted after a power failure, and you could only recover it by doing a complete wipe. Though I believe that was a firmware bug rather than lack of capacitors.
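If it helps to picture it: the flash cells keep their contents just fine, but the table that says which logical block lives in which flash page sits in RAM until the drive writes it back. A toy sketch in Python of that failure mode, purely illustrative and nothing like a real controller:

Code:
# Toy model of an SSD's logical-to-physical mapping. Illustrative only.
class ToySSD:
    def __init__(self):
        self.flash = {}      # physical page -> data; survives power loss
        self.mapping = {}    # logical block -> physical page; held in RAM
        self.next_page = 0

    def write(self, logical_block, data):
        self.flash[self.next_page] = data
        self.mapping[logical_block] = self.next_page  # updated in RAM only
        self.next_page += 1

    def read(self, logical_block):
        page = self.mapping.get(logical_block)
        return self.flash.get(page)

    def power_cut(self, mapping_flushed_to_flash):
        # With capacitors (or a journaled mapping) the table survives;
        # without them it's gone, even though the flash contents are intact.
        if not mapping_flushed_to_flash:
            self.mapping = {}

ssd = ToySSD()
ssd.write(0, b"your data")
ssd.power_cut(mapping_flushed_to_flash=False)
print(ssd.read(0))  # None -- the bytes are still in flash, just unreachable

Either way the user sees the same thing: the drive may still "work", but the data is gone.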
 
Originally Posted By: Subdued
why in the world would you need to ghost hard drives in a server?


Why on earth would you, on HDD failure:
1. Install an O/S
2. Install the backup software
3. Restore the backup catalog
4. Restore most recent full backup
5. Restore incremental backup

When instead you can just:
1. Image the new HDD
2. Restore incremental backup
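At the block level an image is nothing fancy. Here's a minimal sketch of the idea in Python; the device and image paths are made up, and on a real server you'd want the volume offline or quiesced (and root) before doing this:

Code:
import shutil

# Hypothetical paths: the source disk and the image file it's copied into.
SOURCE_DEVICE = "/dev/sdb"
IMAGE_FILE = "/backup/server-disk.img"

CHUNK = 4 * 1024 * 1024  # copy in 4 MiB chunks

# Block-for-block copy of the whole device -- same idea as Ghost or dd.
with open(SOURCE_DEVICE, "rb") as src, open(IMAGE_FILE, "wb") as dst:
    shutil.copyfileobj(src, dst, CHUNK)

Restoring is the same copy in reverse, which is why it beats reinstalling the OS and backup catalog from scratch.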
 
My desktop with its 2 SSDs and 3 HDDs, my Windows 2012 server with its SSD and 5 HDDs, and my pfSense box with one 2.5" HDD all run 24/7, and I leave them on for the same reason.
 
Originally Posted By: HangFire
Originally Posted By: Subdued
why in the world would you need to ghost hard drives in a server?


Why on earth would you, on HDD failure:
1. Install an O/S
2. Install the backup software
3. Restore the backup catalog
4. Restore most recent full backup
5. Restore incremental backup

When instead you can just:
1. Image the new HDD
2. Restore incremental backup


We're not talking about a desktop "server" here.

Even 14 years ago, Dell's PERC was a pretty darn good RAID controller.

Replace one drive, wait for the rebuild, replace another drive, and so on, and there would be literally no downtime.

So I'm a little confused why someone would take an image on a server just to proactively swap out some identical drives.

Granted I am assuming RAID, but IMO if you're not running some kind of RAID1/5/6 you don't really have a reliable server...
 