Originally Posted By: nleksan
True, it does decrease load times, but the amount of time shaved is highly variable. I have seen anywhere from as much as a 45 percent decrease to as little as zero.
Yes, that variability comes down to the size of the data being loaded and how it is accessed. If the game can pull the whole map into RAM, it will max out the read speed, so the performance increase will be very noticeable in that case; swapping at that point would of course be counterproductive. That said, gaming rigs tend to have plenty of RAM, so as I mentioned, with things like games pulling large chunks of data off the drives, a stripe set will usually yield a very noticeable decrease in load times.
Quote:
Also, RAID0 is fine if it's for unimportant data, but I always recommend a consistent and significant backup regimen be in place.
With the obsolescence of RAID5 and soon 6, I have found nested RAID (10) to be the "ultimate" in terms of performance and acceptable data loss prevention.
Data being unimportant or backed up regularly doesn't change the inconvenient aspect of it, though! A failed RAID 0 array is a PITA whether what's on it is important or not.
I'm with you on RAID 10, as I mentioned, that's what I use on my own systems.
I'm curious as to why you think RAID 5 and RAID 6 are being obsoleted? I still see them used regularly on Enterprise hardware with a couple hot spares. I know many in *NIX circles push software redundancy over hardware, but I don't see as much of that leveraged as the hype would have one believe.
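On the software side, for anyone curious what that looks like on Linux, mdadm covers both of the layouts we're talking about. Device names below are just placeholders; adjust to your own drives:

```shell
# 4-drive RAID 10 (what I run) -- /dev/sdb..sde are example devices
mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sd[b-e]

# 4-drive RAID 6 with one hot spare, the sort of layout you still see
# on enterprise boxes -- /dev/sdf..sdj are example devices
mdadm --create /dev/md1 --level=6 --raid-devices=4 \
      --spare-devices=1 /dev/sd[f-j]

# Check array state and resync/rebuild progress
cat /proc/mdstat
```

Obviously needs root and real (empty!) disks, so treat it as a sketch, not something to paste blindly.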
Quote:
Just remember, if the data doesn't exist in more than two places, it doesn't exist.
Yup.
Quote:
Also, when you set up your array, DO NOT use a 3rd party on board drive controller, use the Intel SATA2 ports! Otherwise, transferring motherboards is guaranteed to break your array.
Oh, you mean like the old HighPoint controllers, or the JMicron controllers, or the *insert name of junky 3rd party on-board chipset manufacturer here*.... Yeah, arrays created on those are usually not only guaranteed not to be transferable to any future hardware, but the controllers themselves often have [censored] flaky driver support, no firmware updates from the manufacturers (who only produce updates for the controller when it's sold as a stand-alone card), and poor support from the bloody OEM who manufactured the board, to boot! They're also usually much slower than the Intel ports.
So yeah, I agree with that.
Quote:
I also would not run anything more than a simple 2-drive array off of the on board Intel controller.
I've had no issues with 4 and 6 drive arrays (always relatively simple arrays like RAID 1, RAID 0 or RAID 10) on MANY boards with the Intel (software) RAID controller. Why do you say this?
Quote:
RAID cards exist for a reason, and while I was hesitant to spend the money for a while, now I realize just how powerful of a tool they are, and have 6 running in my house (main PC, 2nd PC, media center PC, home server, NAS).
Of course, they've been the primary basis of RAID arrays for as long as the technology has existed. I cut my teeth on Adaptec SCSI controllers "back in the day," when you were setting the drive IDs with jumpers. Had a number of Compaq SmartRAID controllers, etc. Used to love those 4ft long SCSI cables, LOL!!!
SAS of course is the latest incarnation, but most people are just using SATA due to the price of SAS hardware. Even in lower-tier enterprise and SMB operations, you'll sometimes see a mix of SATA and SAS.
Quote:
Be sure to disable all drive power saving modes, as keeping the drives spinning constantly when the PC is on not only dramatically increases life expectancy, it also drastically reduces the chance of a drive timing out and breaking the array (trashing the data).
Yes. And of course when you are using a hardware-based controller card, power management for that stuff is controlled by the card, not the OS (thankfully), which often means that drive power-downs and the like are disabled by default and may not even be an option.
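For drives hanging off the Intel ports (i.e. managed by the OS rather than a RAID card), you can kill the power-saving behavior yourself on Linux with hdparm. The device name is a placeholder, and not every drive honors every flag:

```shell
# Disable Advanced Power Management on the drive
# (255 = APM off, where the drive supports it)
hdparm -B 255 /dev/sdX

# Disable the standby (spin-down) timer entirely
hdparm -S 0 /dev/sdX
```

Worth re-running after reboots or putting in a startup script, since some drives reset these settings on power cycle.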