In modern data centers, large-scale storage systems for big data comprise thousands of large-capacity drives. Our goal is to establish a method for building highly reliable storage systems from more than a thousand low-cost, large-capacity drives. Many large-scale storage systems protect data with erasure coding to prevent data loss. Raising the redundancy level of the erasure code lowers the probability of data loss, but it also increases the write overhead during normal operation and the additional storage consumed by coding. We therefore need to achieve high reliability at the lowest possible redundancy level. There are two reliability concerns in large-scale storage systems: (i) as the number of drives increases, systems become more susceptible to multiple drive failures, and (ii) distributing stripes across many drives can shorten rebuild time but increases the risk of data loss under multiple drive failures. These concerns were not addressed in prior quantitative reliability studies based on realistic settings.
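To make the redundancy trade-off concrete, the following sketch (not from the paper; the stripe width k, redundancy level m, and failure probability p are illustrative assumptions) estimates the data loss probability of a single stripe under a simple independent-failure binomial model: raising m sharply lowers the loss probability while the capacity overhead grows only linearly.

```python
# Hypothetical sketch of the redundancy trade-off. Assumes independent
# drive failures with probability p during a rebuild window; a stripe of
# k data blocks plus m coding blocks loses data only when more than m of
# its k + m drives fail.

from math import comb

def stripe_loss_probability(k: int, m: int, p: float) -> float:
    """Probability that more than m of the k + m drives holding a stripe
    fail (binomial model), i.e. that the stripe loses data."""
    n = k + m
    return sum(comb(n, i) * p**i * (1 - p)**(n - i)
               for i in range(m + 1, n + 1))

if __name__ == "__main__":
    p = 0.01  # assumed per-drive failure probability within the window
    for m in (1, 2, 3):  # single, double, and triple redundancy
        loss = stripe_loss_probability(k=8, m=m, p=p)
        print(f"m={m}: loss probability {loss:.2e}, "
              f"capacity overhead {m / 8:.0%}")
```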
In this work, we analyze the reliability of large-scale storage systems with distributed stripes, focusing on an effective rebuild method that we call Dynamic Refuging. Dynamic Refuging rebuilds failed storage areas starting from those with the lowest remaining redundancy and strategically selects the blocks to read when repairing lost data. We modeled how the amount of storage at each redundancy level changes dynamically under multiple drive failures, and performed a reliability analysis by Monte Carlo simulation using realistic drive failure characteristics. When stripes with redundancy level 3 were sufficiently distributed and rebuilt by Dynamic Refuging, the probability of data loss decreased by two orders of magnitude for systems with 384 or more drives compared with conventional RAID. The method scales well: a system with 1536 inexpensive drives attained a lower probability of data loss than RAID 6 with 16 enterprise-class drives.
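The following is a minimal sketch of the rebuild-ordering idea behind Dynamic Refuging; the Stripe structure, the select_read_sources helper, and all parameters are our own illustrative assumptions, not the paper's implementation. Stripes that have lost the most redundancy are repaired first, so the system spends the least time in its most vulnerable state, and rebuild reads are spread over the least-loaded surviving drives, one plausible reading of "strategically selects blocks to read".

```python
# Illustrative sketch: lowest-remaining-redundancy-first rebuild with
# load-aware selection of read sources. Data model and parameters are
# assumptions made for this example.

import heapq
import random
from dataclasses import dataclass, field

@dataclass(order=True)
class Stripe:
    remaining_redundancy: int  # intact coding blocks left; sort key
    stripe_id: int = field(compare=False, default=0)
    surviving_drives: list = field(compare=False, default_factory=list)

def select_read_sources(stripe: Stripe, drive_load: dict, k: int) -> list:
    """Read from the k least-loaded surviving drives, spreading rebuild
    traffic instead of hammering a few drives."""
    sources = sorted(stripe.surviving_drives, key=lambda d: drive_load[d])[:k]
    for d in sources:
        drive_load[d] += 1
    return sources

def dynamic_refuging(stripes: list, drive_load: dict, k: int) -> None:
    """Rebuild stripes in order of lowest remaining redundancy first."""
    heap = list(stripes)
    heapq.heapify(heap)
    while heap:
        stripe = heapq.heappop(heap)
        sources = select_read_sources(stripe, drive_load, k)
        print(f"rebuild stripe {stripe.stripe_id} "
              f"(redundancy {stripe.remaining_redundancy}), "
              f"reading from drives {sources}")

if __name__ == "__main__":
    drives = list(range(12))
    load = {d: 0 for d in drives}
    stripes = [Stripe(remaining_redundancy=random.randint(0, 3),
                      stripe_id=i,
                      surviving_drives=random.sample(drives, 10))
               for i in range(5)]
    dynamic_refuging(stripes, load, k=8)
```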
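For the reliability analysis itself, the sketch below shows the general shape of a Monte Carlo estimate of data loss probability. It is illustrative only: it draws exponential failure times and uses a fixed rebuild window, whereas the paper models realistic drive failure characteristics and the dynamically changing amount of storage at each redundancy level; all parameter values are assumptions.

```python
# Toy Monte Carlo estimate of data loss probability. Assumptions: each
# drive fails at most once, failure times are exponential with the given
# MTTF, and data is lost when more than m failures overlap within one
# rebuild window. Parameters are illustrative, not the paper's.

import random

def simulate_data_loss(n_drives: int, m: int, mttf_h: float,
                       rebuild_h: float, mission_h: float,
                       trials: int = 10_000) -> float:
    """Fraction of trials in which m + 1 drive failures cluster inside
    one rebuild window during the mission time."""
    losses = 0
    for _ in range(trials):
        failures = sorted(random.expovariate(1 / mttf_h)
                          for _ in range(n_drives))
        failures = [t for t in failures if t < mission_h]
        for i in range(len(failures) - m):
            if failures[i + m] - failures[i] < rebuild_h:
                losses += 1
                break
    return losses / trials

if __name__ == "__main__":
    p = simulate_data_loss(n_drives=384, m=3, mttf_h=600_000,
                           rebuild_h=24, mission_h=5 * 8760)
    print(f"estimated data loss probability: {p:.4f}")
```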