Garth A. Gibson, L. Hellerstein, R. Karp, R. Katz, D. Patterson
{"title":"大型磁盘阵列的故障校正技术","authors":"Garth A. Gibson, L. Hellerstein, R. Karp, R. Katz, D. Patterson","doi":"10.1145/70082.68194","DOIUrl":null,"url":null,"abstract":"The ever increasing need for I/O bandwidth will be met with ever larger arrays of disks. These arrays require redundancy to protect against data loss. This paper examines alternative choices for encodings, or codes, that reliably store information in disk arrays. Codes are selected to maximize mean time to data loss or minimize disks containing redundant data, but are all constrained to minimize performance penalties associated with updating information or recovering from catastrophic disk failures. We also codes that give highly reliable data storage with low redundant data overhead for arrays of 1000 information disks.","PeriodicalId":359206,"journal":{"name":"ASPLOS III","volume":"32 6 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1989-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"76","resultStr":"{\"title\":\"Failure correction techniques for large disk arrays\",\"authors\":\"Garth A. Gibson, L. Hellerstein, R. Karp, R. Katz, D. Patterson\",\"doi\":\"10.1145/70082.68194\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The ever increasing need for I/O bandwidth will be met with ever larger arrays of disks. These arrays require redundancy to protect against data loss. This paper examines alternative choices for encodings, or codes, that reliably store information in disk arrays. Codes are selected to maximize mean time to data loss or minimize disks containing redundant data, but are all constrained to minimize performance penalties associated with updating information or recovering from catastrophic disk failures. We also codes that give highly reliable data storage with low redundant data overhead for arrays of 1000 information disks.\",\"PeriodicalId\":359206,\"journal\":{\"name\":\"ASPLOS III\",\"volume\":\"32 6 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"1989-04-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"76\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"ASPLOS III\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/70082.68194\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"ASPLOS III","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/70082.68194","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Failure correction techniques for large disk arrays
The ever-increasing need for I/O bandwidth will be met with ever-larger arrays of disks. These arrays require redundancy to protect against data loss. This paper examines alternative choices for encodings, or codes, that reliably store information in disk arrays. Codes are selected to maximize mean time to data loss or to minimize the number of disks containing redundant data, but all are constrained to minimize the performance penalties associated with updating information or recovering from catastrophic disk failures. We also present codes that give highly reliable data storage with low redundant-data overhead for arrays of 1000 information disks.
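As a point of reference for the kind of redundancy codes the abstract describes, the following is a minimal sketch (not taken from the paper) of single-erasure correction via XOR parity, the simplest such code: one redundant parity disk protects N information disks against any single disk failure. The block sizes and disk contents are hypothetical.

```python
# Minimal sketch (assumption, not the paper's construction): XOR parity,
# the simplest single-erasure-correcting code for a disk array.

def compute_parity(blocks):
    """XOR all equal-length data blocks together to form the parity block."""
    parity = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            parity[i] ^= byte
    return bytes(parity)

def reconstruct(surviving_blocks, parity):
    """Recover a single lost data block from the survivors plus the parity block."""
    lost = bytearray(parity)
    for block in surviving_blocks:
        for i, byte in enumerate(block):
            lost[i] ^= byte
    return bytes(lost)

if __name__ == "__main__":
    data = [b"disk0...", b"disk1...", b"disk2..."]  # hypothetical 3-disk stripe
    p = compute_parity(data)
    # Simulate losing disk 1 and rebuilding it from the other disks plus parity.
    rebuilt = reconstruct([data[0], data[2]], p)
    assert rebuilt == data[1]
```

A single parity disk tolerates only one failure per group; the paper's comparison concerns codes that trade additional redundant disks for higher reliability in very large arrays.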