Computer Operating System - Chapter 11: Mass-Storage Systems
■ Overview of Mass Storage Structure
■ HDD Scheduling
■ NVM Scheduling
■ Error Detection and Correction
■ Storage Device Management
■ Swap-Space Management
■ Storage Attachment
■ RAID Structure
- Overview of Mass Storage Structure
■ Bulk of secondary storage for modern computers is hard disk drives (HDDs) and nonvolatile memory (NVM) devices
■ HDDs spin platters of magnetically-coated material under moving read-write heads
● Disks can be removable
● Head crash results from the disk head making contact with the disk surface
✦ That's bad
■ Performance
● Drive rotation is 60 to 250 times per second
● Transfer rate is the rate at which data flow between drive and computer
● Positioning time (random-access time) is the time to move the disk arm to the desired cylinder (seek time) plus the time for the desired sector to rotate under the disk head (rotational latency)
- Example of Hard Disk Drives
■ Platters range from .85” to 14” (historically)
● Commonly 3.5”, 2.5”, and 1.8”
■ Range from 30GB to 3TB per drive
■ Performance
● Transfer rate – theoretical – 6 Gb/sec
✦ Effective transfer rate – real – 1 Gb/sec
● Seek time from 3ms to 12ms (e.g., 9ms is common for desktop drives)
✦ Average seek time is measured or calculated based on 1/3 of tracks
● Latency based on spindle speed
✦ Full rotation time = 1 / (RPM / 60) = 60 / RPM seconds
✦ Average latency = ½ of full rotation time
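To sanity-check the latency arithmetic above, a minimal sketch; the 7200 RPM spindle speed is just an illustrative value, not from the slide:

```c
#include <stdio.h>

int main(void) {
    double rpm = 7200.0;                             /* example spindle speed */
    double full_rotation_ms = 60000.0 / rpm;         /* 60/RPM seconds, in ms */
    double avg_latency_ms   = full_rotation_ms / 2.0;/* average = half a turn */
    printf("%.0f RPM: full rotation %.2f ms, average latency %.2f ms\n",
           rpm, full_rotation_ms, avg_latency_ms);
    return 0;   /* prints: 7200 RPM: full rotation 8.33 ms, average latency 4.17 ms */
}
```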
- The First Commercial Disk Drive
■ 1956
■ IBM RAMAC computer included the IBM Model 350 disk storage system
■ 5M (7-bit) characters
■ 50 x 24” platters
■ Access time < 1 second
- Nonvolatile Memory Devices (Cont.)
■ Have characteristics that present challenges
■ Read and written in “page” increments (think sector) but can’t overwrite in place
● Must first be erased, and erases happen in larger “block” increments
● Can only be erased a limited number of times before worn out – ~100,000
● Life span measured in drive writes per day (DWPD)
✦ E.g., a 1TB NAND drive with a rating of 5 DWPD is expected to endure 5TB of writes per day for the warranty period without failing
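To make the DWPD rating concrete, a small sketch working through the 1TB / 5 DWPD example; the 5-year warranty length is an assumption for illustration only:

```c
#include <stdio.h>

int main(void) {
    double capacity_tb = 1.0;   /* drive capacity from the slide's example */
    double dwpd        = 5.0;   /* drive-writes-per-day rating             */
    double years       = 5.0;   /* assumed warranty length (illustrative)  */

    double per_day_tb = capacity_tb * dwpd;          /* 5 TB written/day   */
    double total_tb   = per_day_tb * 365.0 * years;  /* lifetime endurance */
    printf("%.1f TB/day, %.0f TB total over the warranty period\n",
           per_day_tb, total_tb);                    /* 5.0 TB/day, 9125 TB */
    return 0;
}
```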
- Volatile Memory
■ DRAM frequently used as a mass-storage device
● Not technically secondary storage because volatile, but can have file systems, be used like very fast secondary storage
■ RAM drives (with many names, including RAM disks) present as raw block devices, commonly file-system formatted
● Found in all major operating systems
✦ Linux /dev/ram, macOS diskutil to create them, Solaris /tmp of file system type tmpfs
● Computers have buffering, caching via RAM, so why RAM drives?
✦ Caches / buffers allocated / managed by programmer, operating system, hardware
✦ RAM drives under user control
■ Used as high-speed temporary storage
● Programs could share bulk data, quickly, by reading/writing to a RAM drive
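As a sketch of how a RAM-backed file system might be created programmatically on Linux, using the standard mount(2) system call; the mount point and size cap are illustrative choices, and root privileges are required:

```c
#include <stdio.h>
#include <sys/mount.h>
#include <sys/stat.h>

int main(void) {
    /* Create the mount point; ignore failure if it already exists */
    mkdir("/mnt/ramdisk", 0755);

    /* tmpfs lives in the page cache (and swap); "size=64m" caps its size */
    if (mount("tmpfs", "/mnt/ramdisk", "tmpfs", 0, "size=64m") != 0) {
        perror("mount");
        return 1;
    }
    printf("RAM-backed file system mounted at /mnt/ramdisk\n");
    return 0;
}
```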
- Disk Attachment
■ Host-attached storage accessed through I/O ports talking to I/O busses. Several busses available, including advanced technology attachment (ATA), serial ATA (SATA), eSATA, serial attached SCSI (SAS), universal serial bus (USB), and fibre channel (FC)
● Because NVM much faster than HDD, new fast interface for NVM called NVM express (NVMe), connecting directly to PCI bus
■ Data transfers on a bus carried out by special electronic processors called controllers (or host-bus adapters, HBAs)
● Host controller on the computer end of the bus, device controller on device end
● Computer places command on host controller, using memory-mapped I/O ports
● Host controller sends messages to device controller
● Data transferred via DMA between device and computer DRAM
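A schematic sketch of that command flow, posting a read to a host controller through memory-mapped registers. The MMIO base address, register layout, and command code are entirely hypothetical (no real HBA looks like this), and actually running it would require a real device plus a privileged mapping of its register window:

```c
#include <stdint.h>

#define HBA_BASE 0xFED00000u   /* hypothetical MMIO base address */

enum { REG_CMD = 0, REG_LBA = 1, REG_DMA_ADDR = 2, REG_STATUS = 3 };
enum { CMD_READ = 0x1, STATUS_BUSY = 0x1 };

static volatile uint32_t *const hba =
    (volatile uint32_t *)(uintptr_t)HBA_BASE;

void issue_read(uint32_t lba, uint32_t dma_addr) {
    hba[REG_LBA]      = lba;       /* which block to read                */
    hba[REG_DMA_ADDR] = dma_addr;  /* where the controller DMAs the data */
    hba[REG_CMD]      = CMD_READ;  /* posting the command starts the I/O */
    while (hba[REG_STATUS] & STATUS_BUSY)
        ;                          /* real drivers sleep on an interrupt */
}
```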
- HDD Scheduling
■ The operating system is responsible for using hardware efficiently — for the disk drives, this means having fast access time and high disk bandwidth
● Seek time ≈ seek distance
● Disk bandwidth is the total number of bytes transferred, divided by the total time between the first request for service and the completion of the last transfer
■ Minimize seek time
- Disk Scheduling (Cont.)
■ Note that drive controllers have small buffers and can manage a queue of I/O requests (of varying “depth”)
■ Several algorithms exist to schedule the servicing of disk I/O requests
■ The analysis is true for one or many platters
■ E.g., we illustrate scheduling algorithms with a request queue (0–199)
● 98, 183, 37, 122, 14, 124, 65, 67
● Head pointer at cylinder 53
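A minimal sketch computing the total head movement that FCFS produces for this request queue (the classic textbook answer is 640 cylinders):

```c
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    int queue[] = {98, 183, 37, 122, 14, 124, 65, 67};
    int n = sizeof queue / sizeof queue[0];
    int head = 53, total = 0;   /* head starts at cylinder 53 */

    /* FCFS: service requests strictly in arrival order */
    for (int i = 0; i < n; i++) {
        total += abs(queue[i] - head);
        head = queue[i];
    }
    printf("FCFS total head movement: %d cylinders\n", total); /* 640 */
    return 0;
}
```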
- SCAN
■ The disk arm starts at one end of the disk and moves toward the other end, servicing requests until it gets to the other end of the disk, where the head movement is reversed and servicing continues
■ SCAN algorithm sometimes called the elevator algorithm
■ But note that if requests are uniformly dense, largest density is at the other end of the disk, and those requests wait the longest
- C-SCAN
■ Provides a more uniform wait time than SCAN
■ The head moves from one end of the disk to the other, servicing requests as it goes
● When it reaches the other end, however, it immediately returns to the beginning of the disk, without servicing any requests on the return trip
■ Treats the cylinders as a circular list that wraps around from the last cylinder to the first one
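A minimal sketch contrasting the service orders SCAN and C-SCAN produce for the running example (head at cylinder 53; SCAN is assumed to move toward cylinder 0 first and C-SCAN toward the last cylinder — either direction is permissible):

```c
#include <stdio.h>
#include <stdlib.h>

static int cmp(const void *a, const void *b) {
    return *(const int *)a - *(const int *)b;
}

int main(void) {
    int q[] = {98, 183, 37, 122, 14, 124, 65, 67};
    int n = sizeof q / sizeof q[0], head = 53, i, split;

    qsort(q, n, sizeof q[0], cmp);                 /* sort by cylinder */
    for (split = 0; split < n && q[split] < head; split++)
        ;                                          /* first request >= head */

    printf("SCAN:   ");           /* sweep down to 0, then back upward */
    for (i = split - 1; i >= 0; i--) printf("%d ", q[i]);
    for (i = split; i < n; i++)      printf("%d ", q[i]);

    printf("\nC-SCAN: ");         /* sweep up to the end, then wrap to 0 */
    for (i = split; i < n; i++)      printf("%d ", q[i]);
    for (i = 0; i < split; i++)      printf("%d ", q[i]);
    printf("\n");
    return 0;
}
```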
- Selecting a Disk-Scheduling Algorithm
■ FCFS is common and has a natural appeal
■ SCAN and C-SCAN perform better for systems that place a heavy load on the disk (less starvation, but still possible)
■ To avoid starvation Linux implements the deadline scheduler
● Maintains separate read and write queues, gives reads priority
✦ Because processes are more likely to block on read than write
● Implements four queues: 2 x read and 2 x write
✦ 1 read and 1 write queue sorted in LBA order (implementing C-SCAN)
✦ 1 read and 1 write queue sorted in FCFS order
✦ All I/O requests in a batch sent in that queue’s order
✦ After each batch, checks whether any requests in the FCFS queues are older than the configured age (default 500ms)
– If so, the LBA queue containing that request is selected for the next batch of I/O
● In RHEL 7, NOOP and the completely fair queueing scheduler (CFQ) are also available; defaults vary by storage device
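A greatly simplified sketch of the deadline scheduler's queue-selection policy described above; the structures and field names are illustrative, not the Linux kernel's actual implementation, and the read/write split is omitted:

```c
#include <stddef.h>

struct request {
    long lba;                /* target logical block address             */
    long deadline;           /* arrival time + allowed age (e.g. 500 ms) */
    struct request *next;
};

struct dd_queues {
    struct request *sorted;  /* LBA order: serviced like C-SCAN          */
    struct request *fifo;    /* arrival order: head is the oldest        */
};

/* Choose which queue feeds the next batch: the FCFS queue only when its
 * oldest request has waited past its deadline, otherwise the LBA queue. */
struct request *next_batch(struct dd_queues *q, long now) {
    if (q->fifo != NULL && q->fifo->deadline <= now)
        return q->fifo;
    return q->sorted;
}
```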
- Error Detection and Correction
■ Fundamental aspect of many parts of computing (memory, networking, storage)
■ Error detection determines if a problem has occurred (for example, a bit flipping)
● If detected, can halt the operation
● Detection frequently done via parity bit
● Parity – one form of checksum – uses modular arithmetic to compute, store, and compare values of fixed-length words
● Another error-detection method, common in networking, is the cyclic redundancy check (CRC), which uses a hash function to detect multiple-bit errors
■ Error-correction code (ECC) not only detects, but can correct, some errors
● Soft errors correctable; hard errors detected but not corrected
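A minimal sketch of parity-based detection: a stored parity bit catches any odd number of bit flips in a word, though it cannot say which bit flipped or correct it:

```c
#include <stdint.h>
#include <stdio.h>

/* Even parity over a fixed-length word: the XOR of all its bits */
static int parity(uint32_t w) {
    int p = 0;
    while (w) { p ^= (w & 1); w >>= 1; }
    return p;
}

int main(void) {
    uint32_t word = 0x2A;         /* 101010 in binary: three 1 bits */
    int stored = parity(word);    /* parity bit stored alongside: 1 */

    word ^= (1u << 3);            /* simulate a single bit flip     */
    printf("error detected: %s\n",
           parity(word) != stored ? "yes" : "no");   /* prints: yes */
    return 0;
}
```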
- Storage Device Management (cont.)
■ Root partition contains the OS; other partitions can hold other OSes, other file systems, or be raw
● Mounted at boot time
● Other partitions can mount automatically or manually
■ At mount time, file system consistency checked
● Is all metadata correct?
✦ If not, fix it, try again
✦ If yes, add to mount table, allow access
■ Boot block can point to boot volume or boot loader – a set of blocks that contain enough code to know how to load the kernel from the file system
● Or a boot management program for multi-OS booting
- Swap-Space Management
■ Used for moving entire processes (swapping), or pages (paging), from DRAM to secondary storage when DRAM is not large enough for all possible processes
■ Operating system provides swap space management
● Secondary storage slower than DRAM, so important to optimize performance
● Usually multiple swap spaces – decreasing I/O load on any given device
● Best to have dedicated devices
● Can be in raw partition or a file within a file system (for convenience of adding)
● Data structures for swapping on Linux systems:
- Network-Attached Storage
■ Network-attached storage (NAS) is storage made available over a network rather than over a local connection (such as a bus)
● Remotely attaching to file systems
■ NFS and CIFS are common protocols
■ Implemented via remote procedure calls (RPCs) between host and storage, typically over TCP or UDP on an IP network
■ iSCSI protocol uses IP network to carry the SCSI protocol
● Remotely attaching to devices (blocks)
- Storage Array
■ Can just attach disks, or arrays of disks
■ Avoids the NAS drawback of using network bandwidth
■ Storage array has controller(s), provides features to attached host(s)
● Ports to connect hosts to array
● Memory, controlling software (sometimes NVRAM, etc.)
● A few to thousands of disks
● RAID, hot spares, hot swap (discussed later)
● Shared storage → more efficiency
● Features found in some file systems
✦ Snapshots, clones, thin provisioning, replication, deduplication, etc.
- Storage Area Network (Cont.)
■ SAN is one or more storage arrays
● Connected to one or more Fibre Channel switches or an InfiniBand (IB) network
■ Hosts also attach to the switches
■ Storage made available via LUN masking from specific arrays to specific servers
■ Easy to add or remove storage, add a new host and allocate it storage
■ Why have separate storage networks and communications networks?
● Consider iSCSI, FCoE
- RAID (Cont.)
■ Disk striping uses a group of disks as one storage unit
■ RAID is arranged into six different levels
■ RAID schemes improve performance and improve the reliability of the storage system by storing redundant data
● Mirroring or shadowing (RAID 1) keeps a duplicate of each disk
● Striped mirrors (RAID 1+0) or mirrored stripes (RAID 0+1) provide high performance and high reliability
● Block interleaved parity (RAID 4, 5, 6) uses much less redundancy
■ RAID within a storage array can still fail if the array fails, so automatic replication of the data between arrays is common
■ Frequently, a small number of hot-spare disks are left unallocated, automatically replacing a failed disk and having data rebuilt onto them
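A small sketch of the block-interleaved parity idea behind RAID 4/5: the parity block is the XOR of the data blocks in a stripe, so any single lost block can be rebuilt by XOR-ing the surviving blocks with the parity (block contents and sizes here are toy values):

```c
#include <stdio.h>
#include <string.h>

#define BLOCK 8   /* toy block size in bytes */

int main(void) {
    unsigned char d0[BLOCK] = "AAAAAAA", d1[BLOCK] = "BBBBBBB",
                  d2[BLOCK] = "CCCCCCC";
    unsigned char parity[BLOCK], rebuilt[BLOCK];

    /* Parity block: XOR of every data block in the stripe */
    for (int i = 0; i < BLOCK; i++)
        parity[i] = d0[i] ^ d1[i] ^ d2[i];

    /* The disk holding d1 fails: XOR the survivors with parity */
    for (int i = 0; i < BLOCK; i++)
        rebuilt[i] = d0[i] ^ d2[i] ^ parity[i];

    printf("rebuilt d1 matches: %s\n",
           memcmp(rebuilt, d1, BLOCK) == 0 ? "yes" : "no");  /* yes */
    return 0;
}
```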
- RAID (0 + 1) and (1 + 0)
- Extensions
■ RAID alone does not prevent or detect data corruption or other errors, just disk failures
■ Solaris ZFS adds checksums of all data and metadata
■ Checksums kept with the pointer to the object, to detect if the object is the right one and whether it changed
■ Can detect and correct data and metadata corruption
■ ZFS also removes volumes, partitions
● Disks allocated in pools
● Filesystems within a pool share that pool, use and release space like malloc() and free() memory allocate / release calls
(Figure: ZFS checksum tree — each metadata block stores the addresses and checksums of the blocks beneath it, down to the data blocks)
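A toy sketch of the checksum-with-pointer idea: the parent block pointer carries the child's checksum, so a read can detect a corrupted or misdirected block. The Fletcher-style sum and the struct layout here are illustrative only; ZFS's actual checksums and on-disk structures differ:

```c
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Block pointer: where the child lives, plus what it should sum to */
struct blkptr { uint64_t addr; uint64_t checksum; };

static uint64_t cksum(const uint8_t *p, size_t n) {
    uint64_t a = 0, b = 0;
    for (size_t i = 0; i < n; i++) { a += p[i]; b += a; }
    return (b << 32) | (a & 0xffffffffu);
}

int main(void) {
    uint8_t block[16] = "hello, storage";
    struct blkptr bp = { 42, 0 };
    bp.checksum = cksum(block, sizeof block);   /* stored in the parent */

    block[3] ^= 0x40;   /* silent corruption on the way back from disk */
    if (cksum(block, sizeof block) != bp.checksum)
        puts("corruption detected: fetch a good copy (e.g., a mirror)");
    return 0;
}
```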
- Object Storage
■ General-purpose computing, file systems not sufficient for very large scale
■ Another approach – start with a storage pool and place objects in it
● Object just a container of data
● No way to navigate the pool to find objects (no directory structures, few services)
● Computer-oriented, not user-oriented
■ Typical sequence
● Create an object within the pool, receive an object ID
● Access object via that ID
● Delete object via that ID
■ Object storage management software like Hadoop file system (HDFS) and Ceph determines where to store objects and manages protection
● Typically by storing N copies, across N systems, in the object storage cluster
● Horizontally scalable
● Content addressable, unstructured
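A toy, in-memory sketch of the flat-namespace model in the typical sequence above: create returns an ID, and all later access is by ID only, with no paths or directories. The functions are hypothetical, not any real object-store API; real systems like HDFS and Ceph add replication and distribution across machines:

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define MAX_OBJ 16
#define OBJ_SZ  64

static char     pool[MAX_OBJ][OBJ_SZ];  /* flat pool: no hierarchy   */
static int      used[MAX_OBJ];
static uint64_t ids[MAX_OBJ];
static uint64_t next_id = 1;

/* Create: place the object in the pool and hand back an opaque ID */
uint64_t obj_create(const char *data) {
    for (int i = 0; i < MAX_OBJ; i++)
        if (!used[i]) {
            used[i] = 1;
            ids[i] = next_id++;
            strncpy(pool[i], data, OBJ_SZ - 1);
            return ids[i];
        }
    return 0;   /* pool full */
}

/* Access: lookup is by ID only — there is no path to navigate */
const char *obj_get(uint64_t id) {
    for (int i = 0; i < MAX_OBJ; i++)
        if (used[i] && ids[i] == id) return pool[i];
    return NULL;
}

int main(void) {
    uint64_t id = obj_create("object payload");
    printf("id %llu -> %s\n", (unsigned long long)id, obj_get(id));
    return 0;
}
```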
- Summary (Cont.)
■ Data storage and transmission are complex and frequently result in errors. Error detection attempts to spot such problems to alert the system for corrective action and to avoid error propagation. Error correction can detect and repair problems, depending on the amount of correction data available and the amount of data that was corrupted.
■ Storage devices are partitioned into one or more chunks of space. Each partition can hold a volume or be part of a multidevice volume. File systems are created in volumes.
■ The operating system manages the storage device’s blocks. New devices typically come pre-formatted. The device is partitioned, file systems are created, and boot blocks are allocated to store the system’s bootstrap program if the device will contain an operating system. Finally, when a block or page is corrupted, the system must have a way to lock out that block or to replace it logically with a spare.
- End of Chapter 11 — Operating System Concepts, Silberschatz, Galvin and Gagne ©2018