File-System Implementation

References:

  1. Abraham Silberschatz, Greg Gagne, and Peter Baer Galvin, "Operating System Concepts, Ninth Edition", Chapter 12

12.1 File-System Structure

  • Hard disks have two important properties that make them suitable for secondary storage of files in file systems: (1) blocks of data can be rewritten in place, and (2) they are direct access, allowing any block of data to be accessed with only ( relatively ) minor movements of the disk heads and rotational latency. ( See Chapter 12 )
  • Disks are ordinarily accessed in physical blocks, rather than a byte at a time. Block sizes may range from 512 bytes to 4K or larger.
  • File systems organize storage on disk drives, and can be viewed as a layered design:
    • At the lowest layer are the physical devices, consisting of the magnetic media, motors & controls, and the electronics connected to them and controlling them. Modern disks put more and more of the electronic controls directly on the disk drive itself, leaving relatively little work for the disk controller card to perform.
    • I/O Control consists of device drivers, special software programs ( often written in assembly ) which communicate with the devices by reading and writing special codes directly to and from memory addresses corresponding to the controller card's registers. Each controller card ( device ) on a system has a different set of addresses ( registers, a.k.a. ports ) that it listens to, and a unique set of command codes and results codes that it understands.
    • The basic file system level works directly with the device drivers in terms of retrieving and storing raw blocks of data, without any consideration for what is in each block. Depending on the system, blocks may be referred to with a single block number ( e.g. block # 234234 ), or with head-sector-cylinder combinations.
    • The file organization module knows about files and their logical blocks, and how they map to physical blocks on the disk. In addition to translating from logical to physical blocks, the file organization module also maintains the list of free blocks, and allocates free blocks to files as needed.
    • The logical file system deals with all of the meta data associated with a file ( UID, GID, mode, dates, etc. ), i.e. everything about the file except the data itself. This level manages the directory structure and the mapping of file names to file control blocks, FCBs, which contain all of the meta data as well as block number information for finding the data on the disk.
  • The layered approach to file systems means that much of the code can be used uniformly for a wide variety of different file systems, and only certain layers need to be filesystem specific. Common file systems in use include the UNIX file system, UFS, the Berkeley Fast File System, FFS, Windows systems FAT, FAT32, NTFS, CD-ROM systems ISO 9660, and for Linux the extended file systems ext2 and ext3 ( among 40 others supported. )


Figure 12.1 - Layered file system.

12.2 File-System Implementation

12.2.1 Overview

  • File systems store several important data structures on the disk:
    • A boot-control block, ( per volume ) a.k.a. the boot block in UNIX or the partition boot sector in Windows, contains information about how to boot the system off of this disk. This will generally be the first sector of the volume if there is a bootable system loaded on that volume, or the block will be left vacant otherwise.
    • A volume control block, ( per volume ) a.k.a. the superblock in UNIX or the master file table in Windows, which contains information such as the partition table, number of blocks on each filesystem, and pointers to free blocks and free FCB blocks.
    • A directory structure ( per file system ), containing file names and pointers to corresponding FCBs. UNIX uses inode numbers, and NTFS uses a master file table.
    • The File Control Block, FCB, ( per file ) containing details about ownership, size, permissions, dates, etc. UNIX stores this information in inodes, and NTFS in the master file table as a relational database structure.


Figure 12.2 - A typical file-control block.

  • There are also several key data structures stored in memory:
    • An in-memory mount table.
    • An in-memory directory cache of recently accessed directory information.
    • A system-wide open file table, containing a copy of the FCB for every currently open file in the system, as well as some other related information.
    • A per-process open file table, containing a pointer to the system open file table as well as some other information. ( For example the current file position pointer may be either here or in the system file table, depending on the implementation and whether the file is being shared or not. )
  • Figure 12.3 illustrates some of the interactions of file system components when files are created and/or used:
    • When a new file is created, a new FCB is allocated and filled out with important information regarding the new file. The appropriate directory is modified with the new file name and FCB information.
    • When a file is accessed during a program, the open( ) system call reads in the FCB information from disk, and stores it in the system-wide open file table. An entry is added to the per-process open file table referencing the system-wide table, and an index into the per-process table is returned by the open( ) system call. UNIX refers to this index as a file descriptor, and Windows refers to it as a file handle.
    • If another process already has a file open when a new request comes in for the same file, and it is sharable, then a counter in the system-wide table is incremented and the per-process table is adjusted to point to the existing entry in the system-wide table.
    • When a file is closed, the per-process table entry is freed, and the counter in the system-wide table is decremented. If that counter reaches zero, then the system-wide table entry is also freed. Any data currently stored in memory cache for this file is written out to disk if necessary.
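The open/close bookkeeping described above can be sketched in a few lines of Python. The class and field names here are invented for illustration; a real kernel keeps far richer state in both tables:

```python
# Sketch of the system-wide and per-process open-file tables.
# All names here are illustrative; real kernels use far richer structures.

class FileSystem:
    def __init__(self):
        self.system_table = {}   # name -> {"fcb": ..., "count": open count}
        self.per_process = []    # per-process table: entries reference system_table

    def open(self, name):
        entry = self.system_table.get(name)
        if entry is None:
            entry = {"fcb": {"name": name}, "count": 0}  # read FCB in from disk
            self.system_table[name] = entry
        entry["count"] += 1                  # one more opener of this file
        self.per_process.append(name)        # per-process entry points at shared entry
        return len(self.per_process) - 1     # the returned index is the file descriptor

    def close(self, fd):
        name = self.per_process[fd]
        self.per_process[fd] = None          # free the per-process entry
        entry = self.system_table[name]
        entry["count"] -= 1
        if entry["count"] == 0:              # last closer: drop the system-wide entry
            del self.system_table[name]      # ( cached data would be flushed here )
```

Opening the same file twice yields two descriptors but one shared system-wide entry with a count of two; only the second close removes it.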


Figure 12.3 - In-memory file-system structures. (a) File open. (b) File read.

12.2.2 Partitions and Mounting

  • Physical disks are commonly divided into smaller units called partitions. They can also be combined into larger units, but that is most commonly done for RAID installations and is left for later chapters.
  • Partitions can either be used as raw devices ( with no structure imposed upon them ), or they can be formatted to hold a filesystem ( i.e. populated with FCBs and initial directory structures as appropriate. ) Raw partitions are generally used for swap space, and may also be used for certain programs such as databases that choose to manage their own disk storage system. Partitions containing filesystems can generally only be accessed using the file system structure by ordinary users, but can often be accessed as a raw device also by root.
  • The boot block is accessed as part of a raw partition, by the boot program prior to any operating system being loaded. Modern boot programs understand multiple OSes and filesystem formats, and can give the user a choice of which of several available systems to boot.
  • The root partition contains the OS kernel and at least the key portions of the OS needed to complete the boot process. At boot time the root partition is mounted, and control is transferred from the boot program to the kernel found there. ( Older systems required that the root partition lie completely within the first 1024 cylinders of the disk, because that was as far as the boot program could reach. Once the kernel had control, then it could access partitions beyond the 1024 cylinder boundary. )
  • Continuing with the boot process, additional filesystems get mounted, adding their information into the appropriate mount table structure. As a part of the mounting process the file systems may be checked for errors or inconsistencies, either because they are flagged as not having been closed properly the last time they were used, or just for general principles. Filesystems may be mounted either automatically or manually. In UNIX a mount point is indicated by setting a flag in the in-memory copy of the inode, so all future references to that inode get re-directed to the root directory of the mounted filesystem.

12.2.3 Virtual File Systems

  • Virtual File Systems, VFS, provide a common interface to multiple different filesystem types. In addition, it provides for a unique identifier ( vnode ) for files across the entire space, including across all filesystems of different types. ( UNIX inodes are unique only across a single filesystem, and certainly do not carry across networked file systems. )
  • The VFS in Linux is based upon four key object types:
    • The inode object, representing an individual file.
    • The file object, representing an open file.
    • The superblock object, representing a filesystem.
    • The dentry object, representing a directory entry.
  • Linux VFS provides a set of common functionalities for each filesystem, using function pointers accessed through a table. The same functionality is accessed through the same table position for all filesystem types, though the actual functions pointed to by the pointers may be filesystem-specific. See /usr/include/linux/fs.h for full details. Common operations provided include open( ), read( ), write( ), and mmap( ).
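The function-table dispatch idea can be sketched with two toy stand-in "filesystems"; the operation tables and mount points below are invented for illustration, not real VFS code:

```python
# Minimal sketch of VFS-style dispatch: every filesystem type supplies the same
# table of operations, and the VFS calls through the table without knowing which
# concrete filesystem it is talking to. Both "filesystems" here are toys.

ext_ops = {
    "read": lambda path: f"ext: reading {path}",
    "write": lambda path, data: f"ext: writing {len(data)} bytes to {path}",
}

fat_ops = {
    "read": lambda path: f"fat: reading {path}",
    "write": lambda path, data: f"fat: writing {len(data)} bytes to {path}",
}

# Mount table: mount point -> operations table for the filesystem mounted there.
mounts = {"/home": ext_ops, "/mnt/usb": fat_ops}

def vfs_read(path):
    # Find the mounted filesystem whose mount point prefixes the path,
    # then dispatch through the same table slot regardless of filesystem type.
    for mount_point, ops in mounts.items():
        if path.startswith(mount_point):
            return ops["read"](path)
    raise FileNotFoundError(path)
```

The caller of vfs_read never learns which table it went through; adding a new filesystem type means supplying one more table, not changing the dispatcher.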


Figure 12.4 - Schematic view of a virtual file system.

12.3 Directory Implementation

  • Directories need to be fast to search, insert, and delete, with a minimum of wasted disk space.

12.3.1 Linear List

  • A linear list is the simplest and easiest directory structure to set up, but it does have some drawbacks.
  • Finding a file ( or verifying one does not already exist upon creation ) requires a linear search.
  • Deletions can be done by moving all entries, flagging an entry as deleted, or by moving the last entry into the newly vacant position.
  • Sorting the list makes searches faster, at the expense of more complex insertions and deletions.
  • A linked list makes insertions and deletions into a sorted list easier, with overhead for the links.
  • More complex data structures, such as B-trees, could also be considered.
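The linear-list operations above can be sketched as follows. This toy directory uses the "move the last entry into the hole" deletion strategy; the class and field names are illustrative:

```python
# Sketch of a linear-list directory: each entry pairs a name with an
# FCB/inode number. Lookup is a linear search; deletion moves the last
# entry into the vacated slot so the list stays dense.

class Directory:
    def __init__(self):
        self.entries = []                      # list of (name, inode) pairs

    def lookup(self, name):                    # O(n) linear search
        for n, inode in self.entries:
            if n == name:
                return inode
        return None

    def create(self, name, inode):
        if self.lookup(name) is not None:      # creation must verify non-existence
            raise FileExistsError(name)
        self.entries.append((name, inode))

    def delete(self, name):
        for i, (n, _) in enumerate(self.entries):
            if n == name:
                self.entries[i] = self.entries[-1]   # move last entry into the hole
                self.entries.pop()
                return
        raise FileNotFoundError(name)
```

Every create pays for a full search first, which is exactly the cost a hash table or B-tree is meant to avoid.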

12.3.2 Hash Table

  • A hash table can also be used to speed up searches.
  • Hash tables are generally implemented in addition to a linear or other structure.

12.4 Allocation Methods

  • There are three major methods of storing files on disks: contiguous, linked, and indexed.

12.4.1 Contiguous Allocation

  • Contiguous Allocation requires that all blocks of a file be kept together contiguously.
  • Performance is very fast, because reading successive blocks of the same file generally requires no movement of the disk heads, or at most one small step to the next adjacent cylinder.
  • Storage allocation involves the same issues discussed earlier for the allocation of contiguous blocks of memory ( first fit, best fit, fragmentation problems, etc. ) The distinction is that the high time penalty required for moving the disk heads from spot to spot may now justify the benefits of keeping files contiguously when possible.
  • ( Even file systems that do not by default store files contiguously can benefit from certain utilities that compact the disk and make all files contiguous in the process. )
  • Problems can arise when files grow, or if the exact size of a file is unknown at creation time:
    • Over-estimation of the file's eventual size increases external fragmentation and wastes disk space.
    • Under-estimation may require that a file be moved or a process aborted if the file grows beyond its originally allocated space.
    • If a file grows slowly over a long time period and the total final space must be allocated initially, then a lot of space becomes unusable before the file fills the space.
  • A variation is to allocate file space in large contiguous chunks, called extents. When a file outgrows its original extent, then an additional one is allocated. ( For example an extent may be the size of a complete track or even cylinder, aligned on an appropriate track or cylinder boundary. ) The high-performance file system Veritas uses extents to optimize performance.
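Contiguous allocation can reuse the classic memory-allocation strategies directly. A minimal first-fit sketch over a toy free bitmap (the 16-block disk size is arbitrary):

```python
# First-fit allocation of a contiguous run of blocks from a free bitmap,
# illustrating that contiguous disk allocation reuses classic contiguous
# memory-allocation strategies. The tiny 16-block "disk" is illustrative.

free = [True] * 16          # True = block free

def allocate_contiguous(n):
    """Find the first run of n free blocks, mark them used, return the start."""
    run_start = 0
    run_len = 0
    for i, is_free in enumerate(free):
        if is_free:
            if run_len == 0:
                run_start = i          # a new candidate run begins here
            run_len += 1
            if run_len == n:           # first fit: take the first run that is big enough
                for b in range(run_start, run_start + n):
                    free[b] = False
                return run_start
        else:
            run_len = 0                # run broken by an allocated block
    raise OSError("no contiguous run of %d blocks" % n)
```

Swapping in best fit would mean scanning all runs and choosing the smallest one that is still large enough, at the cost of a full pass per allocation.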


Figure 12.5 - Contiguous allocation of disk space.

12.4.2 Linked Allocation

  • Disk files can be stored as linked lists, with the expense of the storage space consumed by each link. ( E.g. a block may be 508 bytes instead of 512. )
  • Linked allocation involves no external fragmentation, does not require pre-known file sizes, and allows files to grow dynamically at any time.
  • Unfortunately linked allocation is only efficient for sequential access files, as random access requires starting at the beginning of the list for each new location access.
  • Allocating clusters of blocks reduces the space wasted by pointers, at the cost of internal fragmentation.
  • Another big problem with linked allocation is reliability if a pointer is lost or damaged. Doubly linked lists provide some protection, at the cost of additional overhead and wasted space.


Figure 12.6 - Linked allocation of disk space.

  • The File Allocation Table, FAT, used by DOS is a variation of linked allocation, where all the links are stored in a separate table at the beginning of the disk. The benefit of this approach is that the FAT table can be cached in memory, greatly improving random access speeds.
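Following a FAT chain can be sketched as below; the table contents and the end-of-file sentinel value are made up for illustration:

```python
# Sketch of following a FAT chain entirely in memory. Each FAT slot holds
# the number of the file's next block; a sentinel (here -1) marks end of file.
# The table contents are invented for illustration.

EOF = -1
fat = {217: 618, 618: 339, 339: EOF}   # a file occupying blocks 217 -> 618 -> 339

def file_blocks(start):
    """Return the list of block numbers a file occupies, in order."""
    blocks = []
    block = start
    while block != EOF:
        blocks.append(block)
        block = fat[block]             # one in-memory lookup per block
    return blocks
```

With the table cached, reaching the i-th block of a file costs i table lookups in memory rather than i disk seeks, which is why random access improves so much.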


Figure 12.7 - File-allocation table.

12.4.3 Indexed Allocation

  • Indexed Allocation combines all of the indexes for accessing each file into a common block ( for that file ), as opposed to spreading them all over the disk or storing them in a FAT table.


Figure 12.8 - Indexed allocation of disk space.

  • Some disk space is wasted ( relative to linked lists or FAT tables ) because an entire index block must be allocated for each file, regardless of how many data blocks the file contains. This leads to questions of how big the index block should be, and how it should be implemented. There are several approaches:
    • Linked Scheme - An index block is one disk block, which can be read and written in a single disk operation. The first index block contains some header information, the first N block addresses, and if necessary a pointer to additional linked index blocks.
    • Multi-Level Index - The first index block contains a set of pointers to secondary index blocks, which in turn contain pointers to the actual data blocks.
    • Combined Scheme - This is the scheme used in UNIX inodes, in which the first 12 or so data block pointers are stored directly in the inode, and then singly, doubly, and triply indirect pointers provide access to more data blocks as needed. ( See below. ) The advantage of this scheme is that for small files ( which many are ), the data blocks are readily accessible ( up to 48K with 4K block sizes ); files up to about 4144K ( using 4K blocks ) are accessible with only a single indirect block ( which can be cached ), and huge files are still accessible using a relatively small number of disk accesses ( larger in theory than can be addressed by a 32-bit address, which is why some systems have moved to 64-bit file pointers. )
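The 48K and 4144K figures above can be checked directly, assuming 4K blocks and 4-byte block pointers (so one indirect block holds 1024 pointers):

```python
# Checking the sizes quoted for a UNIX-style inode with 4K blocks and
# 4-byte block pointers (assumptions; real inode layouts vary slightly).

block = 4 * 1024                 # 4K data block
ptrs_per_block = block // 4      # 1024 pointers fit in one indirect block
direct = 12                      # direct pointers stored in the inode itself

direct_bytes = direct * block                    # reachable via direct pointers
single_indirect_bytes = ptrs_per_block * block   # one extra level of indirection

print(direct_bytes // 1024)                            # 48 (K)
print((direct_bytes + single_indirect_bytes) // 1024)  # 4144 (K)
```

So 12 direct pointers cover 48K, and one cached single-indirect block extends that by 1024 more blocks ( 4096K ) to 4144K total.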


    Figure 12.9 - The UNIX inode.

12.4.4 Performance

  • The optimal allocation method is different for sequential access files than for random access files, and is also different for small files than for large files.
  • Some systems support more than one allocation method, which may require specifying how the file is to be used ( sequential or random access ) at the time it is allocated. Such systems also provide conversion utilities.
  • Some systems have been known to use contiguous access for small files, and automatically switch to an indexed scheme when file sizes surpass a certain threshold.
  • And of course some systems adjust their allocation schemes ( e.g. block sizes ) to best match the characteristics of the hardware for optimum performance.

12.5 Free-Space Management

  • Another important aspect of disk management is keeping track of and allocating free space.

12.5.1 Bit Vector

  • One simple approach is to use a bit vector, in which each bit represents a disk block, set to 1 if free or 0 if allocated.
  • Fast algorithms exist for quickly finding contiguous blocks of a given size.
  • The down side is that a 40GB disk requires over 5MB just to store the bitmap. ( For example. )
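The 5MB figure can be verified assuming 1 KB blocks (an assumption; larger blocks would shrink the bitmap proportionally):

```python
# Checking the "over 5MB" bitmap figure: a 40 GB disk with 1 KB blocks
# needs one bit per block.

disk = 40 * 2**30          # 40 GB disk
block = 2**10              # 1 KB blocks (assumed)
bits = disk // block       # one bit per block = 40M bits

print(bits // 8 // 2**20)  # 5 (MB of bitmap)
```

With 4 KB blocks the same disk would need only 1.25 MB of bitmap, one quarter as much.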

12.5.2 Linked List

  • A linked list can also be used to keep track of all free blocks.
  • Traversing the list and/or finding a contiguous block of a given size are not easy, but fortunately are not frequently needed operations. Generally the system just adds and removes single blocks from the beginning of the list.
  • The FAT table keeps track of the free list as just one more linked list on the table.


Figure 12.10 - Linked free-space list on disk.

12.5.3 Grouping

  • A variation on linked list free lists is to use links of blocks of indices of free blocks. If a block holds up to N addresses, then the first block in the linked list contains up to N-1 addresses of free blocks and a pointer to the next block of free addresses.

12.5.4 Counting

  • When there are multiple contiguous blocks of free space then the system can keep track of the starting address of the group and the number of contiguous free blocks. As long as the average length of a contiguous group of free blocks is greater than two this offers a savings in space needed for the free list. ( Similar to compression techniques used for graphics images when a group of pixels all the same color is encountered. )
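A minimal sketch of the counting idea: collapsing a sorted list of free block numbers into (start, length) runs (the block numbers are illustrative):

```python
# Sketch of the counting scheme: the free list stores (start, length) pairs
# instead of individual block numbers, like run-length encoding an image.

def runs_from_free_blocks(free_blocks):
    """Collapse a sorted list of free block numbers into (start, count) runs."""
    runs = []
    for b in free_blocks:
        if runs and b == runs[-1][0] + runs[-1][1]:
            runs[-1] = (runs[-1][0], runs[-1][1] + 1)  # block extends the current run
        else:
            runs.append((b, 1))                        # block starts a new run
    return runs
```

Six free blocks collapse to three (start, count) pairs here; the scheme only pays off when runs average more than two blocks, as noted above.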

12.5.5 Space Maps

  • Sun's ZFS file system was designed for HUGE numbers and sizes of files, directories, and even file systems.
  • The resulting data structures could be VERY inefficient if not implemented carefully. For example, freeing up a 1 GB file on a 1 TB file system could involve updating thousands of blocks of free list bit maps if the file was spread across the disk.
  • ZFS uses a combination of techniques, starting with dividing the disk up into ( hundreds of ) metaslabs of a manageable size, each having their own space map.
  • Free blocks are managed using the counting technique, but rather than write the information to a table, it is recorded in a log-structured transaction record. Adjacent free blocks are also coalesced into a larger single free block.
  • An in-memory space map is constructed using a balanced tree data structure, built up from the log data.
  • The combination of the in-memory tree and the on-disk log provide for very fast and efficient management of these very large files and free blocks.

12.6 Efficiency and Performance

12.6.1 Efficiency

  • UNIX pre-allocates inodes, which occupies space even before any files are created.
  • UNIX also distributes inodes across the disk, and tries to store data files near their inode, to reduce the distance of disk seeks between the inodes and the data.
  • Some systems use variable size clusters depending on the file size.
  • The more data that is stored in a directory ( e.g. last access time ), the more often the directory blocks have to be re-written.
  • As technology advances, addressing schemes have had to grow as well.
    • Sun's ZFS file system uses 128-bit pointers, which should theoretically never need to be expanded. ( The mass required to store 2^128 bytes with atomic storage would be at least 272 trillion kilograms! )
  • Kernel table sizes used to be fixed, and could only be changed by rebuilding the kernel. Modern tables are dynamically allocated, but that requires more complicated algorithms for accessing them.

12.6.2 Performance

  • Disk controllers generally include on-board caching. When a seek is requested, the heads are moved into place, and then an entire track is read, starting from whatever sector is currently under the heads ( reducing latency. ) The requested sector is returned and the unrequested portion of the track is cached in the disk's electronics.
  • Some OSes cache disk blocks they expect to need again in a buffer cache.
  • A page cache connected to the virtual memory system is actually more efficient as memory addresses do not need to be converted to disk block addresses and back again.
  • Some systems ( Solaris, Linux, Windows 2000, NT, XP ) use page caching for both process pages and file data in a unified virtual memory.
  • Figures 12.11 and 12.12 show the advantages of the unified buffer cache found in some versions of UNIX and Linux - data does not need to be stored twice, and problems of inconsistent buffer data are avoided.


Figure 12.11 - I/O without a unified buffer cache.


Figure 12.12 - I/O using a unified buffer cache.

  • Page replacement strategies can be complicated with a unified cache, as one needs to decide whether to replace process or file pages, and how many pages to guarantee to each category of pages. Solaris, for example, has gone through many variations, resulting in priority paging giving process pages priority over file I/O pages, and setting limits so that neither can knock the other completely out of memory.
  • Another issue affecting performance is the question of whether to implement synchronous writes or asynchronous writes. Synchronous writes occur in the order in which the disk subsystem receives them, without caching; asynchronous writes are cached, allowing the disk subsystem to schedule writes in a more efficient order ( See Chapter 12. ) Metadata writes are often done synchronously. Some systems support flags to the open call requiring that writes be synchronous, for example for the benefit of database systems that require their writes be performed in a required order.
  • The type of file access can also have an impact on optimal page replacement policies. For example, LRU is not necessarily a good policy for sequential access files. For these types of files progression normally goes in a forward direction only, and the most recently used page will not be needed again until after the file has been rewound and re-read from the beginning, ( if it is ever needed at all. ) On the other hand, we can expect to need the next page in the file fairly soon. For this reason sequential access files often take advantage of two special policies:
    • Free-behind frees up a page as soon as the next page in the file is requested, with the assumption that we are now done with the old page and won't need it again for a long time.
    • Read-ahead reads the requested page and several subsequent pages at the same time, with the assumption that those pages will be needed in the near future. This is similar to the track caching that is already performed by the disk controller, except it saves the future latency of transferring data from the disk controller memory into motherboard main memory.
  • The caching system and asynchronous writes speed up disk writes considerably, because the disk subsystem can schedule physical writes to the disk to minimize head movement and disk seek times. ( See Chapter 12. ) Reads, on the other hand, must be done more synchronously in spite of the caching system, with the result that disk writes can counter-intuitively be much faster on average than disk reads.

12.7 Recovery

12.7.1 Consistency Checking

  • The storing of certain data structures ( e.g. directories and inodes ) in memory and the caching of disk operations can speed up performance, but what happens in the event of a system crash? All volatile memory structures are lost, and the data stored on the hard drive may be left in an inconsistent state.
  • A Consistency Checker ( fsck in UNIX, chkdsk or scandisk in Windows ) is often run at boot time or mount time, particularly if a filesystem was not closed down properly. Some of the problems that these tools look for include:
    • Disk blocks allocated to files and also listed on the free list.
    • Disk blocks neither allocated to files nor on the free list.
    • Disk blocks allocated to more than one file.
    • The number of disk blocks allocated to a file inconsistent with the file's stated size.
    • Properly allocated files / inodes which do not appear in any directory entry.
    • Link counts for an inode not matching the number of references to that inode in the directory structure.
    • Two or more identical file names in the same directory.
    • Illegally linked directories, e.g. cyclical relationships where those are not allowed, or files/directories that are not accessible from the root of the directory tree.
    • Consistency checkers will often collect questionable disk blocks into new files with names such as chk00001.dat. These files may contain valuable information that would otherwise be lost, but in most cases they can be safely deleted, ( returning those disk blocks to the free list. )
  • UNIX caches directory information for reads, but any changes that affect space allocation or metadata changes are written synchronously, before any of the corresponding data blocks are written to.
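Two of the checks listed above - doubly allocated blocks and "lost" blocks - can be sketched over a toy disk map. The block assignments here are invented precisely to trigger both problems:

```python
# Sketch of two fsck-style cross-checks: every disk block should be either
# allocated to exactly one file or on the free list - never both, never
# neither. The tiny 8-block "disk" and its contents are invented.

NUM_BLOCKS = 8
allocated = {"a.txt": [1, 2], "b.txt": [2, 5]}   # block 2 is claimed twice
free_list = [0, 2, 6]                            # block 2 is also marked free

used = [b for blocks in allocated.values() for b in blocks]

# Blocks claimed by more than one owner (two files, or a file and the free list):
double_alloc = sorted({b for b in used if used.count(b) > 1} |
                      (set(used) & set(free_list)))

# Blocks that belong to no file and are not on the free list ("lost" blocks):
lost = sorted(set(range(NUM_BLOCKS)) - set(used) - set(free_list))
```

A real checker would resolve these by duplicating doubly-claimed blocks for one owner and returning lost blocks to the free list ( or a lost+found file ).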

12.7.2 Log-Structured File Systems ( was 11.8 )

  • Log-based transaction-oriented ( a.k.a. journaling ) filesystems borrow techniques developed for databases, guaranteeing that any given transaction either completes successfully or can be rolled back to a safe state before the transaction commenced:
    • All metadata changes are written sequentially to a log.
    • A set of changes for performing a specific task ( e.g. moving a file ) is a transaction.
    • As changes are written to the log they are said to be committed, allowing the system to return to its work.
    • In the meantime, the changes from the log are carried out on the actual filesystem, and a pointer keeps track of which changes in the log have been completed and which have not yet been completed.
    • When all changes corresponding to a particular transaction have been completed, that transaction can be safely removed from the log.
    • At any given time, the log will contain information pertaining to uncompleted transactions only, e.g. actions that were committed but for which the entire transaction has not yet been completed.
      • From the log, the remaining transactions can be completed,
      • or if the transaction was aborted, then the partially completed changes can be undone.
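The log-then-apply mechanism can be sketched as a toy write-ahead journal; the names commit and checkpoint and the key/value metadata are illustrative, not any real filesystem's API:

```python
# Sketch of a write-ahead metadata journal: a transaction's changes are
# appended to the log first ( committed ), then applied to the filesystem;
# after a crash, replaying committed-but-unapplied entries finishes the work.

log = []          # the on-disk journal (append-only)
metadata = {}     # the actual filesystem metadata
applied = 0       # pointer: log entries before this index have been applied

def commit(transaction):
    """Append a whole transaction (a list of key/value changes) to the log."""
    log.append(transaction)           # once this returns, the changes are durable

def checkpoint():
    """Carry committed transactions out on the real filesystem."""
    global applied
    while applied < len(log):
        for key, value in log[applied]:
            metadata[key] = value
        applied += 1                  # this transaction could now leave the log

commit([("file1.size", 100), ("file1.blocks", [4, 7])])
commit([("dir.entries", ["file1"])])
checkpoint()   # after a crash, replaying from `applied` would have the same effect
```

Because each transaction enters the log atomically, recovery never sees half of one: it either replays the whole transaction or none of it.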

12.7.3 Other Solutions ( New )

  • Sun's ZFS and Network Appliance's WAFL file systems take a different approach to file system consistency.
  • No blocks of data are ever over-written in place. Rather the new data is written into fresh new blocks, and after the transaction is complete, the metadata ( data block pointers ) is updated to point to the new blocks.
    • The old blocks can then be freed up for future use.
    • Alternatively, if the old blocks and old metadata are saved, then a snapshot of the system in its original state is preserved. This approach is taken by WAFL.
  • ZFS combines this with check-summing of all metadata and data blocks, and RAID, to ensure that no inconsistencies are possible, and therefore ZFS does not contain a consistency checker.

12.7.4 Backup and Restore

  • In order to recover lost data in the event of a disk crash, it is important to perform backups regularly.
  • Files should be copied to some removable medium, such as magnetic tapes, CDs, DVDs, or external removable hard drives.
  • A full backup copies every file on a filesystem.
  • Incremental backups copy only files which have changed since some previous time.
  • A combination of full and incremental backups can offer a compromise between full recoverability, the number and size of backup tapes needed, and the number of tapes that need to be used to do a full restore. For example, one strategy might be:
    • At the beginning of the month do a full backup.
    • At the end of the first and again at the end of the second week, backup all files which have changed since the beginning of the month.
    • At the end of the third week, backup all files that have changed since the end of the second week.
    • Every day of the month not listed above, do an incremental backup of all files that have changed since the most recent of the weekly backups described above.
  • Backup tapes are often reused, particularly for daily backups, but there are limits to how many times the same tape can be used.
  • Every so often a full backup should be made that is kept "forever" and not overwritten.
  • Backup tapes should be tested, to ensure that they are readable!
  • For optimal security, backup tapes should be kept off-site, so that a fire or burglary cannot destroy both the system and the backups. There are companies ( e.g. Iron Mountain ) that specialize in the secure off-site storage of critical backup information.
  • Keep your backup tapes secure - the easiest way for a thief to steal all your data is to simply pocket your backup tapes!
  • Storing important files on more than one computer can be an alternate though less reliable form of backup.
  • Note that incremental backups can also help users to get back a previous version of a file that they have since changed in some way.
  • Beware that backups can help forensic investigators recover e-mails and other files that users had thought they had deleted!

12.8 NFS ( Optional )

12.8.1 Overview


Figure 12.13 - Three independent file systems.


Figure 12.14 - Mounting in NFS. (a) Mounts. (b) Cascading mounts.

12.8.2 The Mount Protocol

  • The NFS mount protocol is similar to the local mount protocol, establishing a connection between a specific local directory ( the mount point ) and a specific device from a remote system.
  • Each server maintains an export list of the local filesystems ( directory sub-trees ) which are exportable, who they are exportable to, and what restrictions apply ( e.g. read-only access. )
  • The server also maintains a list of currently connected clients, so that they can be notified in the event of the server going down and for other reasons.
  • Automount and autounmount are supported.

12.8.3 The NFS Protocol

  • Implemented as a set of remote procedure calls ( RPCs ):
    • Searching for a file in a directory
    • Reading a set of directory entries
    • Manipulating links and directories
    • Accessing file attributes
    • Reading and writing files


Figure 12.15 - Schematic view of the NFS architecture.

12.8.4 Path-Name Translation

12.8.5 Remote Operations

  • Buffering and caching improve performance, but can cause a disparity in local versus remote views of the same file(s).

12.9 Example: The WAFL File System ( Optional )

  • Write Anywhere File Layout
  • Designed for a specific hardware architecture.
  • Snapshots record the state of the system at regular or irregular intervals.
    • The snapshot just copies the inode pointers, not the actual data.
    • Used pages are not overwritten, so updates are fast.
    • Blocks keep counters for how many snapshots are pointing to that block - when the counter reaches zero, then the block is considered free.


Figure 12.16 - The WAFL file layout.


Figure 12.17 - Snapshots in WAFL

12.10 Summary

Source: https://www.cs.uic.edu/~jbell/CourseNotes/OperatingSystems/12_FileSystemImplementation.html
