Linux File System Stack – 2

Linux Content Index

File System Architecture – Part I
File System Architecture – Part II
File System Write
Buffer Cache
Storage Cache

A Linux file system is expected to handle two kinds of data structures, dentries and inodes; they are the defining characteristics of a file system running inside the Linux kernel. For example, the path “/bm/celtic” contains three elements, “/”, “bm” and “celtic”, and each of them will have its own dentry and inode. Among a lot of other information, a dentry encapsulates the name, a pointer to the parent dentry and a pointer to the corresponding inode.

What happens when we type “cd /bm/celtic”?

Setting the current working directory involves pointing the process’s “task_struct” to the dentry associated with “celtic”; locating that particular entry involves the following steps.

  1. The “/” at the beginning of the path indicates the root.
  2. The root dentry is furnished during the file system mount, so VFS has a point from which it can start its search for a file or a directory.
  3. A file system module is expected to be able to locate a child when given its parent dentry, so VFS will request the dentry for “bm” by providing its parent dentry (root).
  4. It is up to the file system module to find the child entry using the parent dentry (*the parent dentry also has a pointer to its own inode, which should hold the key*).

The above sequence of steps is then repeated, but this time “bm” is the parent and “celtic” the child; eventually VFS will have a list of the dentries associated with the path.

Linux is geared to run on sluggish hard disks backed by relatively large DRAM memories. This means there can be an ocean of dentries and inodes cached in RAM; whenever a cache miss is encountered, VFS resolves it using the above steps by calling the file system module’s “lookup” function.

Fundamentally, a file system module is only expected to work on top of inodes; Linux will request operations like creation and deletion of inodes, look-up of inodes, linking of inodes and allocation of storage blocks for inodes.

Parsing of paths and cache management are abstracted in the kernel as part of VFS, and buffer management as part of the block driver framework.

Writing a new file:

  1. User space communicates the buffer to be written using the “write” system call.
  2. VFS then allocates a kernel page and associates it with the write offset in the “address_space” of that inode; each inode has its own address_space indexed by file offset.
  3. Every write needs to eventually end up in the storage device, so the new page in the RAM cache has to be mapped to a block in the storage device; for this purpose VFS calls the “get_block” interface of the file system module, which establishes this mapping.
  4. A copy_from_user routine moves the contents into that kernel page and marks it as dirty.
  5. Finally, control returns to the application.

Overwriting the contents of a file differs in two aspects: the offset being written to might already have a page allocated in the cache, and that page should already be mapped to a block in storage, so it is just a matter of a memcpy from the user space buffer to the kernel page. All the dirty pages are written out when the kernel flusher threads kick in, and at that point the already established storage mapping helps the kernel identify which storage block each page must go to.

Reading a file follows similar steps, except that the contents need to be read from the device into the page and then into the user space buffer. If an up-to-date page is found in the page cache, the device read is avoided and the operation is faster.


About Mahesh Sreekandath

Embedded Systems, edgy politics & deafening music

6 responses to “Linux File System Stack – 2”

  1. Xin Li says :

    Hi,

    Can you briefly explain what the term “kernel page” refers to? Is it a 4 kB physical page frame in memory that can only be used by the kernel?

    As we know, the OS needs to allocate pages for the page cache when data on the disk is loaded into it, but who does the page, which holds the data in the page cache, belong to?

    • Mahesh Sreekandath says :

      As you mentioned, kernel pages are simply the 4K blocks used for managing dynamic memory allocations within the kernel.

      In the context of a file system module, the RAM file data cache and the file system metadata cache are usually managed in terms of 4K blocks. A data cache will usually be associated with a VFS inode, but how it is managed is the responsibility of the file system module.

      So operations like the allocation, flushing and freeing of page memory are eventually the responsibility of the file system, but the kernel provides useful helper APIs for such operations. Hope this clarifies your question.

      I have tried to explain more about the file system caches here: http://tekrants.me/2015/04/24/linux-storage-cache/

