As GRUB Legacy was no longer evolving, a move to another solution became necessary. GRUB2 is a complete rewrite and provides the following new features:
- ability to boot from various file systems (xfs, ext4, ntfs, hfs+, raid, etc),
- on-the-fly decompression of gzip files,
- management of all disk geometries,
- support for GPT (GUID Partition Tables) and MBR (Master Boot Record),
- portability across different platforms (BIOS, EFI, Coreboot, etc),
- ability to load modules at execution time.
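As an illustration of that last point, modules can be loaded interactively from the GRUB2 prompt, and on a running system the configuration is regenerated rather than edited by hand. A hedged sketch (the partition name is hypothetical):

```shell
# At the grub> (or grub rescue>) prompt -- not a Linux shell:
#   insmod part_gpt        # load GPT partition table support
#   insmod xfs             # load the XFS file system module
#   ls (hd0,gpt2)/         # list files on a hypothetical boot partition

# From a running RHEL 7 system, regenerate the configuration file:
grub2-mkconfig -o /boot/grub2/grub.cfg
```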
The new default file system for RHEL 7 is XFS. Its main advantage is that it allows the creation of file systems up to 500TB (50TB in RHEL 6), compared to the 50TB limit of Ext4 (16TB in RHEL 6). This is in line with the requirements of some big Red Hat customers.
According to Ric Wheeler (the lead of Red Hat's storage and file system team), XFS offers these additional attractive features:
- best performance for most workloads (especially with high speed storage and larger number of cores),
- tends to be less CPU intensive (better optimizations around lock contention, etc),
- the most robust at large scale: it has been run at hundred-plus-TB sizes for many years (and today's storage is getting much bigger; 16TB is about half a shelf of drives),
- the most common file system in multiple key upstream communities: it is the most common base for Ceph, Gluster, and OpenStack more broadly,
- pioneered most of the techniques now in Ext4 for performance (like delayed allocation).
Also, unlike most other file systems, XFS doesn't execute any file system check at boot time. In case of trouble, you have to rely on the xfs_repair command.
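For example, a damaged XFS file system is checked and repaired offline along these lines (the device and mount point are hypothetical):

```shell
# xfs_repair requires the file system to be unmounted.
umount /data
# Dry run first: -n reports problems without modifying anything.
xfs_repair -n /dev/vg0/data
# If problems are reported, run the actual repair, then remount.
xfs_repair /dev/vg0/data
mount /data
```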
In addition, XFS runs a CRC checksum on all metadata blocks, which Ext4 does not do yet.
However, XFS has one serious drawback: it doesn't allow file systems to be shrunk, even when unmounted (shrinking support has been considered but is not available). This is a good reason to stay with Ext4 when big file systems are not needed. In addition, Ext4 tends to be faster with some specific workloads, such as single-threaded, metadata-intensive workloads.
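To make the asymmetry concrete, here is a sketch with hypothetical devices and mount points: XFS can only be grown, while Ext4 can also be shrunk offline.

```shell
# XFS: grow the file system to fill its (previously extended) underlying device.
# This works while mounted; there is no shrink counterpart.
xfs_growfs /data

# Ext4: shrinking is possible, but only offline.
umount /data
e2fsck -f /dev/vg0/data        # resize2fs requires a clean check first
resize2fs /dev/vg0/data 100G   # shrink the file system to 100GB
mount /data
```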
BTRFS is a technology preview. Although BTRFS (B-Tree File System) is not yet production-ready, its capabilities (copy-on-write, snapshots, online file system shrinking, etc) are impressive.
You can get a preview through this SUSE BTRFS presentation.
However, about BTRFS and SELinux, here is what Dan Walsh from Red Hat wrote in one of his articles (Bringing new security features to Docker): “SELinux currently will only work with the device mapper back end. SELinux does not work with BTRFS. BTRFS does not support context mount labeling yet, which prevents SELinux from relabeling all content when the container starts via the mount command. Kernel engineers are working on a fix for this and potentially Overlayfs if it gets merged into the container.”
The NFS 4.1 version is now supported, bringing better performance on increasingly congested networks.
Better Parallel NFS client support has been added to improve integration with commercially available pNFS servers.
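Requesting the new protocol version is just a mount option; the server and export names below are hypothetical:

```shell
# Explicitly request NFS 4.1 (pNFS-capable) at mount time.
mount -t nfs -o vers=4.1 nfsserver:/export /mnt/nfs

# Equivalent /etc/fstab entry:
# nfsserver:/export  /mnt/nfs  nfs  vers=4.1  0 0
```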
Additional information is available on the Red Hat Enterprise Linux Blog. Also, a presentation about NFS evolutions was given during the Red Hat annual Summit (2014).
GFS2 journaling code has been improved to reduce the number of journal update operations, consolidate IO operations and increase overall GFS2 file system performance.
In addition, GFS2 file system creation tools now utilize device topology knowledge, deal with RAID stripe alignment, and carefully orchestrate the placement of performance-critical file system elements, such as journals and resource groups. This improvement increases the scalability and performance of GFS2 not only at file system creation time but also during use.
Additional information is available on the Red Hat Enterprise Linux Blog.
The SCSI target daemon, tgtd, has been replaced by the LIO kernel target subsystem, the standard open source SCSI target for block storage. The latter is now used for all of the following storage fabrics: FCoE, iSCSI, iSER, and SRP.
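With LIO, targets are configured through the targetcli shell instead of tgtd's tools. A minimal iSCSI sketch, assuming a hypothetical backing device /dev/sdb and a hypothetical IQN:

```shell
# Create a block back-store on top of /dev/sdb.
targetcli /backstores/block create name=disk0 dev=/dev/sdb

# Create an iSCSI target (a default target portal group, tpg1, comes with it).
targetcli /iscsi create iqn.2014-06.com.example:target1

# Export the back-store as a LUN of that target.
targetcli /iscsi/iqn.2014-06.com.example:target1/tpg1/luns create /backstores/block/disk0

# Persist the configuration across reboots.
targetcli saveconfig
```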
FS-Cache is a fully supported feature in Red Hat Enterprise Linux 7. It provides a persistent local cache that can be used by file systems to take data retrieved over the network and cache it on a local disk. This helps minimize network traffic for users accessing data from a file system mounted over the network (for example, NFS). FS-Cache can significantly reduce the network and server loading by satisfying read requests locally without consuming network bandwidth.
Source: Red Hat Enterprise Linux Blog.
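Enabling FS-Cache for an NFS mount takes two steps on RHEL 7: run the cachefilesd daemon, then mount with the fsc option (server and paths hypothetical):

```shell
# Install and start the cache back-end daemon.
yum install -y cachefilesd
systemctl enable cachefilesd
systemctl start cachefilesd

# The fsc option tells the NFS client to cache retrieved data on local disk.
mount -t nfs -o fsc nfsserver:/export /mnt/nfs
```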
The IO scheduler policy has changed with Red Hat Enterprise Linux 7.
The default IO Scheduler is now CFQ for SATA drives and Deadline for everything else.
Indeed, for storage faster than SATA drives, Deadline outperforms CFQ, giving a performance increase without any special tuning.
Source: RHEL 7 Performance Tuning Guide.
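The active scheduler can be inspected and changed at run time through sysfs; sda is a hypothetical device name:

```shell
# The scheduler shown in brackets is the active one,
# e.g. "noop deadline [cfq]" for a SATA disk.
cat /sys/block/sda/queue/scheduler

# Switch this device to deadline without rebooting (not persistent).
echo deadline > /sys/block/sda/queue/scheduler

# For a persistent, system-wide default, add elevator=deadline
# to the kernel boot command line instead.
```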
Red Hat Presentation
During the Red Hat annual Summit (2014), a presentation about RHEL 7 File Systems was given.