Sun Solaris 10 ZFS offers blazing filesystem performance

The new Sun ZFS file system represents a breakthrough in file system performance and management. All HELIOS UB-based products and tools have successfully passed performance and reliability testing on the new Sun ZFS file system. Important tasks like working with many small files and re-indexing the HELIOS Desktop database run many times faster than on the standard UFS file system. In addition, ZFS offers disk, file system and reliability features that are missing in other standard server operating systems. Our engineers are delighted, and we feel that the latest Sun options, such as AMD CPU support, OpenSolaris on multiple platforms, and especially the new ZFS, bring unmatched performance benefits to customers.

Sample ZFS performance

For the following test run (3 million files tested), a directory tree is created, its HELIOS Desktop database is rebuilt, and the tree is then removed. See “Performance test details” below.

Task                     ZFS [min.]   UFS [min.]   ZFS:UFS performance
Create                        3.75        93.75    25x faster
Rebuild HELIOS Desktop       25.30       116.75    4.6x faster
Remove                       16.75       217.30    13x faster

[min.] = minutes

ZFS feature overview

  • 128-bit file system(s); no practical size limit
  • Transactional semantics (similar to journaling; the on-disk state is always consistent)
  • Dynamic storage management
    • Allows growing file system size
    • Allows adding disks
    • Allows striping and mirroring
    • No dedicated file system space reservation required, e.g. two file systems can grow until the ZFS pool is full
  • File system snapshot support
    • Allows backing up the frozen file system without service interruption
    • Allows keeping regular snapshots for easy access to older file versions, e.g. snapshots at 10 a.m., 12 noon, 2 p.m., 4 p.m., etc.
    • Allows replicating a ZFS snapshot (e.g. to a remote disk / server)
    • Allows rolling back the file system to an older snapshot version 
  • Snapshot cloning allows read-write access to a snapshot
  • Disk block checksums will detect data errors
  • Disk block compression features
  • ZFS disk image compatibility between different platforms (SPARC / AMD)
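
As a quick, hedged illustration of the dynamic storage management, compression and checksum features listed above, the following commands show one possible way to exercise them (the pool name “mirrorpool” and the disks “c0t5d0”/“c0t6d0” are placeholders, not part of the tested configuration):

Create a mirrored pool from two disks:
# zpool create mirrorpool mirror c0t5d0 c0t6d0

Create a file system and enable compression (the setting is inherited by child file systems):
# zfs create mirrorpool/data
# zfs set compression=on mirrorpool/data

Verify all disk block checksums of the pool in the background and check the result:
# zpool scrub mirrorpool
# zpool status mirrorpool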

ZFS requirements

  • Sun SPARC / x86 (64-bit is highly recommended!)
  • Solaris 10 (edition 11/06) or newer; earlier Solaris 10 ZFS versions will not work reliably
  • 512 MB RAM (the more memory, the better the performance)
  • Any disk or RAID system will work
  • HELIOS UB products (arch.: “sol4” and “solx86”)
  • Attention: For using ZFS with database applications see TechInfo #106

What is the performance advantage of running HELIOS file services on ZFS?

HELIOS file servers use the AFP 3 file service (EtherShare UB) for Mac files and SMB/CIFS (PCShare UB) for Windows files. A Mac file consists of a data fork and a resource fork, and Windows files can contain metadata and file streams. This means that each file in the file system can comprise several forks. Since ZFS handles many small files far faster than UFS, HELIOS file services benefit directly from this performance boost.

ZFS “HOWTO” HELIOS samples

In our example we will use a complete disk, “c0t3d0”, for the ZFS pool “testpool”, and later add another disk, “c0t4d0”. For this pool, a file system container “myprojects” will be created and a few properties set, which are inherited by the individual file systems “project1” and “project2”.
  1. Create a ZFS pool via:
    Format: zpool create poolname storage*
    # zpool create testpool c0t3d0
    * storage can be a disk name (c0t3d0), a partition name (c0t3d0s5), or a file path (/data/bigfile)
  2. Create two file systems “project1” and “project2” on this pool
    Format: zfs create poolname/filesystemname
    First create a file system hierarchy, which acts as a container for the individual file systems that will be created later, and set required properties:
    # zfs create testpool/myprojects
    # zfs set mountpoint=/export/myprojects testpool/myprojects


    Then create your individual project file system(s):
    # zfs create testpool/myprojects/project1
    # zfs create testpool/myprojects/project2


    Both file systems “project1” and “project2” will automatically be mounted below “/export/myprojects”.

    Note: HELIOS volumes must be defined at least one level below the ZFS file system mount point!

    Example: For the “/export/myprojects/project1” file system, the HELIOS volume could be located at “/export/myprojects/project1/project1_volume”, but it must NOT reside on the “project1” directory itself. This is required due to the way ZFS stores its snapshot information for a ZFS file system.

    Additional storage can be added via:
    Format: zpool add poolname storage

    Add a second disk “c0t4d0”:
    # zpool add testpool c0t4d0
  3. Create a snapshot of the file system “project1”:
    Format: zfs snapshot <poolname>/<filesystemname>@snapshotname
    # zfs snapshot testpool/myprojects/project1@snap
  4. Create a read-write clone file system of the snapshot:
    Format: zfs clone <poolname>/<filesystemname>@snapshotname <poolname>/<clonename>
    # zfs clone testpool/myprojects/project1@snap testpool/myprojects/clone_p1
    The clone of “project1” is available at: “/export/myprojects/clone_p1”.
  5. List all ZFS file systems:
    # zfs list

    NAME                    USED   AVAIL   REFER  MOUNTPOINT
    test/…/project1        29.2M   53.8G   28.7M  /exp/…/project1
    test/…/project2       329.2M   53.8G  328.8M  /exp/…/project2
    test/…/project1@snap    505K       -   28.7M  -
    test/…/clone_p1         806K   53.8G   28.7M  /exp/…/clone_p1

  6. Remove the clone, remove the snapshot:
    Format: zfs destroy <poolname>/<filesystemname>
    OR zfs destroy <poolname>/<filesystemname>@snapshotname
    # zfs destroy testpool/myprojects/clone_p1
    # zfs destroy testpool/myprojects/project1@snap
  7. Back up a ZFS file system snapshot into a disk file:
    Format: zfs send <poolname>/<filesystemname>@<snapshotname> > filepath
    # zfs send testpool/myprojects/project1@snap > /backup/project1_snap.bkp
  8. Remote HTTP-based ZFS administration via:
    “https://hostname:6789/zfs”
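
The feature overview above also lists property inheritance, snapshot rollback, and snapshot replication, which the steps do not show. Here is a minimal, hedged sketch of these operations, reusing the names from the steps above (the remote host “backupserver” and its pool “backuppool” are assumptions for illustration only):

Set a property on the “myprojects” container; “project1” and “project2” inherit it:
# zfs set compression=on testpool/myprojects
# zfs get -r compression testpool/myprojects

Roll the file system back to the snapshot taken in step 3:
# zfs rollback testpool/myprojects/project1@snap

Restore the backup file written in step 7 into a new file system:
# zfs receive testpool/myprojects/project1_restore < /backup/project1_snap.bkp

Replicate the snapshot to a remote server over ssh:
# zfs send testpool/myprojects/project1@snap | ssh backupserver zfs receive backuppool/project1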

Get I/O statistics

With “zpool iostat” you can list I/O statistics for your pools; with option “-v” the statistics are also broken down per disk.
# zpool iostat -v

               CAPACITY      OPERATIONS      BANDWIDTH
POOL          USED  AVAIL    READ  WRITE    READ  WRITE
testpool     21.6G  46.4G      11     25    875K   516K
  c0t3d0     21.6G  46.4G      11     25    875K   516K
With “fsstat <filesystem>” you can list I/O statistics per file system.
# fsstat /export/myprojects/project1
 

  NEW   NAME  NAME  ATTR   ATTR  LOOKUP  RDDIR   READ   READ  WRITE  WRITE
 FILE  REMOV  CHNG   GET    SET     OPS    OPS    OPS  BYTES    OPS  BYTES
19.2M  12.3M    43  123M  14.0M    436M  2.63M  28.0M  12.1G  16.8M  15.6G
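
Both commands also accept an interval in seconds (and an optional count) as trailing arguments, which is handy for watching I/O while a job is running; a minimal sketch, using an arbitrary 5-second interval:
# zpool iostat -v 5
# fsstat /export/myprojects/project1 5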

Tested applications

HELIOS products

  • EtherShare – Highest-Performance Server for Mac Clients
  • PCShare – Highest-Performance Server for Windows Clients
  • WebShare – Highest-Performance Server for Real-Time Remote File Access
  • ImageServer – Server-based Image Processing and ICC Color Transformation
  • PDF HandShake – Create PDF Server • PDF Preflight • PDF Printing • PDF OPI
  • PrintPreview – Local and Remote Proofing on Monitor and Printer

Tools

  • HELIOS File System Test – Professional tool to test file server compatibility
  • HELIOS LanTest – Professional tool to test and measure the performance of AppleShare services
  • HELIOS “htar” – UNIX batch disk backup utility
  • HELIOS “dt” tools – Allow storing and working with client files on a UNIX server, while ensuring that Mac resource information, Windows file stream information, and metadata are left intact
  • HELIOS “mkisofs”
  • HELIOS “winfstest”
  • “dd” performance testing

Performance test details

Tested configuration:

  • Sun Fire X4100 Server (two AMD CPUs, 4 GB RAM)
  • Two 70 GB disks (one for the OS, second for ZFS)
  • Solaris 10 (edition 11/06), all patches installed (as of January 15, 2007) 
Create:
A perl script creates a directory tree with 111,000 folders; no folder contains more than 10 subfolders. Each bottom-level folder contains 30 files of 512 bytes each (a rough shell sketch of such a tree is shown after these test details).
Rebuild:
For this directory tree, a HELIOS Desktop database is created with “rebuild -f”.
Remove:
The directory tree is removed with “rm -r”.
UFS:
UFS (UNIX File System) has been the default Solaris UNIX file system for many years. ZFS (Zettabyte File System) is the new Solaris file system.
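
The perl script itself is not reproduced here. As a rough, hedged equivalent, the following bash sketch builds a tree of 5 levels with 10 subfolders per folder (111,110 folders) and 30 files of 512 bytes in each of the 100,000 bottom folders, i.e. about 3 million files; the target path is only an example:

#!/bin/bash
# Sketch only: approximates the test tree, it is not the original perl script.
mktree() {                     # $1 = parent directory, $2 = remaining depth
    local d i
    if [ "$2" -eq 0 ]; then
        # Bottom folder: create 30 files of 512 bytes each
        for ((i = 1; i <= 30; i++)); do
            dd if=/dev/zero of="$1/file$i" bs=512 count=1 2>/dev/null
        done
        return
    fi
    # Inner folder: create 10 subfolders and recurse into each of them
    for ((d = 1; d <= 10; d++)); do
        mkdir "$1/dir$d"
        mktree "$1/dir$d" $(($2 - 1))
    done
}
mkdir -p /export/myprojects/project1/testtree
mktree /export/myprojects/project1/testtree 5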

Notes:

ZFS does not offer user/group quotas; instead, quotas are set per file system. As it is very simple to set up a file system per user, project, etc. (with a quota and other properties as required), this can serve as a simple quota mechanism, as sketched below.
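
A minimal sketch of such a per-user file system with a quota, reusing the “testpool” pool from the HOWTO above; the user name “joe”, the mount point and the 10 GB quota are only examples:
# zfs create testpool/users
# zfs set mountpoint=/export/users testpool/users
# zfs create testpool/users/joe
# zfs set quota=10g testpool/users/joe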

HELIOS programs will display the “Used” and “Available” disk space on ZFS file systems correctly. On regular file systems, these two values add up to the “Capacity” of the file system. Due to the pooled nature of ZFS file systems, this is not the case, and the “Capacity” value may differ even between ZFS file systems in the same pool.
Copying directly from a snapshot into a HELIOS volume is not supported. First clone the snapshot, which does not require additional disk space; after you have defined a HELIOS volume on the clone, you can mount it via AFP or SMB.
Tests were performed on ZFS file systems with up to 10 million files.
Tests were performed with the HELIOS SQL desktop database.

Additional ZFS links

ZFS documentation

What is ZFS?