Performance Metrics - Phoronix Test Suite

The file server's BIOS settings allow it to boot off a USB key, and we had no trouble running a portable installation of Ubuntu 14.04 (kernel version 3.16) that way.

Database Benchmarks

The first test we look at measures the time taken to perform 12,500 record insertions into an indexed database using SQLite v3.7.3. SQLite performance depends to a large extent on the capabilities of the CPU. Benchmarks from other systems can be viewed on OpenBenchmarking.org.
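
The gist of this workload can be approximated with a short Python script; the table layout, payload size, and per-insert commit below are illustrative assumptions rather than the exact Phoronix test profile.

```python
import sqlite3
import time

# Rough sketch of the workload: timed insertions into an indexed SQLite table.
conn = sqlite3.connect("bench.db")
cur = conn.cursor()
cur.execute("CREATE TABLE IF NOT EXISTS records (id INTEGER, payload TEXT)")
cur.execute("CREATE INDEX IF NOT EXISTS idx_records_id ON records (id)")

start = time.time()
for i in range(12500):
    cur.execute("INSERT INTO records VALUES (?, ?)", (i, "x" * 64))
    conn.commit()  # committing every insertion stresses the CPU and I/O path
print(f"12500 insertions took {time.time() - start:.2f} s")
conn.close()
```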

SQLite v3.7.3 - Transactions Efficiency

The pgbench database benchmark records the transaction rate for database operations using PostgreSQL. Unlike the insertions-only SQLite test, pgbench is based on TPC-B, running five SELECT, UPDATE and INSERT commands per transaction.
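
The default pgbench transaction looks roughly like the sketch below (shown here through psycopg2 against a database previously initialized with `pgbench -i`; the connection string and ID ranges are placeholders).

```python
import random
import psycopg2  # assumes PostgreSQL and a pgbench-initialized database are available

conn = psycopg2.connect("dbname=pgbench")
cur = conn.cursor()

# One TPC-B-style transaction: three UPDATEs, one SELECT and one INSERT.
aid, tid, bid = random.randint(1, 100000), random.randint(1, 10), 1
delta = random.randint(-5000, 5000)

cur.execute("UPDATE pgbench_accounts SET abalance = abalance + %s WHERE aid = %s", (delta, aid))
cur.execute("SELECT abalance FROM pgbench_accounts WHERE aid = %s", (aid,))
cur.execute("UPDATE pgbench_tellers SET tbalance = tbalance + %s WHERE tid = %s", (delta, tid))
cur.execute("UPDATE pgbench_branches SET bbalance = bbalance + %s WHERE bid = %s", (delta, bid))
cur.execute("INSERT INTO pgbench_history (tid, bid, aid, delta, mtime) "
            "VALUES (%s, %s, %s, %s, CURRENT_TIMESTAMP)", (tid, bid, aid, delta))
conn.commit()
conn.close()
```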

pgbench - Transaction Rate

Benchmarks from other systems can be viewed on OpenBenchmarking.org.

Web Server Benchmarks

The NGINX and Apache benchmarks record the number of static web page requests that can be serviced in a given time interval, giving an idea of the load the system could handle if configured as a web server. The test load consists of 500K total requests for NGINX and 1M for Apache, with 100 concurrent requests in each case.
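
A comparable load can be generated by hand with ApacheBench (the `ab` utility); the sketch below assumes a server is already up and serving a static page at the placeholder URL.

```python
import subprocess

# 1M requests with 100 concurrent connections, mirroring the Apache test
# parameters (the NGINX run used 500K requests); URL and page are placeholders.
result = subprocess.run(
    ["ab", "-n", "1000000", "-c", "100", "http://localhost/test.html"],
    capture_output=True, text=True, check=True)

# ab prints a "Requests per second" line in its summary output.
for line in result.stdout.splitlines():
    if line.startswith("Requests per second"):
        print(line)
```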

NGINX Benchmark

Apache Benchmark

Benchmark numbers for both of these are available on OpenBenchmarking.org (NGINX, Apache).

TCP Loopback

The efficiency of the networking stack in the system (not to be confused with the hardware network adapter itself) can be determined by measuring loopback TCP performance. We record the time taken to transfer 10GB of data over the loopback interface.
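
The sketch below shows what such a measurement involves; it pushes 1 GB (rather than the full 10GB) through a loopback socket and reports the throughput, and is only meant to illustrate the idea rather than reproduce the exact test profile.

```python
import socket
import threading
import time

TOTAL = 1 * 1024**3          # 1 GiB here; the article's test transfers 10 GB
CHUNK = bytes(1024 * 1024)   # 1 MiB send buffer

server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)

def sink():
    # Drain everything the sender pushes until the connection closes.
    conn, _ = server.accept()
    while conn.recv(len(CHUNK)):
        pass
    conn.close()

receiver = threading.Thread(target=sink)
receiver.start()

client = socket.socket()
client.connect(server.getsockname())
start = time.time()
sent = 0
while sent < TOTAL:
    client.sendall(CHUNK)
    sent += len(CHUNK)
client.close()
receiver.join()              # wait until the receiver has drained the data
elapsed = time.time() - start
print(f"Transferred {sent / 1e9:.1f} GB in {elapsed:.2f} s "
      f"({sent / elapsed / 1e6:.0f} MB/s)")
```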

Loopback TCP Network Performance

Given that the networking stack is the same for a given OS release across different hardware configurations, the efficiency again varies based solely on the CPU's capabilities. Benchmarks from other systems can be viewed on OpenBenchmarking.org.

CacheBench

CacheBench is a synthetic benchmark that determines the performance of the cache and DRAM components in a system. It consists of three profiles - reads, writes and read/modify/writes. The bandwidth is recorded for each profile, with higher numbers indicating better performance.
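
The three access patterns can be illustrated with a rough NumPy sketch; the buffer size and iteration count are arbitrary, and interpreter overhead means the absolute figures will fall well below what the native benchmark reports.

```python
import time
import numpy as np

# Buffer deliberately larger than any CPU cache so DRAM bandwidth dominates.
N = 64 * 1024 * 1024 // 8          # 64 MiB of float64 values
a = np.ones(N)

def report(label, fn, bytes_per_pass, passes=10):
    start = time.time()
    for _ in range(passes):
        fn()
    elapsed = time.time() - start
    print(f"{label:17s}: {passes * bytes_per_pass / elapsed / 1e6:.0f} MB/s")

report("read",              lambda: a.sum(),               a.nbytes)
report("write",             lambda: a.fill(1.0),           a.nbytes)
report("read/modify/write", lambda: np.add(a, 1.0, out=a), 2 * a.nbytes)
```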

CacheBench - Read

CacheBench - Write

CacheBench - Read/Modify/Write

The numbers depend on the internal cache access speeds as well as the speed of the DRAM in the system. Benchmarks from other systems can be viewed on OpenBenchmarking.org.

Stream

The system memory is tested using the STREAM benchmark. STREAM is a simple, synthetic benchmark designed to measure sustainable memory bandwidth (in MB/s) and a corresponding computation rate for four simple vector kernels (Copy, Scale, Add and Triad).
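
The four kernels are defined as c = a (Copy), b = s*c (Scale), c = a + b (Add) and a = b + s*c (Triad); a NumPy rendition is shown below for illustration only, with an arbitrary array size and a temporary in the Triad step that the real benchmark avoids.

```python
import time
import numpy as np

N = 20_000_000                      # well beyond last-level cache size
a = np.full(N, 1.0)
b = np.full(N, 2.0)
c = np.zeros(N)
s = 3.0

# (name, kernel, number of arrays touched per element)
kernels = [
    ("Copy",  lambda: np.copyto(c, a),          2),   # c = a
    ("Scale", lambda: np.multiply(c, s, out=b), 2),   # b = s * c
    ("Add",   lambda: np.add(a, b, out=c),      3),   # c = a + b
    ("Triad", lambda: np.add(b, s * c, out=a),  3),   # a = b + s * c
]

for name, fn, arrays in kernels:
    start = time.time()
    fn()
    elapsed = time.time() - start
    print(f"{name:5s}: {arrays * N * 8 / elapsed / 1e6:.0f} MB/s")
```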

Stream - Copy

Stream - Scale

Stream - Add

Stream - Triad

7-Zip Compression

The 7-Zip compression benchmark records the MIPS for the compression mode. This is the same benchmark that we use in the evaluation of mini-PCs, except that this is based on the Linux version. Higher MIPS ratings correspond to better performance, and the numbers are primarily based on the performance of the CPU in the system.
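
7-Zip's built-in benchmark can also be run directly; the sketch below assumes the p7zip package provides a `7z` binary and simply surfaces the summary lines that carry the MIPS ratings.

```python
import subprocess

# "7z b" runs 7-Zip's built-in LZMA benchmark and reports ratings in MIPS.
result = subprocess.run(["7z", "b"], capture_output=True, text=True, check=True)

# The averaged compression/decompression MIPS appear in the summary at the end.
print("\n".join(result.stdout.splitlines()[-4:]))
```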

7-Zip Compression MIPS

Benchmark numbers for other systems can be viewed on OpenBenchmarking.org.

Linux Kernel Compilation

The timed Linux kernel compilation benchmark records the time taken to build the Linux 3.18 kernel. It is a good multi-discipline benchmark, stressing multiple aspects of the system including the DRAM, CPU and, to a certain extent, even the storage.
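
A hand-rolled equivalent is straightforward; the sketch below assumes a Linux 3.18 source tree has been unpacked into ./linux-3.18 and uses a default configuration, which may differ from the configuration the Phoronix test profile applies.

```python
import multiprocessing
import subprocess
import time

src = "linux-3.18"                   # assumed location of the unpacked source tree
jobs = str(multiprocessing.cpu_count())

# Generate a default configuration, then time a parallel build across all cores.
subprocess.run(["make", "defconfig"], cwd=src, check=True)

start = time.time()
subprocess.run(["make", "-j", jobs], cwd=src, check=True)
print(f"Kernel build finished in {time.time() - start:.1f} s")
```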

Timed Linux Kernel Compilation

Benchmark numbers for other systems can be viewed on OpenBenchmarking.org.

C-Ray

C-Ray is a simple raytracer designed to evaluate the floating point performance of a CPU. This is a multi-threaded test, and the time taken to complete the routine is recorded.
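
Like the other tests in this article, C-Ray can be pulled in and run through the Phoronix Test Suite itself; a minimal invocation might look like the following (the short test identifier is an assumption and may need the full pts/c-ray name depending on the PTS version).

```python
import subprocess

# Let the Phoronix Test Suite download, build and run the C-Ray profile.
subprocess.run(["phoronix-test-suite", "benchmark", "c-ray"], check=True)
```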

C-Ray Raytracing Time

Benchmark numbers for other systems can be viewed on OpenBenchmarking.org.

Comments

  • nevenstien - Monday, August 10, 2015 - link

    An excellent article on a cost-effective file server/NAS DIY build with a good choice of hardware. After struggling with the dedicated NAS vs. file server question for over a year, I decided on FreeNAS, using jails for whatever service I wanted to run. I was not a FreeNAS fan before the latest versions, which I found very opaque and confusing. My experience in the past with how painful hardware failures can be on storage systems, even at a PC level, convinced me that ZFS is the file system of choice for storage systems. The portability of the file system trumps everything else in my opinion. Whether you install FreeNAS or a ZFS-based Linux, ZFS should be the file system that is used. When a disk fails it's easy, and when the hardware fails it's just a matter of moving the disks to hardware that is not vendor dependent, which means basically any hardware with enough storage ports. The software packages of the commercial NAS vendors are great, but the main priority for me is data integrity, reliability and portability rather than the other services like serving video, web hosting or personal cloud services.
  • tchief - Monday, August 10, 2015 - link

    Synology uses mdadm for their arrays along with ext4 for the filesystem. It's quite simple to move the drives to any hardware that runs Linux and remount and recover the array.
  • ZeDestructor - Monday, August 10, 2015 - link

    If you virtualize, even the "hardware" becomes portable :)
  • xicaque - Monday, November 23, 2015 - link

    Are you pretty good with FreeNAS? I am not a programmer, and there are things that the FreeNAS manual does not explain clearly enough for me. I have a few questions that I'd like to ask offline. Thanks.
  • thewishy - Tuesday, December 1, 2015 - link

    Agreed, after data corruption following a disk failure on my Synology, it's either a FS with checksums or go home.

    Based on those requirements, it's ZFS or BTRFS. ZFS disk expansion isn't ideal, but I can live with it. BTRFS is "getting there" for RAID5/6, but it's not there yet.

    The board chosen for the cost comparison is about 2.5x the price of the CPU (Skylake Pentium) and motherboard (B150) I decided on. Add a PCI-E SATA card and life is good.
    Granted, it doesn't support ECC, but nor do a lot of mid-range COTS NAS units.
  • Navvie - Monday, August 10, 2015 - link

    Any NAS or file server which isn't using ZFS is a non-starter for me. Likewise, a review of such a system which doesn't include some ZFS numbers is of little value.

    I appreciate ZFS is 'new', but people not using it are missing a trick, and AnandTech not covering it is doing a disservice to their readers.

    All IMO of course.
  • tchief - Monday, August 10, 2015 - link

    Until you can expand a vdev without having to double the drive count, ZFS is a non starter for many NAS appliance users.
  • extide - Monday, August 10, 2015 - link

    You can ... you can add drives one at a time if you really want (although I wouldn't suggest doing that...)
  • jb510 - Monday, August 10, 2015 - link

    Or one could use BtrFS. Which could stand for better pool resizing (it doesn't, that's just a joke people).

    Check out RockStor; it's nowhere near as mature as FreeNAS but it's catching up fast. Personally I'd much rather deal with Linux and docker containers than BSD and jails.
  • DanNeely - Monday, August 10, 2015 - link

    If there are major gotchas involved, it's a major regression compared to the other alternatives out there.

    I'm currently running WHS2011 + StableBit DrivePool. I initially set up with 2x 3TB drives in mirrored storage (RAID 1-ish equivalent). About a month ago, my array was almost completely full. Not wanting to spend more than I had to at this point (I intend to have a replacement running by December so I can run in parallel for a few months before WHS is EOL), I slapped an old 1.5TB drive into the server. After adding it to the array and rebalancing, I had an extra 750GB of mirrored storage available; it's not a ton but should be plenty to keep the server going until I stand it down. I don't want to lose that level of flexibility in being able to add unmatched drives to my array at need with whatever I use to replace my current setup.

    If the gotcha is that by adding a single drive I end up with an array that's effectively a 2-drive not-RAID1 not-RAID0ed with a single drive, it'd be a larger regression in a feature I know I've needed than I'm comfortable with, just to gain a bunch of improvements for what amount to what-if scenarios I've never encountered yet.
