Benchmark Configuration

When we look at the market of competing tower servers, almost all of them use some form of LSI RAID chip. In the low-end and midrange server market, the slightly older (2012) but mature dual-core LSI 2208 chip is by far the most popular solution. Most tower servers also ship with a Xeon E5, but they typically offer fewer disk bays than the Cirrus 1200. In a nutshell, the typical competing tower has a faster CPU, eight 3.5" disk bays, and a dual-core LSI RAID chip, so that is what we wanted our comparison configuration to reflect.

We converted the components of our Supermicro 2U 6027R-73DARF into a tower server and inserted an LSI MegaRAID 9265-8i. The 9265-8i is not the latest LSI controller, but it is built around the LSI 2208 RAID-on-Chip (RoC): a dual-core PowerPC at 800 MHz with a 1333 MHz DDR3 interface.

Supermicro 6027R-73DARF

CPU: One Intel Xeon E5-2680 v2 (2.8GHz, 10 cores, 25MB L3, 115W TDP)
RAM: 64GB (8x 8GB) DDR3-1600 Samsung M393B1K70DH0-CK0
Internal disks: 2x Seagate NAS HDD ST4000VN000 4TB (RAID-1)
                4x Seagate NAS HDD ST4000VN000 4TB (RAID-10)
                2x Intel SSD 710 200GB (RAID-1)
Motherboard: Supermicro X9DRD-7LN4F
Chipset: Intel C602J
BIOS version: R 3.0a (December 6, 2013)
PSU: Supermicro 740W PWS-741P-1R (80 Plus Platinum)

We enabled CacheCade, LSI's SSD caching feature, and FastPath, LSI's optimized I/O path for SSD volumes.

The disk caches are of course disabled. The two disks in RAID-1 house the OS and the database logs. We use only four drives for the data of our SQL Server database, so that we can measure whether the Cirrus 1200 (with 12 + 6 disk bays) has an advantage in our workload over a typical tower server that comes with eight disk bays.
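
To get a feel for why the extra spindles should matter, consider a quick back-of-the-envelope estimate. The sketch below is ours, not a vendor figure, and assumes roughly 65 random IOPS per 5900 RPM NAS-class drive: in RAID-10, random reads can be serviced by any disk in the array, while every logical write costs two physical writes because of mirroring.

```python
# Back-of-the-envelope RAID-10 random IOPS estimate.
# Assumption: ~65 random IOPS per 5900 RPM NAS-class drive;
# real numbers vary with queue depth and access pattern.
DISK_IOPS = 65

def raid10_iops(n_disks: int, read_fraction: float) -> float:
    """Estimate the random IOPS budget of an n-disk RAID-10 array.

    Reads can be served by any disk; each logical write costs two
    physical writes (one per mirror side).
    """
    physical_budget = n_disks * DISK_IOPS
    # Physical cost of one logical I/O: reads cost 1, writes cost 2.
    cost_per_logical_io = read_fraction + 2 * (1 - read_fraction)
    return physical_budget / cost_per_logical_io

# 70% reads is our assumption for an OLTP-style database workload.
print(raid10_iops(4, 0.70))   # ~200 IOPS: the 4-disk tower setup
print(raid10_iops(8, 0.70))   # ~400 IOPS: the 8-disk Cirrus setup
```

Doubling the spindle count roughly doubles the random I/O budget, and that difference is exactly what we want our benchmarks to expose.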

Advatronix Cirrus 1200

We use an eight-drive setup in RAID-10 for our data; this way we can also have a hot spare disk.

CPU: One Intel Xeon E3-1260L (2.4 GHz, 4 cores, 8MB L3, 45W TDP)
RAM: 32GB (4x 8GB) DDR3-1600 Samsung M393B1K70DH0-CK0
Storage system: Adaptec ASR71605Q with "MaxCache" and BBU enabled
                2x Seagate NAS HDD ST4000VN000 4TB (RAID-1)
                8x Seagate NAS HDD ST4000VN000 4TB (RAID-10)
                2x Intel SSD 710 200GB (RAID-1)
Motherboard: Supermicro X9SCL
Chipset: Intel C204
BIOS version: v2.10
PSU: One Athena Power 500W AP-RRMUD6508 (80 Plus)

Advatronix uses the Adaptec ASR71605Q, which is based on a dual-core MIPS 1004K at 1 GHz. The two disks in RAID-1 house the OS and the database logs. The heavy-duty SQL Server 2012 database is located on the eight-disk RAID-10 set. The two Intel SSD 710s are used as "MaxCache", Adaptec's nomenclature for an SSD cache. We could have given the Cirrus 1200 more SSDs (up to six), but that would increase the cost significantly, and there is no reason to expect more SSDs to help in our specific benchmarks. We tested with up to 1024 connections, each requesting a transaction every 100 ms. That is a very high load for such a small business server, but it is still unlikely to overwhelm our SSD cache.
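
To put those benchmark parameters in perspective, here is a small sketch of the offered load. The transaction rate follows directly from the numbers above; the per-transaction I/O count is purely our assumption for illustration.

```python
# Offered load implied by the benchmark parameters described above.
connections = 1024
seconds_between_transactions = 0.100   # each connection fires every 100 ms

offered_tps = connections / seconds_between_transactions
print(f"Peak offered load: {offered_tps:,.0f} transactions/s")   # 10,240

# Assumption for illustration only: each transaction touches ~2 random
# 8KB pages that miss in RAM. The ~400 IOPS of the eight-disk RAID-10
# (see the earlier sketch) cannot absorb that alone, so the benchmark
# lives or dies by the MaxCache hit rate.
ios_per_txn = 2
hdd_iops_budget = 400
required_hit_rate = 1 - hdd_iops_budget / (offered_tps * ios_per_txn)
print(f"Required SSD cache hit rate: {required_hit_rate:.1%}")   # ~98.0%
```

In other words, at full load virtually every hot page has to be served from RAM or the SSD cache, and the spinning disks only need to absorb the cold misses; that access pattern is exactly what an SSD caching layer like MaxCache is designed for.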

Comments

  • thomas-hrb - Friday, June 6, 2014 - link

    If you're looking at storage servers under the desk, why not consider something like the DELL VRTX? That at least has a significant advantage in the scalability department. You can start small and re-dimension for many different use cases as you grow.
  • JohanAnandtech - Friday, June 6, 2014 - link

    Good suggestion, although the DELL VRTX is a bit higher in the (pricing) food chain than the servers I described in this article.
  • DanNeely - Friday, June 6, 2014 - link

    With room for 4 blades in the enclosure, the VRTX is also significantly higher in terms of overall capability. Were you unable to find a server from someone else that was a close match in specifications to the Cirrus 1200? Even if it cost significantly more, I think at least one of the comparison systems should've been picked for equivalent capability instead of equivalent pricing.
  • jjeff1 - Friday, June 6, 2014 - link

    I'm not sure who would want this server. If you have a large SQL database, you definitely need more memory and better reliability. Same thing if you have a large amount of business data.

    Dell, HP or IBM could all provide a better box with much better support options. This HP server supports 18 disk slots, two 12-core CPUs, and 768GB memory.

    http://www8.hp.com/us/en/products/proliant-servers...
    It'll cost more, no doubt. But if you have a business that's generating TBs of data, you can afford it.
  • Jeff7181 - Sunday, June 8, 2014 - link

    If you have a large SQL database, or any SQL database, you wouldn't run it on this box. This is a storage server, not a compute server.
  • Gonemad - Friday, June 6, 2014 - link

    I've seen U server racks on wheels, with a dark glass and keys locking it, but that was just an empty "wardrobe" where you would put your servers. It was small enough to be pushed around, but with enough real estate to hide a keyboard and monitor in there, like a hypervisor KVM solution. On the plus side, if you ever decided to upgrade, just plop your gear on a real rack unit. It felt less cumbersome than that huge metal box you showed there.

    Then again, a server that conforms to a rack shape is needed.
  • Kevin G - Friday, June 6, 2014 - link

    Actually I have such a Gator case. It is sold as a portable case for AV hardware but conforms to standard 19" rack mount widths and hole mounts. There is one main gotcha with my unit: it doesn't provide as much depth as a full rack, so I have to use shorter server cases, which tend to be a bit taller. That works out, as the cooling systems of taller rack cases tend to be quieter, an advantage when bringing them to other locations. More of a personal preference thing, but I don't use sliding rails in a portable case, as I don't see that as wise for a unit that's going to be frequently moved around and traveling.
  • martixy - Friday, June 6, 2014 - link

    Someone explain something to me please.

    So this is specifically low-power - 500W on spec. Let's say then that it's a non-low-power one (e.g. twice that - 1kW). I'm gonna assume we're treading on CRAC territory at that point. So why exactly? Why would a high-powered gaming rig be able to easily handle that load, even under air cooling, but a server with the same power draw require special cooling equipment with fancy acronyms like CRAC?
  • alaricljs - Friday, June 6, 2014 - link

    A gaming rig isn't going to be pushing that much wattage 24x7. A server is considered a constant load and proper AC calculations even go so far as to consider # of people expected in a room consistently, so a high wattage computer is definitely part of the equation.
  • DanNeely - Friday, June 6, 2014 - link

    I suspect it's mostly marketing BS. One box, even a high-power one at a constant 100% load, doesn't need special cooling. A CRAC is needed when you've got a data center packed full of servers, because they collectively put out enough heat to overwhelm general purpose AC units. (With the rise of virtualization, many older data centers' capacity has become limited by thermals instead of by the number of racks there's room for.)

    At the margin they may be saying it was designed with enough cooling to keep temps reasonable in air on the warm side of room temperature, instead of only when it's being blasted with chilled air. OTOH, a number of companies that have experimented with running their data centers 10 or 20°F hotter than traditional have found that the cooling cost savings came without any major impact on hardware longevity, so...
