Almost every small company out there needs a minimum level of IT services: file serving, document management, e-mail, and so on. Most of those services can now be found in the cloud: Google Apps Professional, Microsoft Office 365, Dropbox, Amazon AWS, and many others can take care of nearly any IT service you can think of. Although trendy, cloud services are not without disadvantages.

One example relevant to our review today is that the latency of accessing data over the internet is much higher than on a LAN. While the network latency of a well-configured LAN is less than half a millisecond, the latency of accessing a cloud service is several tens of milliseconds. And although high-bandwidth internet access has become a lot cheaper over the years, a 100 Mbit/s or faster connection is still not widespread among smaller companies, whereas a 1 Gbit/s LAN is easily attainable. Moreover, while renting a few terabytes in the cloud has become relatively affordable, you again run into speed issues, especially if you have massive amounts of data to analyze -- or if you simply feel that a third party should have no control over your (sensitive) data.
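To put those bandwidth figures in perspective, here is a quick back-of-envelope sketch. The link speeds match the ones mentioned above; the 2 TB dataset size is an illustrative assumption, and protocol overhead is ignored, so real-world transfers will be somewhat slower:

```python
# Rough transfer-time comparison: moving a dataset over a WAN uplink
# vs. over a local gigabit LAN. Ideal throughput assumed.

def transfer_time_hours(data_tb: float, link_mbit_s: float) -> float:
    """Hours needed to move `data_tb` terabytes over a `link_mbit_s` link."""
    data_bits = data_tb * 1e12 * 8          # TB -> bits (decimal units)
    return data_bits / (link_mbit_s * 1e6) / 3600

wan = transfer_time_hours(2.0, 100)     # 2 TB over a 100 Mbit/s uplink
lan = transfer_time_hours(2.0, 1000)    # 2 TB over a 1 Gbit/s LAN

print(f"100 Mbit/s WAN: {wan:.1f} h")   # ~44.4 h
print(f"1 Gbit/s LAN:    {lan:.1f} h")  # ~4.4 h
```

Even before latency enters the picture, raw throughput alone makes a local server roughly an order of magnitude faster for bulk data work in this scenario.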

So if you require high-bandwidth file serving and low-latency database access, or if you need massive amounts of storage capacity, a local server can still be the attractive option. Of course, as a small company you likely don't have nor want a dedicated data center (or even a smaller data room). A decent data room is an expensive investment (it needs a CRAC and other facilities, for example) and the energy cost can be very high. Given the costs, some might be tempted to use what others would consider an old-fashioned twentieth-century option: a server somewhere on a shelf or under a desk. But there are some 21st-century requirements to meet, so a noisy, power-hungry tower server is out of the question. Density isn't generally an issue, but the server should be able to cool its components in an office environment without being louder than the ambient office noise -- local whirlwinds are generally frowned upon.

A quiet, low-energy server underneath your desk can still make sense: you are in control of your data, the capex is limited, and with a little help from a good service provider, it is workable even for those who don't have an IT department. Sometimes, old and tried methods beat the newest hype. Advatronix felt that it could do better than the current tower server offerings and designed a proprietary chassis that resembles a cube-shaped desktop.

The reason behind this rather bulky chassis with 18 (!) drive bays is that most companies that need an in-house server have high storage demands: they need low latency, high capacity, or both. Thus there must be enough room for plenty of magnetic disks and some space for an SSD caching tier. And of course, a large chassis also allows large fans and thus relatively quiet operation. In a nutshell, Advatronix feels the Cirrus 1200 sets itself apart from the competition for the following reasons:

  • Quiet (enough) operation
  • Low power, able to keep cool in an office environment (no need for a CRAC)
  • Magnetic filter to cope with the fact that this server will sit in a dusty office instead of a clean data center
  • A good mix of components with a focus on storage performance

As the IT services of small companies are typically bottlenecked by storage and not by CPU performance, the Cirrus 1200 uses a low-power quad-core Xeon E3 with up to 32GB of RAM. It's certainly nothing earth-shattering, but the combination of all the points mentioned above might make the Cirrus 1200 very attractive for a certain market niche.

Tech Specs
39 Comments

  • thomas-hrb - Friday, June 6, 2014 - link

    If you're looking at storage servers under the desk, why not consider something like the DELL VRTX? That at least has a significant advantage in the scalability department. You can start small and re-dimension to many different use cases as you grow.
  • JohanAnandtech - Friday, June 6, 2014 - link

    Good suggestion, although the DELL VRTX is a bit higher in the (pricing) food chain than the servers I described in this article.
  • DanNeely - Friday, June 6, 2014 - link

    With room for 4 blades in the enclosure, the VRTX is also significantly higher in terms of overall capability. Were you unable to find a server from someone else that was a close match in specifications to the Cirrus 1200? Even if it cost significantly more, I think at least one of the comparison systems should've been picked for equivalent capability instead of equivalent pricing.
  • jjeff1 - Friday, June 6, 2014 - link

    I'm not sure who would want this server. If you have a large SQL database, you definitely need more memory and better reliability. Same thing if you have a large amount of business data.

    Dell, HP or IBM could all provide a better box with much better support options. This HP server supports 18 disk slots, 2 12 core CPUs, and 768GB memory.

    http://www8.hp.com/us/en/products/proliant-servers...
    It'll cost more, no doubt. But if you have a business that's generating TBs of data, you can afford it.
  • Jeff7181 - Sunday, June 8, 2014 - link

    If you have a large SQL database, or any SQL database, you wouldn't run it on this box. This is a storage server, not a compute server.
  • Gonemad - Friday, June 6, 2014 - link

    I've seen rack enclosures on wheels, with dark glass and a lockable door, but that was just an empty "wardrobe" where you would put your servers. It was small enough to be pushed around, but with enough real estate to hide a keyboard and monitor in there, like a hypervisor KVM solution. On the plus side, if you ever decided to upgrade, you could just plop your gear onto a real rack unit. It felt less cumbersome than that huge metal box you showed there.

    Then again, that approach requires a server that conforms to the rack form factor.
  • Kevin G - Friday, June 6, 2014 - link

    Actually, I have such a Gator case. It is sold as a portable case for AV hardware but conforms to standard 19" rack mount widths and hole mounts. There is one main gotcha with my unit: it doesn't provide as much depth as a full rack. I have to use shorter server cases, and they tend to be a bit taller. That works out, as the cooling systems of taller rack cases tend to be quieter, which is an advantage when bringing them to other locations. More of a personal preference thing, but I don't use sliding rails in a portable case, as I don't see that as wise for a unit that's going to be frequently moved around and traveling.
  • martixy - Friday, June 6, 2014 - link

    Someone explain something to me please.

    So this is specifically low-power -- 500W on spec. Let's say then that it's non-low-power (e.g. twice that, 1kW). I'm gonna assume we're treading on CRAC territory at that point. So why exactly? Why would a high-powered gaming rig be able to easily handle that load, even under air cooling, but a server with the same power draw require special cooling equipment with fancy acronyms like CRAC?
  • alaricljs - Friday, June 6, 2014 - link

    A gaming rig isn't going to be pushing that much wattage 24x7. A server is considered a constant load, and proper AC calculations even go so far as to consider the number of people expected to be in a room consistently, so a high-wattage computer is definitely part of the equation.
  • DanNeely - Friday, June 6, 2014 - link

    I suspect it's mostly marketing BS. One box, even a high-power one at a constant 100% load, doesn't need special cooling. A CRAC is needed when you've got a data center packed full of servers, because they collectively put out enough heat to overwhelm general-purpose AC units. (With the rise of virtualization, many older data centers' capacity has become thermally limited instead of being limited by the number of racks there's room for.)

    At the margin, they may be saying it was designed with enough cooling to keep temps reasonable in air on the warm side of room temperature, instead of only when it's being blasted with chilled air. OTOH, a number of companies that have experimented with running their data centers 10 or 20F hotter than traditional have found that the cost savings from cooling didn't have any major impact on longevity, so...
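The wattage debate in the thread above can be sanity-checked with a quick conversion from sustained electrical draw to heat load (essentially all power drawn ends up as heat; 1 W sustained is about 3.412 BTU/h). The 500 W figure matches the spec discussed above; the 40-server room is an illustrative assumption:

```python
# Back-of-envelope heat load: sustained watts -> BTU/h.
# A typical office room AC handles roughly 5,000-12,000 BTU/h, so one
# server is fine while a room full of them needs dedicated cooling.

BTU_H_PER_WATT = 3.412  # 1 W sustained ~= 3.412 BTU/h

def heat_load_btu_h(watts: float) -> float:
    """Continuous heat output in BTU/h for a given sustained draw."""
    return watts * BTU_H_PER_WATT

one_server = heat_load_btu_h(500)        # a single ~500 W box
server_room = heat_load_btu_h(40 * 500)  # hypothetical 40-server room

print(f"One 500 W server: {one_server:,.0f} BTU/h")   # ~1,706 BTU/h
print(f"40 such servers:  {server_room:,.0f} BTU/h")  # ~68,240 BTU/h
```

This lines up with the comments: a lone 500 W server adds about as much heat as a couple of extra occupants, while dozens of them collectively overwhelm ordinary office AC, which is where a CRAC comes in.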
