ZFS Windows Server 2012



Microsoft Windows Server 2012 R2 with the Hyper-V role, failover clustering, and Multipath I/O features: one NIC is used for the cluster heartbeat, one for the management/VM network, and two NICs carry an iSCSI connection to the Oracle ZFS Storage Appliance through an Oracle ES2-64 10GbE switch.

ZFSBuild 2012

  • Jan 16, 2014 – Overview. Microsoft’s Windows Server has had the ability to host NFS shares since Server 2003. There are a number of reasons why you may need it, such as backing up SharePoint or sharing files with Unix/Linux computers, and for the most part it works fairly well.
  • ZFS and Windows Server – Hey all, I have been trying to build a home server for about a month now and have been going in circles. First I spent roughly $1000 at Best Buy and Amazon on parts, basically building a second computer, but it was really overkill for what I needed and stupidly expensive.

It’s been two years since we built our last ZFS-based server, and we decided it was about time to build an updated system. The goal is to build something that exceeds the functionality of the previous system while costing approximately the same amount. The original ZFSBuild 2010 system cost US $6,765 to build, and for what we got back then, it was a heck of a system. The new ZFSBuild 2012 system is going to match the price point of the previous design, yet offer measurably better performance.

The new ZFSBuild 2012 system comprises the following:

SuperMicro SC846BE16-R920 chassis – 24 bays, single expander, 6Gbit SAS capable. Very similar to the ZFSBuild 2010 server, with a little more power and a faster SAS interconnect.

SuperMicro X9SRI-3F-B Motherboard – Single-socket Xeon E5 compatible motherboard. This board supports 256GB of RAM (over 10x the RAM we could support in the old system) and significantly faster, more powerful CPUs.

Intel Xeon E5-1620 – 3.6GHz latest-generation Intel Xeon CPU. More horsepower for better compression and faster workload processing. ZFSBuild 2010 was short on CPU, and we found it lacking in later NFS tests. We won’t make that mistake again.

20x Toshiba MK1001TRKB 1TB SAS 6Gbit HDDs – 1TB SAS drives. The 1TB SATA drives that we used in the previous build were OK, but SAS drives report much better information about their health and performance, and for an enterprise deployment they are absolutely necessary. These drives are only $5 more per drive than what we paid for the drives in ZFSBuild 2010 (about $100 across 20 drives). Obviously, if you’d like to save more money, SATA drives are an option, but we strongly recommend using SAS drives whenever possible.

LSI 9211-8i SAS controller – Moving the SAS duties to a Nexenta HSL certified SAS controller. Newer chipset, better performance, and replaceability in case of failure.


Intel SSDs all around – We went with a mix of 2x Intel 313 (ZIL), 2x 520 (L2ARC), and 2x 330 (boot – internal cage) SSDs for this build. We have less ZIL space than the previous build (20GB vs. 32GB), but rough math says we shouldn’t ever need more than 10-12GB of ZIL; a sketch of that math is shown below. We will have more L2ARC (480GB vs. 320GB), and the boot drives are roughly the same.
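
For reference, here is a minimal sketch of that rough math in Python, assuming the common rule of thumb that a log device only needs to absorb the synchronous writes that accumulate between transaction-group commits. The 5-second commit interval and the 10GbE line-rate figure are assumptions for illustration, not measurements from this build.

```python
# Back-of-the-envelope ZIL (SLOG) sizing -- a sketch of the "rough math",
# assuming the log only has to hold the synchronous writes that arrive
# between transaction-group commits. Interval and link speed are assumptions.

TXG_INTERVAL_S = 5      # assumed ZFS transaction-group commit interval
TXGS_IN_FLIGHT = 2      # headroom for the group still being flushed out

LINK_GBIT_PER_S = 10                                 # worst case: saturated 10GbE
max_write_bytes_per_s = LINK_GBIT_PER_S * 1e9 / 8    # ~1.25 GB/s

zil_bytes = max_write_bytes_per_s * TXG_INTERVAL_S * TXGS_IN_FLIGHT
print(f"ZIL space needed: ~{zil_bytes / 1e9:.1f} GB")   # ~12.5 GB
```

Even assuming a fully saturated 10GbE link, that lands in roughly the same 10-12GB ballpark, comfortably inside the 20GB Intel 313s.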

64GB RAM – Generic Kingston ValueRAM. The original ZFSBuild was based on 12GB of memory, which two years ago seemed like a lot of RAM for a storage server. Today we’re going with 64GB right off the bat using 8GB DIMMs. The motherboard has the capacity to go to 256GB with 32GB DIMMs. With 64GB of RAM, we’re going to be able to cache a _lot_ of data. My suggestion is not to go super-overboard on RAM to start with, as you can run into issues as noted here: http://www.zfsbuild.com/2012/03/05/when-is-enough-memory-too-much-part-2/
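
One sizing interaction worth keeping in mind when balancing RAM against a large L2ARC: every block held on the L2ARC devices needs a small header kept in ARC (i.e., in RAM). The sketch below estimates that overhead, assuming an often-cited figure of roughly 180 bytes per cached block for ZFS implementations of this era; the exact cost varies by version and by the recordsize of the cached data.

```python
# Estimate the ARC (RAM) consumed by L2ARC headers for this build's ~480GB
# of L2ARC. The 180-bytes-per-block figure is an assumption; the real cost
# depends on the ZFS implementation and the recordsize of the cached data.

L2ARC_BYTES = 480e9      # 2x Intel 520 SSDs
HEADER_BYTES = 180       # assumed in-RAM header per L2ARC block

for recordsize_kb in (8, 32, 128):
    blocks = L2ARC_BYTES / (recordsize_kb * 1024)
    overhead_gb = blocks * HEADER_BYTES / 1e9
    print(f"{recordsize_kb:>3}KB records -> ~{overhead_gb:.1f} GB of RAM for L2ARC headers")
```

With small record sizes (for example, zvols backing VMs), a fully populated 480GB L2ARC can claim a noticeable slice of the 64GB, which is one more reason to balance RAM and L2ARC rather than maxing out either one.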

For the same price as our ZFSBuild 2010 project, the ZFSBuild 2012 project will include more CPU, much more RAM, more cache, better drives, and a better chassis. It’s amazing what a difference two years makes when building this stuff.


Expect us to evaluate Nexenta Enterprise and OpenIndiana, and to revisit FreeNAS’s ZFS implementation. We probably won’t go back over the Promise units, as we’ve already discussed them, they likely haven’t changed, and we don’t have any lying around idle anymore.

We are planning to re-run the same battery of tests that we used for the original ZFSBuild 2010 benchmarks. We still have the same test blade server available to reproduce the testing environment. We also plan to run additional tests using various working set sizes; a sketch of that matrix follows. InfiniBand will be benchmarked in addition to standard gigabit Ethernet this round.
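
Here is a hypothetical sketch of the kind of working-set matrix we have in mind; the sizes, transports, and access patterns shown are illustrative assumptions, not the finalized benchmark scripts.

```python
# Hypothetical benchmark matrix: working-set sizes chosen to fit inside ARC
# (64GB RAM), fit inside L2ARC (480GB), and spill past both. Transports and
# access patterns are placeholders, not the final test plan.
from itertools import product

transports = ("gigabit Ethernet", "InfiniBand")
working_sets_gb = (16, 64, 256, 1024)
patterns = ("seq-read", "seq-write", "rand-read", "rand-write")

for transport, ws_gb, pattern in product(transports, working_sets_gb, patterns):
    print(f"{transport:>16} | working set {ws_gb:>4} GB | {pattern}")
```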

So far, we have received nearly all of the hardware. We are still waiting on a cable for the rear fans and a few 3.5" to 2.5" drive bay converters for the ZIL and L2ARC SSDs. As soon as those items arrive, we will place the ZFSBuild 2012 server in our server room and begin benchmarking. We are excited to see how it performs relative to the ZFSBuild 2010 server design.


Here are a couple of pictures we have taken so far of the ZFSBuild 2012 project: