Friday, November 14, 2008

Solaris 10, ZFS, and Dell JBODs

For everyone else out there who is trying to do more with less, I thought I would post some of my projects that I feel really had a lot of "bang for the buck".

The first in the series concerns Solaris 10 and my new favorite filesystem, ZFS. In the spring of '07, I was charged with migrating a team of SAS users off of a Sun V240 and onto a larger V445. Not only did they need a good amount of disk space, but performance was a critical factor. Their data would also likely grow about 8-12% per year.

The problem was, there just wasn't money in the budget for an expensive SAN. So, I started testing out ZFS in the lab with spare equipment and was amazed at the performance.

After enough testing, I decided to go with the Sun Fire V445 and two Dell PowerVault 220S JBOD arrays. I loaded each 220S with 14 U320 146GB SCSI disks, and direct-attached each array to a separate SCSI controller on the server.

Now, at the time I was still very new to ZFS and did not choose an optimal configuration. I figured that more spindles in a RAID array meant better performance, so I assigned 21 of the 28 disks to the zpool in a single wide raidz vdev:

zpool create sbimktg raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t8d0 c2t9d0 c2t10d0 c2t11d0 c2t12d0 c2t13d0 c2t14d0 c2t15d0 c3t0d0 c3t1d0 c3t2d0 c3t3d0 c3t4d0 c3t5d0 c3t8d0
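After creating a pool like this, it's worth a quick sanity check before handing it over to users. These are the standard ZFS administration commands on Solaris 10 (the pool name matches the one above; actual output will depend on your hardware):

```shell
# Show the vdev layout and health of the new pool
zpool status sbimktg

# Show usable capacity after raidz parity overhead
zfs list sbimktg

# Watch per-vdev I/O while running a benchmark like iozone
zpool iostat -v sbimktg 5
```

`zpool status` in particular would have flagged the layout as a single 21-disk raidz vdev, which is the detail I overlooked at the time.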

And, voila! A shiny new ZFS raidz for the marketing folks. I then ran iozone to get an idea of performance, and things looked great:

[iozone throughput graphs]
I know, I know: with the components I used, I should be able to reconfigure and obtain much better performance than what's shown in the graphs. But compared to what we were getting on the old server, this was a phenomenal performance boost.

I am planning on a reconfig in the near future, which will ultimately put the data on a single zpool consisting of four 7-disk raidz vdevs. This should substantially boost performance for the marketing team.
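For the curious, the planned layout would look something like this. This is only a sketch: the first three vdevs reuse the device names from the pool above, the fourth assumes the remaining c3 targets follow the same numbering, and the existing pool would have to be backed up and destroyed first, since vdevs can't be restriped in place:

```shell
# Rebuild the pool as four 7-disk raidz vdevs striped together
zpool create sbimktg \
    raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t8d0 \
    raidz c2t9d0 c2t10d0 c2t11d0 c2t12d0 c2t13d0 c2t14d0 c2t15d0 \
    raidz c3t0d0 c3t1d0 c3t2d0 c3t3d0 c3t4d0 c3t5d0 c3t8d0 \
    raidz c3t9d0 c3t10d0 c3t11d0 c3t12d0 c3t13d0 c3t14d0 c3t15d0
```

The reasoning: a raidz vdev delivers roughly the random-read IOPS of a single disk, so one 21-disk vdev performs like one spindle for small random I/O. Four vdevs striped in the same pool should give roughly four times the random IOPS, while using all 28 disks instead of 21.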
