As a follow-on to my previous post, I just completed another project which again shows that ZFS with Dell JBODs just makes sense.
Another team of SAS users has a Sunfire V240 running Solaris 9 with about the worst disk configuration I have ever seen. The performance was absolutely awful. Here's the scenario:
- One 1Gb Fibre Channel connection to an EMC Symmetrix 8530.
- The 8530 served up 17 concatenated disk pairs as, get this, 116 9GB disks. WTF??
- These 116 9GB logical disks were then combined into 13 RAID5 groups. Try making sense of iostat -xn output with that crap (see the sketch after this list)!
- One 2Gb Fibre Channel connection to a 4-disk (300GB, 10K RPM) RAID5 group on an EMC CX300.
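For the curious: iostat -xn prints one row per device, so with 116 logical disks you are scrolling through well over a hundred rows of this every interval. The numbers and device names below are made up, just to show the shape of the problem:

                    extended device statistics
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
    2.1    0.4   17.3    3.2  0.0  0.1    0.0   12.4   0   3 c2t0d0
    1.8    0.6   14.9    4.8  0.0  0.1    0.0   11.7   0   2 c2t1d0
    (...and 114 more rows like these, every interval)

Good luck spotting the hot spindle in that, especially when every "disk" is really a slice of a concatenated pair behind the Symmetrix.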
To give you an idea of the performance, here are the iozone read/write results under that config:
350MB/sec writes?! Blech! And reads are only a paltry 900MB/sec. With that many spindles on FC, this thing should scream. Oh, and this system is used for data mining, of all things.
I was constrained by the parameters of the project as well. I couldn't buy a new server, and we were phasing out both the Symmetrix and the CX300. I needed a lot of disk and it had to perform.
I really liked the performance gain we got with the Sunfire v445 and Dell JBODs in the previously posted project, so I decided to go with a similar config.
So, I ran a full backup of the server and then shut everything down. I disconnected the FC cables and pulled the HBAs, dropped in two shiny new SCSI controllers, connected the PV220S arrays, and fired it up. It was only then that I realized I had two arrays of different speeds: one was U320 and the other U160. It was do-or-die time, so I proceeded with just one array. If performance was poor, I could get the second array upgraded to U320 in just a few days.
I loaded Solaris 10 on the system, using the cool new ability to boot from ZFS. This time, I created a zpool with two 7-disk raidz vdevs:
zpool create sbidata raidz c3t0d0 c3t1d0 c3t2d0 c3t3d0 c3t4d0 c3t5d0 c3t8d0 raidz c3t9d0 c3t10d0 c3t11d0 c3t12d0 c3t13d0 c3t14d0 c3t15d0
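Before loading anything onto the pool, a quick sanity check is worthwhile. This is a minimal sketch; the sbidata/sasdata filesystem name is just my assumption for illustration:

# Confirm both 7-disk raidz vdevs are online
zpool status sbidata
# Create a dedicated filesystem for the data instead of using the pool root
zfs create sbidata/sasdata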
I then restored the passwd, shadow, and group files, the home directories, the SAS application, and the data from the old system, then ran some tests. The iozone results with just the single 14-disk JBOD were staggering:
Unbelievable. Write performance had more than tripled, and read performance had doubled! Keep in mind, I went from two EMC Fibre Channel arrays to a single 14-disk SCSI JBOD. The previous configuration was just that bad.
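I don't have the exact iozone command line in front of me, so treat this as a representative invocation of the kind of sequential test behind these numbers; the file path and sizes are placeholders:

# Sequential write/rewrite (-i 0) and read/reread (-i 1), 128KB records, 8GB file
iozone -i 0 -i 1 -r 128k -s 8g -f /sbidata/iozone.tmp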
Anyway, batch jobs that took 11 hours now take only 5. User-driven job times have been cut by as much as 80% in some cases, and by at least 66% in most. I just got the parts to upgrade the second JBOD to U320 and will make the change tomorrow morning. I will post the new iozone results when I'm done.
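Assuming the upgraded shelf shows up as a second controller (c4 is a guess; the actual targets will depend on the controller and enclosure IDs), growing the pool should be a single zpool add, mirroring the original layout:

# Add the second 14-disk JBOD as two more 7-disk raidz vdevs
zpool add sbidata raidz c4t0d0 c4t1d0 c4t2d0 c4t3d0 c4t4d0 c4t5d0 c4t8d0 raidz c4t9d0 c4t10d0 c4t11d0 c4t12d0 c4t13d0 c4t14d0 c4t15d0

ZFS will stripe new writes across all four vdevs automatically, though existing data stays where it was written.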
I can't wait to see what the performance looks like tomorrow afternoon!
Friday, November 14, 2008