Performance tests on 72 TB Infortrend ESDS RAID Storage

Author: L.S.Lowe. File: raidperf24. Original version: 20120723. This update: 20130104. Part of Guide to the Local System.

The RAID for this series of tests is an Infortrend EonStorDS ESDS S24S G2240, equipped with 24 Hitachi disks (3TB, SATA, 7.2k, HUA723030ALA640), and 2 GB of RAM buffer memory, to be set up with 2 RAIDsets of 12 disks each configured as RAID6, and so with the data equivalent of 10 disks each (30 TB of real data). The RAID stripe size will be kept at the factory default: 128 kiB. I have three of these currently, all to be deployed at the same time. For my earlier Infortrend RAIDs, see my local Guide (linked above).

The Infortrend brochure says that this ESDS S24S-G2240 is equipped for host connectivity with two 6Gb/s SAS 4x wide ports (without any host-out ports), plus one 6Gb/s SAS 4x wide port for an expansion enclosure. I interpret this as a peak transfer rate of 24 Gbit/sec, that is 3 GByte/sec, per host port. (There are alternative four-host-port and dual-controller versions.)

The HBA card used to connect the RAID to the Dell R710 host was an LSI SAS 9201-16e full-height half-length card (part number LSI00276), which uses a PCI-Express x8 (generation 2) bus. PCI-Express x8 G2 has a theoretical performance of 5 GT/sec, or 500 MBytes/sec, per lane, so a theoretical 4 GBytes/sec in total.

When later re-deploying one of the RAIDs (f25), the HBA card used to connect the RAID to a Dell R710 host was an LSI SAS3801E low-profile half-length card (part number LSI00138). This operates a 3Gb/s 4x wide port, which I understand to be a peak transfer rate of 12 Gbit/sec, that is 1.5 GByte/sec, per port. It also uses a PCI-Express x8 G2 bus. The RAID performance with this lower-spec card was found to be equally good (see the results below), so the card was not a limiting factor in read or write performance.

One of the Infortrend units had been delivered with firmware level 3.86C.09, and two with 3.86C.11.

Firmware functional performance - version 3.86C.09

Initialization time tests

Two logical drives were created: RAID6, 12 disks each. This was done serially; creation took just a few seconds, and then initialization took under 10 minutes to reach 1% completion, and took 8h04m and 8h02m respectively to reach 100% completion, in the default online mode. No other activity was done during this time: host SAS cables were not connected during initialization. Confirmed later at 7h58m.

Clone time tests

While initialization is generally a one-off operation, cloning a drive or rebuilding the array can be needed a number of times during the life-time of a RAID.

A clone operation of an individual disk drive can be done if it's failing, and if you have a spare drive in the unit to copy it to. Doing 5% took around 16 minutes, and 45% took 2h35m, so that's about 5h44m to do the complete copy. This works out as around 145 MBytes/second, close to the official 157 MByte/sec sustained transfer speed of an individual drive of this particular Hitachi model.
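For the record, the arithmetic behind those numbers is simple extrapolation (a back-of-envelope sketch, assuming the clone proceeds at a roughly constant rate):

# echo '155 * 100 / 45' | bc
344
# echo '3 * 10^12 / (344 * 60) / 10^6' | bc
145

that is, 45% in 155 minutes extrapolates to about 344 minutes (5h44m) for the whole 3 TB drive, which is roughly 145 MBytes/second.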

Rebuild time tests

For a rebuild after an individual drive failure, the rebuild-time is a critical parameter of the reliability figure calculation for a RAID, as a long rebuild time makes the RAID open to further disk failures during the rebuild window, and gives the potential for complete data loss particularly for RAID-5.

To test rebuild time, I needed to simulate a disk failure, so I removed a drive from an existing 12-disk RAID-6 logical drive which was part of an online logical volume, while it was not busy. After a couple of minutes I replaced it, but it was marked EXILED, so I needed to clear that status for this perfectly good drive. It should be possible to do this by clearing the disk's reserved area. Oddly, I couldn't do that on the unit itself, because the unit was sufficiently busy with the EXILED drive that the password for enabling that operation was cleared several times before I was able to enter it in full! Maybe if I'd used the web GUI, I'd have had more success. Anyway, I cleared the reserved area successfully using another RAID with spare slots. Having re-inserted the cleared drive into its original RAID slot, it was then accepted as a fresh drive, and rebuilding started. I noted that the disk array configuration parameter Rebuild Priority was set to Normal, as on my previous Infortrend arrays. This presumably only affects the relative priority of rebuild versus host I/O activity, so where there is no host activity its setting is unlikely to have any effect.

Rebuilding reached 1% after 10 minutes, 10% after 54 minutes. The full rebuild time was 8h36m for this 12-disk 3TB RAID6 array.

Firmware functional performance - version 3.88A.03

This section is about one of those blind alleys that you can go down, and then regret it ...!

I had seen that the firmware levels on the other two RAIDs were out of step, and that there was also more recent firmware available from Infortrend, so I decided, with Infortrend's help, to update to the latest recommended level at the time: 3.88A.03. Before doing this I deleted the logical drives already created, so that new logical drives could be created with any benefits of the new firmware.

Initialization time tests with this firmware

Unfortunately this update caused me some annoying issues. The time to initialize the two logical drives (created one at a time) went up by more than a factor of 4: to be precise, 34h30m and 34h23m respectively. This seems a crazy amount of time. If it just affects initialisation, then of course I can live with it, but not if it indicates underlying performance issues.

I've checked on another similar RAID that this does not depend on whether the initialisation is done in Online or Offline mode: where there is no host activity, I wouldn't expect a difference anyway. I also did a Restore Defaults using the button at the back, which is like a factory reset: this cleared the password and the unit name that I had set, but didn't help in restoring a sane initialisation time. (Later on I also cleared all logical drives and removed the reserved areas of all drives, see the firmware section below, and started again, but this didn't help either.) I also checked that turning on Verification on LD Initialization Writes made it take even longer still (108m to do 2%, which extrapolates to 90 hours), so it's unlikely that the problem is due to verifies being done unsolicited. As you'd expect, all these tests were done with a Controller Reset (that is, a RAID reboot) after any config change, to give a clean start.

Clone time tests with this firmware

I measured the clone time by having a logical drive for slots 1-12 and a global spare at slot 24, and requesting a clone of slot 12 to slot 24. This simulates a case where slot 12 is known to have a failing drive. Only the two disks are involved in this operation. It took 28 minutes for the clone operation to reach 1% of completion, 56 minutes for 2%, 80 minutes for 3%. So this would take 44 hours to complete! With the earlier 3.86C.09 firmware (above), it would take under 6 hours.

Rebuild time tests with this firmware

I've already mentioned the importance of a low rebuild time in rating a RAID for overall data security and reliability.

For a RAIDset of 12 drives in a RAID6: after 70 minutes, rebuilding had reached just 1%! The full rebuild-time was 60 hours 1 minute. This is to be compared with around 8.5 hours with the earlier firmware!

Manufacturer comments on this firmware

Infortrend were helpful in providing links to the firmware versions, but took a couple of weeks to agree that there was an issue with the later firmware versions, not only with the initialisation time but also with the rebuild time, and eventually put it down to the addition of an extra layer of data services in those firmware versions. They said that they were planning a new firmware version which would give the customer the option of turning such services off or on. Unfortunately too late for me: I had RAIDs to deploy!

Downgrading firmware

Needless to say, after my unfortunate experience with the later firmware, and while the manufacturer was mulling over the causes, I immediately wanted to try the earlier firmware versions that had come with these three RAIDs: in particular 3.86C.09. But it wasn't as simple as that, and it took a while to discover what extra steps were needed to recover the earlier performance.

The time taken wasn't helped by the fact that installing an old version of firmware couldn't be done quickly via the web browser GUI, as this silently refused to accept versions earlier than the one currently installed, so I ended up doing it via Hyperterm at the recommended speed of 9600 bits/sec, which takes two hours. Multiply that by the number of retries I needed to get the steps right!

My solution to getting a good working version of the Infortrend firmware was as follows:
1. Remove all data on the drives by unmapping the Host LUN, deleting the logical volumes, and deleting the logical drives.
2. Remove the reserved area on every physical drive.
3. Power off, and remove the power leads for a minute.
4. Pull every physical drive half-way out.
5. Replace the power leads, and power on with the Restore Defaults button pressed.
6. Wait for initialisation, then set a password.
7. Shut down the controller (not a reset) to prepare for the update.
8. Update the firmware via the serial port at 9600 baud using Hyperterm.
9. Allow the reboot, again with the Restore Defaults button pressed.
10. Push every physical drive back in.
11. Create logical drives and volumes as required.

This may have been excessively elaborate, but it was certainly necessary to remove the reserved area on every drive (it is later automatically re-instated when the disks are made part of a RAIDset) in order to achieve the earlier firmware's good performance. Which of the other steps are really necessary I'll leave to others to discover!

Because of these shenanigans required for proper downgrading, I've only tested the 3.86C.09 and 3.88A.03 firmware versions for sure, and not the 3.86C.11 firmware which two of the RAIDs had been delivered with.

Creating LUNs

In Infortrend naming convention, Logical Drives are RAIDsets made up of a set of actual disk drives, Logical Volumes are sets of one or more Logical Drives, and Partitions are subsets of Logical Drives or Logical Volumes which can be assigned a channel LUN. In earlier versions of the Infortrend firmware, it was possible to partition either a Logical Drive or a Logical Volume and assign either sort of partition to a channel LUN. In the version now in use, there is a strict hierarchy: a Logical Volume must be formed of one or more Logical Drives, only a Logical Volume can be partitioned, and only a Logical Volume Partition can be assigned to a channel LUN. At least this makes the documentation straightforward, I guess. Creating a Logical Volume takes around a minute, but creating a Partition takes nearly 4 minutes. I found later that deleting a Logical Volume takes 6 minutes. The disks are fairly busy during these times, and no interaction with the user interface was possible during that time.

So I assigned Channel 0 ID 0 LUN 0 to LV0 Partition 0, and Channel 1 ID 0 LUN 0 to LV1 Partition 0.

These Partitions have nothing to do with the DOS or GPT partition that might (or might not) be created on the "disk" corresponding to the LUN, as seen by the host operating system.

When I subsequently attached the device to a server with an operating system, I found on that occasion, with more than one LUN defined, that the order of the /dev/sdX device files, dynamically assigned at boot time, wasn't top to bottom in the RAID. There are probably various factors which could affect this (cabling, the ordering of sockets on the LSI SAS/SATA HBA card, or the presentation order by the RAID unit), but of course this is exactly why Linux uses filesystem LABELs and/or UUIDs, to eliminate ambiguities once the filesystems have been set up.
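One way to pin down which /dev/sdX is which, before relying on labels, is something like the following (a sketch: lsscsi may need installing separately, and the exact output depends on the distribution and HBA driver):

# lsscsi
# ls -l /dev/disk/by-path/
# blkid

lsscsi shows the [host:channel:target:lun] tuple against each /dev/sdX device, the /dev/disk/by-path/ names are tied to the HBA port and LUN, and blkid reports the LABEL and UUID of each filesystem once the filesystems exist.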

Host I/O performance

As already noted, the sustained I/O transfer speed of the individual drives in this RAID is around 150 MBytes/second. So the maximum sustained real-data transfer speed of a RAID6 set of 10+2 drives is going to be about 1500 MBytes/second, and less than that when other factors come into play (as they do!).

Quick hdparm read performance test

The hdparm command provides an option -t to do quick simple read tests on a disk drive or system. The hdparm -t command was run on the raw /dev/sdb device from a SL5/RHEL5 system, with the --direct option (O_DIRECT), and without (giving normal kernel page-buffered I/O). The amount of read-ahead was modified per run, using the blockdev --setra command. This gave the following results:

[Graph: hdparm read test results]
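For reference, each run was of this form (a sketch: the device name and the read-ahead value of 4096 sectors are just examples):

# blockdev --setra 4096 /dev/sdb
# hdparm -t /dev/sdb
# hdparm -t --direct /dev/sdb

blockdev --setra sets the read-ahead in 512-byte sectors; hdparm -t does a timed buffered read through the kernel page cache, and adding --direct repeats it with O_DIRECT, bypassing the page cache.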

Quick dd write performance to raw LUN device

A quick write test was run using the dd(1) *nix command, copying from /dev/zero to the raw host device file corresponding to a logical volume partition. Obviously not to be run after putting a filesystem on that device! This was done for a variety of dd bs values ("blocksize"). Most of the bs values were chosen as multiples of the RAID per-disk stripe size, but just for comparison there is one which is not such a multiple, at the left-hand edge of each graph. The two graphs are for the earliest and the later firmware.
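The test was of this form (a sketch: the device path, bs and count are illustrative; the real runs stepped bs through several multiples of the 128 kiB per-disk stripe):

# dd if=/dev/zero of=/dev/sdb bs=1280k count=16384

which writes 20 GiBytes to the raw device, with dd reporting the elapsed time and transfer rate when it completes.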

[Graphs: dd write results, 3.86C.09 firmware and 3.88A.03 firmware]

Creating a filesystem on a logical volume partition

One of the logical volumes of 30TB was formatted as XFS. As I mentioned, for me the order of the dynamically assigned /dev/sdX device files wasn't top to bottom in the RAID; that's not a problem once the filesystems have a filesystem LABEL or a UUID, but worth checking (as always), say by exercising the RAID and checking visually, before setting up that label or uuid. It's perfectly possible for an external RAID to turn up as /dev/sda and an internal disk (say a RAID-1) to get assigned as /dev/sdb!

I formatted and mounted the bare device, rather than a GPT partition. (There was no real need for a GPT partition table, but if I did partition, it would be sensible to ensure that the partition(s) begin on a RAID full-stripe boundary). I later found this useful XFS guide, which confirms the mkfs.xfs parameters I used below, where su is the RAID per-disk stripe size of 128 kiB and sw is the number of data disks (10) in each RAID6 set:

# mkfs.xfs -f -d su=128k,sw=10 -L 24a /dev/xxx
meta-data=/dev/xxx               isize=256    agcount=32, agsize=228880256 blks
         =                       sectsz=512   attr=0
data     =                       bsize=4096   blocks=7324168192, imaxpct=25
         =                       sunit=32     swidth=320 blks, unwritten=1
naming   =version 2              bsize=4096  
log      =internal log           bsize=4096   blocks=32768, version=1
         =                       sectsz=512   sunit=0 blks, lazy-count=0
realtime =none                   extsz=4096   blocks=0, rtextents=0

real	0m0.816s
user	0m0.002s
sys	0m0.016s

# mount LABEL=24a /disk/f24a
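For mounting at boot, the corresponding /etc/fstab entry would be something like this (a sketch: the mount options shown are illustrative, not a record of what was actually used):

LABEL=24a  /disk/f24a  xfs  defaults,noatime  0 0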

Bonnie++ performance

Bonnie++ 1.03e tests were run on the same Scientific Linux 5.8 system as above. I'm using this system for this RAID in particular as the unit will be acting as a storage pool node for GridPP, and this currently requires SL5/RHEL5.

For the first setup, the system was booted with a kernel parameter of mem=4G to limit the amount of RAM available, as we wanted to measure the performance of the RAID, not of the Linux page buffer in RAM; a benchmark file space of 24 GiBytes was used, much larger than that RAM value, and the bonnie chunk size was set to the RAID's per-disk stripe size of 128 kiB. A second setup, using the full RAM of 24 GiBytes, a larger file space of 48 GiBytes and a bonnie chunk size of 8 kiBytes, was also done, and yielded very similar throughput results, as in the graphs below (and much better random seeks: see the detailed figures further below). The benchmark was repeated for a number of different read-ahead values, set using the blockdev --setra command.
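The runs were along these lines (a sketch: the -s size:chunk and -n file-count arguments correspond to the Size:Chunk and Num Files columns in the tables below, and the rest is illustrative):

# blockdev --setra 1024 /dev/sdb
# bonnie++ -d /disk/f24a -s 24g:128k -n 256:1000:1000:256 -u root

with the read-ahead value and the -s argument varied per run as described above.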

More importantly, the tests were done on two different firmwares: the earliest supplied, 3.86C.09, and the latest supplied, 3.88A.03. Just as with the firmware functional tests, the later firmware has significantly lower performance for Writes and for ReWrites (which in bonnie++ are read, modify and rewrite in place), as seen in the graphs below.

[Graphs: bonnie++ throughput results, 3.86C.09 firmware and 3.88A.03 firmware]

Bonnie++ results in detail

Note that bonnie++ reports data rates in kiBytes/second, while the graphs above use MBytes/second (ie millions of bytes), the latter being commonly used when talking about data rates.

bonnie++ results: the columns in each result line are, in order:
Label (fstype - read-ahead sectors); Size:Chunk size;
Sequential Output: Per-Char K/sec, %CPU; Block K/sec, %CPU; Rewrite K/sec, %CPU;
Sequential Input: Per-Char K/sec, %CPU; Block K/sec, %CPU;
Random Seeks: /sec, %CPU;
Num Files;
Sequential Create: Create /sec, %CPU; Read /sec, %CPU; Delete /sec, %CPU;
Random Create: Create /sec, %CPU; Read /sec, %CPU; Delete /sec, %CPU.
3.86C.09 firmware
Server RAM 4GB, system SL5.8
f24a-xfs-25624G:128k93699997530825622892920851909850070129220.71256:1000:1000/2564468831832299226912442847714981394189022
f24a-xfs-102424G:128k93662997196085426117617864669972125628222.81256:1000:1000/2564409811719909428282143337913883991181719
f24a-xfs-204824G:128k93716997389405534482123868669995865736249.22256:1000:1000/2564401791640728927902142917814111694180921
f24a-xfs-307224G:128k91118997248235334657823859289987997931244.52256:1000:1000/2564459811773989126741943728016049993182319
f24a-xfs-409624G:128k92499997249885335824224870219990470831236.02256:1000:1000/2564429801841389727101843367916092597182120
f24a-xfs-614424G:128k94019997066295336375425870549990937433223.91256:1000:1000/2564446801716049027322141937815217394181119
f24a-xfs-819224G:128k93925997006995236998526869769994657134220.81256:1000:1000/2564420811778309327801943478015848194178425
f24a-xfs-1228824G:128k93649997265385437230428858799995359836240.02256:1000:1000/2564428811633939228182043697815479598180719
f24a-xfs-1638424G:128k936299969790553391535308433399105573540242.52256:1000:1000/2564406791830598928112043297915846396186522
f24a-xfs-2457624G:128k935719972023354413144328662599109961743251.32256:1000:1000/2564396801690999226641943437914700894180519
f24a-xfs-3276824G:128k938719970506152394580318650399102528641227.72256:1000:1000/2564402811679959228181943487914366793175625
f24a-xfs-4915224G:128k90897997098045237462430854569997722139240.02256:1000:1000/2564437801741539428652242917914363392176926
f24a-xfs-6553624G:128k93991997246005532251226854609998177339240.72256:1000:1000/2564423801734649327161943208013433097178124
 
Server RAM 24GB, system SL5.8
f24a-xfs-25648G:8k94254997740486020296721828349548532829457.01256:1000:1000/2564466851663469328022544728215088493191323
f24a-xfs-102448G:8k94108997943846123199418848069770353030451.00256:1000:1000/2564543841393858826082043718114145891190823
f24a-xfs-204848G:8k93973997712456027574322858629991536436413.80256:1000:1000/2564445831717729927472044438113883492174425
f24a-xfs-307248G:8k93692997764386029797624866819984626832411.30256:1000:1000/2564462831689319127612044738214873994180820
f24a-xfs-409648G:8k93877997768386131028525834409986119932436.11256:1000:1000/2564487831659899127051943158114660796186422
f24a-xfs-614448G:8k93010997787696130971325866779989185333422.80256:1000:1000/2564460821625059027221844698213017491174826
f24a-xfs-819248G:8k93906997766826031215925869229992728335429.50256:1000:1000/2564515831469628926692444068212932789176025
f24a-xfs-1228848G:8k93977997807226132760626864139997331437443.01256:1000:1000/2564522831572739528862544818213101589176325
f24a-xfs-1638448G:8k94223997810696132081826861509997347636409.51256:1000:1000/2564506821566459328012843648115085294187220
f24a-xfs-2457648G:8k94686997821886032531027863739999391239422.71256:1000:1000/2564449821622948927332143998114283799177520
f24a-xfs-3276848G:8k941889978089861329979278602999100033939422.20256:1000:1000/2564477831457408729022942668114984894191023
f24a-xfs-4915248G:8k895869978724561330876278593999105378441426.10256:1000:1000/2564518831405548626792744028013885393191023
f24a-xfs-6553648G:8k942759977698261319123288565599104165041411.31256:1000:1000/2564468831632259226612045178214876096190322
 
3.88A.03 firmware
Server RAM 4GB, system SL5.8
f24a-xfs-25624G:128k95117994073863215486913850659849775129222.51256:1000:1000/2564288861625099128021942888015356099180219
f24a-xfs-102424G:128k93609994007643216183111864829971664129219.81256:1000:1000/2564308831598709027142241357914774292191521
f24a-xfs-204824G:128k93560993964533217579613864919995285534224.51256:1000:1000/2564318831563319226482041368014604493176023
f24a-xfs-307224G:128k93679994036343219153614872959990150231219.71256:1000:1000/2564303831754069229072242778113922892179420
f24a-xfs-409624G:128k93584994000543219999914869029991323832221.52256:1000:1000/2564299831585879226952242248014765791182421
f24a-xfs-614424G:128k90376994093383321840916868389993735134228.02256:1000:1000/2564236821566899728312742548114962896179019
f24a-xfs-819224G:128k95350994074723222714016846889999004833243.52256:1000:1000/25643108318196110027791842058013686696174226
f24a-xfs-1228824G:128k93100993961013223259617861879997607534232.82256:1000:1000/2564319831656809627282341978015408298179419
f24a-xfs-1638424G:128k952979940386632241401188449099100038938245.72256:1000:1000/2564310821679929126531942248113916793187821
f24a-xfs-2457624G:128k944889939964032244820238622499101184937222.41256:1000:1000/2564314821686229228582042208015809591181020
f24a-xfs-3276824G:128k932069940638632251109278740799104230042222.32256:1000:1000/2564335821660689128552242228115907896189721
f24a-xfs-4915224G:128k935309941869133242485288482099111755545253.42256:1000:1000/2564326831794609126631842408114786792192722
f24a-xfs-6553624G:128k935989939899232242670328615099102242641220.11256:1000:1000/2564304821571049227032042548214613693179322
 
Server RAM 24GB, system SL5.8
f24a-xfs-25648G:8k94046994397133712707613820169547447530419.40256:1000:1000/2564513871576599226302044198214922389178220
f24a-xfs-102448G:8k82296994369313613912310836129970604030411.50256:1000:1000/2564348841757069026321843538415318489174524
f24a-xfs-204848G:8k94148994431653714796011859789893187436426.71256:1000:1000/2564380841682629127591943748414651392185423
f24a-xfs-307248G:8k93124994441583716271513838799986354833436.70256:1000:1000/2564369851716979529592842008215326792179621
f24a-xfs-409648G:8k94316994403903716333412863769982296530412.00256:1000:1000/2564482821755229927792044058115216791180720
f24a-xfs-614448G:8k92935994425893817916614865409984993731409.81256:1000:1000/2564397851565749526191842968214189391188923
f24a-xfs-819248G:8k93867994411083818511314868329993623535400.80256:1000:1000/2564352851474659026261942958315299689180124
f24a-xfs-1638448G:8k94131994453603820774416864139996539836423.90256:1000:1000/2564424851604309027512542388214652391180221
f24a-xfs-2457648G:8k942659944310537220534218575099103009840425.80256:1000:1000/2564428851554918926242043588315413289174625
f24a-xfs-3276848G:8k918369944303938221638268492499100370239419.60256:1000:1000/2564422841616499126242043718314941091173025
f24a-xfs-4915248G:8k941839944479538209690288569699107407842421.41256:1000:1000/2564430851638869027772042638215192794191223
f24a-xfs-6553648G:8k82137994386853721951131830819996199638400.20256:1000:1000/2564335851694249827561843548314055793175825
 
Reminder of an example good result from above
f24a-xfs-1638448G:8k94223997810696132081826861509997347636409.51256:1000:1000/2564506821566459328012843648115085294187220
 
Different raid: f26
3.86C.09 firmware
Server rex15 RAM 12GB, system SL6.2
One RAID6 set on one channel, single bonnie++
26a-xfs-1638448G:8k873129974132972328550388221199105509550289.10256:1000:1000/256254729924978999353699924686972187751002822295
Two RAID6 sets on one channel: two synched bonnie++ (sum them for total)
26a-xfs-1638448G:8k91007993261973023524029824279964380333258.20256:1000:1000/25618311762449229923129951827274188190991794293
26b-xfs-1638448G:8k86740993263383119982125814479969170733249.80256:1000:1000/25618349742367789923500971806976186312991575987
Two RAID6 sets on 2 separate channels, two synced bonnie++ (sum them for total)
26a-xfs-1638448G:8k89490993237473118594522818039972048234251.20256:1000:1000/25618305781964239919967931783478186811991382188
26b-xfs-1638448G:8k90047993239053122382228825249967200433261.00256:1000:1000/25618750792299429923380961840377208329991389888
One RAID6+0 single bonnie++
26a-xfs-1638448G:8k879859967018666364015428194299131301362314.20256:1000:1000/25625470962846729935294992418996219745992814296
One RAID6+0 two synched bonnie++
26a-xfs-1638448G:8k86618993397303219280724809239959482829236.80256:1000:1000/2561096282196467991211280105547818356899956876
26a-xfs-1638448G:8k91131993424063319337724796769959529629238.00256:1000:1000/2561091982192437991216776105498117701799952378
One RAID6+0 single bonnie++ with various read-aheads
26a-xfs-25648G:8k91066996518336225003331803659852667431328.90256:1000:1000/25625009972807099934734992476397230272992730996
26a-xfs-102448G:8k86843997389226829038434816659970724734346.30256:1000:1000/256254599726082399329269724512972096411002744796
26a-xfs-204848G:8k880909966784863355722408205999102012547301.60256:1000:1000/256252319730643210034218992406898208868992674296
26a-xfs-409648G:8k863209970298664363922428177599115203553298.60256:1000:1000/25625272962525329933905992401398229937992669494
26a-xfs-819248G:8k776349964942462372040427647499125410758329.60256:1000:1000/256251199730881610035807992442397203230992621396
26a-xfs-1638448G:8k855649965330860363821437619799131642562340.10256:1000:1000/25625196972713759935953992368898189150992521395
26a-xfs-3276848G:8k882849971984966370491418355799133353463283.00256:1000:1000/256252039724960610032702962465296204294992696495
26a-xfs-6553648G:8k895309967768161373245438195599133970564328.50256:1000:1000/256252129725800410034146992453697233819992838097
 
final set on a single logical RAID6 drive
26a-xfs-25648G:8k88749997301666625161731797709955257330285.70256:1000:1000/25626020983392539935210992498498242407992792596
26a-xfs-102448G:8k87167997736166628965135795829874004534302.70256:1000:1000/25625547982770779932649962400897206898992733596
26a-xfs-204848G:8k91034997447167333779939802449998811146284.60256:1000:1000/2562551997281377100317009624695972331471002706695
26a-xfs-307248G:8k86817997426196832749940824039988264841300.20256:1000:1000/256254269726226510031940972458497223224992806796
26a-xfs-409648G:8k90731997325296733983238815379992137043297.20256:1000:1000/25625642982799509932657972374298230243992650094
26a-xfs-614448G:8k91508997043456733892941828369997752945280.10256:1000:1000/256257029933958899311319624482972220501002642495
26a-xfs-819248G:8k903749974544769346980387597599102920947294.30256:1000:1000/256253119727142110030969962439196186307992452995
26a-xfs-1228848G:8k899049970191869340218387961999104512149297.80256:1000:1000/256254619826670699330869724120972084451002898197
26a-xfs-1638448G:8k889609972716571347926418202699105220550292.50256:1000:1000/25625471982671569935033992409797207177992718496
26a-xfs-2457648G:8k896349969518863344629408280299112464753282.60256:1000:1000/256255999727938510031047952414497227649992673995
26a-xfs-3276848G:8k833219972213067317217398298699115613355299.20256:1000:1000/25625504972538469931864952451397233390992706794
26a-xfs-4915248G:8k913139973523867328848378163699108686651298.50256:1000:1000/256255129727089110032813962479998234087992646296
26a-xfs-6553648G:8k913799970146169328827388225999111050253284.40256:1000:1000/25625635982656149931014972456997232559992761296

Worth noting how much more sensible the XFS file create and delete times are under RHEL6 / SL6 (result lines beginning 26a or 26b), compared with the poor XFS times on the earlier RHEL5 / SL5 systems (also see my comparison of XFS and ext4 filesystems). Create time is around 6 times shorter, and delete time roughly 15 times shorter.

L.S.Lowe