Creating RAID logical drives at the RAID firmware level

A freshly delivered Infortrend RAID needs some preparation steps before it is ready to be formatted. (For the formatting, see LocalGridRaidFormat.)

Assigning channel IDs (for SCSI-based RAIDs)

For Infortrend SCSI RAIDs, the SCSI channel IDs have to be assigned before the LUNs are mapped. (Assigning channel IDs later would unmap any existing LUN mappings.)

For a RAID which is accessed as one or more SCSI devices, that is, cabled from a SCSI HBA card in the host server, it's necessary to pre-assign a SCSI ID to each of its channels. This is a number from 0-15, excluding 7 (which is normally the host adapter's own ID), and it has to be unique on each SCSI bus. Our RAIDs have two channel interfaces, and the SCSI HBA cards in the server generally have two channels. So in the case of a single RAID box, one can simply connect the two HBA channels to the two RAID channels with two cables. Where there are more RAIDs, one can daisy-chain the RAIDs on those channels. Provided each RAID box appears on each daisy-chain only once, it's sensible to give both channels of a RAID box the same SCSI ID. (If both channels of a box were on the same daisy-chain that wouldn't be possible, but there is no reason to cable it that way.)
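
As a sanity check before touching the firmware, the uniqueness rule can be expressed in a few lines. This is a minimal sketch, not part of our procedure; the example chain (f14 and f15 sharing one daisy-chain) is purely illustrative:

   # Minimal sketch: check a planned set of SCSI IDs for one daisy-chain.
   VALID_IDS = set(range(16)) - {7}    # ID 7 is normally taken by the host adapter itself

   def check_chain(ids):
       assert all(i in VALID_IDS for i in ids), "ID outside 0-15, or clashing with the HBA's own ID 7"
       assert len(ids) == len(set(ids)), "duplicate SCSI ID on the same chain"

   check_chain([14, 15])    # e.g. RAIDs f14 and f15 sharing one HBA channel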

So for our RAID f14, I used the RAID firmware tool with the Edit Channels function to set the SCSI ID on both channels to 14. I've been able to use the same numbering scheme for all our RAIDs: f2-f15.

Our RAID boxes f16 onwards use SAS connections and therefore don't have SCSI IDs. Good planning!

RAID level and size

With RAID-6 available on all recent RAID units, this is the RAID level of choice for us. Older units (f9 and earlier) had RAID-5 capability but not RAID-6.

With RAID-5, one drive can fail and the data is still accessible. But if another disk fails before the faulty disk has been replaced, or before the RAID data has been rebuilt onto the replacement disk, then the data is lost. As the rebuild process can exercise parts of the disk system that may not be accessed often, this is a particularly vulnerable time, so it's good to minimise the overall time between the initial disk failure and the completion of the rebuild. For RAID-5, we therefore always have a designated spare drive: one known to the RAID firmware to be available to rebuild onto as soon as an active disk fails.

For RAID-6, two drives can be in a failed state and the RAID still operates with data intact. This reduces the vulnerability, and usually one doesn't have a spare drive. Nevertheless, one still wants the rebuild time to be reasonably short, which means not having too many disks in one RAID set. The size of each physical disk also influences the rebuild time, but that's a pre-purchase decision.
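
As a worked comparison (a minimal sketch; the 12 x 1 TB configuration is an illustrative figure, not a statement about any particular unit), the usable capacity comes out the same in both schemes, but RAID-6 without a spare tolerates two failures rather than one:

   # Minimal sketch: usable capacity and failure tolerance for the two layouts
   # discussed above, assuming 12 drives of 1 TB each (illustrative figures only).
   def usable_tb(n_disks, disk_tb, parity_disks, spares=0):
       return (n_disks - parity_disks - spares) * disk_tb

   print("RAID-5 + spare  :", usable_tb(12, 1, parity_disks=1, spares=1), "TB, tolerates 1 failed disk")
   print("RAID-6, no spare:", usable_tb(12, 1, parity_disks=2), "TB, tolerates 2 failed disks")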

Creating Logical Drives

For our 24-disk RAID-6 units, to avoid unreasonably long rebuild times, we use the RAID firmware tool with the Edit Logical drives function to create two Logical Drives (out of physical disks 1-12, and 13-24), rather than one. These then operate independently as two separate RAID sets as far as RAID failure is concerned. There is no designated spare drive. The individual RAID sets can be thought of as 10 data disks and 2 parity disks, though this is simplistic because the data and the parity information are actually striped over the 12 disks. It takes around 8 hours to create each Logical Drive. It takes longer if you try to do both in parallel.
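
For reference, the layout can be summarised as data (a minimal sketch; slot numbering follows the firmware tool's 1-24 convention):

   # Minimal sketch of the layout for a 24-slot unit: two independent 12-disk
   # RAID-6 sets, built from physical slots 1-12 and 13-24 respectively.
   logical_drives = {0: range(1, 13), 1: range(13, 25)}
   for ld, slots in sorted(logical_drives.items()):
       slots = list(slots)
       print("LD%d: slots %d-%d, %d data + 2 parity (striped across all 12)"
             % (ld, slots[0], slots[-1], len(slots) - 2))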

Splitting the logical drives

Logical Drives of 10TB are quite acceptable and could have a full 10TB filesystem put on them, as we do with local non-GRID filesystems, but in line with our previous practice we currently want to split these into two 5TB filesystems. There are two ways of doing that: splitting the Logical Drive within the RAID firmware, so that each part presents as a separate LUN device to the server, or GPT-partitioning the drive so that it presents several Linux partitions on the same LUN. We have always done the first, which is arguably safer, since the effect of corruption of one Linux partition table is then more limited in scope. There are also performance considerations.

So within each RAID set or Logical Drive, we use the RAID firmware tool to split the Logical Drive into two areas, so that each can be accessed via a SCSI LUN. The RAID firmware calls this action Partition Logical Drive, but it is not to be confused with DOS/GPT partitioning. For a 12-disk RAID-6 set using 1 TB drives, the full RAID set has 9536090 MBytes, so each area is 4768045 MBytes. With this firmware it's sufficient simply to re-set the size of the original area to the new smaller size; the second area is automatically created with the remaining size.
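
A quick check of those figures (a minimal sketch; the arithmetic suggests the firmware's MBytes are binary megabytes, which is an inference rather than something stated in the firmware documentation):

   # Worked check of the split figures quoted above, for a 12-disk RAID-6 set of 1 TB drives.
   full_mb = 9536090                        # whole Logical Drive as reported by the firmware
   print(full_mb // 2)                      # 4768045 MBytes: the size to re-set the first area to
   print(round(full_mb * 2**20 / 1e12, 2))  # about 10.0 TB: ten 1 TB data disks, less a little overhead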

Assigning Host LUNs

We then have to assign these areas to channels and LUNs, using the RAID firmware tool with the Edit Host luns function. We use the following mapping (a way of cross-checking it from the server side is sketched below the mapping):

   Channel 0 LUN 0 -->  Logical Drive 0 part 0
   Channel 0 LUN 1 -->  Logical Drive 0 part 1
   Channel 1 LUN 0 -->  Logical Drive 1 part 0
   Channel 1 LUN 1 -->  Logical Drive 1 part 1
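
Once those mappings are in place and the server has rescanned its SCSI buses (or been rebooted), each area should appear as a separate SCSI disk. The following is a minimal sketch for listing what the host actually sees, assuming the lsscsi utility is installed; the parsing is illustrative and the device names depend on discovery order:

   # Minimal sketch: list the SCSI addresses the server sees, for cross-checking
   # against the channel/ID/LUN mapping above.  Assumes the lsscsi utility is
   # installed; its first column has the form [host:channel:id:lun].
   import subprocess

   for line in subprocess.check_output(["lsscsi"]).decode().splitlines():
       fields = line.split()
       host, chan, scsi_id, lun = fields[0].strip("[]").split(":")
       print("host %s  channel %s  SCSI ID %s  LUN %s  ->  %s"
             % (host, chan, scsi_id, lun, fields[-1]))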

-- LawrenceLowe - 04 Mar 2010

