
Creating RAID logical drives at the RAID firmware level

A freshly delivered Infortrend RAID needs some preparation steps before it is ready to be formatted. (For that, see LocalGridRaidFormat).

Administrative steps

Some initial steps are done using the RAID firmware tool, a built-in piece of code that can be accessed via the front panel, via an RS232 interface, or via an IP address.

First the RAID is set to pick up its administration IP address via DHCP, using the front panel menu. (The default otherwise is an internal 10.x.x.x address). On the front panel, go to Configuration parameters → Communication parameters → LAN IP configuration → press enter several times until you see View Statistics, then go down to View & Setup IP address, and change the first character of the IP address to D, which expands to DHCP Client. Press and hold to get that stored, and then use System Functions to reset the controller. Make sure the DHCP server on the subnet will respond to the RAID's MAC address.
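
Giving the controller a fixed DHCP reservation means it always comes up on a known address even though it is configured as a DHCP client. A sketch only (ISC dhcpd assumed; the MAC address and IP below are made-up placeholders):

   # host entry on the subnet's DHCP server (ISC dhcpd assumed;
   # MAC address and fixed IP address are placeholders only)
   host f14-raid {
       hardware ethernet 00:11:22:33:44:55;
       fixed-address 192.0.2.14;
   }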

After that, it's possible to configure via the telnet TUI (all RAIDs). There's also a web interface (OK on recent RAIDs, not recommended before f16). I find the TUI quicker. Only one TUI session can be open at a time.
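
Connecting to the TUI is just a plain telnet session to the RAID's administration address; for example (the hostname here is a placeholder):

   # the firmware presents the same menu tree as the front panel / RS232 console
   telnet f14-raid
   # log in with the controller's password when prompted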

A new RAID needs to be assigned a name, like f14 filer, and a fresh password. The password is slightly different on each RAID box, and this has stood me in good stead: it prevents re-configuring the wrong RAID when the firmware interface isn't on the IP address I expected it to be. Email alerts are also set up, to give notification of conditions like a failed drive.

To configure the password, I go into System Functions → Change password. To give the RAID a meaningful name, I use Configuration parameters → Controller parameters → Controller name. In the same sub-menu, I change the Password validation timeout to 5 minutes.

Email alerts used to be set up using an ftp client (NB, in active mode) to upload a cfg/agent.ini file. This no longer works in the same way from f16 onwards, so the GUI has to be used.
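
For the older units the upload was an ordinary active-mode FTP transfer, roughly as follows (the firmware's FTP login details are not shown here, and the exact session may differ between firmware revisions):

   # the classic command-line ftp client uses active mode by default
   ftp f14-raid
   # then at the ftp> prompt:
   #   put agent.ini cfg/agent.ini
   #   bye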

Assigning channel IDs (for SCSI-attach RAIDs)

For Infortrend SCSI-attach RAIDs, assigning SCSI channel IDs has to be done before the LUNs are mapped. (Assigning channel IDs later would unmap any existing LUN mapping). For our grid storage, raids f9, and f12 to f15, are SCSI-attach RAIDs.

For a RAID which is accessed as one or more SCSI devices, that is, cabled from a SCSI HBA card in the host server, it's necessary to pre-assign a SCSI ID to the channels. This is a number from 0-15 except 7 (conventionally the HBA's own ID), and it has to be unique on each SCSI channel. Our RAIDs have two channel interfaces, and the SCSI HBA cards on the server generally have two channels. So in the case of a single RAID box, one can simply connect the two HBA channels via two cables to the two RAID channels. Where there are more RAIDs, one can daisy-chain the RAIDs on these channels. Provided each RAID box is only attached to each of those daisy-chains once, it's sensible to give the RAID channels the same SCSI ID. If you put both channels of a RAID box on the same daisy-chain, that wouldn't be possible of course, but why would you do that?

So for our RAID f14, I used the RAID firmware tool with the Edit Channels function to set the SCSI ID on both the channels to 14. I've been able to use the same manner of numbering for all our RAIDs: f2-f15.
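
Once the cabling is in place, the assigned IDs can be sanity-checked from the host server; for example (host and device numbering will vary):

   # each mapped LUN appears as [host:channel:target:lun]; for f14 the
   # target number should be the 14 set above, on both RAID channels
   lsscsi
   # or, equivalently, on older systems:
   cat /proc/scsi/scsi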

Our RAID boxes f16 onwards use SAS connections and therefore don't have SCSI IDs. Good planning!

Setting the channel transfer rate (for Infortrend SCSI-attach RAIDs)

Setting the channel transfer rate is important, not only to get optimum transfer speeds, but also, for this hardware combination, in order to support LUNs and filesystems of over 2 TB in size! I've noted this oddity elsewhere.

Infortrend SCSI-attach RAID controllers come with 80 MHz as the default transfer rate. Using the RAID firmware tool with the Edit Channels function Sync transfer Clock, I set the rate to 160 MHz. Wide transfer (which means 16 parallel bits, i.e. 2 bytes per transfer) is also in effect, so this corresponds to a rate of 320 MBytes/second. The SCSI terminator function also needs to be set to enabled if there is nothing attached to the SCSI-Out connector on this RAID channel.
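
The negotiated rate can be checked from the host after boot; the exact message wording depends on the SCSI HBA driver, so this is only a rough check:

   # look for the sync rate and width the HBA driver reports for each target,
   # e.g. a line mentioning 320 MB/s for Ultra320 wide transfers
   dmesg | grep -i 'MB/s'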

Raid level and size

With RAID-6 available on all recent RAID units, this is the RAID level of choice for us. Older units (f9 and earlier) had RAID-5 capability but not RAID-6.

With RAID-5, one disk can fail and the data is still accessible. But if another disk fails before the faulty disk has been replaced OR before the RAID data has been re-built on the replacement disk, then the data is lost. As the re-build process can exercise parts of the disk system which may not be often accessed, this is a particularly vulnerable time. It's good to minimise the overall time between initial disk failure and rebuild-complete. The formula for calculating the overall probability of data loss per unit time of a RAID takes all these factors into account. So for RAID-5, we always have a designated Spare disk: one that is known to the RAID firmware to be available to rebuild onto as soon as an active disk fails.
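
As a rough illustration of why the rebuild window matters (the MTBF and rebuild time below are made-up round numbers, and real drive failures are neither independent nor exponential), the chance that a second disk in a hypothetical 12-disk RAID-5 set fails during one rebuild is of the order of:

   # remaining disks x rebuild time / per-disk MTBF  (very rough;
   # 1,000,000 hours MTBF and an 8-hour rebuild are placeholder figures)
   echo "11 * 8 / 1000000" | bc -l
   # ~ 0.000088 per rebuild; any time spent waiting for a replacement disk
   # adds to the exposure, which is what the designated Spare removes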

For RAID-6, two disks can be in a failed state and the RAID still operates with data intact. This reduces the vulnerability, and usually one doesn't have a nominated spare disk. Nevertheless, one still wants the re-build time to be reasonably short, which means not too many disks in one RAID set. The size of each physical disk clearly influences the rebuild time too, but that's a pre-purchase decision.

Creating Logical Drives

For our 24-disk RAID-6 units, to avoid unreasonably long rebuild times, we use the RAID firmware tool with the Edit Logical drives function to create two Logical Drives (out of physical disks 1-12, and 13-24), rather than one. These then operate independently as two separate RAID sets as far as RAID failure is concerned. There is no designated spare disk. The individual RAID sets can be thought of as 10 data disks and 2 parity disks, though this is simplistic because the data and the parity information are actually striped over the 12 disks. It takes around 8 hours to create each Logical Drive. It takes longer if you try to do both in parallel.

Splitting the logical drives

Logical Drives of 10TB are quite acceptable and could have a full 10TB filesystem put on them, as we do with local non-GRID filesystems, but in line with our previous practice for Grid storage we currently want to split these into two 5TB filesystems. There are two ways of doing that: splitting the Logical Drive within the RAID firmware, so that each half presents as a separate LUN device to the server, or GPT-partitioning the drive so that it presents several Linux partitions on the same LUN. We have always done the first of these, and this is arguably safer, as the effect of corruption of one Linux partition table is then more limited in scope. There are also performance considerations.
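
For reference, the GPT alternative would be done on the host server with something like the following (we don't do this for Grid storage; the device name is a placeholder, and parted will happily destroy data if pointed at the wrong device):

   # split one ~10TB LUN into two Linux partitions instead of two firmware LUNs
   parted /dev/sdb mklabel gpt
   parted /dev/sdb mkpart grid1 0% 50%
   parted /dev/sdb mkpart grid2 50% 100%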

So within each RAID-set or Logical Drive, we use the RAID firmware tool to split the logical drive into two areas, so each can be accessed by a scsi LUN. The RAID firmware calls this action Partition Logical Drive, but this is not to be confused with DOS/GPT partitions. For a 12-disk RAID-6 set using 1 TB drives, the full RAID set has 9536090 MiBytes, so each area is 4768045 MiBytes. It's sufficient with this firmware simply to re-set the size of the original area to the new smaller size, and the second area is automatically created with the remaining size.
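
The arithmetic can be double-checked with a quick calculation (the MiByte figure is the one the firmware reports for a 12-disk RAID-6 set of these 1 TB drives):

   echo "9536090 / 2"    | bc   # 4768045 MiBytes per area
   echo "4768045 * 2048" | bc   # 9764956160 blocks of 512 bytes per area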

[Useless fact: accessible block numbers (512-byte blocks) on these LUNs range from 0 to 9764956159 (as determined by dd with bs=512 count=1 seek=nnn).]
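
A read-only probe of that kind (device name is a placeholder; using skip rather than seek so nothing is written to the LUN) looks like:

   # reading the last accessible block succeeds...
   dd if=/dev/sdb of=/dev/null bs=512 count=1 skip=9764956159
   # ...while one block further on, nothing can be read (0 records in)
   dd if=/dev/sdb of=/dev/null bs=512 count=1 skip=9764956160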

Assigning Host LUNs

We then have to assign these areas to host channels and LUNs, using the RAID firmware tool, with the Edit Host luns function. We use the following mapping:

   Channel 0 LUN 0 -->  Logical Drive 0 part 0
   Channel 0 LUN 1 -->  Logical Drive 0 part 1
   Channel 1 LUN 0 -->  Logical Drive 1 part 0
   Channel 1 LUN 1 -->  Logical Drive 1 part 1
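
With that mapping in place, the host server sees four block devices, roughly as follows (the host numbers, SCSI ID and device names depend on the server and boot order, so this is illustrative only):

   lsscsi
   # [1:0:14:0]  disk  ...  /dev/sdb   <- channel 0, LUN 0: LD 0 part 0
   # [1:0:14:1]  disk  ...  /dev/sdc   <- channel 0, LUN 1: LD 0 part 1
   # [2:0:14:0]  disk  ...  /dev/sdd   <- channel 1, LUN 0: LD 1 part 0
   # [2:0:14:1]  disk  ...  /dev/sde   <- channel 1, LUN 1: LD 1 part 1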

-- LawrenceLowe - 04 Mar 2010
