VTrak Ex10 VTrak Drive Size Coercion Implementation
Ken Chou posted this 28 May 2008


Introduction

Drive size coercion aligns the physical drive size to a suitable boundary for RAID configuration. This is the size used when creating arrays and logical drives.

Scope

This document briefly describes the implementation of drive size coercion of VTrak controllers.

Overview

Drives from different vendors come with different numbers of LBAs for the same nominal size. Aligning these drive sizes to a common boundary makes it possible for the RAID configuration to use hot spares and replacement drives with minor capacity differences.

Terminology

  • LD: Logical Drive
  • LBA: Logical Block Address (in this document, synonymous with one 512-byte block)
  • Sector: 1 logical block – 512 bytes
  • PD: Physical Drive
  • Capacity: Drive size (before or after coercion)
  • LED: Light Emitting Diode
  • BP: Backplane
  • FRU: Field Replaceable Unit
  • SEP: Storage Enclosure Processor
  • SEMB: Storage Enclosure Management Bridge – In the scope of this document the terms “SEMB”, “controller”, “mother board” and “SEP driver” are synonymous.
  • POR: Power On Reset
  • BGA: Background Activity – used with reference to the scheduler that runs in the context of the application to control some of the background activities in VTrak controllers.
  • MDD/DDF: Configuration data on disk.
  • Gigabyte: In the context of this document, refers to 1000 * 1000 * 1000 bytes; note that this is not equal to 1024 * 1024 * 1024 bytes.
  • OEM: Original Equipment Manufacturer.

 

Requirements

Key requirements for drive capacity coercion with VTrak controllers are:

  • An option to disable coercion
  • Truncate drive size to the nearest GByte boundary
  • Coercion based on drive capacity
  • Use a coercion table

Please note: since alternate coercion methods can truncate drive size differently, a previously coerced or truncated drive capacity remains valid regardless of which new coercion method is enabled. The following sections describe the above features in detail.

 

Coercion Options

No Coercion

The VTrak controller settings provide an option to enable or disable coercion. If disabled, no coercion method is applied to the drives in the controller; the size available for RAID purposes is the drive capacity remaining after the DDF/MDD reserved space. This option is persistent, as controller settings are stored in NVRAM.

Coercion aligning to GigaByte boundary

This is the default method when coercion is enabled. It truncates the drive capacity remaining after the MDD reserved space to the nearest gigabyte boundary. If the drive capacity is less than 1 gigabyte, no coercion is applied.

The formula for this method is given below:

Coerced Drive Capacity = ((Drive Capacity – MDD size)/ GigaByteInSectors ) * GigaByteInSectors

Where GigaByteInSectors = 0x1DCD65 (1,953,125 sectors = 1,000,000,000 bytes)

Coercion aligning to 10GB boundary

This method truncates the drive size for RAID purposes to a 10GB boundary.

The formula for this method is given below:

Coerced Drive Capacity = ((Drive Capacity – MDD size)/ 10GigaByteInSectors ) * 10GigaByteInSectors

Where 10GigaByteInSectors = 0x12A05F2 (19,531,250 sectors = 10,000,000,000 bytes)
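As a minimal sketch, the 1GB and 10GB methods share the same truncation formula and differ only in the boundary constant. The code below is illustrative (not the firmware implementation); capacities are expressed in 512-byte sectors, and the MDD reserved size is passed in as a parameter.

```python
# Boundary-truncation sketch shared by the 1 GB and 10 GB coercion methods.
# Capacities are in 512-byte sectors; the MDD reserved size is an input here.
GIGABYTE_IN_SECTORS = 0x1DCD65        # 1,953,125 sectors = 1,000,000,000 bytes
TEN_GIGABYTES_IN_SECTORS = 0x12A05F2  # 19,531,250 sectors = 10,000,000,000 bytes

def coerce_to_boundary(capacity_sectors, mdd_size_sectors, boundary_sectors):
    """Truncate the usable capacity down to the given boundary."""
    usable = capacity_sectors - mdd_size_sectors
    if usable < boundary_sectors:
        return usable  # below one boundary unit: no coercion applied
    return (usable // boundary_sectors) * boundary_sectors
```

For example, `coerce_to_boundary(capacity, mdd, GIGABYTE_IN_SECTORS)` gives the default gigabyte-boundary behavior, and passing `TEN_GIGABYTES_IN_SECTORS` gives the 10GB variant.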

Coercion based on drive Size or Group rounding

If this method is chosen (also called the group rounding method), coercion is based on drive capacity: the drive is truncated to 1GB less than the drive’s closest 10GB/5GB boundary.

Today’s drives vary considerably in actual capacity for a given nominal size. Although all vendors claim the same size, the margin varies from a couple of MB (around 30MB) to a couple of GB (around 2GB). For example, Seagate claims a drive size of 80GB, but the actual size is only a couple of MB more than 80GB, while Maxtor and Hitachi drives are almost 2GB above 80GB.

When looking at drive size after the DDF reserved space, the Seagate drive is only 79GB while the Maxtor and Hitachi drives are both 81GB. This means that even 10GB coercion will not solve the problem, since the Seagate drive would end up at 70GB while both Maxtor and Hitachi would be at 80GB. Keeping this scenario in mind, to make drives from different vendors usable in the same RAID set-up, the alignment boundary is chosen to be 1GB less than the closest 10GB/5GB boundary of the disk size.

Here is a more detailed example: drives above 80GB and below 85GB are coerced to 79GB; drives above 85GB and below 90GB are coerced to 84GB, and so on. If this method is applied to drives smaller than 50GB, the wasted space would be considerable; however, for drives greater than 50GB and less than 1TB, this coercion method works well.
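The group rounding rule above can be sketched as follows. This is an illustrative simplification working in whole gigabytes (the real implementation works in sectors and subtracts the MDD reserved space first); the 50GB lower limit mentioned above is not enforced here.

```python
# Group rounding sketch: truncate to 1 GB below the drive's closest
# 5 GB boundary. Sizes are in whole gigabytes for clarity only.
def group_round_gb(size_gb):
    if size_gb < 5:
        return size_gb  # too small for this method to apply meaningfully
    return (size_gb // 5) * 5 - 1
```

This reproduces the example in the text: an 81GB Maxtor or Hitachi drive and an 84GB drive both round to 79GB, matching a nominally 80GB Seagate drive.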

Coercion Table Method

This method is based on the coercion table residing in flash. The default coercion table is described below. The table is searched for the entry that matches the requested drive capacity, and the corresponding coercion factor is used to compute the coerced size as given below:

Coerced Drive Capacity = ((Drive Capacity – MDD Size) / Coercion Factor) * Coercion Factor

Currently there are no interfaces or APIs provided to accept an OEM table, so the default coercion table, shown below, is used:

Entry Number   Drive Size   Coercion Boundary
1              0-19GB       1GB
2              20-39GB      20GB
3              40-59GB      40GB
4              60-79GB      60GB
5              80-99GB      80GB
6              100-119GB    100GB
7              120-159GB    120GB
8              160-199GB    160GB
9              200-249GB    200GB
10             250-299GB    250GB
11             300-319GB    300GB
12             320-359GB    320GB
13             360-399GB    360GB
14             400-449GB    400GB
15             450-599GB    450GB
16             600-799GB    600GB
17             800-999GB    600GB
18             > 1TB        1TB
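The table lookup can be sketched as below. This is illustrative only, working in whole gigabytes on the capacity remaining after the MDD reserved space; the boundary values are taken from the default table above, and the treatment of drives at or above 1TB (entry 18) is an assumption of a 1TB truncation boundary.

```python
# Coercion-table lookup sketch based on the default table in the text.
# Each entry is (lower bound GB, upper bound GB, coercion factor GB).
DEFAULT_COERCION_TABLE = [
    (0, 19, 1), (20, 39, 20), (40, 59, 40), (60, 79, 60),
    (80, 99, 80), (100, 119, 100), (120, 159, 120), (160, 199, 160),
    (200, 249, 200), (250, 299, 250), (300, 319, 300), (320, 359, 320),
    (360, 399, 360), (400, 449, 400), (450, 599, 450), (600, 799, 600),
    (800, 999, 600),
]

def coerce_by_table(usable_gb):
    """Truncate the usable capacity using the matching table entry's factor."""
    for low, high, factor in DEFAULT_COERCION_TABLE:
        if low <= usable_gb <= high:
            return (usable_gb // factor) * factor
    # Entry 18 (> 1TB): assumed here to truncate to a 1 TB boundary.
    return (usable_gb // 1000) * 1000
```

For instance, an 81GB usable capacity falls in entry 5 and is coerced to 80GB, and a 35GB capacity falls in entry 2 and is coerced to 20GB.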

 

Conclusion

With these coercion options, you can choose the method that best eases RAID configuration with drives of different capacities.

Best Practices for E-Class VTrak RAID
Ken Chou posted this 13 May 2008

Abstract

Reliable operation of Promise Technology storage devices is dependent upon correct assembly, cabling, and configuration. Incorrect assembly or cabling can result in various intermittent errors that can appear as drive failure or path loss. Incorrect configuration can result in data loss and poor performance.

It is also important that only fully qualified hardware is used. A list of fully qualified hardware can be found on the Promise Technology website; please check the download section for the latest VTrak E-Class compatibility list.

Checklist

Assembly

Ensure that the following are correctly seated and locked in place. It is recommended to reseat all removable FRUs before installation.

  • Controllers
  • Power supplies
  • Fan trays

Ensure that all cables are fully seated and locked in place:

  • SAS cables to HBA (Host Bus Adapter) snapped into place
  • Ensure there is no debris in the MiniSAS connector when connecting cables
  • MiniSAS cables between JBODs or JBODs and the RAID Head network connectors snapped into place
  • Power cables are fully seated with wire lock in place
  • Drive trays should be fully seated and locked
  • For adjacent drive trays, one tray needs to be seated and locked before the next drive tray is inserted

Code Set

Firmware for the RAID head and JBOD expansion should be at the latest qualified builds:

  • Firmware/Software for RAID head is version 3.29.0000.00 or newer
  • Expander on JBODs known as SEP should be version 1.07.0000.04
    (check from JBOD IO Module RS232 connection using “enclosure –v” command or via CLI using “sasdiag” command)

Configuration

Array and LD configuration is set by the customer according to their requirements, or use the recommended Apple scripts. The following steps ensure LD integrity.

  • A full initialization should be run on every LD to ensure media is free of errors and parity is established. Initialization is data destructive.
  • Synchronization will build parity on read. Synchronization is not data destructive.
  • It is recommended to finish all BGA (Background Activity) before running Final Cut Pro. BGA increases overhead and can result in dropped frames.

Error Notification

Make sure one of the provided error notification methods is used:

  • Set Email or other notification

 

General Troubleshooting

Known Issues

  • Low performance in the Mac OS environment may be caused by not having both Ethernet management ports on the VTrak E-Class attached and configured to access your network. Please be sure both Ethernet management ports are attached and active.
    This issue is fixed in firmware update SR 2.4 and later.

Low Performance when using custom settings

  • Ensure LUN Affinity is enabled and LDs are load balanced
  • Check BBU hold time. A marginal BBU with less than a 72-hour hold time will trigger the Adaptive Write Cache feature and set the LD Write Cache to Write Thru, impacting write performance
  • Ensure MDD LD (Leopard XSAN environment) has the following cache settings
    • Readpolicy set to ReadCache
    • Writepolicy set to Write Thru
  • Check the PD “stats” command output to ensure that a particular PD is not the root cause. Check:
    • Average Response Time
    • Maximum Response Time
  • Make sure you have at least two FC paths going to each host
  • Verify FC Current Transfer Speed is set to 4Gbit

Controller Fail Over potential root cause

  • Controller failure (very rare, seldom seen)
  • Broken Path RAID Head to JBOD
    • Cable on respective SAS domain has been broken
    • Link counter errors on expander (cable etc)
  • Broken path from RAID Head to AAMUX dual path
    • One of the two controllers can see the PD, the other cannot
  • Controller in maintenance mode
    • Most of the above
    • Memory mismatch (we ship 2 GB so unlikely unless controller purchased outside the Apple channel)
    • Firmware mismatch, replacement controller could have shipped with newer or older firmware (this may happen)

Controller Fail Over diagnosis

  • Connect both Serial Management ports to each respective serial console to interrogate which controller is in maintenance mode. Reference the Quick Install Guide for instructions.
  • If the controller is in maintenance mode, issue the “event” command to understand the root cause.

Power up Sequence

  • Power up the JBOD first, then power up the RAID Head. Not doing so can lead to an offline LD (reference the Promise or Apple KB on how to force PDs online). Use extra caution: forcing a PD online with stale data can lead to data loss.

Debug Prints for Promise FAE

  • Enable debug prints at the “warning” level on both controllers via CLI using the “debug” command
  • Have two Terminal consoles open and capture output of each console to a file
  • Save “SubSystemInfo” and send information to FAE
    • From CLI use "export" command (type "? export" for instructions)
    • From WebPAM PRO. Click on IP Icon then click on “Save”
    • See link below for detailed instructions to save Subsystem Information.

Link Counters on JBOD Expanders

  • Check the respective IO Modules on the JBODs to ensure there are no errors being encountered on the links (for additional insight, reference “Link Error Workaround Procedure for VTrak E-Class”).
  • From the CLI on the RAID Head using the “sasdiag” command, or from each respective RS232 serial port on each JBOD IO Module (JBOD CLI), make sure there are no link errors being reported. If link errors are present, clear them using the “link –a clear” command. After the link errors are cleared, generate IO either from the host side or via BGA if there is no data on the LD (Background Activity such as Init, Sync, or Media Patrol)
  • Issue “link” command again. Link output via CLI should be free of errors
  • Repeat procedure on each respective JBOD IO Module
  • If link errors persist, do the following:
    1. Power down VTrak Ex10 and JBOD and reseat the JBOD IO Module
    2. Replace cable
    3. If the issue continues replace JBOD IO module with a known good working module
    4. The end result should be Link output via CLI being free of errors

End Notes

The steps above will help ensure data integrity and protect against unnecessary and preventable storage problems. If assembly errors are found in a storage chassis already configured, they should be corrected as soon as possible.
