
VTrak Other Clustering with the 15200
Ken Chou posted this 18 November 2005

Product: VTrak 15200

Platform: Independent

How to configure a two-node iSCSI cluster

We do not support clustering with the 15200 because it proved unreliable in our testing. I recommend you persuade your customer to migrate to a VTrak M500i system, which supports clustering (up to 8 nodes for Windows, Red Hat, and SuSE).

That said, you can use the steps below to configure a two-node iSCSI cluster and assist your customer with the functional aspects. But he will need to use the 15200 at his own risk.

 

Two-Node iSCSI Cluster Testing:

Note: This is a bare-minimum setup. The Client Test System would normally run the HCT Cluster Storage Block Device test, not the PDC.

 

PART I Testing – Cluster Hardware

 

To CONFIGURE the test for Cluster Storage Block Hardware testing:

1.       Begin with the configuration for hardware testing.

      Computer 1 = Cluster Node 1 [Dell PowerEdge 800 Server]

      Computer 2 = Cluster Node 2 [Dell PowerEdge 800 Server]

      Computer 3 = Primary Domain Controller [Dell PowerEdge 800 Server]

2.       Connect nodes 1 and 2 to a 10/100 Ethernet switch. This will be the "Private Network", e.g. 30.0.0.x.

3.       Connect nodes 1 and 2, and the PDC, to the Gb LAN switch. This will be the "Public Network", e.g. 40.0.0.x.

4.       Connect the iSCSI ports of nodes 1 and 2 to the Gb LAN switch. This will be the "iSCSI Network", e.g. 20.0.0.x.

5.       Attach both Promise VTrak iSCSI ports to the Gb LAN switch.

6.       Connect the "Public Network" and "iSCSI Network" to a common public network switch (Nortel 1 Gb Ethernet switch).

7.       The "iSCSI Network" and "Public Network" can be configured as separate private networks within the same LAN switch.

8.       From a factory-defaulted Promise VTrak, configure the following disk arrays:

1.       Disk Array 1 = RAID 5, 64KB SBS, WriteBack Cache using PD: 1~4 with 1x10GB LD

2.       Disk Array 2 = RAID 1, 64KB SBS, WriteBack Cache using PD: 5~6 with 1x10GB LD

3.       Disk Array 3 = RAID 0, 64KB SBS, WriteBack Cache using PD: 7~10 with 1x10GB LD

9.       Enable auto logon in the iSCSI Initiator software.

10.   Log in all nodes to the VTrak iSCSI RAID systems.

11.   From node 1, use the New Partition Wizard in Disk Management to create three 6 GB partitions: the first on disk 1 as drive E, the second on disk 2 as drive F, and the third on disk 3 as drive G. Format all three partitions as NTFS and enable file and folder compression.

12.   Restart node 2.

13.   On node 2, open Disk Management and verify that disks 1, 2, and 3 are formatted properly.  Do not rewrite signatures, repartition, or reformat.

14.   On node 2, assign drive letter E to the partition on disk 1, F to the partition on disk 2, and G to the partition on disk 3.
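The three-network layout in steps 2 through 7 can be sanity-checked programmatically before cabling. A minimal sketch using Python's ipaddress module; only the 20/30/40.0.0.x prefixes come from the steps above, while the /24 masks and the specific host addresses are assumptions for illustration.

```python
import ipaddress

# Subnets from the setup steps; the /24 masks are an assumption.
NETWORKS = {
    "iSCSI":   ipaddress.ip_network("20.0.0.0/24"),
    "Private": ipaddress.ip_network("30.0.0.0/24"),
    "Public":  ipaddress.ip_network("40.0.0.0/24"),
}

# Hypothetical interface plan for the two nodes, the PDC, and the VTrak.
INTERFACES = {
    "node1-private": "30.0.0.1",
    "node2-private": "30.0.0.2",
    "node1-public":  "40.0.0.1",
    "node2-public":  "40.0.0.2",
    "pdc-public":    "40.0.0.3",
    "node1-iscsi":   "20.0.0.1",
    "node2-iscsi":   "20.0.0.2",
    "vtrak-port1":   "20.0.0.101",
    "vtrak-port2":   "20.0.0.102",
}

def network_of(addr: str) -> str:
    """Return which of the three cluster networks an address belongs to."""
    ip = ipaddress.ip_address(addr)
    for name, net in NETWORKS.items():
        if ip in net:
            return name
    raise ValueError(f"{addr} is not on any cluster network")

for iface, addr in INTERFACES.items():
    print(f"{iface}: {addr} -> {network_of(addr)}")
```

A plan like this makes it easy to confirm that no interface accidentally lands on the wrong segment, which matters later when the cluster wizard classifies the networks.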

 

To RUN the test for Cluster Storage Block Hardware testing:

15.   Install HCT 12.1.01 on nodes 1 and 2. The PDC is not needed to run the HCT Cluster Storage Block Hardware test.

16.   During installation, you are prompted to select a test kit. De-select any pre-selected test categories. In the Custom Setup window, expand Test Categories, expand Storage Controllers and Devices, expand Storage Block Devices, and then select "This feature, and all subfeatures, will be installed on the local hard drive."

When the HCT kit is installed, restart the computer.

17.   Click Start, point to All Programs, click HCT, and then click HCT 12.1.

18.   When Test Manager is finished scanning the system for new hardware, the Test Selection Wizard pane appears. Confirm that Storage Block Devices is not selected in the Categories column.

19.   Click Next.

20.   On the left pane, in the Device Name column, select the MS iSCSI Initiator or iSCSI HBA that is connected to the RAID system.

When the installation is complete, click Finish.

21.   On node 1, start the HCT Test Manager, click Client Server Phase 1: Client, and then click Add Selection. Manually enter the number of nodes in parameter #2, click OK, and then click Start Tests.

22.   On node 2, start the HCT Test Manager, click Client Server Phase 1: Server, and then click Add Selection. Manually enter the name of node 1 in parameter #2, click OK, and then click Start Tests.

23.   Verify that all nodes passed.
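For reference, the usable capacity of the three disk arrays defined in step 8 of the setup follows standard RAID arithmetic: RAID 0 stripes across all drives, RAID 1 mirrors, and RAID 5 gives up one drive's worth of space to parity. A sketch with a hypothetical per-drive size (the procedure only specifies the 10 GB LDs carved out of each array, not the physical drive size):

```python
def usable_gb(level: int, drives: int, drive_gb: float) -> float:
    """Usable capacity of a RAID set in GB, ignoring metadata overhead."""
    if level == 0:      # striping: all drives usable
        return drives * drive_gb
    if level == 1:      # mirroring: half the drives usable
        return drives * drive_gb / 2
    if level == 5:      # one drive's worth of parity
        if drives < 3:
            raise ValueError("RAID 5 needs at least 3 drives")
        return (drives - 1) * drive_gb
    raise ValueError(f"unsupported RAID level {level}")

DRIVE_GB = 80  # hypothetical drive size, not taken from the procedure

print("Array 1 (RAID 5, PD 1~4): ", usable_gb(5, 4, DRIVE_GB), "GB usable")
print("Array 2 (RAID 1, PD 5~6): ", usable_gb(1, 2, DRIVE_GB), "GB usable")
print("Array 3 (RAID 0, PD 7~10):", usable_gb(0, 4, DRIVE_GB), "GB usable")
```

Any of these arrays easily accommodates the 10 GB LDs used in the test, which is why the procedure treats the array sizes as incidental.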

 

 

PART II Testing – Cluster Device Services

 

To CONFIGURE the test for Cluster Storage Block Device testing:

1.       Begin with the configuration for hardware testing.

      Computer 1 = Cluster Node 1 [Dell PowerEdge 800 Server]

      Computer 2 = Cluster Node 2 [Dell PowerEdge 800 Server]

      Computer 3 = Primary Domain Controller [Dell PowerEdge 800 Server]

 

To CONFIGURE the tests for Cluster Storage Block Device Services testing:

2.       From a factory-defaulted Promise VTrak, configure the following disk array:

A.      Disk Array 1 = RAID 5, 64KB SBS, WriteBack Cache using PD: 1~4 with 3x10GB LDs. (One LD is needed for each node, plus one LD for the quorum.)

3.       Enable auto logon in the iSCSI Initiator software.

4.       Login all Nodes to the VTrak iSCSI RAID Systems.

5.       On node 1, use Windows Disk Management to reformat disks 1-3 with the NTFS file system. Do not select the Quick Format option; do select the Enable File and Folder Compression option.

6.       Verify that nodes 1 and 2 recognize the shared disks. (Do not rewrite signatures, repartition, or reformat.)

7.       From node 1, click Start, point to All Programs, click Administrative Tools, and then click Cluster Administrator.

8.       In the Cluster Administrator window, from the File menu, select Open Connection, and then select Create new cluster.

9.       Follow the instructions in the New Server Cluster Wizard.

10.   While the cluster is being analyzed, the Analyzing Cluster page displays the results of the analysis. If a red "X" character is displayed on the Analyzing Cluster page, click the Back button to correct the information.

11.   Provide a static IP address for the cluster. This IP address must be valid on the "Public Network", e.g. 40.0.0.x.

12.   When you are prompted to enter log on information, enter the username and password for the domain account that the cluster service will run under.

13.   This account is given administrator rights on nodes 1 and 2.

14.   Verify that the information on the Proposed Cluster Configuration page is correct, and then click Next to create the cluster.

15.   While the cluster is being created, the Creating Cluster page displays the results of the creation process. If a red "X" character is displayed on the Creating Cluster page, click the Back button to correct the information.

16.   Once the cluster is created, click Finish to exit the New Server Cluster Wizard.

17.   Once the cluster is created, right-click the cluster name (the icon at the top of the tree), select Properties, select the Network Priority tab, move the "Private Network" connection to the top of the list, and then select Properties.

18.   Select Radio button Internal Cluster Communications Only (Private Network), click Apply, and then click OK twice.

19.   If the "iSCSI Network" is listed in the Network Priority tab, select the "iSCSI Network" connection, select Properties, and then uncheck Enable this network for cluster use. This removes the "iSCSI Network" from the list. Click Apply, then OK twice.

20.   Use Cluster Administrator to start the New Server Cluster Wizard on node 2.

21.   For the Action, select Add nodes to cluster.

22.   On the Select Computers page, enter the name of node 2 and then click Add.

23.   While the cluster is being analyzed, the Analyzing Configuration page displays the results of the analysis. If a red "X" character is displayed on the Analyzing Configuration page, click the Back button to correct the information.

24.   When you are prompted to enter log on information, enter the username and password for the domain account that the cluster service will run under.

25.   Verify that the information on the Proposed Cluster Configuration page is correct, and then click Next to create the cluster.

26.   While the cluster is being analyzed, the Adding Nodes to Cluster page displays the results of the analysis. If a red "X" character is displayed on the Adding Nodes to Cluster page, click the Back button to correct the information.

27.   Once the node has been added to the cluster, click Finish to exit the New Server Cluster Wizard.

 

To RUN the tests for Cluster Storage Block Device Services testing:

28.   Install HCT 12.1.01 on the PDC (not on any of the cluster nodes).

29.   During installation, you are prompted to select a test kit. De-select any pre-selected test categories. In the Custom Setup window, expand Test Categories, expand Storage Controllers and Devices, expand Storage Block Devices, and then select "This feature, and all subfeatures, will be installed on the local hard drive."

30.   When the HCT kit is installed, restart the computer.

31.   Click Start, point to All Programs, click HCT, and then click HCT 12.1.

32.   When Test Manager finishes scanning the system to detect hardware, the Test Selection Wizard pane appears. Make sure that Storage Block Devices is selected in the Categories column. The text will change to Remote Test Host.

33.   Click Finish.

34.   Select the device name of the HBA that is connected to the RAID system.

35.   When the installation is complete, click Finish.

36.   Run the Simple Crash N Node test and the Simple Move N Node test.

 

 

Notes:

1.       If you have problems adding a node to the cluster, run this command from the command prompt:

                        cluster node /forcecleanup

         For a list of cluster commands, enter cluster /?.

2.       Enable auto logon on all nodes.

3.       Disable screensavers on all nodes.

4.       Set the power scheme to Always On, and set the timeouts under Settings for Always On power scheme to Never.

5.       To create a domain controller, run dcpromo from the Run command.

6.       There is no need to enter a gateway for the "Private, Public, or iSCSI Networks."

VTrak Other WebPAM Events
Ken Chou posted this 18 November 2005

Product: VTrak 15100

Platform: Independent

Additional Description: See attachment

WebPAM Events

NS - Not Supported

NT - Not Tested

S3.1 – Supported in SR3.1

Severity levels are as reported by WebPAM; they are not tied to the firmware version.

 

The table below lists each event ID and description, its WebPAM severity level, and whether Event Subscription & Email ("Email") and the Event frame ("Frame") report it in firmware releases 02.00.0000.00, 02.01.0000.00, and 02.03.0000.15.

Event ID | Event Description | Severity | 02.00 Email | 02.00 Frame | 02.01 Email | 02.01 Frame | 02.03 Email | 02.03 Frame

Controller Events
458753 | Data Parity Error | | Yes | Yes | Yes | Yes | Yes | Yes
458754 | Command Parity Error | | Yes | Yes | Yes | Yes | Yes | Yes
458755 | Bus Reset | | Yes | Yes | Yes | Yes | Yes | Yes
458756 | Unrecoverable Error | | Yes | Yes | Yes | Yes | Yes | Yes
458757 | Abort Task | | Yes | Yes | Yes | Yes | Yes | Yes
458759 | Clear ACA | | Yes | Yes | Yes | Yes | Yes | Yes
458761 | LUN Reset | | Yes | Yes | Yes | Yes | Yes | Yes
458762 | Initiator Error | | Yes | Yes | Yes | Yes | Yes | Yes
458763 | Illegal Secondary Identify | | Yes | Yes | Yes | Yes | Yes | Yes
458764 | Message Parity Error | | Yes | Yes | Yes | Yes | Yes | Yes
458765 | Bus Reboot | | Yes | Yes | Yes | Yes | Yes | Yes

Ultratrak Events
101 | Ultratrak Connection Connected | | Yes | Yes | Yes | Yes | Yes | Yes
102 | Ultratrak Connection Lost | | Yes | Yes | Yes | Yes | Yes | Yes

Disk Events
851969 | Fatal Error | | NS/S3.1 | Yes | NS | Yes | Yes | Yes
851970 | Disk Down | Critical | Yes | Yes (same as Disk Removed) | Yes | NS (same as Disk Removed) | Yes | NS (same as Disk Removed)
851978 | Disk Plugged In | | Yes | Yes | Yes | Yes | Yes | Yes
851979 | Disk Removed | | Yes | NS | Yes | Yes | Yes | Yes
851980 | Disk Access Retry | | Yes | Yes | Yes | Yes | Yes | Yes
851982 | Disk CRC Error | | Yes | Yes | Yes | Yes | Yes | Yes
1048576 | Disk S.M.A.R.T. Failed | Critical | Yes | Yes | Yes | Yes | Yes | Yes
851981 | Disk ECC Error | | Yes | Yes | Yes | Yes | Yes | Yes
851983 | Disk Bad Sector | Critical | Yes | Yes | Yes | Yes | Yes | Yes
851984 | Disk Reserve Sector Error | | Yes | Yes | Yes | Yes | Yes | Yes
851985 | Disk Timeout | | Yes | Yes | Yes | Yes | Yes | Yes
1835010 | Wipe Out Boot Sector | Critical | NS/S3.1 | Yes | NS | Yes | Yes | Yes
1835009 | Wipe Out Reserved Sector | Critical | NS/S3.1 | Yes | NS | Yes | Yes | Yes

DiskArray Events
1245184 | Create Disk Array | | NS (see Create LD) | NS | NS (see Create LD) | NS | Yes | Yes
1245185 | Delete Disk Array | | NS (see Delete LD) | NS | NS (see Delete LD) | NS | Yes | Yes
589828 | Disk Array Critical | Critical | Yes | Yes | Yes | Yes | Yes | Yes
589827 | Disk Array Offline | Critical | Yes | Yes | Yes | Yes | Yes | Yes
786432 | Migration Started | | Yes | Yes | Yes | Yes | Yes | Yes
786433 | Migration Completed | | Yes | Yes | Yes | Yes | Yes | Yes
786436 | Migration Stopped | | Yes | Yes | Yes | Yes | Yes | Yes
983040 | Rebuild Started | | Yes | Yes | Yes | Yes | Yes | Yes
983041 | Rebuild Completed | | Yes | Yes | Yes | Yes | Yes | Yes
983047 | Rebuild Aborted | | Yes | Yes | Yes | Yes | Yes | Yes
196608 | Redundancy Check Started | | NS/S3.1 | Yes | NS | Yes | Yes | Yes
196609 | Redundancy Check Completed | | NS/S3.1 | Yes | NS | Yes | Yes | Yes
196614 | Redundancy Check Aborted | | NS/S3.1 | Yes | NS | Yes | Yes | Yes
196610 | Redundancy Check Paused | | NS/S3.1 | Yes | NS | Yes | Yes | Yes
196611 | Redundancy Check Resumed | | NS/S3.1 | Yes | NS | Yes | Yes | Yes
196615 | Redundancy Check Inconsistent Stripe | Critical | NS/S3.1 | Yes | NS | Yes | Yes | Yes
1179648 | Synchronization Started | | Yes | Yes | Yes | Yes | Yes | Yes
1179649 | Synchronization Completed | | Yes | Yes | Yes | Yes | Yes | Yes
1179653 | Synchronization Aborted | | Yes | Yes | Yes | Yes | Yes | Yes
1179650 | Synchronization Paused | | NS/S3.1 | Yes | NS | Yes | Yes | Yes
1179651 | Synchronization Resumed | | NS/S3.1 | Yes | NS | Yes | Yes | Yes
1769472 | PDM Started | | NS/S3.1 | Yes | NS | Yes | Yes | Yes
1769473 | PDM Completed | | NS/S3.1 | Yes | NS | Yes | Yes | Yes
1769477 | PDM Aborted | | NS/S3.1 | Yes | NS | Yes | Yes | Yes
524288 | Full Initialization Start | | Yes | Yes | Yes | Yes | Yes | Yes
524289 | Full Initialization Completed | | Yes | Yes | Yes | Yes | Yes | Yes
524292 | Full Initialization Stopped | | Yes | Yes | Yes | Yes | Yes | Yes
524295 | Quick Initialization Start | | Yes | Yes | Yes | Yes | Yes | Yes
524296 | Quick Initialization Completed | | Yes | Yes | Yes | Yes | Yes | Yes
524299 | Quick Initialization Stopped | | Yes | Yes | Yes | Yes | Yes | Yes
99 | Percent Completed | | Yes | Yes | Yes | Yes | Yes | Yes

LogicalDrive Events
589834 | Create Logical Drive | | Yes | Yes | Yes | Yes | Yes | Yes
589835 | Delete Logical Drive | | Yes | Yes | Yes | Yes | Yes | Yes
589832 | Quick Initialization Start | | Yes | Yes | Yes | Yes | Yes | Yes
589833 | Quick Initialization Completed | | Yes | Yes | Yes | Yes | Yes | Yes

Enclosure Events
917508 | Enclosure Power Down | | Yes | Yes | Yes | Yes | Yes | Yes
917507 | Enclosure Power Up | | Yes | Yes | Yes | Yes | Yes | Yes
917504 | Power Supply Not Present | | NS/S3.1 | Yes | NS | Yes | Yes | Yes
1114115 | Enclosure Over Temperature | Critical | Yes | Yes | Yes | Yes | Yes | Yes
327681 | Enclosure Fan Stop | Critical | Yes | NS | Yes | NS | Yes | NS
327685 | Cooling Device Inserted | | NS/S3.1 | NS/S3.1 | NS | Yes | Yes | Yes
327686 | Cooling Device Removed | | NS/S3.1 | NS/S3.1 | NS | Yes | Yes | Yes
1114114 | Enclosure 3.3V Out of Range | | Yes | NS | Yes | NS | Yes | NS
1114113 | Enclosure 5.0V Out of Range | Critical | Yes | Yes | Yes | Yes | Yes | Yes
1114112 | Enclosure 12.0V Out of Range | Critical | Yes | Yes | Yes | Yes | Yes | Yes
65548 | Battery Inserted | | NS/S3.1 | NS/S3.1 | NS | NS | Yes | Yes
65547 | Battery Removed | | NS/S3.1 | NS/S3.1 | NS | NS | Yes | Yes
65536 | Battery Temperature Out of Range | | Yes | Yes | Yes | Yes | Yes | Yes
65537 | Battery Temperature Back to Normal | | Yes | Yes | Yes | Yes | Yes | Yes
65538 | Battery Capacity Below Threshold | | Yes | Yes | Yes | Yes | Yes | Yes
65539 | Battery Capacity Normal | | Yes | Yes | Yes | Yes | Yes | Yes
65540 | Battery Discharging | | Yes | Yes | Yes | Yes | Yes | Yes
65541 | Battery Charging | | Yes | Yes | Yes | Yes | Yes | Yes
65542 | Battery Maintenance Mode | | Yes | Yes | Yes | Yes | Yes | Yes
65546 | Battery Life Ended | | Yes | Yes | Yes | Yes | Yes | Yes
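One pattern worth noting: the numeric event IDs in the table appear to split into a 16-bit class in the upper bits and an event index in the lower bits, with events in the same family sharing the upper word (controller events under 0x0007, disk events under 0x000D, battery events under 0x0001, and so on). This is an observation from the values themselves, not documented firmware behavior; a quick check:

```python
def decode_event_id(event_id: int) -> tuple:
    """Split a WebPAM event ID into its apparent (class, index) 16-bit halves."""
    return divmod(event_id, 0x10000)

# Controller events share class 7:
print(decode_event_id(458753))  # Data Parity Error
print(decode_event_id(458754))  # Command Parity Error

# Disk events share class 13:
print(decode_event_id(851969))  # Fatal Error
print(decode_event_id(851970))  # Disk Down

# Battery events share class 1:
print(decode_event_id(65546))   # Battery Life Ended
```

Grouping IDs this way can be handy when filtering event logs by subsystem, though the class numbering should be verified against the firmware documentation before relying on it.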
