
5 Disk Array

5.1 reconfigScanDisk

This script can be run only on software older than 06MW03.x.

NOTE: Normally, this test is not run in the field; Applications/Install reconfig runs it
internally/automatically to set up RAID. This program sets up RAID (and then tests the
RAID setup).

This erases all data on the Scan Disk.

On the DARC,

1. Open a shell window.


2. Type: rsh darc<Enter>
3. Type: /usr/g/scripts/reconfigScanDisk<Enter>

Each action appears on screen and OK should appear next to the action once it has completed
successfully.
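If a record of the run is wanted, the script output can be captured and scanned for any action that did not report OK. A minimal sketch, run on the DARC (the log path is arbitrary; it assumes each successful action line ends in OK, as described above):

# Capture the script output, then list any action line that did not
# end in OK. An empty grep result means every action completed.
/usr/g/scripts/reconfigScanDisk | tee /tmp/reconfigScanDisk.log
grep -v 'OK$' /tmp/reconfigScanDisk.log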

5.2 Verification of the Two High Speed Disks

1. Type: rsh darc<Enter>


2. Type: cat /proc/scsi/scsi<Enter>

The output is shown below.

Attached devices:
Host: scsi0 Channel: 00 Id: 00 Lun: 00
Vendor: MITSUMI Model: CD-ROM SR244W Rev: T01A
Type: CD-ROM ANSI SCSI revision: 02
Host: scsi1 Channel: 00 Id: 00 Lun: 00
Vendor: SEAGATE Model: ST336753LW Rev: 0005
Type: Direct-Access ANSI SCSI revision: 03
Host: scsi1 Channel: 00 Id: 01 Lun: 00
Vendor: SEAGATE Model: ST336753LW Rev: 0005
Type: Direct-Access ANSI SCSI revision: 03
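For a quick scripted check of the same information, count the disk entries. A minimal sketch (Direct-Access is the SCSI type string for disks, so this also works with alternate drive manufacturers):

# Count the disk (Direct-Access) entries; expect 2.
grep -c 'Direct-Access' /proc/scsi/scsi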

5.3 Disk Array Drive Test

This test reads the entire drive; the resulting activity lights the drive's LED so you can identify it.

1. Type: rsh darc<Enter>


2. Type: su -<Enter>
3. Type: #bigguy<Enter>
4. Type: dd if=/dev/sda of=/dev/null<Enter>
to verify the first Raw Data Drive in the Disk Array.

- OR -

Type: dd if=/dev/sdb of=/dev/null<Enter>

to verify the second Raw Data Drive in the Disk Array.

NOTE: These two tests take around 10 minutes each. You will not see any output
while they are running.

Example Output:

71687372+0 records in

71687372+0 records out
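Equal "records in" and "records out" counts with a +0 remainder mean the entire disk was read without error. The count can also be sanity-checked against the drive capacity, since dd reads 512-byte records by default:

# 71687372 records x 512 bytes/record = total bytes read
echo $((71687372 * 512))    # 36703934464 bytes, about 36.7 GB (a 36 GB drive)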

6 Other

6.1 sg_map -i

NOTE: "sg" stands for "SCSI Generic"

This test applies to emulated SCSI devices like the DARC CD-ROM.

1. Open a Unix shell as root.


2. Type: rsh darc<Enter>
3. Type: sg_map -i<Enter>

The output is shown below.

/dev/sg0 /dev/scd0 MITSUMI CD-ROM SR244W T01A (CD-ROM in the DARC)

/dev/sg1 /dev/sda SEAGATE ST336753LW 0005 (1st Raw Data Disk)

/dev/sg2 /dev/sdb SEAGATE ST336753LW 0005 (2nd Raw Data Disk)
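To confirm both Raw Data Disks appear in the map without reading the whole listing, the output can be filtered. A minimal sketch (assumes the /dev/sda and /dev/sdb naming shown above):

# Count the map lines for the two Raw Data Disks; expect 2.
sg_map -i | grep -c '/dev/sd[ab]'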

6.2 Raid Devices

6.2.1 Verify Raid Device Is Running

1. Type: rsh darc<Enter>


2. Type: cat /proc/mdstat<Enter>

Personalities : [raid0]

read_ahead 1024 sectors

md0 : active raid0 sdb1[1] sda1[0]
      71681792 blocks 4k chunks

unused devices: <none>

The device is named 'md0'; 'active' means the device is available.
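For a scripted check, the same state can be tested directly. A minimal sketch (assumes the md0/raid0 naming shown above):

# Exit status 0 means md0 is up; nothing is printed if it is not.
grep -q 'md0 : active raid0' /proc/mdstat && echo 'RAID device md0 is running'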

6.2.2 Verifying Raid Device Is Mounted on /raw_data

1. Type: rsh darc<Enter>


2. Type: df -h /raw_data<Enter>

Filesystem Size Used Avail Use% Mounted on

/dev/md0 68G 4.4G 63G 7% /raw_data
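The same check can be scripted. A minimal sketch (assumes /dev/md0 is the RAID device, as above):

# Prints a confirmation only if /raw_data is backed by /dev/md0.
df /raw_data | grep -q '^/dev/md0' && echo '/raw_data is mounted on /dev/md0'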

gre-raid Disk Array Tests - GOC3/GOC4/AIO


Last Revised: 14 May 2008

1 Overview
The DARC/DARC2 Node mounts the Disk Array. The -c (create) command wipes out all Scan
(Patient) Data and reconfigures the Scan Disk. The -c and -a tests must be run with Application
Software down. The -q (query) test may be run with Application Software up or down.

Additional commands to see whether the RAID is mounted and the disks are present are provided
in Section 4.

NOTE: For the All-In-One Console, there is no DARC node; the CDIP card and scan data
disks are mounted in the Host computer. All the commands run on the DARC node can
still be run on the Host computer. It is not necessary to run "rsh darc" for the AIO
console.

2 Determine Software
Confirm the Host Application Software is equal to or greater than 06MW03.4. If the software is less
than 06MW03.4, do NOT perform this procedure.

1. Open a Unix Shell and type the following to verify the Applications software version information.
Examples are provided; they are not actual output.

{ctuser@hostname} swhwinfo

(year)MW(fiscal week).(fiscal day).hardware revision info here

Output must be: 06MW03.4 hardware revision info (or greater)


2. If the software version is equal to or greater than 06MW03.4, continue. If it is less than
06MW03.4, do not continue.
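Because the version string is fixed width ((year)MW(fiscal week).(fiscal day)), a plain string comparison is enough to test against the 06MW03.4 minimum. A minimal sketch, assuming swhwinfo prints the version as the first field of a line containing "MW" (hypothetical parsing; verify against the actual output on your console):

# Extract the version string; adjust the awk filter to match real output.
ver=$(swhwinfo | awk '/MW/ {print $1; exit}')
min=06MW03.4
if [ "$(printf '%s\n%s\n' "$min" "$ver" | sort | head -1)" = "$min" ]; then
    echo "$ver is at or above $min -- OK to continue"
else
    echo "$ver is below $min -- do NOT continue"
fi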

3 Procedure
The Disk Array is configured with 2 disks (internal to the SDDA or the DARC2). As required,
perform the create, assemble, or query commands. Visually verify that the entire output from the
command script is successful; the final output line 'gre-raid: success' by itself does not mean the
2-disk Disk Array RAID is good. Shut down Application Software if it is up. Open a Unix Shell and
type the following:

{ctuser@hostname} cleanMon

3.1 Create (-c) Script

The sudo gre-raid -c (create) script erases all data on the Scan Disk. Patient
information WILL be lost. Make certain to back up all patient data prior to
performing this test.

With Application Software down, open a Unix Shell and type the following:

{ctuser@hostname} rsh darc

[ctuser@darc ~]$ sudo gre-raid -c

************************** Warning ******************************
* If a new disk array is created, all scan data on the current  *
* disk array will be lost.                                      *
*****************************************************************

Are you sure you want to create a new disk array (yes/no)? yes

Table 1: Example Create Command Output

{ctuser@hostname}[1] rsh darc


Last login: Mon Jul 24 17:03:43 from oc
[ctuser@darc ~]$ sudo gre-raid -c
gre-raid: built Jun 22 2006 20:34:02

/dev/sd<x>: scanning disk array...done

Device Bus ID Mfg Model Fwrev Serial-number


/dev/sda 0 0 SEAGATE ST336754LW 0003 3KQ1H79N00007634SPRT
/dev/sdb 0 1 SEAGATE ST336754LW 0003 3KQ1JP5300007634RNN5

Adaptec driver not found. Continuing...


dasType: non-VDAS
/usr/g/config/gre-raid-2.cfg: RAID-0 profile

/raw_data: unmounting filesystem...done

/dev/md0: stopping...done

/var/log/messages: checking for SCSI errors...done


************************** Warning ******************************
* If a new disk array is created, all scan data on the current  *
* disk array will be lost.                                      *
*****************************************************************
Are you sure you want to create a new disk array (yes/no)? yes

/var/log/messages: 0 drive(s) found with SCSI errors found since last start

/dev/sda: testing...drive_spec...52.0MB/sec...done

/dev/sdb: testing...drive_spec...51.9MB/sec...done

/dev/sda: partitioning...done

/dev/sdb: partitioning...done

/dev/md0: creating disk array...done

/dev/md0: RAID-0 active with (2) drives, 256k chunk size, and 69GB capacity

/dev/md0: creating filesystem...done

/raw_data: mounting /dev/md0 filesystem...done

/raw_data: testing...170.1MB/sec...done

/raw_data: 0.0% used, 68.2GB avail

gre-raid: success

[ctuser@darc ~]$
Verify the create and partitioning diagnostics pass and all output looks good.
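One way to review the whole log rather than just the final line is to capture the run and search it for trouble. A minimal sketch (the log path is arbitrary; the same check applies to the assemble and query runs below):

# Re-run with the output captured for review.
sudo gre-raid -c 2>&1 | tee /tmp/gre-raid.log
# Search the log for failures. The healthy "0 drive(s) found with SCSI
# errors" line is filtered out; anything printed here needs investigation,
# even if the last line says success.
grep -iE 'error|fail' /tmp/gre-raid.log | grep -v '0 drive(s)'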

Perform the assemble and query commands if the create has been successful.

3.2 Assemble (-a) Script

With Application Software down, open a Unix Shell and type the following:

{ctuser@hostname} rsh darc

{ctuser@darc} sudo gre-raid -a

Table 2: Example Assemble Command Output

{ctuser@hostname}[1] rsh darc


Last login: Mon Jul 24 17:03:43 from oc
[ctuser@darc ~]$ sudo gre-raid -a
gre-raid: built Jun 22 2006 20:34:02

/dev/sd<x>: scanning disk array...done

Device Bus ID Mfg Model Fwrev Serial-number


/dev/sda 0 0 SEAGATE ST336754LW 0003 3KQ1H79N00007634SPRT
/dev/sdb 0 1 SEAGATE ST336754LW 0003 3KQ1JP5300007634RNN5

Adaptec driver not found. Continuing...


dasType: non-VDAS
/usr/g/config/gre-raid-2.cfg: RAID-0 profile

/raw_data: unmounting filesystem...done

/dev/md0: stopping...done

/var/log/messages: checking for SCSI errors...done

/var/log/messages: 0 drive(s) found with SCSI errors found since last start

/dev/sda: testing...drive_spec...52.4MB/sec...done

/dev/sdb: testing...drive_spec...52.4MB/sec...done

/dev/md0: starting...done

/dev/md0: RAID-0 active with (2) drives, 256k chunk size, and 69GB capacity
/raw_data: mounting /dev/md0 filesystem...done

/raw_data: testing...176.1MB/sec...done

/raw_data: 0.0% used, 68.2GB avail

gre-raid: success

[ctuser@darc ~]$

Verify the assemble diagnostic passes and all output looks good.

Perform the query command if the assemble has been successful.

3.3 Query (-q) Script

With Application Software up or down, open a Unix Shell and type the following:

NOTE: A query can be performed at any time, with Application software up or down. The
preferred method is to test with Application software down. The Operator Console cannot
be scanning or manipulating Scan Data during the testing.

{ctuser@hostname} rsh darc

{ctuser@darc} sudo gre-raid -q

Table 3: Example Query Command Output

[ctuser@darc ~]$ sudo gre-raid -q


gre-raid: built Jun 22 2006 20:34:02

/dev/sd<x>: scanning disk array...done

Device Bus ID Mfg Model Fwrev Serial-number


/dev/sda 0 0 SEAGATE ST336754LW 0003 3KQ1H79N00007634SPRT
/dev/sdb 0 1 SEAGATE ST336754LW 0003 3KQ1JP5300007634RNN5

Adaptec driver not found. Continuing...


dasType: non-VDAS
/usr/g/config/gre-raid-2.cfg: RAID-0 profile

/var/log/messages: checking for SCSI errors...done


/var/log/messages: 0 drive(s) found with SCSI errors found since last start

/dev/sda: testing...drive_spec...52.7MB/sec...done

/dev/sdb: testing...drive_spec...52.4MB/sec...done

/raw_data: testing...158.3MB/sec...done

/raw_data: 0.0% used, 68.2GB avail

gre-raid: success

[ctuser@darc ~]$

Verify the query diagnostic passes and all output looks good.

{ctuser@darc} exit

{ctuser@hostname}

4 Verify Disk Present and Mounted


Open a Unix Shell and type the following commands to verify that two 36 GByte disks (alternate
disk manufacturers may be shown) are present. Additionally, /dev/md0 can be checked to see
whether /raw_data is mounted, using the df or df /raw_data command.

{ctuser@hostname} rsh darc

{ctuser@darc} df /raw_data

{ctuser@darc} df

{ctuser@darc} cat /proc/scsi/scsi

Table 4: Verify Disks and Mount State Command Example Output

[ctuser@darc ~]$ df /raw_data


Filesystem 1K-blocks Used Available Use% Mounted on

/dev/md0 71550208 288 71549920 1% /raw_data

[ctuser@darc ~]$ df
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/hda2 4182720 2768432 1414288 67% /
/dev/hda1 101086 4939 90928 6% /boot
none 972692 0 972692 0% /dev/shm
/dev/hda7 30580776 147212 30433564 1% /usr/g
/dev/md0 71550208 288 71549920 1% /raw_data
[ctuser@darc ~]$ cat /proc/scsi/scsi
Attached devices:
Host: scsi0 Channel: 00 Id: 00 Lun: 00

Vendor: SEAGATE Model: ST336754LW Rev: 0003

Type: Direct-Access ANSI SCSI revision: 03

Host: scsi0 Channel: 00 Id: 01 Lun: 00

Vendor: SEAGATE Model: ST336754LW Rev: 0003

Type: Direct-Access ANSI SCSI revision: 03
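The checks above can be combined into a single pass/fail summary. A minimal sketch (assumes the two-disk, /dev/md0 configuration shown above):

# Two Direct-Access (disk) entries plus /raw_data mounted on /dev/md0
# indicate the Disk Array is present and mounted.
disks=$(grep -c 'Direct-Access' /proc/scsi/scsi)
df /raw_data | grep -q '^/dev/md0' && mounted=yes || mounted=no
echo "data disks: $disks (expect 2); /raw_data on /dev/md0: $mounted"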
