Sunday, July 13, 2008

NaviCli Commands




http://209.85.175.104/search?q=cache:cES52TrCErMJ:hubpages.com/hub/NaviCli-Commands&hl=ja&ct=clnk&cd=9

navicli

Sends status or configuration requests to the storage system via the command line

Description The navicli command sends storage system management and

configuration requests to an API (application programming interface)

on a local or remote server.

User Access Anyone who can log in to the host running the Navisphere CLI.

Format The navicli command is used as follows:

navicli -help

or

navicli [-d device] [-h hostname] [-help] [-m] [-np] [-p] [-t timeout]

[-v|-q] CMD [optional-command-switches]

where the navicli switches are

-help

Displays the help screen and does not start the navicli process.

-d device (required only for storage systems other than CX-Series or

FC4700-Series)

Specifies the communication channel to the storage system, for

pre-FC4700 SPs only. (A CX-Series or FC4700-Series SP is a host

that you specify with the -h switch. The CLI ignores the -d switch

for CX-Series or FC4700-Series.)

If you do not specify the -d switch, the environment variable

RaidAgentDevice is the default value for the device. For any

storage system that requires a communications channel, if

RaidAgentDevice is not set and you omit the -d switch, the CLI

will return an error message.

The -d switch overrides the RaidAgentDevice environment

variable. You can get a list of RAID devices present using the

getagent command with navicli. The device name is listed as the

"Node" in the output from getagent.

The device name format of the listed devices varies as explained later in

this section.

-h hostname

Specifies the hostname of the storage-system SP or server. For a

CX-Series or FC4700-Series storage system, each SP is a host;

therefore, the hostname is the IP address or network name of the

destination SP. For pre-FC4700 storage systems, the hostname is

the server hostname or network address. The environment

variable RaidAgentHost is the default for the server if the -h

switch is not present. The -h switch overrides the RaidAgentHost

setting. If RaidAgentHost is not set and the -h switch is not

present, the local host name is used.

-m

Suppresses output except for values. This option is most useful

when used as part of a script.

-np

Suppresses polling on initial contact with the Agent. This switch

significantly increases performance when dealing with large or

multiple storage systems. The Agent automatically polls unless

this switch is specified.

When the -np switch is set, get commands may return stale data and set

commands may erase previously changed settings. Use caution when the

-np switch is set.

-p

Parses the entered CMD without making a network call to the

API. If the string does not parse correctly, an error message prints

to stderr; otherwise a message verifying that the string parsed

correctly prints to stdout.

-q

Suppresses error messages. This switch is useful when included

as part of a script.

-t timeout

Sets the timeout value. timeout specifies the value in seconds. The

default timeout value is 240 seconds unless the

RaidAgentTimeout environment variable (described below) is set

differently. The timeout value includes the Agent and

storage-system software execution time. The timeout

applies to all commands except firmware.

-v

Enables verbose error descriptions. This is the default unless -q is

specified.

CMD

One of a set of commands used with the navicli command to

configure and manage a storage system.

CMD Switches The CMD switches are described on the pages that follow.

Spaces in Arguments Normally, each argument to a CLI command is composed of

numbers, letters, and other valid printable characters for the

operating system on which the CLI is running. For example

navicli -h server1 convertEMlog -pathname G:\logs\c4.log -20

If a filename or other argument (such as a Storage Group name)

includes a space, you must enclose the entire string that includes the

name in quotation marks ("xx xx"). For example, if the filename in the

above were c 4.log, the valid command line would be

navicli -h server1 convertEMlog -pathname "G:\logs\c 4.log" -20

RaidAgent Environment Variables

The environment variables RaidAgentHost, RaidAgentDevice, and

RaidAgentTimeout contain default values for the server, the device,

and the timeout values respectively if you omit the -h, -d, and -t

switches from the navicli command. See the -h, -d, and -t switch

descriptions on pages 2-12 and 2-13. If you omit both

RaidAgentDevice and the -d switch, the CLI will show an error on

any command that needs device information. If you do not set

RaidAgentTimeout and omit the -t switch, the default timeout

value is 25 seconds.

Examples The following examples show sample environment variable settings.

The commands in the following examples must be executed from the root

directory.

set RaidAgentDevice \\.\A-9162244 ↵ (Windows device name)

set RaidAgentHost server1 ↵ (or for CX/FC4700, ss1_spa ↵ )

set RaidAgentTimeout 45

Error Codes The navicli command might return any of a numeric sequence of

errors explained in Appendix A.

Operating System Device Names and the CLI Communications Channel

When you issue management CLI commands to a storage system

other than CX-Series or FC4700 series, you must include the -d switch

(or rely on an environment variable) to specify the device by which

the Agent will communicate with the storage system.

The device name format you use to specify the device varies with the

operating system, as shown following.

For any operating system, for any storage system whose CLI

commands require a communication channel, you can discover the

device name of the communication channel using the getagent

function, as follows:

navicli getagent -node -desc

AIX Device Name Format

For PowerPath devices:

PwrP: hdiskn

where:

n is the number AIX assigns to the hdisk (LUN).

For ATF (Application Transparent Failover) or other non-PowerPath

devices:

spn

where:

n is the number that AIX® assigns to the pseudo-device, which

represents the pair of SPs in the storage system. To find out

which number AIX assigns to pseudo-devices, use the command

lsdev -Ccarray.

You can discover the communications channel device name using the

navicli getagent command as shown earlier and on page 2-66. This

applies to systems other than CX-Series and FC4700-Series only.

HP-UX® Device Name Format

For PowerPath devices:

PwrP: cDtSdL

For other devices:

cDtSdL

where for Fibre Channel disks:

D is the number of the fibre bus on the fibre adapter to which the

storage system is connected.

S is the FC-AL address ID (decimal) of the storage system's SP that

is connected to fibre bus cD.

L is the ID of a LUN owned by the SP identified by tS.

and for SCSI disks:

D is the number of the SCSI bus on the SCSI-2 adapter to which the

storage system is connected. This number can range from 0

through 7.

S is the SCSI ID of the storage system's SP that is connected to SCSI

bus cD. If the adapter identified by cD provides a narrow SCSI

bus, this SCSI ID can range from 0 through 7; and if the adapter

provides a wide SCSI bus, this SCSI ID can range from 0 through

15.

L is the ID of a LUN owned by the SP identified by tS.

You can discover the communications channel device name using the

navicli getagent command as shown earlier and on page 2-66. This

applies to systems other than CX-Series and FC4700-Series only.

IRIX Device Name Format

For a directly attached device (no switches):

scCdTlL

where

C is the SP controller number IRIX® assigns (use hinv -v to

identify).

T is the SCSI target ID of the SP, as determined by the AL-PA.

L is the LUN number.

For example, sc1d2l3

For a SAN (switched) device:

W_lunL_cCpP

where

W is part of the WWN Node Name of the storage system (use the

CLI command getarray to identify).

L is the LUN number

C is the SP controller number IRIX assigns (use hinv -v to identify).

P is part of the WWN port name.

For example, 50:06:01:60:77:02:C7:A7_lun26_c1_p50:21:01_59:77:030

You can discover the communications channel device name using the

navicli getagent command as shown earlier and on page 2-66. This

applies to systems other than CX-Series and FC4700-Series only.

Linux® Device Name Format

For PowerPath devices:

PwrP: sgX

For other devices:

sgX

where

X is the letter or number that specifies the sg device.

You can discover the communications channel device name using the

navicli getagent command as shown earlier and on page 2-66. This

applies to systems other than CX-Series and FC4700-Series only.

Novell NetWare Device Name Format

For PowerPath devices:

PwrP: Vm-Ad-Di:l

For ATF (Application Transparent Failover) devices:

V6F1-Ad-Di:l

For other devices:

Vm-Ad-Di:l

where

m is the manufacturer ID.

d is the load instance of the driver.

i is the target ID.

l is the LUN number.

Hyphens and colons are required. A sample NetWare® device name

is V596-A2-D0:2.

You can discover the communications channel device name using the

navicli getagent command as shown earlier and on page 2-66. This

applies to systems other than CX-Series and FC4700-Series only.

Solaris Device Name Format

For PowerPath devices:

PwrP:cDtSdLsP

For ATF (Application Transparent Failover) devices:

clspX

For non-PowerPath and non-ATF devices:

cDtSdLsP

where for Fibre Channel disks:

D is the controller number the Solaris operating system assigns.

S is the FC-AL or target address ID (decimal) of the storage

system's SP that is connected to fibre bus cD

L is the ID of a LUN owned by the SP identified by tS.

P is the partition (slice) number.

X is the SP number assigned using the clsp software.

A sample Solaris device name is PwrP:c1t0d0.

and where for SCSI disks:

D is the controller number the Solaris operating system assigns.

S is the SCSI ID of the storage system's SP that is connected to cD.

L is the ID of a LUN owned by the SP identified by tS. This ID can

range from 0 through 7.

P is the partition (slice) number.

You can discover the communications channel device name using the

navicli getagent command as shown earlier and on page 2-66. This

applies to systems other than CX-Series and FC4700-Series only.
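The Solaris cDtSdLsP convention above is regular enough to parse mechanically. The Python sketch below is hypothetical and not part of the CLI; the function name and sample device names are invented for illustration.

```python
import re

def parse_solaris_device(name):
    """Split a Solaris cDtSdLsP device name (e.g. 'c1t0d0s2') into its
    controller (D), target (S), LUN (L), and slice (P) numbers.
    A 'PwrP:' prefix marks a PowerPath device. Returns None if the
    name does not match the convention."""
    powerpath = name.startswith("PwrP:")
    if powerpath:
        name = name[len("PwrP:"):]
    m = re.fullmatch(r"c(\d+)t(\d+)d(\d+)(?:s(\d+))?", name)
    if m is None:
        return None
    controller, target, lun, part = m.groups()
    return {
        "powerpath": powerpath,
        "controller": int(controller),    # D: number Solaris assigns
        "target": int(target),            # S: FC-AL / SCSI ID of the SP
        "lun": int(lun),                  # L: LUN owned by that SP
        "slice": int(part) if part is not None else None,  # P: partition
    }

print(parse_solaris_device("PwrP:c1t0d0"))  # sample name from the text above
print(parse_solaris_device("c2t4d3s0"))
```

The slice number is optional because the sample name PwrP:c1t0d0 in the text omits it.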

Windows® Device Name Format

For PowerPath devices:

PwrP: SCSI2:0:1:0

For ATF (Application Transparent Failover) devices:

\\.\atf_sp0a

In a non-clustered environment without failover software:

SCSI2:0:1:0

In a clustered environment:

A-9162244

You can discover the communications channel device name using the

navicli getagent command as shown earlier and on page 2-66. This

applies to systems other than CX-Series and FC4700-Series only.

alpa Command

Background

The CLI alpa (Arbitrated Loop Physical Address) commands get and

set the SCSI IDs of the ports on an SP. These IDs are required for I/O to

the SP. The port IDs are 0, 1, 2, or 3 for CX-Series, and 0 or 1 for

FC4700-Series.

We suggest you use a unique SCSI ID for each SP port in your

installation. For example, on the first storage system, for ports 0 and

1, you can specify SCSI IDs 0 and 1, respectively. On the second

storage system, for the ports you can specify IDs 2 and 3 respectively,

and so on.

Description The navicli alpa command -get function lists ALPA information for

a CX-Series or FC4700-Series SP. The information includes the port

ID, the SP ID (A or B), and the SCSI ID associated with the port. Issue

the command to the SP for which this information is needed. In a

CX-Series or FC4700-Series system, each SP is a host, addressable by

its hostname (-h switch).

User Access You must have a user entry in the Navisphere Agent configuration

file.

Format alpa -get is used with the navicli (described on page 2-12) as follows:

alpa -get

Conventions and

Recommendations

If you really need to change an SP Port ID, you can do so with the

alpa -set function.

Example navicli -h payroll_storage_spa alpa -get

Storage Processor:SP A

Port ID: 0

ALPA Value: 0

For SP A in a CX-Series or FC4700-Series storage system (SP

hostname payroll_storage_spa), this command lists the SCSI ID

associated with Port 0.

Output See above. If the version of Base Software running on the SP does not

support this command, a Not supported error message is printed

to stderr. Other errors:

VALID_VALUES_0_3

alpa -set (CX-Series/FC4700-Series only)

Updates the SCSI ID associated with an SP Port

Description The navicli alpa command with the -set switch updates the SCSI ID

value for the given port on a CX-Series or FC4700-Series SP.

! CAUTION

The SP Port ID SCSI IDs are initially set by EMC service personnel

to work at your site. Do not change any value unless you are

installing a new SP and need to change its SCSI IDs from the SP

ship values of 0 and 1.

If you change any value, after you confirm, the SP will restart and

use the new values.

The software will not let you select a SCSI ID out of range (0-255) or a

duplicate ID on a storage system. If you omit the -o (override)

switch, the CLI prompts for confirmation:

Changing the ALPA value of port port-id from old-SCSI-ID

to new-SCSI-ID (y/n)

To confirm, answer y; to take no action, answer n.

User Access You must have a user entry in the Navisphere Agent configuration

file.

Format alpa -set is used with the navicli (described on page 2-12) as follows:

alpa -set -spa|-spb -portid portid SCSI-ID [-o]

where

-spa|-spb

Specifies the SP: A or B.

-portid portid SCSI-ID

Specifies the new SCSI ID to associate with the port. Port IDs

range from 0 through 3 (CX-Series) or 0-1 (FC4700-Series). Valid

port SCSI IDs range from 0 through 255.

-o

Overrides; does not prompt for confirmation.

Conventions and

Recommendations

See the cautions above.

Example navicli -h ss1_SPA alpa -set -spa -portid 1 1

This operation will cause a storage system reboot!

Do you wish to continue (y/n)? y

For SP A, this command changes the SCSI ID associated with Port 1

to 1.

Output See above. If the version of Base Software running on the SP does not

support this command, a Not supported error message is printed

to stderr. Other errors:

VALID_VALUES_0_255

VALID_VALUES_0_3

arraycommpath

Sets a communication path to a storage system that has no LUNs

bound, or removes such a path (non-Access Logix only)

Description The navicli arraycommpath command creates or removes a

communication path between the server and storage system.

Generally this command is needed when you want to configure a

storage system that has no LUNs bound. The setting should be 0 for

ATF.

This command works for unshared storage systems (direct

connection to host, without a switch) only. For a shared storage

system (uses Access Logix, and connects via switch), use the

command storagegroup sethost -arraycommpath.

Changing the arraycommpath setting may force the storage system to

reboot. The CLI will issue a confirmation request that indicates

whether a storage-system reboot is required. However, the CLI does

not issue a confirmation request if you include the -o switch.

! CAUTION

Do not issue this command unless you know the appropriate value

for the type of failover software running on the attached servers.

The command is designed for transition from one type of failover

software to another. Changing the arraycommpath mode to the

wrong value will make the storage system inaccessible.

To discover the current setting of arraycommpath, use the command

without an argument.

User Access You must have a user entry in the Navisphere Agent configuration

file.

Format arraycommpath is used with navicli (described on page 2-12) as

follows:

arraycommpath [ 0 | 1] [-o]

where

0

Disables a communication path. Use this mode with ATF.

1

Enables a communication path. Do not use this mode with ATF.

-o

Executes the command without prompting for confirmation.

Conventions and

Recommendations

None.

Example navicli -h server1 arraycommpath 1

WARNING: Previous setting will be discarded!

In order for this command to execute, write cache

must be disabled and memory size set to zero:

navicli -d devicename -h hostname setcache -wc 0.

This operation will cause a storage system reboot!

DO YOU WISH TO CONTINUE? (y/n) y

For host server1, this command sets a communication path between

server and storage system. In this example, the confirmation message

shows that the change, if confirmed, will cause a storage system

reboot.

Output If the version of Core or Base Software running on the SP does not

support this command, a Not supported error message is printed

to stderr. Other errors:

Valid values are 0 or 1

arrayname

Renames a storage system

Description The navicli arrayname command changes the name of a storage

system. If you omit a storage-system name, the command returns the

current storage-system name (in the format host-device or array

name).

The CLI prompts for confirmation as follows.

User Access You must have a user entry in the Navisphere Agent configuration

file. For a storage system other than CX-Series or FC4700-Series, if

configuration access control is enabled, you must issue the command

from a trusted server (see accesscontrol (FC4500 and FC5300) on

page 3-3).

Format arrayname is used with navicli (described on page 2-12) as follows:

arrayname NewArrayName

where

NewArrayName Specifies the new name for the storage system.

Conventions and

Recommendations

None

Examples navicli -d c1t0d0s1 -h server1 arrayname users11

Change the arrayname from ss1 to users11? (y/n) y

This command changes the name of the storage system whose device

name is c1t0d0s1 to users11.

Output If the version of Core or Base Software running on the SP does not

support this command, a Not Supported error message is printed

to stderr.

baseuuid

Gets or sets the storage-system unique unit identifier (UUID) base

for use by a server running Tru64.

Description The navicli baseuuid command gets or sets the base of the unique

unit ID as required for servers in a Tru64 cluster. The Tru64 operating

system uses the UUID (which it creates by combining the base and an

offset which is the storage-system LUN ID) to manage the LUNs in a

cluster. You can also use the UUID to manage the LUN.

To discover the UUID base, use the command without an argument.

User Access You must have a user entry in the Navisphere Agent configuration

file. For a storage system other than CX-Series or FC4700-Series, if

configuration access control is enabled, you must issue the command

from a trusted server (see accesscontrol (FC4500 and FC5300) on

page 3-3).

Format baseuuid is used with navicli (described on page 2-12) as follows:

baseuuid [uuid-base-value]

where

uuid-base-value Specifies the UUID base. The UUID base must be in

the range from 0 to 32767.

Conventions and

Recommendations

None

Examples navicli -h ss1_spa baseuuid

Base UUID of the array: 12345

navicli -h ss1_spa baseuuid 34

Base UUID of the array is set to 34.

These commands get, then set the UUID base of the storage system

containing the SP ss1_spa.

Output If the version of Core or Base Software running on the SP does not

support this command, a Not Supported error message is printed

to stderr.

bind (RAID Group)

Creates a LUN within an existing RAID Group

Description The bind command, when executed on RAID Group storage systems,

binds a LUN within an existing RAID Group. You can create a RAID

group with the createrg command (page 2-57).

For a CX-Series or FC4700-Series storage system, each SP is a host (-h

switch) to which you address the bind command. Ensure that you

address the bind command to the SP that you want to own the LUN.

User Access You must have a user entry in the Navisphere Agent configuration

file. For a storage system other than a CX-Series or FC4700-Series, if

configuration access control is enabled, you must issue the command

from a trusted server (see accesscontrol (FC4500 and FC5300) on

page 3-3).

Format bind is used with navicli (described on page 2-12) as follows:

bind raid-type [lun] -rg rgID

[-aa auto_assignment] [-cap capacity] [-elsz stripe-element-size]

[-n min_latency_reads] [-offset stripe-number] [-pl placement]

[-r rebuild-priority] [-rc read-cache] [-sp a|b] [-sq size-qualifier]

[-v verify-priority] [-wc write-cache]

where

raid-type

Specifies the RAID type for the LUN.

r0 = RAID 0

r1 = RAID 1

r3 = RAID 3

r5 = RAID 5

r1_0 = RAID 1/0

id = individual disk

hs = hot spare

lun

Specifies a decimal number to assign to the LUN (valid range is

shown on page 2-5). If not specified, the lowest available number

(the default value) is assigned to the LUN.

-rg rgID

The RAID Group identification number. The RAID group must

already exist. You can create a RAID group with the createrg

command (page 2-57).

rgID specifies a number between 0 and the maximum number of

RAID Groups supported by the RAID Group storage system.

The optional switches are

-aa auto_assignment

Enables or disables auto-assignment functionality for the LUN.

auto_assignment specifies functionality as follows:

0 = Disables auto-assignment

1 = Enables auto-assignment (the default)

-cap capacity

Sets the capacity of usable space in the LUN. (The default is full

space available.)

-elsz stripe-element-size

Sets the stripe element size, which is the number of 512-byte disk

blocks per physical disk within a LUN. The stripe size itself

equals the stripe element size multiplied by the number of disks

in the LUN. The stripe element size value can be any number

greater than zero. The stripe element size can affect the

performance of a RAID 5 or RAID 1/0 LUN. A RAID 3 LUN has a

fixed stripe element size of one sector. stripe-element-size does not

apply to a RAID 1 LUN, individual unit, or hot spare.

The smaller the stripe element size, the more efficient the

distribution of data read or written. However, if the stripe

element size is too small for a single I/O operation, the operation

requires access to another stripe element, which causes the

hardware to read and/or write from two disk modules instead of

one. Generally, we recommend the smallest stripe element size be

the smallest even multiple of 16 sectors that rarely forces access to

another stripe element. The default stripe element size is 128

sectors.

To bind a RAID 3 LUN on a storage system other than a

CX-Series/FC4700-Series, use the -elsz 1 switch to specify the

element size. An element size of 1 is the only valid size for a

RAID 3 LUN on non-FC4700 storage systems. Note that this

element size is not valid for CX-Series/FC4700-Series storage

systems.
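As an arithmetic check of the relationship described above (stripe size = stripe element size multiplied by the number of disks in the LUN, counted in 512-byte sectors), here is a short sketch using the default element size of 128 sectors on a hypothetical five-disk LUN:

```python
SECTOR_BYTES = 512

def stripe_size(element_size_sectors, disks_in_lun):
    """Stripe size in sectors: the stripe element size multiplied by the
    number of disks in the LUN, as described above."""
    return element_size_sectors * disks_in_lun

elsz = 128                      # default stripe element size, in sectors
disks = 5                       # hypothetical five-disk LUN
stripe = stripe_size(elsz, disks)
print(f"element size: {elsz * SECTOR_BYTES} bytes per disk")   # 65536 bytes
print(f"stripe size:  {stripe * SECTOR_BYTES} bytes total")    # 327680 bytes
```

So with the defaults, one full stripe spreads 320 Kbytes of data across the five disks, 64 Kbytes per disk.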

-n min_latency_reads (FC 5400/5500 storage systems only)

Enables or disables minimal latency reads for RAID 3 only.

min_latency_reads specifies functionality as follows:

0 = Disables minimal latency reads (default)

1 = Enables minimal latency reads

-offset stripe-number

Sets the LUN's starting Logical Block Address (LBA) to begin at

stripe-number on the RAID Group. Depending on where the

operating system writes its internal tables on the LUN, this may

improve performance. The default is the lowest available disk

address. Use a decimal number; hexadecimal numbers are not

allowed. You can use the getlun -offset switch to

learn the offset of an existing LUN.

-pl placement

Places a LUN in a RAID Group. placement specifies which

algorithm to use as follows:

bf = best fit algorithm (the default)

ff = first fit algorithm

-r rebuild-priority

Sets the rebuild priority: the priority at which to reconstruct data

on either a hot spare or a new disk module that replaces a failed

disk module in a LUN. This priority determines the resources that

the SP devotes to rebuilding instead of to normal I/O activity. It

applies to all RAID LUNs except RAID 0. Valid values are ASAP,

High, Medium, and Low. The default is ASAP. A rebuild priority

of ASAP rebuilds the disk module as quickly as possible, but

degrades response time somewhat. If your site requires fast

response time and you want to minimize degradation to normal

I/O activity, you can extend the rebuild over a longer period of

time by specifying the Low rebuild priority.

You cannot change the rebuild priority for a LUN that is in the process of

rebuilding; you can enter a new value, but it will have no effect.

The time it takes to actually rebuild a LUN can vary significantly,

especially for a RAID 1/0 LUN, since a RAID 1/0 LUN with n

disk modules can continue functioning with up to as many as n/2

failed drive modules and only one drive at a time is rebuilt.

-rc read-cache

Enables or disables read cache functionality for this specific LUN.

read-cache values:

0 = Disables read cache

1 = Enables read cache (the default)

-sp a|b

Sets the default owner of the LUN: SP A or SP B. The default is

the current SP.

-sq size-qualifier

Sets the default size qualifier. size-qualifier specifies the qualifier as

follows:

mb = megabytes

gb = gigabytes (the default)

sc = stripe count

bc = block count

-v verify-priority

Sets the priority at which the data on each LUN is verified if an SP

fails and is replaced during I/O operations. verify-priority

specifies the priority. Valid values are ASAP, High, Medium, and

Low. A lower priority verifies LUN data more slowly and has less

impact on overall system performance. The default is High.

-wc write-cache

Enables or disables the storage-system write cache for this

specific LUN. write-cache values:

0 = Disables write cache

1 = Enables write cache (the default). Note that you cannot

enable the write cache for a RAID 3 LUN.

Conventions and

Recommendations

The following table identifies the parameters you must configure for

each RAID type you bind.

LUN Parameters for Different RAID Types

To allow for good choices of available disk modules, bind LUNs in

this order:

First - RAID 1/0

Second - RAID 5, RAID 3, and RAID 0

Third - RAID 1

Fourth - Individual units and Hot Spares

Before you start binding disk modules, read the restrictions and

recommendations in the following table.

RAID Type: Bind Parameters

RAID 5, RAID 1/0: LUN number; rebuild time; stripe size; read and/or write caching

RAID 3: LUN number; rebuild time

RAID 1: LUN number; rebuild time; read and/or write caching

RAID 0: LUN number; stripe size; read and/or write caching

Disk (individual disk unit): LUN number; read and/or write caching

Hot Spare: LUN number

LUN to Bind: Restrictions and Recommendations

Any LUN: You can bind only unbound disk modules. All disk modules in a

LUN must have the same capacity to fully use the modules' storage space.

RAID 5*: You must bind a minimum of three disk modules and no more

than sixteen disk modules. We recommend you bind five modules for

more efficient use of disk space.

RAID 3: You must bind exactly five or nine disk modules in a storage

system with Fibre Channel disks. You cannot bind a RAID 3 LUN until

you have allocated storage-system memory for the LUN.

IMPORTANT RAID 3 does not allow caching; therefore, when binding

RAID 3 LUNs, the -c cache-flags switch does not apply. When you check

status on a RAID 3 LUN, caching always appears disabled.

RAID 1: You must bind exactly two disk modules.

RAID 0: You must bind a minimum of three disk modules, and no more

than sixteen disk modules.

RAID 1/0: You must bind a minimum of four disk modules, and an even

number of modules, but no more than sixteen modules. Navisphere

Manager pairs modules into mirrored images in the order in which you

select them. The first and second modules you select are a pair of

mirrored images; the third and fourth modules you select are another

pair of mirrored images; and so on. The first module you select in each

pair is the primary image, and the second module is the secondary image.

Individual disk unit: None.

Hot Spare: You cannot bind any of the following disk modules as a hot

spare on a storage system with Fibre Channel disks:

For a write cache of 256 Mbytes or less: DPE disks 0 through 4.

For a write cache larger than 256 Mbytes: DPE disks 0 through 8.

The capacity of a disk module bound as a hot spare must be at least as

great as the capacity of the largest disk module that it might replace.

IMPORTANT The -c cache-flags switch does not apply to a hot spare.

When you check status on a hot spare, caching always appears disabled.

* You can bind one less module per LUN than you will eventually use.

However, the LUN will operate in a degraded mode until a module is

installed in the empty slot and the storage system integrates it into the

LUN.

You can enable either read or write caching, both read and write

caching, or neither type of caching for any type of LUN except for a

RAID 3 LUN or a hot spare. For caching to occur, storage-system

caching must also be enabled (page 2-30 for RAID group storage

systems). Whether you should use caching for a specific LUN

depends on the RAID type. Recommendations for caching with

different RAID types follow.

Caching Recommendations for Different RAID Types

IMPORTANT LUNs cannot use write caching until you enable the

storage-system write cache (page 2-175). LUNs cannot use read caching until

you enable the read cache for the SP that owns the LUN (page 2-30 for

RAID group storage systems). When binding RAID 1/0 LUNs, the order

in which you select modules is important. The first module you select is the

first part of the mirror's primary image and the second module is the first

part of the mirror's secondary image; the third module is the second part of

the primary image, the fourth module is the second part of the secondary

image; and so on for any other modules in the LUN. In other words, the first

and second modules selected are a pair of peer image parts, the third and

fourth modules are another pair of peer image parts, and so on.
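The selection-order rule above can be expressed as a small sketch. This is a hypothetical helper, not a CLI feature; the module names follow the same style as the bind examples in this section.

```python
def raid10_images(modules):
    """Pair disk modules in selection order: the 1st and 2nd modules are
    the first parts of the primary and secondary images, the 3rd and 4th
    the second parts, and so on."""
    if len(modules) < 4 or len(modules) % 2 != 0:
        raise ValueError("RAID 1/0 needs an even number of modules, at least four")
    primary = modules[0::2]     # 1st, 3rd, 5th, ... modules selected
    secondary = modules[1::2]   # 2nd, 4th, 6th, ... modules selected
    return primary, secondary

primary, secondary = raid10_images(["0_0_1", "0_0_2", "0_0_3", "0_0_4"])
print("primary image parts:  ", primary)    # ['0_0_1', '0_0_3']
print("secondary image parts:", secondary)  # ['0_0_2', '0_0_4']
```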

When a LUN is assigned to an SP, its state becomes assigned, and the

state of its disk modules becomes enabled when you use the SP that

owns the LUN as the communications path to the chassis.

If you need to change the Description (RAID type) or Stripe Size

parameters, you must unbind the LUN (page 2-194), and then rebind

it with the correct parameters. If you need to change the maximum

rebuild time, read cache state, or write cache state, use the chglun

command (page 2-44).

Examples navicli -h ss1_spa createrg 0 0_0_1 0_0_2 0_0_3

0_0_4 0_0_5

For a CX-Series or FC4700-Series, this command creates RAID Group

0 from the five specified disks (disks 0 through 4 in the DPE

enclosure). The new RAID Group has no RAID type until it is bound

(next).

navicli -h ss1_spa bind r5 0 -rg 0

This command binds a LUN of type RAID 5 with LUN ID 0 on RAID

Group 0. The LUN occupies all space on RAID Group 0, since the

bind command did not include the -cap switch. By default, read and

write caching are enabled on the new LUN.

Output There is no output. Errors are printed to stderr.

Wednesday, July 2, 2008

Asking an Engineer About Google's Core Technology, MapReduce

http://techon.nikkeibp.co.jp/article/NEWS/20080627/154025/

2008/06/27 17:11


Christophe Bisciglia, Senior Software Engineer

We had the opportunity to interview Christophe Bisciglia, a Senior Software Engineer at Google Inc. and the creator of the Academic Cluster Computing Initiative (ACCI), an educational course on large-scale parallel programming. Bisciglia is currently involved in service development in Shanghai, China.

-- Of the technologies that support Google's services, what do you teach in ACCI?
Christophe Bisciglia: MapReduce, a programming model for parallel processing, and GFS (Google File System), a distributed file system. As course material we use Hadoop, open-source software that implements both.

-- Isn't parallel programming difficult to learn?
Christophe Bisciglia: The students are third- and fourth-year undergraduates, and it isn't that difficult. MapReduce itself runs even on a single machine; what the model guarantees is that the same program scales.

-- But the functional programming model that MapReduce is based on is by no means simple or easy to learn.
Christophe Bisciglia: It's true that MapReduce draws on functional programming, but it is not a functional language like ML or Scheme. Just as object-oriented programming suits GUIs, this is an approach suited to processing enormous volumes of data. We applied functional programming to the task of extracting the relevant items from a mass of data and combining them.
With MapReduce, the complex processing is hidden from the developer. In building an index, for example, you create key-value pairs with each word as the key and a URL as the value. Each pair is independent, with no dependencies on other pairs, so they can all be processed in parallel. That is what happens in the Map phase. The Reduce phase then consolidates the data produced in the Map phase.
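The index-building flow Bisciglia describes can be sketched on a single machine in a few lines of Python. The document URLs and function names here are invented for illustration; a real MapReduce run distributes both phases across many machines.

```python
from collections import defaultdict
from itertools import chain

def map_phase(url, text):
    """Map: emit one (word, url) key-value pair per word. Each pair is
    independent of every other, so documents can be mapped in parallel."""
    return [(word, url) for word in text.lower().split()]

def reduce_phase(pairs):
    """Reduce: consolidate the Map output, grouping URLs by word."""
    index = defaultdict(set)
    for word, url in pairs:
        index[word].add(url)
    return index

docs = {"http://a.example": "google builds an index",
        "http://b.example": "an index maps words to pages"}
pairs = chain.from_iterable(map_phase(u, t) for u, t in docs.items())
index = reduce_phase(pairs)
print(sorted(index["index"]))  # both sample pages contain "index"
```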

-- Doesn't that make the Reduce phase hard to parallelize?
Christophe Bisciglia: Exactly. Where the Reduce phase still benefits from parallelism is that, with multiple search requests for example, each request's Reduce phase is independent of the others. In typical workloads the Map phase does more of the work, so even though the Reduce phase is less parallel, the two balance out.

-- What applications are there besides index building?
Christophe Bisciglia: Log processing and Web applications are also good fits. Think of Google Docs: each user's operations are independent. Fundamentally, it suits processes that store large amounts of data and processes that retrieve data. In that sense it should also suit things like sensor networks, because extracting meaning from large volumes of data, such as traffic counts or wind-speed readings, involves no dependencies. Conversely, workloads like relational-database processing are hard to scale.

-- If that is hard to scale, doesn't it follow that it is unsuited to cloud computing?
Christophe Bisciglia: Cloud computing is built on the combination of storing and combining large amounts of data with MapReduce, and entering data through a Web-based interface built with Ajax. Most applications will be able to move to cloud computing.
Cloud computing also has an appeal that purely local processing lacks: it becomes a platform for collaboration among multiple users. With Google Docs, for example, several users can edit a single document. Supporting new devices and operating systems is easy, too, since all you need is a Web browser.
That said, games that require heavy graphics processing will be hard to realize with cloud computing. Even among games, though, the MMORPGs (Massively Multiplayer Online Role-Playing Games) popular lately are a good fit for the cloud, and the majority of games could be implemented there. Many applications that would have been unthinkable five years ago are already running on cloud computing.

-- How does the Google File System achieve parallelism?
Christophe Bisciglia: It is divided into a "master," which manages the file system's metadata, and "chunk servers," which manage the actual data. Files are split into blocks, and the master tracks which chunk server each block is assigned to. The block size is 64 Mbytes, and each block is replicated on roughly three or four servers. The master is replicated as well. At first glance this looks like a model where requests would concentrate on the master, but the master's processing time per request is short, so the load is not significant.
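The master/chunk-server split can be illustrated with a toy placement function. This is a sketch using only the 64-Mbyte block size and three-replica figure quoted above; the real GFS master also tracks leases, versions, and server liveness, none of which appears here, and the server names are invented.

```python
import itertools

CHUNK_SIZE = 64 * 1024 * 1024   # 64-Mbyte blocks, as described above

def assign_chunks(file_size, chunkservers, replicas=3):
    """Toy master: split a file into 64-Mbyte chunks and place each
    chunk's replicas on `replicas` chunk servers, round-robin."""
    n_chunks = -(-file_size // CHUNK_SIZE)    # ceiling division
    ring = itertools.cycle(chunkservers)
    return {chunk: [next(ring) for _ in range(replicas)]
            for chunk in range(n_chunks)}

table = assign_chunks(200 * 1024 * 1024, ["cs1", "cs2", "cs3", "cs4"])
print(len(table))   # a 200-Mbyte file needs 4 chunks
print(table[0])     # ['cs1', 'cs2', 'cs3']
```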

-- Is Google App Engine, the application-hosting service Google recently launched, built by combining GFS and MapReduce?
Christophe Bisciglia: Fundamentally, Google App Engine is a data API. It secures the scalability for large numbers of users to access big tables. For the low-level file system it uses GFS.