Thursday, 1 December 2011

SAN - Switch configuration backup and restore

Login as admin on the switch using ssh and run the following command to back up the switch configuration:


admin> configupload -all -p scp 192.168.1.50,root,/tmp/switch_backup.txt

This command backs up the switch configuration and copies the config file via scp to the 192.168.1.50 server under the /tmp directory.
Restore configuration:

Login as admin on the switch using ssh and run the following command:

admin> configdownload -all -p scp 192.168.1.50,root,/tmp/switch_backup.txt
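
Note that configdownload normally requires the switch to be disabled first and re-enabled afterwards (standard Fabric OS behaviour; check the release notes for your firmware version):

admin> switchdisable
admin> configdownload -all -p scp 192.168.1.50,root,/tmp/switch_backup.txt
admin> switchenable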

REDHAT - Configure channel bonding


  • Create a file in the /etc/sysconfig/network-scripts/ directory called ifcfg-bond0 with the following contents:

DEVICE=bond0
BOOTPROTO=none
BROADCAST=192.168.1.255
IPADDR=192.168.1.115
NETMASK=255.255.255.0
NETWORK=192.168.1.0
ONBOOT=yes
GATEWAY=192.168.1.1
TYPE=Ethernet
PEERDNS=yes
USERCTL=no

  • Update the /etc/sysconfig/network-scripts/ifcfg-eth0 file:
DEVICE=eth0
BOOTPROTO=none
ONBOOT=yes
MASTER=bond0
SLAVE=yes
USERCTL=no

  • Update the /etc/sysconfig/network-scripts/ifcfg-eth1 file:

DEVICE=eth1
BOOTPROTO=none
ONBOOT=yes
MASTER=bond0
SLAVE=yes
USERCTL=no

  • Update the /etc/modules.conf file and add the following line:
alias bond0 bonding
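
Without extra options the bonding driver defaults to mode 0 with no link monitoring, so in most setups you will also want an options line in the same file (a typical example; choose the mode your switch supports):

options bond0 mode=1 miimon=100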

  • Once /etc/modules.conf is configured, run the ifup command to bring up the bonding interface:
# ifup bond0
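
To verify that the bond came up and both slaves were enslaved, check the bonding driver's status file:

# cat /proc/net/bonding/bond0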

HPUX - Replace mirrored root disk on PA-RISC server


Before shutting down the machine and physically replacing the disk

1. Check for a successful make_recovery tape

2. Find the defective drive:

> ioscan -fnC disk
> lvlnboot -v vg00 (capture the output to a file for comparison)
> vgdisplay -v
> ll /dev/dsk | grep "Minor #"
Note: in this particular case the defective drive is /dev/dsk/c1t2d0, the root secondary PV.

NOTE: If the disk is showing NO_HW or has a lot of stale extents, do not try dd or diskinfo on it; it may hang.
3. Find the lvols on the defective PV and make sure all are mirrored (e.g. check free extents on both disks and compare):

> pvdisplay -v /dev/dsk/c1t2d0
Note: in this particular case lvol1 to lvol9

4. Reduce the mirrors:
  
   > lvreduce -m 0 /dev/vg00/lvol1 /dev/dsk/c1t2d0 (up to lvol9; see the loop sketch below)
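
If you need to reduce all nine in one pass, a small shell loop saves typing (a sketch; adjust the lvol list to match your layout):

> for i in 1 2 3 4 5 6 7 8 9; do lvreduce -m 0 /dev/vg00/lvol$i /dev/dsk/c1t2d0; done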

5. Check the successful mirror reduction:
  
    > lvdisplay -v /dev/vg00/lvol1 (up to lvol9)

6. Remove the physical volume from the volume group:

    >vgreduce vg00 /dev/dsk/c1t2d0

7. Check with:

    # vgdisplay -v vg00


Shutdown the machine and physically replace the disk

8. Replace the disk. (You can use dd if=/dev/rdsk/cXtYdZ of=/dev/null bs=1024k count=100 to identify the disk if it is claimed or, if NO_HW, to isolate the disk by issuing this command on the disk on the same path with a SCSI ID +1 or -1.)

9. Check connectivity with disk and create device files:

    >ioscan -fnC disk

10. Initialize the raw device as a boot disk:

    >pvcreate -B /dev/rdsk/c1t2d0

11. Extend the volume group by adding the physical volume:

    > vgextend /dev/vg00 /dev/dsk/c1t2d0
    > mkboot -l /dev/rdsk/c1t2d0
    > mkboot -a "hpux -lq" /dev/rdsk/c1t2d0

12. Mirror all logical volumes that were previously reduced:

   > lvextend -m 1 /dev/vg00/lvol1 /dev/dsk/c1t2d0 (up to lvol9)

NOTE: make sure lvol1, lvol2 and lvol3 are mirrored FIRST, in this exact sequence

13. Recreate the boot information:

> lvlnboot -Rv vg00

NOTE: check with lvlnboot -v and pay close attention to boot: lvol1; you can compare to the previous lvlnboot -v output if you kept one

14. Check that you have successfully increased the number of mirror copies:

    > lvdisplay -v /dev/vg00/lvol1 (up to lvol9)


15. Finally, check the status of the volume group:

    # vgdisplay -v vg00 | more


If the hard drive is showing NO_HW, there is no need to split the mirrors first; physically change the disk and then run:

  • vgcfgrestore -n /dev/vg00 /dev/rdsk/c1t2d0
  • vgchange -a y /dev/vg00
  • vgsync /dev/vg00
  • mkboot -l /dev/rdsk/c1t2d0
  • mkboot -a "hpux -lq" /dev/rdsk/c1t2d0
  • lvlnboot -Rv vg00
  • lvdisplay -v /dev/vg00/lvol1 (up to lvol9)
  • vgdisplay -v vg00 | more

Solaris - Configure MPxIO


In general, multipathing is a method for redundancy and automatic fail-over that provides at least two physical paths to a target resource. Multipathing allows for re-routing in the event of component failure, enabling higher availability for storage resources. Multipathing also allows for the parallel routing of data, which can result in faster throughput and increased scalability.

The Solaris I/O multipathing feature is a multipathing solution for storage devices that is part of the Solaris operating environment. This feature was formerly known as Sun StorEdge Traffic Manager (STMS) or MPxIO.

Solaris Fibre Channel and Storage Multipathing software enables FC connectivity for the Solaris hosts. The software resides on the server and identifies the storage and switch devices on your SAN. It allows you to attach either loop or fabric SAN storage devices while providing a standard interface with which to manage them.

Multipathing is disabled by default for FC devices on SPARC based systems, but is enabled by default on x86 based systems.
Note - The multipathing feature is not available for parallel SCSI devices but is available for FC disk devices. Multipathing is not supported on tape drives or libraries or on IP over FC.

Example device name with multipath disabled:
/dev/dsk/c1t1d0s0

Example device name with multipath enabled:

/dev/dsk/c3t2000002037CD9F72d0s0

Enabling MPxIO -

MPxIO has a configuration file located at /kernel/drv/fp.conf. This file is used to enable MPxIO and, if needed, to exclude the internal disks from MPxIO.
The stmsboot command is also used to enable, disable, or update the MPxIO configuration.

Enable MPxIO in /kernel/drv/fp.conf

1. Edit the /kernel/drv/fp.conf file and make sure the entry below is uncommented and set to "no", then save and exit. (Alternatively, stmsboot -e makes this edit for you.)
mpxio-disable="no";

2. After editing fp.conf, execute the command below so the device paths get updated.

# stmsboot -u

Caution: it asks for a reboot, so enabling MPxIO requires server downtime.

After the reboot, verify that MPxIO is running; multipathed disks show long WWN-based device names in the format output:


# format
Searching for disks...done

c2t60050768018A8023B80000000000013Ad0: configured with capacity of 12.00GB
c2t60050768018A8023B80000000000013Bd0: configured with capacity of 12.00GB
c2t60050768018A8023B80000000000013Cd0: configured with capacity of 12.00GB
c2t60050768018A8023B80000000000013Dd0: configured with capacity of 16.00GB
c2t60050768018A8023B80000000000013Ed0: configured with capacity of 16.00GB
c2t60050768018A8023B80000000000013Fd0: configured with capacity of 16.00GB
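
You can also list how the original device names map to the new MPxIO names (the mapping stmsboot maintains after the update):

# stmsboot -L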

There are various commands for managing your multipathed storage disks; a few of them are listed below.
1. Display paths:
# mpathadm list lu
/dev/rdsk/c2t60050768018A8023B80000000000013Fd0s2
Total Path Count: 8
Operational Path Count: 8
2. Show detailed information about a disk/LUN:
# mpathadm show lu /dev/rdsk/c2t60050768018A8023B80000000000013Fd0s2

This shows full details for the specific LUN.
3. Display World Wide Port Names and FC HBA firmware level:
# fcinfo hba-port
HBA Port WWN: 10000000c9446e11 <----------- WWPN
OS Device Name: /dev/cfg/c4
Manufacturer: Emulex
Model: LP9002L
Firmware Version: 3.90a7 (C2D3.90A7)
FCode/BIOS Version: Boot:3.20 Fcode:1.40a0
Serial Number: BG50103047
Driver Name: emlxs
Driver Version: 2.31p (2008.12.11.10.30)
Type: N-port
State: online
Supported Speeds: 1Gb 2Gb
Current Speed: 1Gb
Node WWN: 20000000c9446e11
# fcinfo hba-port -l [a good command for debugging]
# fcinfo remote-port -sl -p 10000000c9446e11
This lists all remote ports as well as the link statistics and SCSI target information.

Solaris - Configure IPMP


  • On the Solaris machine, check which interfaces currently have an IP address configured. In this example we assume that e1000g0 has the current IP address. To verify:

#ifconfig -a
lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1 inet 127.0.0.1 netmask ff000000
e1000g0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2 inet 192.75.109.50 netmask ffffff00 broadcast 192.75.109.255 ether 0:14:4f:a7:1c:3c

To check what other interfaces you have:

#dladm show-dev
e1000g0 link: up speed: 1000 Mbps duplex: full
e1000g1 link: down speed: 1000 Mbps duplex: full
e1000g2 link: down speed: 0 Mbps duplex: half
e1000g3 link: down speed: 0 Mbps duplex: half

If an interface has a cable connected, it will show its speed and duplex.

Add the existing interface to an IPMP group:

# ifconfig e1000g0 group ipmpgrp

Bring up second interface:

# ifconfig e1000g1 plumb
# ifconfig e1000g1 group ipmpgrp

To make it persistent across reboots:

vi /etc/hostname.e1000g0 and add the following:
<hostname> netmask + broadcast + group ipmpgrp up

NOTE: Please replace <hostname> with actual machine name.

Update the other interface's file, /etc/hostname.e1000g1:

vi /etc/hostname.e1000g1
group ipmpgrp standby up

To verify that IPMP has been setup, please do the following
[NOT recommended for production servers]

# if_mpadm -d e1000g0 [Detach interface and force failover]

The IP address should move to the e1000g1 interface.

# if_mpadm -r e1000g0 [Reattach interface and failback]

The IP address should be back on the e1000g0 interface.
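
A quick way to watch the failover is to capture the ifconfig output before and after the detach and compare which interface holds the data address (a simple sanity check using the interface names from this example):

# ifconfig -a > /tmp/before.out
# if_mpadm -d e1000g0
# ifconfig -a > /tmp/after.out
# diff /tmp/before.out /tmp/after.out
# if_mpadm -r e1000g0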

Solaris - Restore non-global zone configuration


global# zonecfg -z my-zone -f /tmp/myzone.config
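
This recreates only the zone configuration; the zone itself still has to be installed and booted before use (standard zoneadm steps):

global# zoneadm -z my-zone install
global# zoneadm -z my-zone boot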

Solaris - Backup non-global zone configuration


global# zonecfg -z my-zone export > /tmp/myzone.config

Solaris - Add a network interface to a non-global zone


global# zonecfg -z my-zone
zonecfg:my-zone> add net
zonecfg:my-zone:net> set physical=e1000g1
zonecfg:my-zone:net> set address=192.168.1.50
zonecfg:my-zone:net> end
zonecfg:my-zone> exit

Solaris - Add UFS file system in a non-global zone


global# newfs /dev/rdsk/c1t0d0s0
global# zonecfg -z my-zone
zonecfg:my-zone> add fs
zonecfg:my-zone:fs> set dir=/usr/mystuff
zonecfg:my-zone:fs> set special=/dev/dsk/c1t0d0s0
zonecfg:my-zone:fs> set raw=/dev/rdsk/c1t0d0s0
zonecfg:my-zone:fs> set type=ufs
zonecfg:my-zone:fs> end
zonecfg:my-zone> exit
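
You can confirm the new resource was committed before rebooting the zone (the fs block should appear in the output):

global# zonecfg -z my-zone info fs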

Solaris - Clone an existing zone

To create a clone of my-zone as zone1 (a new zone):

Halt the source zone (my-zone):

# zoneadm -z my-zone halt
# zonecfg -z my-zone export -f /export/zones/master

This writes the full my-zone configuration to a file named master. Modify this file as needed (for example, the zonepath) and create the new zone:

# zonecfg -z zone1 -f /export/zones/master

Install the new zone, zone1, by cloning my-zone:

# zoneadm -z zone1 clone my-zone
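
Once the clone completes, boot the new zone and answer the system identification questions on its console, as with any freshly installed zone:

# zoneadm -z zone1 boot
# zlogin -C zone1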

Solaris - Restore a file or directory from flash archive

Flash archives (flar files) are useful for creating new servers or recovering old ones, but sometimes you only need a few files or directories from the image. Here is how to recover individual files from the archive.

First, create a temporary directory, then change to the directory where the flash archive (flar file) exists and split the archive into its components. This puts everything in /tmp/tempdir:

# mkdir /tmp/tempdir
# flar -s -d /tmp/tempdir flasharchive.flash

Optionally, list the contents of the archive. The file system contents land in a component named archive; this assumes the flar file was created with the -c (compress) option:
# cd /tmp/tempdir
# cat archive | uncompress | cpio -it

Extract the files. The archive paths don't include the initial /, so this extracts the files into the current directory:

# cat archive | uncompress | cpio -idm "opt/sec_mgmt/*"

Solaris - Replace faulty mirrored root disk


Before shutting down the machine and physically replacing the disk

1. Make a backup of the following files:
# cp /etc/vfstab /etc/vfstab.before_mirror
# metastat -c > /var/crash/metastatC.out
# metastat > /var/crash/metastat.out
# metadb > /var/crash/metadb.out
# cp /etc/system /var/crash

In this procedure we will assume that the /dev/dsk/c0t1d0 disk has failed.

2. Check the defective drive:

# iostat -En /dev/dsk/c0t1d0 [You will see errors]

c0t1d0 Soft Errors: 0 Hard Errors: 102 Transport Errors: 231
Vendor: SEAGATE Product: ST914602SSUN146G Revision: 0400 Serial No: 070490N5V8
Size: 146.80GB <146800115712 bytes>
Media Error: 0 Device Not Ready: 0 No Device: 102 Recoverable: 0
Illegal Request: 0 Predictive Failure Analysis: 0

# cfgadm -al

Ap_Id Type Receptacle Occupant Condition
c0 scsi-bus connected configured unknown
c0::dsk/c0t0d0 disk connected configured unknown
c0::dsk/c0t1d0 disk connected configured unknown

# metastat -c [It will show that the bad disk is in the maintenance state]

d7 m 10GB d17 d27 (maint)
d17 s 10GB c0t0d0s7
d27 s 10GB c0t1d0s7 (maint)
d4 m 40GB d14 d24 (maint)
d14 s 40GB c0t0d0s4
d24 s 40GB c0t1d0s4 (maint)
d3 m 40GB d13 d23 (maint)
d13 s 40GB c0t0d0s3
d23 s 40GB c0t1d0s3 (maint)
d1 m 2.0GB d11 d21 (maint)
d11 s 2.0GB c0t0d0s1
d21 s 2.0GB c0t1d0s1 (maint)
d0 m 3.0GB d10 d20 (maint)
d10 s 3.0GB c0t0d0s0
d20 s 3.0GB c0t1d0s0 (maint)
d5 m 40GB d15 d25 (maint)
d15 s 40GB c0t0d0s5
d25 s 40GB c0t1d0s5 (maint)

3. Remove the state database replicas and mirror metadevices from the bad disk:

# metadb -d /dev/dsk/c0t1d0s6
# metadetach -f d5 d25
# metadetach -f d0 d20
# metadetach -f d1 d21
# metadetach -f d3 d23
# metadetach -f d4 d24
# metadetach -f d7 d27
# metaclear d25
# metaclear d20
# metaclear d21
# metaclear d23
# metaclear d24
# metaclear d27

4. Check the successful mirror reduction:

# metastat -c
# metadb

5. Unconfigure the disk in Solaris:

# cfgadm -c unconfigure c0::dsk/c0t1d0

Shutdown the machine and physically replace the disk

6. Shut down the machine and physically replace the faulty disk. [On some servers you do not need to shut down the machine; the disks are hot-swappable. Please consult the machine documentation.]
# init 5

7. Configure the new disk:

# cfgadm -c configure c0::dsk/c0t1d0 

8. Verify that the disk is visible and there are no errors:

# echo | format [It will show you the c0t1d0 disk]
# iostat -En /dev/dsk/c0t1d0
# cfgadm -al

9. Copy the partition table from the root disk [in this case we assume it is /dev/dsk/c0t0d0]:

# prtvtoc /dev/rdsk/c0t0d0s2 | fmthard -s - /dev/rdsk/c0t1d0s2

10. Install the boot block:

# installboot /usr/platform/`uname -i`/lib/fs/ufs/bootblk /dev/rdsk/c0t1d0s0

11. Create state database replicas on the new disk:

# metadb -a -c 2 c0t1d0s6 [Verify from the old metadb output how many replicas you need on the new disk and on which slice; if you need three replicas, run the same command with -c 3]

12. Check that the replicas were created:

# metadb [It should show the same number of replicas on both disks, on the same slice]

13. Create the metadevices on the new disk:
# metainit -f d20 1 1 c0t1d0s0
# metainit -f d21 1 1 c0t1d0s1
# metainit -f d23 1 1 c0t1d0s3
# metainit -f d24 1 1 c0t1d0s4
# metainit -f d25 1 1 c0t1d0s5
# metainit -f d27 1 1 c0t1d0s7

14. Attach the submirrors to synchronize the data onto the new disk:
# metattach d0 d20
# metattach d5 d25
# metattach d1 d21
# metattach d3 d23
# metattach d4 d24
# metattach d7 d27

15. Check that the mirrors are syncing:

# metastat -c [It will tell you how much data has been synced on each mirror]
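
To watch the resync progress, a simple loop works (press Ctrl-C to stop):

# while true; do metastat | grep -i progress; sleep 60; done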

Solaris - Root disk mirroring by using SVM


Prerequisites
First, you need to identify the disks that you want to create mirrors with. You can do this by using the format command to find the disks in question.
AVAILABLE DISK SELECTIONS:
0. c0t0d0 <DEFAULT cyl 17845 alt 2 hd 255 sec 63>
/pci@7b,0/pci1022,7458@11/pci1000,3060@2/sd@0,0
1. c0t1d0 <DEFAULT cyl 17845 alt 2 hd 255 sec 63>
/pci@7b,0/pci1022,7458@11/pci1000,3060@2/sd@1,0
In my example, I'm mirroring the root partitions along with the other partitions from the disk drive. My drives are c0t0d0 and c0t1d0.
Procedure for Mirroring root
First, partition your primary drive, typically the one that the Solaris OS is currently running on. (In my case, this is drive 0, c0t0d0.)
You will need one partition that is about 10 Mbyte for the meta database.
Once you are satisfied with the partition that you have created, ensure that you label the disk, and then perform the following steps to transfer the same partitioning table.
Transfer the partition table from one drive to another.
# prtvtoc /dev/rdsk/c0t0d0s2 | fmthard -s - /dev/rdsk/c0t1d0s2
Note: Notice the use of s2, which is typically the overlap partition; if you changed this on the disk, please substitute the proper slice in its place.
Now that you have the two disks looking the same, execute the following:
# metadb -a -c 3 -f c0t0d0s7 c0t1d0s7
The -c 3 creates three copies of the metadevice state database in this space, in case a single copy gets corrupted (which is never good).
We will initialize the slices that make up the root partition by doing the following. I'm using s0 because this is my root partition; substitute where appropriate.
# metainit -f d11 1 1 c0t0d0s0
# metainit -f d12 1 1 c0t1d0s0
Now we will create the actual mirror:
# metainit d10 -m d11
After you have completed the preceding steps, run the following command, which automatically updates /etc/system and /etc/vfstab so the system knows it is using a metadevice for the root disk.
# metaroot d10
Once the command finishes, reboot the machine:
# init 6
After the reboot, attach the second submirror:

# metattach d10 d12
To check on the status of the mirror, you can do the following:
# metastat d10
You will want to update the OpenBoot PROM with device aliases for the boot devices. You can do this as follows:
# ls -l /dev/dsk/c0t0d0s0
Your output will look similar to the following:
lrwxrwxrwx 1 root root 42 Jul 12 2007 /dev/dsk/c0t0d0s0 -> ../../devices/pci@1e,600000/ide@d/sd@0,0:a
Replace the device paths below with the path from your own output (note that sd@ in the ls output becomes disk@ in the alias). Then run the following commands from the OS:
# eeprom "nvramrc=devalias mirror /pci@1e,600000/ide@d/disk@0,0:a devalias mirror-a /pci@1e,600000/ide@d/disk@1,0:a"
# eeprom boot-device="mirror mirror-a"
# eeprom "use-nvramrc?=true"
The commands below achieve the same thing from the OK prompt; use one method or the other, not both.
OK> nvalias mirror /pci@1e,600000/ide@d/disk@0,0:a
OK> nvalias mirror-a /pci@1e,600000/ide@d/disk@1,0:a
OK> setenv boot-device mirror mirror-a
If you are mirroring just the two internal drives, you will want to add the following line to /etc/system so the system can boot from a single drive; this bypasses the SVM quorum rule:
set md:mirrored_root_flag = 1

Finally, install the boot block on the second drive:
# installboot /usr/platform/`uname -i`/lib/fs/ufs/bootblk /dev/rdsk/c0t1d0s0
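
Once the resync completes, it is worth test-booting from the mirror half so you know the alias works before you actually need it (from the OK prompt):

OK> boot mirror-a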

HPUX - Maximum number of LUNs supported


                      10.20     11.0      11i v1 / 11i v2     11i v3
Active LUN devices    768       2400      8192                16 million
LVM PVs               65280     65280     65280               16 million