Tuesday, November 26, 2013

LVM with HP-UX 11i v3

Creating a Mount Point after Presenting a LUN from Storage 

 

1.  Identify the newly added LUN
2.  Create a Physical Volume (PV)
3.  Create a Volume Group (VG)
4.  Create a Logical Volume (LV)
5.  Create a File system
6.  Mount the File system
7.  Add entries to the /etc/fstab file

Here are the explanations, with commands, for performing the above plan of action:

Identify newly added LUN

Create a LUN on the storage array and present it to the server on which you want to create the new mount point. To detect the new LUN on the server, use the command below; it will show all disks presented to the server so far.

#ioscan -fnNC disk

Here:

f:- Generate a full listing, displaying the module's class, instance number, hardware path, driver, software state, hardware type, and a brief description.
n:- List the device file names associated with each hardware module in the output.
C:- Restrict the output listing to those devices belonging to the specified class.
N:- Display the agile view of the system hardware.
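
You can also list the agile view per LUN, which shows each LUN hardware path together with its lunpaths and makes it easier to match a disk against what was presented from the array. An optional quick check (shown here as a sketch):

#ioscan -m lun

Adding a specific persistent DSF at the end restricts the listing to that single LUN.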

The command below shows the mapping between persistent DSFs and legacy DSFs. In the next steps we are going to use the persistent DSF.

#ioscan -m dsf

Persistent DSF        Legacy DSF(s)
========================================
/dev/pt/pt4           /dev/rscsi/c0t0d0
                      /dev/rscsi/c2t0d0
                      /dev/rscsi/c4t0d0
                      /dev/rscsi/c6t0d0
/dev/rdisk/disk41     /dev/rdsk/c1t0d0
                      /dev/rdsk/c3t0d0
                      /dev/rdsk/c5t0d0
                      /dev/rdsk/c7t0d0
/dev/rdisk/disk42     /dev/rdsk/c1t0d1
                      /dev/rdsk/c3t0d1
                      /dev/rdsk/c5t0d1
                      /dev/rdsk/c7t0d1
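
The same mapping can also be requested for a single persistent DSF once you know which disk you are interested in, for example:

#ioscan -m dsf /dev/rdisk/disk41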

To find which disks are not already used in LVM:
#pvdisplay -l /dev/disk/*

/dev/disk/disk41:LVM_Disk=no
/dev/disk/disk42:LVM_Disk=yes
/dev/disk/disk43:LVM_Disk=yes
/dev/disk/disk44:LVM_Disk=yes
/dev/disk/disk45:LVM_Disk=yes

From the above output we can see that disk41 is not used in LVM, so we proceed with disk41 and cross-check the size of the disk.

#diskinfo /dev/rdisk/disk41

SCSI describe of /dev/rdisk/disk41:
             vendor: HP
         product id: OPEN-V
               type: direct access
               size: 56691712 Kbytes
   bytes per sector: 512

The output confirms that this is the disk of the size we are looking for, so proceed to the next step.
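
If more than one candidate disk has the same size, the LUN WWID can also be compared against what the storage team provisioned. On 11i v3 one way to read it (shown as a sketch) is:

#scsimgr get_info -D /dev/rdisk/disk41

Look for the World Wide Identifier (WWID) attribute in the output.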

Create Physical Volume (PV)

A disk has to be initialized before LVM can use it.
 
 
#pvcreate /dev/rdisk/disk41

Physical volume "/dev/rdisk/disk41" has been successfully created.

If disk41 was already initialized earlier, you will get the error message below:

pvcreate: The Physical Volume already belongs to a Volume Group

If you are sure the disk is free you can force the initialization using the -f option:

#pvcreate -f /dev/rdisk/disk41
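
Be careful with -f: it overwrites any existing LVM information on the disk. Before forcing, it is worth double-checking that no volume group on the system already claims the disk; one common way (a sketch, not the only one) is to look through /etc/lvmtab:

#strings /etc/lvmtab | more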

Create Volume Group (VG)


Select a unique minor number for the VG:

# ll /dev/*/group

crw-r--r-- 1 root sys 64 0x000000 Apr 4 2010 /dev/vg00/group
crw-r--r-- 1 root sys 64 0x010000 Oct 26 15:52 /dev/vg01/group
crw-r--r-- 1 root sys 64 0x020000 Aug 2 15:49 /dev/vg02/group

Create the VG control file (group file):

# mkdir /dev/vg03

# mknod /dev/vg03/group c 64 0x030000
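
For illustration only: if you later create another VG, the same pattern continues with the next free minor number (vg04 and 0x040000 below are hypothetical, not part of this setup):

# mkdir /dev/vg04
# mknod /dev/vg04/group c 64 0x040000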

Create the VG
#vgcreate  -s 256 /dev/vg03 /dev/disk/disk41

Volume group "/dev/vg03" has been successfully created.
Volume Group configuration for /dev/vg03 has been saved in /etc/lvmconf/vg03.conf

s: Size of the physical extent (PE) in MB.

If you have two or more PVs to add to the VG, you can add them in one go, simply listing them after disk41 separated by spaces.
#vgcreate -s 256 /dev/vg03 /dev/disk/disk41 /dev/disk/disk40
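
Similarly, an extra disk can be added to an already existing VG later with vgextend (disk40 again is just an example device):

#vgextend /dev/vg03 /dev/disk/disk40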

To display VG information 
 
#vgdisplay  -v /dev/vg03

--- Volume groups ---
VG Name                     /dev/vg03
VG Write Access             read/write
VG Status                   available
Max LV                      255
Cur LV                      1
Open LV                     1
Max PV                      16
Cur PV                      1
Act PV                      1
Max PE per PV               1727
VGDA                        2
PE Size (Mbytes)            256
Total PE                    216
Alloc PE                    0
Free PE                     216
Total PVG                   0
Total Spare PVs             0
Total Spare PVs in use      0
VG Version                  1.0
VG Max Size                 6908g
VG Max Extents              27632
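
As a quick sanity check of these numbers: with a 256 MB extent size, the 216 free extents give 216 x 256 MB = 55296 MB of usable space, and the 55040 MB logical volume created in the next step consumes 215 of those extents (215 x 256 MB = 55040 MB).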

Create Logical Volume (LV)

To create an LV from a VG (options: -L assigns the size in MB; -l assigns the size in number of PEs; -n assigns a name to the LV):

# lvcreate -L 55040 -n lvol1 /dev/vg03

Logical volume "/dev/vg03/lvol1" has been successfully created with character device "/dev/vg03/rlvol1"
Logical volume "/dev/vg03/lvol1" has been successfully extended.
Volume Group configuration for /dev/vg03 has been saved in /etc/lvmconf/vg03.conf

 To display LV information

# lvdisplay -v /dev/vg03/lvol1

--- Logical volumes ---
LV Name                     /dev/vg03/lvol1
VG Name                     /dev/vg03
LV Permission               read/write
LV Status                   available/syncd
Mirror copies               0
Consistency Recovery        MWC
Schedule                    parallel
LV Size (Mbytes)            55040
Current LE                  215
Allocated PE                215
Stripes                     0
Stripe Size (Kbytes)        0
Bad block                   on
Allocation                  strict
IO Timeout (Seconds)        default

Create File system

 You can use newfs to put a FS onto the LV:

# newfs  -F vxfs /dev/vg03/rlvol1

F:- File system type, either hfs or vxfs. Nowadays it is always recommended to use a VxFS (JFS) filesystem.
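
newfs also accepts additional VxFS options. For example, if files larger than 2 GB are expected on this filesystem, it can be created with largefiles support (optional, shown as a sketch):

# newfs -F vxfs -o largefiles /dev/vg03/rlvol1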

Mount File system

Mount the created file system:

#mkdir /data


#mount /dev/vg03/lvol1 /data

Use the bdf command to see the mounted file systems

#bdf
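
If /data later needs more space and the VG still has free extents, the LV is extended first and then the filesystem on top of it. A minimal sketch of doing this offline (growing a mounted VxFS online instead requires the OnlineJFS product):

#umount /data
#lvextend -L 55296 /dev/vg03/lvol1
#extendfs -F vxfs /dev/vg03/rlvol1
#mount /dev/vg03/lvol1 /data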

Entries in /etc/fstab file

Make entries in the /etc/fstab file to keep the mount point persistent across reboots. You can do this with the command below, or open the file with the vi editor and add the entry at the end.

# echo "/dev/vg03/lvol1 /data vxfs defaults 0 2" >> /etc/fstab


#vi /etc/fstab

# System /etc/fstab file.  Static information about the file systems
# See fstab(4) and sam(1M) for further details on configuring devices.
/dev/vg00/lvol3 / vxfs delaylog 0 1
/dev/vg00/lvol1 /stand vxfs tranflush 0 1
/dev/vg00/lvol4 /home vxfs delaylog 0 2
/dev/vg00/lvol5 /opt vxfs delaylog 0 2
/dev/vg00/lvol6 /tmp vxfs delaylog 0 2
/dev/vg00/lvol7 /var vxfs delaylog 0 2
/dev/vg00/lvol8 /usr vxfs delaylog 0 2
/dev/vg03/lvol1 /data vxfs defaults 0 2
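
To confirm the new entry is correct, you can unmount /data and remount everything from /etc/fstab (a quick sanity check, assuming /data is not in use yet):

#umount /data
#mount -a
#bdf /data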
 
 

Verifying Which Ports Are Listening

nmap -sT -O localhost
The output of this command looks like the following:
Starting nmap V. 3.00 ( www.insecure.org/nmap/ )
Interesting ports on localhost.localdomain (127.0.0.1):
(The 1596 ports scanned but not shown below are in state: closed)
Port       State       Service
22/tcp     open        ssh
111/tcp    open        sunrpc
515/tcp    open        printer
834/tcp    open        unknown
6000/tcp   open        X11
Remote OS guesses: Linux Kernel 2.4.0 or Gentoo 1.2 Linux 2.4.19 rc1-rc7

Nmap run completed -- 1 IP address (1 host up) scanned in 5 seconds
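
The -sT scan above only covers TCP. If you also want to check UDP services, a UDP scan can be run as well (it requires root privileges and is noticeably slower):

nmap -sU localhost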
 
 
 

Redhat / CentOS / Fedora Linux Open Port


Open file /etc/sysconfig/iptables:
# vi /etc/sysconfig/iptables

Append rule as follows:
-A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 80 -j ACCEPT

Save and close the file. Restart iptables:
# /etc/init.d/iptables restart

 Open port 110

Append rule as follows:
-A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 110 -j ACCEPT

Open port 143

Append rule as follows:
-A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 143 -j ACCEPT

 Restart iptables service

Type the following command:
# service iptables restart
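
After the restart you can list the chain to confirm the new rules are loaded (RH-Firewall-1-INPUT is the chain name used in the rules above):

# iptables -L RH-Firewall-1-INPUT -n | grep dpt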

Verify that the port is open

Run the following command:
netstat -tulpn | less
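
To check a single port rather than paging through the whole list, filter the output, for example for port 80 (adjust the port number as needed):

netstat -tulpn | grep ':80 '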