
Sunday, 13 June 2010

Multipathing and Multiple Connections Per Session - Two sides of the same iSCSI coin?

Once again, a record-breaking title for a post! Let's hope my google-fu is not
affected by long titles... or I'm in real trouble ;)

So I was working today on something that involved testing iSCSI functionality with Windows Server 2008.
While I was waiting for the VM to come up, I set about testing the iSCSI initiator within Windows 7.

What interested me most was a feature called "MCS", which stands for Multiple Connections per Session. It is defined within RFC 3720, and as such is a protocol-level feature that gives us capabilities we have previously seen with MPIO.

Here is how to get there:

Load the iSCSI software from Control Panel->Administrative Tools->iSCSI Initiator:
Pic1:




Select the target from the list and click "Properties".
Pic2:




Select the MCS policy you wish to have. I selected "Fail Over Only", which is the same
as "Fixed" in the MPIO world.


Pic3:




You will probably only have one session at the moment, therefore click "Add".
Don't click "Connect"!

Pic4:



Click "Advanced"
Here is where you pick the other iSCSI target portal.

Pic5:






And that's great! We have a redundant path to our iSCSI targets... but notice this button:

Pic6:




Hmm, MPIO is not available within Windows 7, which is fine, as MCS pretty much gets us to the same place (in fact, some say MCS is better). However, with Windows Server 2008 we have the option of MPIO, so let's give it a go!

First thing to remember is that MPIO is a driver-level thing, so if you have an EMC, 3PAR, NetApp, Dell etc. device, they all have MPIO drivers for Windows Server 2008 and you need to follow their instructions (look for the DSM instructions). Here we are using the Windows Server 2008 software iSCSI initiator and the native Windows Server 2008 MPIO driver.

When you install/start iSCSI on Windows Server 2008, it asks you to install MPIO. If you said no, or just forgot, install MPIO like this:

From the "Add features Wizard"
Pic1:



Once installed, select MPIO from Control Panel and click "Add support for iSCSI devices",
then reboot (P.S. this is where you would add a 3rd-party DSM driver, by the way).
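If you prefer the command line, I believe the same "Add support for iSCSI devices" step can be done with mpclaim, which ships with the MPIO feature. A sketch, run from an elevated prompt (the device string is the MPIO hardware ID Microsoft uses for iSCSI-attached disks):

```shell
:: Claim all iSCSI-attached disks for the Microsoft DSM and reboot
mpclaim -r -i -d "MSFT2005iSCSIBusType_0x9"

:: After the reboot, list the disks MPIO has claimed
mpclaim -s -d
```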

Pic2:



Go back to the iSCSI Initiator (within Administrative Tools).
Pic3:



Select the target and click "Properties".
Pic4:


Highlight the session and click "Devices...".

Pic5:



Click "MPIO" and select the policy you want.
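For the record, the load-balance policy can also be set from the command line with mpclaim. A sketch (disk numbers come from `mpclaim -s -d`, and the policy codes are roughly 1 = fail over only, 2 = round robin, 4 = least queue depth):

```shell
:: Set round robin (policy 2) on MPIO disk 0
mpclaim -l -d 0 2

:: Confirm the active policy for that disk
mpclaim -s -d 0
```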
Pic6:





Hope that helps someone out there!


Sources:
http://www.ietf.org/rfc/rfc3720.txt

http://www.windowsitpro.com/article/virtualization2/Q-With-iSCSI-what-s-the-difference-between-Multipath-I-O-MPIO-and-multiple-connections-per-session-MCS-.aspx

Sunday, 16 May 2010

VMware Storage Alphabet Soup and Making the Most of VMware's Multipathing

Having recently moved into an environment where the storage is a little alien to me, I thought it would be helpful to brush up on some storage knowledge, and thought it might help some readers too.
Here is a diagram of a midrange SAN:

(Thanks Virtualgeek for this picture)


See the two items listed as "Data Processor (head) A" and "Data Processor (head) B"?
Traditionally, if you are using an Active/Active processor array you should use "Fixed" as the multipathing method, and in an Active/Passive array use "MRU".

However, this changed with:
ALUA: Asymmetric Logical Unit Access
Essentially, in midrange SAN environments (EMC CLARiiON etc.), this allows an unoptimized and an optimized path to a LUN through different heads.

In ESX 4, the HBA is aware of optimized and unoptimized paths, as it knows which head has control of the LUN!
Suddenly we can use MRU with Active/Active heads.

MRU
Most Recently Used: use the optimized path unless it is unavailable, then use the unoptimized path (ESX 4.0/vSphere only).

Fixed: always use the preferred path unless it is unavailable.

NMP: Native Multipathing Plugin, VMware's built-in multipathing driver.

MPP: (third-party) Multipathing Plugin, e.g. EMC PowerPath.

Round Robin: within the ESX server's iSCSI HBA, it sends 4000 IO blocks down one path and then moves to the next path.

Custom Policy:
Use the following command to tweak the iSCSI HBA:
esxcfg-mpath --lun vmhba32:0:8 --policy custom --custom-hba-policy any --custom-max-blocks 1024 --custom-max-commands 50 --custom-target-policy any
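On vSphere/ESX 4 the equivalent knobs moved to esxcli's nmp namespace. A rough sketch (the naa device ID below is just an example; check yours with the list command first):

```shell
# List devices and their current path selection policy (PSP)
esxcli nmp device list

# Switch a LUN to round robin -- the naa ID is a placeholder
esxcli nmp device setpolicy --device naa.60060160a0b12345 --psp VMW_PSP_RR

# Optionally tune how many IOs go down a path before switching
esxcli nmp roundrobin setconfig --device naa.60060160a0b12345 --type iops --iops 1000
```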


References:
http://www.vmware.com/pdf/vi3_35/esx_3/r35/vi3_35_25_iscsi_san_cfg.pdf
http://www.vmware.com/pdf/vi3_35_25_roundrobin.pdf
http://virtualgeek.typepad.com/virtual_geek/2009/09/a-couple-important-alua-and-srm-notes.html
http://virtualgeek.typepad.com/virtual_geek/2008/08/celerra-virtual.html

Tuesday, 9 March 2010

Samba Cluster with GFS 2, Centos 5, iSCSI and Openfiler

Another awesome lab/demo for you today ;)

But seriously, after finding the general documentation to be a bit lacking regarding clustering (especially with regard to the extra quorum vote), here's hoping that this lab will help you work out how clusters work and implement one within your company.

A diagram for your viewing pleasure:





Part1
VMware Lab Setup
Node Setup
iSCSI setup
Quorum Setup

Helpful Commands:

system-config-network                                  # set a static IP on each node
edit /etc/hosts                                        # so both nodes resolve each other by name
service network restart
yum groupinstall "Clustering"                          # cman, rgmanager, luci/ricci
yum groupinstall "Cluster Storage"                     # GFS2 and friends
yum groupinstall "Windows File Server"                 # Samba
chkconfig --del smb                                    # the cluster, not init, will manage smb
yum install iscsi-initiator-utils
service iscsi start
iscsiadm -m discovery -t sendtargets -p 192.168.1.3    # discover the Openfiler targets
service iscsi restart                                  # log in to the discovered targets
fdisk -l                                               # confirm the new iSCSI disks appeared
mkqdisk -c /dev/sdb -l quorum                          # label the small LUN as the quorum disk
luci_admin init                                        # set the luci admin password

Samba Cluster with GFS 2, Centos 5, iSCSI and Openfiler - Part 1 from Richard Vimeo on Vimeo.




Part2
GFS2 Setup
Configuring using Luci
Quorum setup cont

Helpful Commands:

mkfs.gfs2 -p lock_dlm -t cluster1:sanvol1 -j 4 /dev/sdc   # clustername:fsname, 4 journals
mkdir -p /san/sanvol1
service ricci restart
service qdiskd restart
chkconfig luci on
chkconfig qdiskd on
(repeat on node2)

use luci to create cluster


Quorum parameters:
interval=1
votes=1
tko=10
min score=1
heuristics=ping -c2 -t1 192.168.1.3
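For reference, the Luci quorum settings above end up as a `<quorumd>` stanza in /etc/cluster/cluster.conf. A sketch (attribute names may vary slightly between cman versions; the label matches the mkqdisk label from Part 1):

```xml
<quorumd interval="1" votes="1" tko="10" min_score="1" label="quorum">
  <heuristic program="ping -c2 -t1 192.168.1.3" interval="2" score="1"/>
</quorumd>
```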

mount /dev/sdc /san/sanvol1
gfs2_tool list
gfs2_tool df
umount /san/sanvol1

cman_tool status
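To make the mount survive a reboot, a typical approach is an fstab entry plus the gfs2 init script. A sketch, assuming the volume really does keep coming up as /dev/sdc:

```shell
# Add the GFS2 volume to /etc/fstab so the gfs2 init script mounts it at boot
echo "/dev/sdc  /san/sanvol1  gfs2  defaults,noatime  0 0" >> /etc/fstab

# Make sure the cluster and GFS2 services start at boot, in order
chkconfig cman on
chkconfig gfs2 on
```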


Samba Cluster with GFS 2, Centos 5, iSCSI and Openfiler - Part 2 from Richard Vimeo on Vimeo.



Part 3
Configuring Fencing, Failover Domain, Resources
and Services.

Helpful Commands:

Configure Resources:
IP
GFS
Samba

Configure failover Domains

Configure Shared Fencing Device (then nodes)

Add Services

workgroup = cookie
server string = Samba Server Version %v
bind interfaces only = yes
interfaces = 10.0.1.100
netbios name = cluster1
local master = no
domain master = no
preferred master = no
password server = None
guest ok = yes
guest account = root
security = SHARE
dns proxy = no




[sanvol]
comment = High Availability Samba Service
browsable = yes
writable = yes
public = yes
path = /san/sanvol1
guest ok=yes
create mask=0777

smbpasswd -a root

scp /etc/samba/smb.conf.cluster1 node2:/etc/samba/

restart smb

redo services - ip-GFS-samba

soft reboot
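Before failing anything over, it's worth checking that the share answers on the floating service IP. Something like this from another box (the password is whatever you set with smbpasswd above):

```shell
# List the shares published on the clustered service IP
smbclient -L //10.0.1.100 -U root

# Connect to the sanvol share and drop a test file
smbclient //10.0.1.100/sanvol -U root -c "put /etc/hosts test.txt; ls"
```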


Samba Cluster with GFS 2, Centos 5, iSCSI and Openfiler - Part 3 from Richard Vimeo on Vimeo.




Part 4
Testing!

Samba Cluster with GFS 2, Centos 5, iSCSI and Openfiler - Part 4 from Richard Vimeo on Vimeo.






Enjoy!

Friday, 13 November 2009

VMware vSphere Lab - How To, Part 3

Part 3 covers:
1) OpenFiler setup for ESX server
2) iSCSI HBA setup (ESX)
3) vConverter
4) vMotion setup
5) Live vMotion!

Vsphere within VMware Workstation 7 Part 3 from Richard Vimeo on Vimeo.