
Friday, December 4, 2009

NFS: 898: RPC error 13 (RPC was aborted due to timeout) trying to get port for Mount Program (100005)

I was trying to mount an NFS datastore on vSphere 4.0 from my NetApp sim (version 7.3), and I was getting the following error message in the vmkernel log while mounting:

WARNING: NFS: 898: RPC error 13 (RPC was aborted due to timeout) trying to get port for Mount Program (100005) Version (3) Protocol (TCP) on Server (x.x.101.124)

[root@xxx ~]# esxcfg-route

VMkernel default gateway is x.x.100.1

[root@xxx ~]# vmkping -D

PING x.x.100.140 (x.x.100.140): 56 data bytes

64 bytes from x.x.100.140: icmp_seq=0 ttl=64 time=0.434 ms

64 bytes from x.x.100.140: icmp_seq=1 ttl=64 time=0.049 ms

64 bytes from x.x.100.140: icmp_seq=2 ttl=64 time=0.046 ms

--- x.x.100.140 ping statistics ---

3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.046/0.176/0.434 ms

PING x.x.100.1 (x.x.100.1): 56 data bytes

64 bytes from x.x.100.1: icmp_seq=0 ttl=255 time=0.999 ms

64 bytes from x.x.100.1: icmp_seq=1 ttl=255 time=0.837 ms

64 bytes from x.x.100.1: icmp_seq=2 ttl=255 time=0.763 ms

--- x.x.100.1 ping statistics ---

3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.763/0.866/0.999 ms

[root@xxx]# ping x.x.101.124

PING x.x.101.124 (x.x.101.124) 56(84) bytes of data.

64 bytes from x.x.101.124: icmp_seq=1 ttl=254 time=0.507 ms

64 bytes from x.x.101.124: icmp_seq=2 ttl=254 time=0.554 ms

64 bytes from x.x.101.124: icmp_seq=3 ttl=254 time=0.644 ms

--- x.x.101.124 ping statistics ---

3 packets transmitted, 3 received, 0% packet loss, time 1999ms
rtt min/avg/max/mdev = 0.507/0.568/0.644/0.060 ms

netappsim1> exportfs

/vol/vol0/home -sec=sys,rw,nosuid

/vol/vol0 -sec=sys,rw,anon=0,nosuid

/vol/vol1 -sec=sys,rw,root=x.x.130.17
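Before touching the exports, it is worth confirming that the filer's portmapper answers for the mount program at all. From any Linux host that can reach the filer (assuming the rpcinfo utility is installed), program 100005 (mountd) should be listed for both TCP and UDP:

[root@xxx ~]# rpcinfo -p x.x.101.124 | grep 100005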

Well, there are a few guidelines that should be followed when mounting an NFS datastore on an ESX host.

Here is the list of requirements:

1. Only root should have access to the NFS volume.

2. Only one ESX host should be able to mount the NFS datastore.

How do we achieve this?

1. Create a volume on the NetApp filer, choosing one of the aggregates. Once the volume is created, we need to export it.

2. This can be done from the CLI:

srm-protect> exportfs -p rw=10.21.64.34,root=10.21.64.34 /vol/vol1

srm-protect> exportfs

/vol/vol0/home  -sec=sys,rw,nosuid

/vol/vol0       -sec=sys,rw,anon=0,nosuid

/vol/vol1       -sec=sys,rw=10.21.64.34,root=10.21.64.34

Something to remember here: the IP address 10.21.64.34 belongs to the VMkernel port, which must be created before mounting the NFS volume on the ESX host.
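If the VMkernel port does not exist yet, creating one from the service console looks roughly like this (a sketch; the vSwitch name, uplink vmnic1, port group name and netmask are all placeholders for your environment):

[root@xxx ~]# esxcfg-vswitch -a vSwitch1

[root@xxx ~]# esxcfg-vswitch -L vmnic1 vSwitch1

[root@xxx ~]# esxcfg-vswitch -A NFS-VMkernel vSwitch1

[root@xxx ~]# esxcfg-vmknic -a -i 10.21.64.34 -n 255.255.255.0 NFS-VMkernel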

While creating the VMkernel port, ensure that its IP is in the same subnet as the NFS server, or else you will get the error above. After changing the IP subnet, I was able to mount the NFS datastore.
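With the export and the VMkernel port in place, the datastore can be mounted from the service console as well as from the vSphere Client; a minimal sketch (the datastore label nfs_datastore1 is a placeholder):

[root@xxx ~]# esxcfg-nas -a -o x.x.101.124 -s /vol/vol1 nfs_datastore1

[root@xxx ~]# esxcfg-nas -l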

3. From FilerView: NFS > Manage Exports:


In the Export Options window, the Read-Write Access and Security check boxes are already selected. You will also need to select the Root Access check box as shown here. Then click Next:


Leave the Export Path at the default setting:


In the Read-Write Hosts window, click Add to explicitly add a read-write host. Here you need to enter the IP address of the VMkernel port, not the service console of the ESX host, or else the rights will not be applied.


Populate the Host to Add with the (VMkernel) IP address:


The Read-Write Hosts window should now include my VMkernel IP address. Click Next to move to the Root Hosts window:


Populate the Root Hosts exactly the same way as the Read-Write Hosts by clicking on the Add button. This should again be the VMkernel/IP storage IP address. When this is populated, click Next:


At the Security screen, leave the security flavour at the default of Unix Style and click Next to continue:


Monday, November 23, 2009

PANIC: SWARM REPLAY: log update failed

We have been trying to implement SRM, and I set up the NetApp 7.3 simulator on Ubuntu. During the setup, the simulator assigns only 3 disks of 120 MB each.


This aggregate is created by default from those three 120 MB disks when the simulator is installed. Make sure that when the installation is complete you add extra disks to the aggregate, or else you will land in the situation I have described above.
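Once the extra virtual disks exist, growing the default aggregate from the ONTAP CLI looks roughly like this (a sketch; the disk count of 3 is just an example, and aggr status -r first shows you the available spares):

netappsim1> aggr status -r

netappsim1> aggr add aggr0 3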

When you get the above error, boot the system into maintenance mode and then create a new root volume. To get into maintenance mode, re-run the setup and set "floppy boot" to Yes. Note that option 4a will initialize all the disks, and you will lose data.


I booted the system into maintenance mode and set aggr0 (the root volume's aggregate) offline.

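The command in maintenance mode looks something like this (a sketch; aggr0 is the default aggregate name):

*> aggr offline aggr0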

Now create the new root volume with the following command:

[screenshot: the command used to create the new root volume]

Reboot the system and run ./runsim.sh. If you have followed all the steps, you should be able to get the sim online. I got the following messages because I did not follow the 4a step.

root@netappsim1:/sim# ./runsim.sh

runsim.sh script version Script version 22 (18/Sep/2007)

This session is logged in /sim/sessionlogs/log

NetApp Release 7.3: Thu Jul 24 12:55:28 PDT 2008

Copyright (c) 1992-2008 Network Appliance, Inc.

Starting boot on Thu Nov 12 14:30:07 GMT 2009

Thu Nov 12 14:30:15 GMT [fmmb.current.lock.disk:info]: Disk v4.18 is a local HA mailbox disk.

Thu Nov 12 14:30:15 GMT [fmmb.current.lock.disk:info]: Disk v4.17 is a local HA mailbox disk.

Thu Nov 12 14:30:15 GMT [fmmb.instStat.change:info]: normal mailbox instance on local side.

Thu Nov 12 14:30:16 GMT [raid.vol.replay.nvram:info]: Performing raid replay on volume(s)

Restoring parity from NVRAM

Thu Nov 12 14:30:16 GMT [raid.cksum.replay.summary:info]: Replayed 0 checksum blocks.

Thu Nov 12 14:30:16 GMT [raid.stripe.replay.summary:info]: Replayed 0 stripes.

Replaying WAFL log

.........

Thu Nov 12 14:30:20 GMT [rc:notice]: The system was down for 542 seconds

Thu Nov 12 14:30:20 GMT [javavm.javaDisabled:warning]: Java disabled: Missing /etc/java/rt131.jar.

Thu Nov 12 14:30:20 GMT [dfu.firmwareUpToDate:info]: Firmware is up-to-date on all disk drives

Thu Nov 12 14:30:20 GMT [sfu.firmwareUpToDate:info]: Firmware is up-to-date on all disk shelves.

Thu Nov 12 14:30:21 GMT [netif.linkUp:info]: Ethernet ns0: Link up.

Thu Nov 12 14:30:21 GMT [rc:info]: relog syslog Thu Nov 12 13:50:59 GMT [sysconfig.sysconfigtab.openFailed:notice]: sysconfig: table of valid configurations (/etc/sys

Thu Nov 12 14:30:21 GMT [rc:info]: relog syslog Thu Nov 12 14:00:00 GMT [kern.uptime.filer:info]:   2:00pm up  2:09 0 NFS ops, 0 CIFS ops, 0 HTTP ops, 0 FCP ops, 0 iS

Thu Nov 12 14:30:21 GMT [httpd.servlet.jvm.down:warning]: Java Virtual Machine is inaccessible. FilerView cannot start until you resolve this problem.

Thu Nov 12 14:30:21 GMT [sysconfig.sysconfigtab.openFailed:notice]: sysconfig: table of valid configurations (/etc/sysconfigtab) is missing.

Thu Nov 12 14:30:21 GMT [snmp.agent.msg.access.denied:warning]: Permission denied for SNMPv3 requests from root. Reason: Password is too short (SNMPv3 requires at least 8 characters).

Thu Nov 12 14:30:22 GMT [mgr.boot.disk_done:info]: NetApp Release 7.3 boot complete. Last disk update written at Thu Nov 12 14:21:08 GMT 2009

Thu Nov 12 14:30:22 GMT [mgr.boot.reason_ok:notice]: System rebooted after power-on.

Thu Nov 12 14:30:22 GMT [perf.archive.start:info]: Performance archiver started. Sampling 20 objects and 187 counters.

Check if Java is disabled:

filer> java

Java is not enabled.

If Java is not enabled, FilerView won't work; you need to re-install the simulator image.

I will explain the simulator re-install using NFS/CIFS once I have tested it. Keep following my blog.

Friday, July 3, 2009

How to present a NetApp iSCSI LUN to a Windows host

I was given an assignment to present an iSCSI LUN to a Windows box, which I had never done before. We had to present storage from a NetApp FAS3050 filer, and this Windows box was supposed to run a SQL database. Earlier, storage was presented from the filer using the MS iSCSI software initiator along with SnapMirror, which was costing more than HBAs. So we connected a QLogic QLE4062C HBA and tested connectivity to the filer from its BIOS utility (Ctrl+S).

1. Once Windows is installed, make sure the driver for the QLogic card is installed, because the driver does not get installed by default. Once the driver is installed, you can find the QLogic card under Device Manager.


2. After the QLogic card is installed, we can also test connectivity to the filer from the card using Device Manager.

3. But this is not the end of the story. My real pain started with how to enter the target IP. After searching the QLogic website, I found that "SANsurfer HBA Manager" is my friend:


http://driverdownloads.qlogic.com/QLogicDriverDownloads_UI/SearchByProduct.aspx?ProductCategory=82&Product=1037&Os=64

4. Download and install SANsurfer iSCSI HBA Manager in client mode.


5. Make sure you choose "iSCSI GUI and Agent" so that you have the agent and can do some additional troubleshooting in case there is a problem.


6. It will ask for the installation destination.


7. Select it for all users' profiles.


8. Once it is installed, launch it and connect using "localhost".


9. Once it is connected, it will show all the physical cards as well as their ports.


You have to expand the card to see all the ports. Before we configure the LUN, we have to ensure that the SAN admin has created a LUN of type Windows. We also have to share our IQN with the SAN admin.
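For reference, once the SAN admin has the host's IQN, the filer-side work on a 7-mode system looks roughly like this (a sketch; the igroup name, volume path, LUN size and IQN are all placeholders that your SAN admin will substitute):

filer> igroup create -i -t windows ig_sqlbox iqn.2000-04.com.qlogic:qle4062c.xxxxxxxx

filer> lun create -s 100g -t windows /vol/vol1/sql_lun0

filer> lun map /vol/vol1/sql_lun0 ig_sqlbox 0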

10. You can find the link status and the IQN from here; this needs to be shared with the storage person.


11. Select Wizards from the top menu and choose "General Configuration Wizard". Select the HBA port and click Next.


12. It will show the driver version and MAC address.


13. Provide an IP address for the HBA.


14. Don't choose anything if you are not using IPv6.


15. Don't select iSNS if you are not using an iSNS (Internet Storage Name Service) server.


16. This part is very crucial: we have to add the target IP address of the filer. Click on the green + button; it will ask for the target IP address.


17. Select Next after adding; it will show a summary.


18. Once we hit "Next", it will give a warning message.


19. Now comes a very important step. It will ask for a password, and the password is "config". You can find this in the admin guide; it is not the root password or anything else, just the default password for SANsurfer HBA Manager.


Once it finishes, you can find the new space under "Disk Management".
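From there it is standard Windows work to bring the disk into service; on Server 2008 the diskpart version would look something like this (a sketch; the disk number 1 and drive letter E are placeholders):

C:\> diskpart

DISKPART> list disk

DISKPART> select disk 1

DISKPART> online disk

DISKPART> create partition primary

DISKPART> format fs=ntfs quick

DISKPART> assign letter=E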