Saturday, May 31, 2008

Best practices for deploying Citrix on VMware ESX

There is a discussion going on a Dutch site about how to run Citrix on VMware. The conclusion was that the VMs perform better if you disable the balloon driver, which gets installed as part of the VMware Tools. A VMware KB article explains how to disable the balloon driver; see the sketch below.
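For reference, my understanding of the KB's approach (treat the parameter name as an assumption and verify it against the article): with the VM powered off, add this line to its .vmx file to cap the balloon driver at zero:

sched.mem.maxmemctl = "0"

Alternatively, choose a custom VMware Tools install and simply deselect the memory control (balloon) driver, as the tips below also suggest.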


The goods:
Virtual Infrastructure 3
Windows 2003 Std (or Enterprise) Edition R2 (x86, not x64)
Citrix Presentation Server 4.0 (yes, I know, the old one ;))
The tips:
First this: it all depends on the applications used! Context switches are the key here...
Use Windows 2003, not Windows 2000
Don’t P2V your servers, but use clean templates
Make sure the correct HAL (single or multi) is installed in the virtual machine. Otherwise, your vCPU will spike.
Always assign 1vCPU. If necessary, add a 2nd vCPU. Do not use 4 vCPUs!
Use 2 GB to start. Scale up to around 4 GB of vRAM if necessary
Use 1 .vmdk for your system partition (C:\ or other remapped drive letter) and 1 separate .vmdk for your program files.
Put the page file on the 2nd .vmdk
Important: disconnect any .iso file in your virtual CD-Rom
Use roaming profiles and cleanup your profiles at logoff
Disable sound for your published apps
Install the UPH service (download it here)
User sessions: for me, 30 users on a VM is the sweet spot. Do not expect to get as many users on it as on a physical box!
Scale out, not up. A major advantage of VMs is that you can clone/NewSID/sysprep existing servers and put them into your existing Citrix farm. Just stop and disable the IMA service, clean up your RMLocalDB (if you use Enterprise edition) and NewSID the clone (see the sketch after this list). Refer to this support article for more info.
Use dual-core or quad-core systems, so that ESX has more cores to schedule its vCPUs on.
Don’t ever use a 2 vCPU Citrix virtual machine in a 2 pCPU physical machine!
Do not install the memory ballooning driver while installing the VMware Tools
Do not use a complete installation of VMware Tools: there is an issue with roaming profiles and the Shared Folders component. See my previous article for more info.
Disable COM ports, hyperthreading, visual effects & use speedscreen technology where possible.
Use snapshots when installing applications or patching your servers (yes! With VMware you can do this!). In case of disaster, you can still revert to the original working server without using backups. Make sure all snapshots are removed ASAP when finished!
Always check that there are no snapshot leftovers (e.g. the infamous _VCB-BACKUP_ snapshot left behind when using VCB)
Don’t forget you can use DRS rules to run your Citrix servers on separate physical hosts.
Check out this vmworld 2006 presentation
And last but not least: do not forget to read ESX's (excellent) performance tuning white paper.
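As a rough sketch of the cloning prep mentioned above (the service and tool names are from memory, so verify them against the Citrix support article before use):

net stop IMAService
sc config IMAService start= disabled
rem clean up the local RM database (Enterprise edition only), then run Sysinternals NewSID:
newsid /a NEWSERVERNAME

Re-enable and start the IMA service once the clone has joined the farm.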

Wednesday, May 28, 2008

Scalable Storage Performance with VMware ESX Server 3.5 - VMware VROOM!

VMware published a post on their performance blog, “Scalable Storage Performance with VMware ESX Server 3.5”, but I have some serious questions about the way they reached their conclusions. I asked them:

Regarding your statement “The maximum supported value is most commonly 256. For an I/O group (ESX Server(s) – LUN), it is important that the number of active SCSI commands does not exceed this value”: what does it mean in terms of VMDKs? How many VMDKs should I place so that this value is not exceeded? My understanding of queue depth is that I can change the queue depth at the host level to match the frame, and then push that many I/Os from the host. For example, if the queue depth of the frame is around 1000 and you have set the hosts at around 540, you can get into potential problems; I can see some SCSI aborts in my vmkernel logs. To fix this I can change the queue depth at the host, and that takes care of the error.
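(For reference: on ESX 3.x the host-side HBA queue depth is set through the HBA driver module options. A rough sketch for a QLogic HBA follows — the module name and option vary by driver version, so treat these as placeholders and check the SAN configuration guide:

esxcfg-module -s ql2xmaxqdepth=64 qla2300_707
esxcfg-boot -b

Then reboot the host. For Emulex HBAs the equivalent option is lpfc_lun_queue_depth on the lpfc module.)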

I would also like to know what the block size of the VMDKs was, and whether they were aligned during your test.

We at VMware often get questions about how aggressively physical systems can be consolidated. Scalability on heavily-consolidated systems is not just a nice feature of VMware ESX Server, but is a requirement to support demanding applications in modern datacenters. With the launch of VI3 with ESX Server 3.5 we’ve further improved the efficiency of our storage system. For non-clustered environments, we’ve already shown in this comparison paper that our system overheads are negligible compared to physical devices. In this article we’d like to cover the scalable performance of VMFS, our clustered file system.

ESX Server enables multiple hosts to reliably share the same physical storage through its highly optimized storage stack and the VMFS file system. There are many benefits to a shared storage infrastructure, such as consolidation and live migration, but people commonly wonder about performance. While it is always desirable to squeeze the most performance out of the storage system, care should be taken not to severely over-commit the available resources, which can lead to performance degradation. Specifically, the primary factors that affect the shared storage performance of an ESX Server cluster are as follows:

1. The number of outstanding SCSI commands going to a shared LUN

SCSI allows multiple commands to be active on a link, and SCSI drivers support a configurable parameter called “queue depth” to control this. The maximum supported value is most commonly 256. For an I/O group (ESX Server(s) – LUN), it is important that the number of active SCSI commands does not exceed this value, otherwise the commands will get queued. Excessive queuing leads to increased latencies and potentially a drop in throughput. The number of commands queued per ESX Server host can be derived using the esxtop command.
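(A quick aside from me: in my experience the per-adapter numbers show up in esxtop’s disk view — press d — where the ACTV and QUED columns report active and queued commands. I am quoting the column names from memory, so verify them on your build.)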

2. SCSI reservations


VMFS is a clustered file system and uses SCSI reservations to implement on-disk locks. Administrative operations, such as creating/deleting a virtual disk, extending a VMFS volume, or creating/deleting snapshots, result in metadata updates to the file system using locks, and hence result in SCSI reservations. A reservation causes the LUN to be available exclusively to a single ESX Server host for a brief period of time. It is therefore preferable that administrators perform the above-mentioned operations during off-peak hours, especially if there will be many of them.

3. Storage device capabilities

The capabilities of the storage array play a role in how well performance scales with multiple ESX Servers. The capabilities include the maximum LUN queue depth, the cache size, the number of sequential streams, and other vendor-specific enhancements. Our results have shown that most modern Fibre Channel storage arrays have enough capacity to provide good performance in an ESX Server cluster.

We’re glad to share with you some results from our storage scalability experiments. Our hardware setup includes 64 blades running VMware ESX Server 3.5. They are connected to a storage array via 2Gbps Fibre Channel links. All hosts share a single VMFS volume, and virtual machines running IOmeter generate a heavy I/O load to that one volume. The queue depth for the Fibre Channel HBA is set to 32 on each ESX Server host, which is exactly how many commands are configured to be generated by all virtual machines on a single host. We measure two things:

• Aggregate Throughput - the sum of the throughput across all virtual machines on all hosts

• Average Latency - the end-to-end average delay per command as seen by any virtual machine in the cluster


It is clear from Figure 1 that except for sequential read there is no drop in aggregate throughput as we scale the number of hosts. The reason sequential read drops is that the sequential streams coming in from different ESX Server hosts are no longer sequential when intermixed at the storage array, and thus become random. Writes generally do better than reads because they are absorbed by the write cache and flushed to disks in the background.

Figure 2 illustrates the effect of commands from all ESX Server hosts reaching the shared LUN on the storage array. Each ESX Server host generates 32 commands, hence at eight hosts we have reached the recommended maximum per LUN of 256. Beyond this point, latencies climb upwards of 100 msec, and could affect applications that are sensitive to latencies, although there is no drop in aggregate throughput.

These experiments represent a specific configuration with an aggressive I/O rate. Virtual machines deployed in typical customer environments may not have as high a rate and therefore may be able to scale further. In general, because of varying block sizes, access patterns, and number of outstanding commands, the results you see in your VMware environment will depend on the types of applications running. The results will also depend on the capabilities of your storage and whether it is tuned for the block sizes in your application. Also, processing very small commands adds some compute overhead in any system, be it virtualized or otherwise. Overall, the ESX Server storage stack is well tuned to run a majority of applications. If you are using iSCSI or NFS, this comparison paper nicely outlines how ESX Server can efficiently make use of the full Ethernet link speed for most block sizes. We’re always pleased to show the scalability of VMware Infrastructure 3, and the file system that supports the VI3 features is a good example. Look for more details on storage and VMFS performance in the form of whitepapers and presentations from VMware and its partners in the coming weeks.

Sizing the LUN for a VMware host

One blogger came up with an approach to sizing the LUN, with this formula:
30 x (your average disk size) + 30 GB VM swap + 15% of (30 x your average disk size) = calculated LUN size.
He takes disk queuing into account (which I suppose means disk I/O) but fails to consider the queue depth at the SP and FC level. To me, when determining LUN sizes and how many VMDKs to place on them, it all comes down to the math of the queue depth: you can place as many as you like until you oversubscribe it. Queue depth would be the biggest factor in determining LUN size, and respecting it avoids bottlenecks as well as SCSI aborts.
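To make the math concrete, here is a quick worked example using a 20 GB average disk size (an illustrative number, not a recommendation):

30 x 20 GB = 600 GB for the virtual disks
+ 30 GB for VM swap
+ 15% of 600 GB = 90 GB headroom
= 720 GB calculated LUN size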

Friday, May 23, 2008

Sunday, May 18, 2008

Robocopy to copy drives across the machine

I was trying to copy the C: and D: drives of my virtual machine so that I could fix the disk alignment by reformatting, per VMware's recommendation.


From DISKPART :

create partition primary align=64
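For completeness, the fuller DISKPART sequence I would expect to run against the blank destination disk (the disk number depends on what list disk shows; this is a sketch, not a tested script):

list disk
select disk 1
create partition primary align=64
assign letter=E
exit

Format the new partition afterwards, then copy the data over.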


Well, how should I make the drives blank? I thought of many options, like NT Backup and restore. I wanted to give robocopy a try but was not sure how it actually works. I searched the web and found an old Microsoft document for the NT version. I contacted my GURU and he told me to use robocopy with the following options:

robocopy <Source> <Destination> /MIR /COPYALL /R:1 /LOG:C:\robocopy.log (note: source and destination come before the options, and in the Robocopy builds I know of, /L means "list only, don't copy", so logging to a file is done with /LOG: or by redirecting the console output). Well, I never tried this, because I was not sure whether I would be wasting my time; I wanted to blog it so that if I need it tomorrow I have it somewhere, rather than digging through my notes. I also read somewhere that a guy ran robocopy /MIR K: \\Mybookworld\public /XD dirs $RECYCLE.BIN /XN /XO on his Vista PC and it worked. Microsoft says it is only available with NT, hmm, quite confusing. If you want to try it, do so at your own risk *grin*.


Here is some of the content from the old Microsoft document:



NEW FEATURES IN THIS VERSION


Restartable Copies :

Specify /Z on the command line and failed file copies will restart from close to the point of failure, which can save a lot of time for large files. Previous versions of Robocopy would always restart failed copies from the beginning of the file.


%Copied Progress Indications :

By default, the program now regularly displays %copied during each file copy. Specify /NP (No Progress) on the command line to turn this feature off.


Wait for sharename creation :

Specify /TBD (sharename To Be Defined) on the command line to request this. Robocopy will then wait for a sharename to be created by retrying on error 67. Previous versions would always quit on this error.


Force creation of short file names :

Specify /FAT on the command line to tell Robocopy to create destination files with names that conform to the traditional 8.3 file name format, rather than long file names. This can be useful when copying to downlevel file systems.


INTRODUCTION


Robocopy is a Win32 console-mode application designed to simplify the task of maintaining an identical copy of a directory tree in multiple locations, either on the same machine, or in separate network locations. Robocopy effectively provides file replication on demand.


Robocopy is robust. If Robocopy encounters recoverable network errors whilst scanning directories or copying files it will simply wait a while and retry the operation. You can control the time between retries and the number of retries to attempt before giving up.


Robocopy is efficient. By default, if a file exists in both the source and destination, Robocopy will only copy the file if the two files have different timestamps, or different sizes. This saves time if the source and destination are separated by a slow network link. Optionally, you may specify that copies are restartable in the event of a copy failure to save even more time if your network links are unreliable.


Robocopy is flexible. You can choose to copy a single directory, or walk a directory tree. You can specify multiple filenames and wildcards to select which source files are candidates for copying. You can exclude source files from being copied by name, path, wildcard, or file attributes. You can exclude directories from being walked by name or path. You can chose to copy only files with the Archive attribute set, and you can choose whether or not to turn off the Archive bit in the source file after copying. The program classifies files by whether or not they exist in the source directory, the destination directory, or both. In the latter case the program further classifies files by comparing time stamps and file sizes between the source file and the corresponding destination file. You have complete control over which of these file classes will be copied. You can also choose to move files rather than copy them. And you can also choose to purge (delete) destination files and directories that no longer exist in the source, and thereby maintain the destination as an exact replica of the source.


Robocopy is informative. Robocopy produces console output (which can be redirected to a disk file for later perusal) which lists the directories processed, which files are copied (and why), network errors, and incompatibilities between the source and destination directory tree structures. Optionally, you can also ask Robocopy to show estimated time of arrival of copied files, list which files are skipped (and why), and highlight differences in the structure of the source and destination trees that might merit further investigation, or require housekeeping. By default, Robocopy displays copy progress indication (% copied) for each file.


Finally, Robocopy runs fine as a scheduled job. Just configure the Schedule service to log on as a user who has appropriate access to the source and destination directories, and specify remote directories as UNC names on the scheduled Robocopy command line.


COMMAND LINE USAGE


Run Robocopy with no command line arguments for brief usage instructions:


-------------------------------------------------------------------------------

ROBOCOPY v 1.70 : Robust File Copy for Windows NT : by kevina@microsoft.com

-------------------------------------------------------------------------------


Started : Wed Aug 28 01:23:45 1996


Usage : ROBOCOPY source destination [file [file]...] [options]


source : Source Directory (drive:path or \\server\share\path).

destination : Destination Dir (drive:path or \\server\share\path).

file : File(s) to copy (names/wildcards - default is "*.*").


options : /S : copy Subdirectories, but not empty ones.

/E : copy subdirectories, including Empty ones.


/T : Timestamp all destination files, including skipped files.


/R:n : number of Retries on failed copies - default is 1 million.

/W:n : Wait time between retries - default is 30 seconds.

/REG : Save /R:n and /W:n in the Registry as default settings.


/TBD : wait for sharenames To Be Defined (retry error 67).

/FAT : create destination files using 8.3 FAT file names only.


/X : report all eXtra files, not just those selected.

/V : produce Verbose output, showing skipped files.

/L : List only - don't copy, timestamp or delete any files.


/A+:[R][A][S][H] : add the given Attributes to copied files.

/A-:[R][A][S][H] : remove the given Attributes from copied files.


/XA:[R][A][S][H] : eXclude files with any of the given Attributes

/A : copy only files with the Archive attribute set.

/M : like /A, but remove Archive attribute from source files.


/XF file [file]... : eXclude Files matching given names/paths/wildcards.

/XD dirs [dirs]... : eXclude Directories matching given names/paths.


/XC /XN /XO : eXclude Changed Newer Older files.

/XX /XL : eXclude eXtra Lonely files and dirs.

/IS : Include Same files.


/Z : Copies files in restartable mode.

/NP : No Progress - don't display % copied.

/ETA : show Estimated Time of Arrival of copied files.

/MOVE : Move files and dirs (delete from source after copying).

/PURGE : delete dest files/dirs that no longer exist in source.


USAGE NOTES


Use within a Unix Shell


All Robocopy switches can be specified Unix-style (e.g. you can use -ETA instead of /ETA), and source and destination directory paths can be specified using the Unix-style delimiter "/" rather than the native Windows-style delimiter "\".


The only restriction is that any argument that starts with a "/" is taken to be a switch if it only contains a single "/". Thus //server/share/dir and /download/test are treated as paths, but /dir is treated as a switch. This is to avoid any possible confusion between switches and single-level paths subordinate to the root of a drive. To specify such a directory as an argument, use an alternate expression for its path, such as X:/dir or //server/C$/dir.


Use with Windows 95 and Windows NT 3.5x


Robocopy is a Unicode application, and will not run under Windows 95, because Windows 95 does not provide full Unicode support.


Also, Robocopy uses the new CopyFileEx() Win32 API, which is specific to Windows NT 4.0, and therefore will not run under Windows NT 3.5x.


Walking a Directory Tree


By default Robocopy will only process the single source directory specified on the command line. To process the whole directory tree beneath the source directory, you should use the /S switch or the /E switch. Both these options will walk the source tree, the only difference being that /S will refrain from creating new empty directories in the destination tree.


Retries


Most file system operations that fail and return an error will cause the program to wait and then retry the operation until it succeeds or the retry limit is reached. By default there will be 30 seconds between retries, and up to one million (1000000) retries. Use /W:n to change the wait time in seconds between retries, and /R:n to change the retry limit, where n is a positive decimal integer, or zero if you do not want retries or wait times between retries. If invalid values are given for /W or /R, the respective default value is used.


To change the default retry parameters, use the /REG switch on a valid Robocopy command that specifies non-default values for /W and /R. When /REG is used, the values you specify for /W and /R will be stored in the Registry and used as default values for /W and /R in future Robocopy runs where /W and /R are not specified on the command line. Simply specify /W and/or /R on the command line to override the stored settings.
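For example (an illustrative command of my own, using only the switches documented above), the following copies a tree with five retries and a ten-second wait between retries, and stores those values as the new defaults for future runs:

ROBOCOPY \\server1\share1\source \\server2\share2\dest /S /R:5 /W:10 /REG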


Note that certain errors in some operations will not be retried, where practical experience indicates this would be futile. For example, an error of "Network Name Not Found" usually indicates that a remote computer exists, but does not have a sharepoint with the given name. As manual intervention will be required to correct this (by creating a suitable sharename on the remote computer), this error is generally not retried, and the attempted operation fails.


However, in some instances, this might not be the correct action for this error. For example, in a software publishing environment it is quite common to delete a sharename, update the contents of the sharepoint, then recreate the sharename. In this situation you should use the /TBD switch, which instructs the program to retry when it encounters a "network name cannot be found." error, on the assumption that we are waiting for the sharename to be defined.


Normally Robocopy will restart failed copies from the beginning of the file. You can override this default behaviour by specifying the /Z switch, which requests restartable copies - with /Z, failed copies will restart from close to the point of failure rather than the beginning of the file. There is one exception to this - if the file's size or timestamp is modified between retries, the file has obviously been changed, and the copy is restarted from the beginning of the file.


Specifying File and Directory Names


By default, Robocopy assumes that any non-switch command-line argument is some form of file name, path, or wildcard. These may be intermingled with switch arguments, but the command line is easier to read if they are grouped together. The actual meaning of such a name depends on where it appears in the command in relation to any /XF or /XD switches.


The command line is parsed from left to right. There must be two non-switch arguments before any /XF or /XD switch, and these are taken to be the pathnames of the source and destination directories respectively.


Thereafter any non-switch argument is taken to be an Include Filespec - either a filename or wildcard (but not a path) naming one or more files (or sets of files) to include and consider as candidates for copying, until a /XF or /XD switch is found.


Note that if no Include Filespecs are found in the command line, a default of "*.*" (all files) is assumed. Also note that these Include Filespecs must be specified as individual arguments separated from other arguments by white space, and not appended to the source or destination directory pathnames as in, for example, the Xcopy command.


/XF (eXclude Files) informs the program that subsequent filenames, paths and wildcards are Files to exclude from copying (Exclude Filespecs rather than Include Filespecs), until a subsequent /XD switch is found.


/XD (eXclude Directories) informs the program that subsequent filenames and paths are Directories to exclude from copying (Exclude Dirspecs rather than Include or Exclude Filespecs), until a subsequent /XF switch is found.


Note the subtle differences in allowed values here :


Arguments               Names   Paths   Wildcards

Source Directory        Yes     Yes     NO
Destination Directory   Yes     Yes     NO
Include Filespecs       Yes     NO      Yes
Exclude Filespecs       Yes     Yes     Yes
Exclude Dirspecs        Yes     Yes     NO


Example : ROBOCOPY c:\source d:\dest *.c* /XF *.cpp /S /XD bin c:\source\unwanted


This command would cause Robocopy to walk the directory tree whose root is c:\source, except subdirectories named "bin", and the subdirectory c:\source\unwanted. Files whose extensions begin with "c" will be copied, except those whose extension is ".cpp".


Robocopy File Classes


For each directory processed Robocopy constructs a list of files matching the Include Filespecs, in both the source and destination directories. The program then cross-references these lists, seeing which files exist where, comparing file times and sizes where possible, and places each selected file in one of the following classes :


                  Exists in   Exists in        Source/Dest     Source/Dest
File Class        Source      Destination      File Times      File Sizes

Lonely            Yes         No               n/a             n/a
Same              Yes         Yes              Equal           Equal
Changed           Yes         Yes              Equal           Different
Newer             Yes         Yes              Source > Dest   n/a
Older             Yes         Yes              Source < Dest   n/a
Extra             No          Yes              n/a             n/a
Mismatched        Yes (file)  Yes (directory)  n/a             n/a


By default, Changed, Newer and Older files will be considered to be candidates for copying (subject to further filtering described below), Same files will be skipped (not copied), and Extra and Mismatched files (and directories) will simply be reported in the output log.


Use the following switches to override this default behaviour :

/XL eXclude Lonely files

/IS Include Same files

/XC eXclude Changed files

/XN eXclude Newer files

/XO eXclude Older files


Use the following switch to suppress the reporting and processing of Extra files :

/XX eXclude eXtra files


To just make sure the destination tree includes the current version of all source files, you do not need to specify any of these arguments. Robocopy's default behaviour will be all you need for most situations.


Use /XO with caution. If you terminate Robocopy whilst it is copying, any incompletely copied file will almost certainly have a later file time than the source file. If you restart the same copy, Robocopy will see this file as an Older file, and will skip it if you use /XO. Bear this in mind if you are using /XO and need to kill a copy. The most appropriate use for /XO is to synchronise two directory trees that can each be updated simultaneously in disjoint areas. Using /XO and copying first in one direction, and then the other, will ensure that the latest files are present in both directory trees.
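To illustrate the two-way synchronisation described above (my own example, built from the documented switches), one pass in each direction does the job:

ROBOCOPY \\siteA\share \\siteB\share /S /XO
ROBOCOPY \\siteB\share \\siteA\share /S /XO

After both passes, the newest version of every file is present in both trees.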


Note that specifying /XL restricts the program to copying files from the source directory tree only if a corresponding file of the same name already exists in the destination. This provides a convenient mechanism for maintaining a copy of a selected subset of the source tree.


Also note that specifying /IS on its own with no other selection switches forces a total refresh of the destination tree, should you ever need to do this.


File Times and File Names


It is recommended that you make sure that both the source and destination directories reside on an NTFS partition, wherever possible. Copying to downlevel file systems (HPFS or FAT) will work, but you may lose long filenames, and file times may suffer from rounding errors on the destination. This is of course due to the superior capabilities of the NTFS file system.


For example, file time granularity is 100 nanoseconds on NTFS, but only 2 seconds on FAT, so copying a file from NTFS to FAT always causes file times to be rounded to a value that the FAT file system can handle. Accordingly, Robocopy considers file times to be identical if they are within two seconds of each other. Without this 2-second leeway, the program might erroneously classify many otherwise unmodified files as Older or Newer files, which would result in a great deal of unnecessary copying of these unchanged files.


Sometimes this handling of file times needs to be overridden. For example, assume an NTFS tree is copied temporarily to FAT, then later the FAT tree (and all its rounded FAT file times) is copied to a local NTFS drive. Later, there may be a need to recreate the original tree exactly. Refreshing the whole tree would do the job, but it would be inefficient for a large tree. In such a scenario you should use the /T switch to force the copying of just file times for Same files, rather than the whole file that would be copied if /IS was used.


When the destination is on a FAT or HPFS partition, you may also experience problems when copying files and directories with long names, or whose names include extended Unicode characters. To overcome these problems, use the /FAT switch. This tells Robocopy to create destination files using system-generated names in the standard 8.3 FAT file system format, rather than trying to create long or extended filenames on downlevel file systems.


Attribute Processing


By default, Robocopy ignores source file attributes when selecting files to copy - any file matching other specified conditions will be copied regardless of its attribute settings.


The /A and /M switches both modify this behaviour, and cause only those source files with the Archive attribute set to be selected for copying. After copying the Archive attribute of the source file is left unmodified (still set) if /A was used, or reset (turned off) if /M was used.


Furthermore, the /XA:[R][A][S][H] (eXclude Attributes) switch can be used to exclude files from being copied if one or more of the given attributes is set. For example, you could specify /XA:R to prevent Read-only files from being copied. Similarly, /XA:SH would prevent any files with either or both of the System or Hidden attributes set from being copied.


After a file is successfully copied to the destination, the destination file's attributes are set to match those of the source file, except for the Archive bit, which is always set (turned on). This is to identify newly copied files and make it easy to back them up.


To modify this default behaviour, /A+:[R][A][S][H] (Attribute add) and /A-:[R][A][S][H] (Attribute subtract) can be used. For example, /A-:A would cause the Archive attribute to be reset, and /A+:R would render all copied files Read-only in the destination.


The exact order of attribute operations on newly copied destination files is as follows :

1. Attributes are copied from the corresponding source file.

2. The Archive attribute is set (turned on).

3. Attributes specified by /A+:[R][A][S][H] are set (turned on).

4. Attributes specified by /A-:[R][A][S][H] are reset (turned off).


Moving Files


Rather than copying files, it is often desirable to move them instead, especially if disk space is at a premium on your network. Robocopy's /MOVE switch provides this facility. It causes source files to be deleted from the source directory tree after they have been successfully copied to the destination.


Note that even with /MOVE specified, Robocopy will only delete those source files that it successfully copies to the destination. This applies even to skipped Same files as there is no absolute guarantee that a skipped source file is identical to its corresponding destination file, even if the file times and sizes are identical, until immediately following a successful copy.


Therefore it is perfectly normal for files and directories to remain in the source tree even after a Robocopy has walked the tree with /MOVE specified. The user must decide whether or not it is safe to delete the remaining entries, and, if so, delete them manually.
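To illustrate a simple move of a whole tree (an example of mine, using the switches documented above):

ROBOCOPY c:\outbox \\server\archive /E /MOVE

Files are deleted from c:\outbox only after being successfully copied; skipped files, and the directories that contain them, remain behind for manual cleanup as described above.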


True Replication


If you require the destination directory tree to be maintained as an exact mirror of the source directory tree, and have files and directories deleted from the destination whenever they disappear from the source, you can use the /PURGE switch (at your own risk).


/PURGE causes Robocopy to delete ALL Extra and Mismatched destination files and directories. After a Mismatched destination entry has been deleted, the corresponding source entry is then treated as a Lonely file or directory, and processed accordingly.


Where /PURGE results in the deletion of an Extra or Mismatched destination directory, the entire directory tree, including all subordinate files and subdirectories, will be deleted.


You should use /PURGE with extreme caution. If you specify /PURGE along with an incorrect (but existing) destination directory, Robocopy WILL DELETE lots of data from the destination very quickly. You use the /PURGE option at your own risk.


Note that the /XX switch excludes Extra files from further processing, so /PURGE will have no effect if /XX is also used.


Scheduling Robocopy


You can use the Windows NT AT command, or the Resource Kit SOON command, with the Windows NT Schedule Service to create Robocopy jobs that run regularly in the background, to automatically maintain local mirrors of remote directory trees.


By default the Schedule Service logs on as the System Account for the local system, which has no network access. Scheduled jobs run in the same context as the Schedule Service. So, in order to successfully schedule a Robocopy job, some extra configuration is required. There are basically two options:


Option 1) Leave the Schedule Service running in the context of the local System Account, and schedule batch files of the following form:

NET USE \\remoteserver\IPC$ /USER:userid password
ROBOCOPY \\remoteserver\sourcepath \\localserver\destpath ...
NET USE \\remoteserver\IPC$ /DEL

Within each such job, credentials with remote servers are established by connecting to the IPC$ (Inter Process Communication) share on remote machines using an appropriate user account, rather than as the local System account. The disadvantage of this approach is that user passwords are stored in the batch files, but they can be protected by storing the batch files on an NTFS partition, and applying appropriate NTFS file security to the batch files.


Option 2) In Control Panel/Services/Schedule Service/Startup configure the Schedule Service to use a "real" user account that you have created in User Manager, by specifying an appropriate User Id and Password in the "Log On As" section. Then you should Stop and Start the Schedule Service to get it running in the new user context. Once the nominated user has been granted appropriate access to source and destination servers, you can now schedule Robocopy jobs to copy files between them.


Finally, as drive mappings can be changed by users, it is generally best to use UNC names for source and destination directories in scheduled Robocopy jobs, as these explicitly specify file locations, and are more reliable. I.e. rather than scheduling a command of the form:

ROBOCOPY X:\source Y:\dest ...

for increased reliability, you should use commands of the form:

ROBOCOPY \\server1\share1\source \\server2\share2\dest ...


AT and the Schedule Service are documented in the Windows NT Commands on-line help. SOON is documented in the Windows NT Resource Kit on-line documentation.


THE OUTPUT LOG


Robocopy outputs a log of files and directories processed to its console window. This output can be redirected to a disk file if required.


Each log line starts with a brief text tag, which is formatted according to the following rules :


*CAPITALS indicate an anomaly that should be investigated.


Leading Caps indicate a file that was selected for copying.


lowercase indicates a file that was skipped.

(lowercase tags will only be seen if /V is used).


Tags that indicate copying are left-aligned, tags that indicate skipping are right-aligned, tags that indicate anomalies are placed further to the left than other tags, and error messages always start in column 1. This arrangement simplifies the task of scanning through even a verbose listing, and makes it easier to spot new downloads, anomalies, and network errors.


If few files are copied, the left margin of the output log will be mostly blank. Copied files and anomalies are easily-spotted non-blank entries in the left margin of the output log.


Disregarding error reporting and retries, at most one line of output is produced per source file or directory. Lines for directories show the number of files matching the Include Filespecs in the directory (where known), and the full pathname of the directory. Lines for files indicate what was done with the file, the size of the file, and its name.


One line is also output for each Mismatched file and directory, and also for each Extra file and directory in the destination. These resemble lines for normal directories and files, except that lines for Extra files include the file's full pathname, as an aid to rapid housekeeping.


By default, only Extra files that match the Include Filespecs on the command line will be reported. The rationale for this is that you are probably not interested in spurious *.TXT files in the destination when you are refreshing *.CPP files. However, if you really do need a list of all extra files in the destination, irrespective of their type, you can obtain one by using /X.


By default, no output is produced for skipped files. To obtain a verbose listing which shows all source files matching the Include Filespecs, including skipped files, use the /V switch.


By default, Robocopy will output copy progress information, in the form of a display of the percentage of the file copied so far (% copied). You can use the /NP switch to suppress the display of %copied progress information - useful when output is redirected to a disk file.


To see the start time of each file copy, and the estimated time that the copy should complete (based on the observed throughput of previous copies), use the /ETA switch. These times are displayed after the file name in the format "hh:mm -> hh:mm" (start time -> ETA).


Finally, to obtain a list of the files that would be copied, without committing to the overhead of actually doing the copying, you can use the /L switch.


The following tags apply to files :


File Tag Meaning


*MISMATCH The source file corresponds to a destination directory of the same name.

The source file is skipped as it cannot be copied.

Housekeeping of the destination tree is recommended.


*EXTRA File The file exists in the destination but not the source (an Extra file).

Some housekeeping of the destination tree may be required.


New File The file exists in the source but not in the destination (a Lonely file).

The file is copied because /XL was not used. To skip this file, use /XL.


lonely The file exists in the source but not in the destination (a Lonely file).

The file is skipped because /XL was used. To copy this file, omit /XL.


Newer The source file has a later timestamp than the destination file.

The file is copied because /XN was not used. To skip this file use /XN.


newer The source file has a later timestamp than the destination file.

The file is skipped because /XN was used. To copy this file, omit /XN.


Older The source file has an earlier timestamp than the destination file.

The file is copied because /XO was not used. To skip this file, use /XO.


older The source file has an earlier timestamp than the destination file.

The file is skipped because /XO was used. To copy this file, omit /XO.


Changed The source and destination files have identical timestamps, different sizes.

The file is copied because /XC was not used. To skip this file use /XC.


changed The source and destination files have identical timestamps, different sizes.

The file is skipped because /XC was used. To copy this file, omit /XC.


Same The source and destination files have identical timestamps and sizes.

The file is copied because /IS was used. To skip this file, omit /IS.


same The source and destination files have identical timestamps and sizes.

The file is skipped because /IS was not used. To copy this file, use /IS.


attrib At least one source file attribute matches the attributes specified by /XA.

The file is skipped because of this. To copy this file, modify or omit /XA.


named The file is skipped because it was named in the Exclude Filespecs. To process this file, amend the Exclude Filespecs following /XF.


The following tags apply to directories :


Directory Tag Meaning


(blank) A normal directory.


*MISMATCH This source directory corresponds to a destination file of the same name.

It cannot be processed. Housekeeping of the destination is recommended.


*EXTRA Dir The directory exists in the destination but not the source (an Extra dir).

Some housekeeping of the destination tree may be required.


lonely The directory exists in the source but not the destination (a Lonely dir).

The directory is skipped because /XL was used. To process this, omit /XL.


named The directory is skipped because it was named in the Exclude Dirspecs. To process this directory, amend the Exclude Dirspecs following /XD.


THE RUN SUMMARY


Just before Robocopy terminates, it outputs a summary of its activities during the run to its console window (or disk file if redirected) in the following format:


Total Copied Skipped Mismatch FAILED Extras

Dirs : 75 0 75 0 0 0

Files : 960 13 947 0 0 1

Bytes : 19.8 m 190.0 k 19.6 m 0 0 12.5 k

Times : 0:16.914 0:03.504 0:00.000 0:13.410


This summarises the volume of data processed. The first column shows the total number of files and directories processed and the total size of files matching the Include Filespecs in the source. The other columns provide a breakdown of these grand totals as follows:


Copied: subtotals for directories created and files actually copied.

Skipped: subtotals for directories walked but not created, and files skipped.

Mismatch: subtotals for Mismatched files and directories.

FAILED: subtotals for items not processed successfully within the retry limit.

Extras: subtotals for items present in the destination but not the source.


The second section of the summary provides timing information for the run. Total time should be self-explanatory. This is broken down as follows:


Copied: time spent actually copying files, but excluding retry wait times.

FAILED: time spent waiting between retries for failed operations.

Extra: time spent scanning directories, and doing everything else.


Large times in the FAILED column usually indicate that network problems were experienced.


-----oooooo-----





Saturday, May 17, 2008

Hypervisor and I/O

I was reading a blog post by Avi, who is a kernel developer, and found it very interesting. In his post he explains why I/O is so important for a hypervisor and how vendors like VMware and Xen maintain it. It's true that VMware has its own proprietary hypervisor, which means any development or modification can be made ONLY by VMware, whereas Xen is open with its hypervisor, so any kernel developer like Avi can change it to add drivers. It's also true that the hypervisor takes an I/O hit, because all the drivers and communication go through this I/O layer. You can read his complete post here


--------------------------------------------------
I/O performance is of great importance to a hypervisor. I/O is also a huge maintenance burden, due to the large number of hardware devices that need to be supported, numerous I/O protocols, high availability options, and management for it all.
VMware opted for the performance option, by putting the I/O stack in the hypervisor. Unfortunately the VMware kernel is proprietary, so that means VMware has to write and maintain the entire I/O stack. That means a slow development rate, and that your hardware may take a while to be supported.
Xen took the maintainability route, by doing all I/O within a Linux guest, called "domain 0". By reusing Linux for I/O, the Xen maintainers don't have to write an entire I/O stack. Unfortunately, this eats away performance: every interrupt has to go through the Xen scheduler so that Xen can switch to domain 0, and everything has to go through an additional layer of mapping.
Not that Xen solved the maintainability problem completely: the Xen domain 0 kernel is still stuck on the ancient Linux 2.6.18 release (whereas 2.6.25 is now available). These problems have led Fedora 9 to drop support for hosting Xen guests, leaving kvm as the sole hypervisor.
So how does kvm fare here? Like VMware, I/O is done within the hypervisor context, so full performance is retained. Like Xen, it reuses the entire Linux I/O stack, so kvm users enjoy the latest drivers and I/O stack improvements. Who said you can't have your cake and eat it?

Thursday, May 15, 2008

Happy new version, VMware!

Some guys are so fed up with the many VMware versions that they have started making fun of it. One of them, Schley Andrew Kutz, wrote on the servervirtual blog:
Hey VMware, it’s me again. I know you’re probably still mad at me for last week. Well, I’m going out on a very public limb here to apologize for something that I did.
I’m sorry that I forgot your version.
Yes, you let everyone know that your version was coming up, but I forgot to create a calendar reminder for it and I just plain forgot. You know how that goes, right?
Now I don’t mind owning up to my bad memory, but here’s the thing – you have sooo many versions! Most people just have one version per year, you have at least five. There’s the version for VMware Infrastructure (VI), currently at 3.5. ESX is already 3.5 versions old, and ESX 3i has its own version too. Then there is the VirtualCenter and the VI client at 2.5. VMware Consolidated Backup (VCB) is straggling behind at 1.1. I think the VI SDK is also 2.5 versions old, but with the VI Perl Toolkit at version 1.5 and the VI Toolkit (for Windows) in beta, it is hard to keep up.
VMware, your enterprise portfolio has expanded far beyond simply ESX, and none but two of the versions align. Therefore, with so many available products, it is fast becoming impossible to understand which version works with which. You should release minor point releases between major revisions in order to maintain a consistent major version number for your enterprise product offerings.
I know you’re a busy company, and it is hard to get everybody together on one day out of the year to celebrate your version, but I beg you, please try. Except for those closest to you, it is getting extremely difficult to remember your versions, or figure out which version we actually mean. Here’s an idea: for the rest of the year, skip all of your versions and then start over your versions all at once on a single day. Maybe even at VMworld? It can be your special version day. I’ll even bring party hats and cake (if you will invite me.)


VMware Infrastructure 4 (VI4) can include:

- ESX 4

- ESX 4i

- VirtualCenter 4

- VI SDK 4

- VI Perl 4

- VI Toolkit (for Windows) 4

- VCB 4
I know it will throw people off at first; your customers might think they missed some of your versions. However, I think in the end you’ll have a lot of people thanking you.
I feel real bad about missing your version, and I don’t want to let the announcement pass me by again. Maybe I should use Outlook?

Wednesday, May 7, 2008

Dr. Kumar Vishwas: A Promising Poet of the Young Generation

One of my friends came to me and said he would introduce me to a very promising poet named Dr. Kumar Vishwas. Though I am not a big fan of poetry, I wanted to listen. The lines go like this:

कोई दीवाना कहता है, कोई पागल समझता है !
मगर धरती की बेचैनी को बस बादल समझता है !!
मैं तुझसे दूर कैसा हूँ , तू मुझसे दूर कैसी है !
ये तेरा दिल समझता है या मेरा दिल समझता है !!

मोहब्बत एक एहसासों की पावन सी कहानी है !
कभी कबीरा दीवाना था कभी मीरा दीवानी है !!
यहाँ सब लोग कहते हैं, मेरी आंखों में आँसू हैं !
जो तू समझे तो मोती है, जो ना समझे तो पानी है !!

समंदर पीर का है अन्दर, लेकिन रो नही सकता !
यह आँसू प्यार का मोती है, इसको खो नही सकता !!
मेरी चाहत को दुल्हन बना लेना, मगर सुन ले !
जो मेरा हो नही पाया, वो तेरा हो नही सकता !!

भ्रमर कोई कुमुदनी पर मचल बैठा तो हँगामा
हमारे दिल में कोई ख्वाब पला बैठा तो हँगामा,
अभी तक डूब कर सुनते थे हम किस्सा मुहब्बत का
मैं किस्से को हक़ीक़त में बदल बैठा तो हँगामा !!!

Well, this was awesome. I liked the way he recites his poems. It reminded me of my childhood, when I used to go to the Hasya Kavi Sammelan on Holi evening at my school in Darbhanga. It used to make us laugh a lot, because we were already high (we used to consume bhang). Believe me, this guy has a promising voice that kept me spellbound. While listening to him, I thought of Raju Srivastav (the comedy actor); some of his jokes were not good, but the way he presents them makes a lot of difference to the audience. I googled for videos of Dr. Kumar's poems and found some, though I am still trying to find a complete video.

I will keep adding more as I find them. I am not sure whether I should be asking Dr. Kumar for permission, but I just want to salute him for these wonderful words.

Monday, May 5, 2008

Server Virtualization with Cisco: Networking Best Practices


Server Virtualization M au riz io Portolani Network Implications & Best Practices P r e s e n ta tio n _ ID ©2 0 0 6 C is c o S y s te m s , In c . A ll r ig h ts r e s e r v e d . C is c o C o n fid e n tia l 1 Session Objectives At the end of the session, the participants should b e ab le to: Objective 1: Understand key concepts of server virtu al iz ation arch itectu res as th ey rel ate to th e netw ork. Objective 2: E x pl ain th e im pact of server virtu al iz ation on D C netw ork desig n ( E th ernet & F iber C h annel ) Objective 3: D esig n C isco D C netw orks to su pport server virtu al iz ation environm ents P r e s e n ta tio n _ ID ©2 0 0 6 C is c o S y s te m s , In c . A ll r ig h ts r e s e r v e d . C is c o C o n fid e n tia l 2 A g end a VMware Architecture and Components VMware L AN N etwork ing Cisco/ VMware D C D E S I G N S B l ade S erv er D esig ns S torag e I mpl ications of S erv er Virtual iz ation P r e s e n ta tio n _ ID ©2 0 0 6 C is c o S y s te m s , In c . A ll r ig h ts r e s e r v e d . C is c o C o n fid e n tia l 3 V ir tu a l iz a tion App Guest OS VM App Guest OS App Guest OS VM App Guest OS App M o d i f i ed OS VM App M o d i f i ed OS Mof ied S tripped D own O S with H y perv isor CPU CPU ¢ ¡ H y p e r v is o r H ost O S ¢ ¡ Mof ied S tripped D own O S with H y perv isor CPU ¡ ¢ VM w a r e M ic r o s o ft XEN aka Pa r a v i r t u a l i z a t i o n P r e s e n ta tio n _ ID ©2 0 0 6 C is c o S y s te m s , In c . A ll r ig h ts r e s e r v e d . C is c o C o n fid e n tia l 4 M ig r a tion VMotion, a VM to b e r H a rd w a re inte r r u p t s k a VM Mig r a tion a l l ow s a e a l l oc a te d on a d if f e r e nt w ith ou t h a v ing to e r v ic e . D ow ntim e in th e or d e r of f e w m il l is e c ond s to f e w m inu te s , not h ou r s or d a y s C a n b e u s e d to p e r f or m Maintenance on a s e r v e r , 2 ty p e s of Mig r a tion: VM o t i o n M i g r a t i o n R e g u la r M ig r a tio n OS Console OS App. App. VMware Virtualization Layer OS OS VMware Virtualization Layer H y p e r v is o r H y p e r v is o r C a n b e u s e d to s h if t w or k l oa d s m or e e f f ic ie ntl y ¤ £ CPU CPU ¢ ¡ ¡ ¢ £ ¤ Console OS 5 App. P r e s e n ta tio n _ ID ©2 0 0 6 C is c o S y s te m s , In c . A ll r ig h ts r e s e r v e d . C is c o C o n fid e n tia l M a na g em ent D a ta c e n te r D a ta c e n te r D a ta c e n te r P r e s e n ta tio n _ ID ©2 0 0 6 C is c o S y s te m s , In c . A ll r ig h ts r e s e r v e d . C is c o C o n fid e n tia l 6 V M w a r e A r ch itectu r e in a N u tsh el l Mg m t N e tw or k App. App. App. C o n so l e OS VM K e r ne l N e tw or k P r od u c tion N e tw or k OS OS OS Vir tu a l Ma c h ine s VM Vir tu a l iz a tion L a y e r P h y s ic a l H a r d w a r e CPU E S X S erv er H ost C is c o C o n fid e n tia l … ¡ ¢ P r e s e n ta tio n _ ID ©2 0 0 6 C is c o S y s te m s , In c . A ll r ig h ts r e s e r v e d . 7 V M w a r eH A C l u ster ing App1 App1 Guest OS App2 Guest OS App3 Guest OS App4 Guest OS Guest OS App5 Guest OS App2 Guest OS H y p e r v is o r H y p e r v is o r H y p e r v is o r E S X H ost 1 CPU CPU ¢ ¡ E S X H ost 2 CPU ¢ ¡ E S X H ost 3 ¡ ¢ P r e s e n ta tio n _ ID ©2 0 0 6 C is c o S y s te m s , In c . A ll r ig h ts r e s e r v e d . 
C is c o C o n fid e n tia l 8 Application-le v e l H A clu s te r ing (Provided by MSCS, V erit a s et c …) App1 Guest OS App2 Guest OS App3 Guest OS App4 Guest OS App1 Guest OS App5 Guest OS App2 Guest OS H y p e r v is o r H y p e r v is o r H y p e r v is o r E S X H ost 1 CPU CPU ¢ ¡ E S X H ost 2 CPU ¢ ¡ E S X H ost 3 ¡ ¢ P r e s e n ta tio n _ ID ©2 0 0 6 C is c o S y s te m s , In c . A ll r ig h ts r e s e r v e d . C is c o C o n fid e n tia l 9 HA +DRS HA takes care of Powering on V M s on av ail ab l e E S X h osts in th e l east p ossib l e tim e (regu l ar m igration, not V M otion b ased ) D R S takes care of m igrating th e V M s ov er tim e to th e m ost ap p rop riate E S X h ost b ased on resou rce al l ocation (V M otion m igration) P r e s e n ta tio n _ ID ©2 0 0 6 C is c o S y s te m s , In c . A ll r ig h ts r e s e r v e d . C is c o C o n fid e n tia l 10 Q u estions W h ich E S X h ost “interface” is u sed by V irtu al C enter to m onitor and config u re V M s? W h ich E S X h ost “interface” is u sed by iS C S I ? C an I m ig rate a “pow ered on” V M a different one? from a datacenter to H ow l ong does it take for V M w are H A to recover from an E S X h ost fail u re? D oes H A cl u stering req u ire V m otion? P r e s e n ta tio n _ ID ©2 0 0 6 C is c o S y s te m s , In c . A ll r ig h ts r e s e r v e d . C is c o C o n fid e n tia l 11 A g end a VMw a r e A r c h ite c tu r e a nd C om p one nts VMw a r e L A N N e tw or k ing v S w it N ICT v S w it M ig r a ch ea ch tio Ba s i c s m in g v s L AN S w i t c h n , H A, D R S C is c o/ VMw a r e D C D E S I G N S B l a d e S e r v e r D e s ig ns S tor a g e I m p l ic a tions of S e r v e r Vir tu a l iz a tion P r e s e n ta tio n _ ID ©2 0 0 6 C is c o S y s te m s , In c . A ll r ig h ts r e s e r v e d . C is c o C o n fid e n tia l 12 Pe r E S X -s e r v e r c o n f i g u r a t i o n V M w a r e N etw or k ing C om p onents VM s v S w itc h VM N I CS = u p l i n k s VM _ L UN _ 0 0 0 7 v N IC v S w itc h 0 v m n ic 0 VM _ L UN _ 0 0 0 5 v N IC Vi r t u a l Po r t s v m n ic 1 13 P r e s e n ta tio n _ ID ©2 0 0 6 C is c o S y s te m s , In c . A ll r ig h ts r e s e r v e d . C is c o C o n fid e n tia l vN I C M A C a d d r ess / v m fs/ v ol u m es/ 4 6 b 9 d 7 9 a2d e6 e23 e-9 29 d 0 0 1 b 7 8 b b 5 a2c/ V M _ L U N _ 0 0 0 5 /V M _ L U N _ 0 0 0 5 .v m x eth ernet0 . generated Ad d ress = " 0 0 : 5 0 : 5 6 : b 0 : 5 f: 24 „ eth ernet0 . ad d ress = " 0 0 :5 0 :5 6 :0 0 :0 0 :0 6 „ M ech anism s to av oid M AC col l ision V M ’s M AC ad d ress au tom atical l y generated V M ’s M AC ad d resses can b e m ad e static b y m od ify ing th e configu ration fil es eth ernetN . ad d ress = 0 0 :5 0 :5 6 :X X :Y Y :Z Z V M ’s M AC ad d ress d oesn’t ch ange with migration eth ernet0 . ad d ressT y p e = " v p x " eth ernet0 . ad d ressT y p e = „static“ P r e s e n ta tio n _ ID ©2 0 0 6 C is c o S y s te m s , In c . A ll r ig h ts r e s e r v e d . C is c o C o n fid e n tia l 14 vSw itch F or w a r d ing C h a r a cter istics F or w ar d ing b as e d on M AC ad d r e s s ( N o L e ar ning ) : I f tr af f ic d oe s n’t m atch a V M M AC is s e nt ou t to v m nic V M -to-V M V s w itch e s T AG tr af f ic s tay s local tr af f ic w ith 8 0 2 . 1 q V L AN ID v S w itch e s ar e 8 0 2 . 1 q C apab le v S w itch e s can cr e ate E th e r ch anne ls P r e s e n ta tio n _ ID ©2 0 0 6 C is c o S y s te m s , In c . A ll r ig h ts r e s e r v e d . 
C is c o C o n fid e n tia l 15 vSw itch C r ea tion Y O U D O N ’T H AVE T O S E L E CT A N I C T h is is ju s t a n a m e v N I Cs v s w itc h S e l e c t t h e Po r t -G r o u p b y s p e c i f y i n g t h e N E T W O R K L ABE L P r e s e n ta tio n _ ID ©2 0 0 6 C is c o S y s te m s , In c . A ll r ig h ts r e s e r v e d . C is c o C o n fid e n tia l 16 VM P or t-G r ou p vSw itch P r e s e n ta tio n _ ID ©2 0 0 6 C is c o S y s te m s , In c . A ll r ig h ts r e s e r v e d . C is c o C o n fid e n tia l 17 E x am ple C onf ig u r ation Mu l t ip l e Port -G rou p s , s a m e V L A N T h e VL AN n e e d n o t d i f f e r o n d i f f e r e n t Po r t -G r o u p s P r e s e n ta tio n _ ID ©2 0 0 6 C is c o S y s te m s , In c . A ll r ig h ts r e s e r v e d . C is c o C o n fid e n tia l 18 VM w ith 2 vN I C to sa m e vSw itch VM 4, d u a l -h o m e d P r e s e n ta tio n _ ID ©2 0 0 6 C is c o S y s te m s , In c . A ll r ig h ts r e s e r v e d . C is c o C o n fid e n tia l 19 V L AN T ag g ing O ptions E x t ern a l Sw it c h T a g g in g E x te r na l s w itc h ta g s p a c k e t VL AN a s s i g n m e n t C onf ig u r e d b y s e tting th e N e tw or k L a b e l VL A N I D to b e 0 H ow is VM-toVM tr a f f ic s w itc h e d ? ( th r ou g h L A N or th r ou g h v S w itc h ) S w itc h A V M N IC 0 C a n u s e na tiv e VL A N on 8 02 . 1 q tr u nk ( a s l ong a s na tiv e VL A N is not ta g g e d ) Vi r t u a l S w i t c h 1 1 Po r t-Gr o up 1 Vi r t u a l S w i t c h 2 30 Po r t-Gr o up 2 B V M N IC 2 2 31 32 VMs E S X S e rv e r H o s t P r e s e n ta tio n _ ID ©2 0 0 6 C is c o S y s te m s , In c . A ll r ig h ts r e s e r v e d . C is c o C o n fid e n tia l 20 V L AN T ag g ing O ptions V irt u a l Sw it c h T a g g in g v S w itc h ta g g ing V M N IC 0 80 2. 1q t r u n k V M N IC 1 V M N IC 2 V M N IC 3 I t is s e t b y a s s ig ning th e VL A N I D to th e N e tw or k L a b e l in th e P or t-G r ou p P r ov id e s is ol a tion b e tw e e n VL A N s Mos t C om m on D e p l oy m e nt S tr ip s ta g f r om inb ou nd p a c k e t T a g s ou tb ou nd p a c k e ts Vi r t u a l S w i t c h Po r t G r o u p A 1 2 Po r t G r o u p B 30 31 32 VL AN “A” VL AN “B” V i r tua l M a c h i n es E S X S e r v e r H os t P r e s e n ta tio n _ ID ©2 0 0 6 C is c o S y s te m s , In c . A ll r ig h ts r e s e r v e d . C is c o C o n fid e n tia l 21 A g end a VMware Architecture and Components VMware L AN vSw N IC vSw Mig itc h B a s ic s T e a m ing itc h v s L A N S w itc h r a tion, H A , D R S N etwork ing Cisco/ VMware D C D E S I G N S B l ade S erv er D esig ns S torag e I mpl ications of S erv er Virtual iz ation P r e s e n ta tio n _ ID ©2 0 0 6 C is c o S y s te m s , In c . A ll r ig h ts r e s e r v e d . C is c o C o n fid e n tia l 22 M ea ning of N I C T ea m ing in V M w a r e ( 1 ) E S X s e r v e r N ICc a r d s v S w i t c h Up l i n k s v m n ic 0 N ICT e a m in g v N IC v N IC v N IC v m n ic 1 v m n ic 2 v m n ic 3 N ICT e a m in g T H IS IS N O T N ICT e a m in g v N IC v N IC E S X S erv er H ost P r e s e n ta tio n _ ID ©2 0 0 6 C is c o S y s te m s , In c . A ll r ig h ts r e s e r v e d . C is c o C o n fid e n tia l 23 M ea ning of N I C T ea m ing in V M w a r e ( 2 ) T e a m i n g i s Co n f i g u r e d a t T h e v m n ic L e v e l This is NOT Teaming P r e s e n ta tio n _ ID ©2 0 0 6 C is c o S y s te m s , In c . A ll r ig h ts r e s e r v e d . 
Agenda checkpoint: NIC Teaming.

Meaning of NIC Teaming in VMware (1)
NIC Teaming bundles the ESX server NIC cards (vmnic0 through vmnic3), i.e. the vSwitch uplinks.
Teaming multiple vNICs inside a guest is NOT NIC Teaming.

Meaning of NIC Teaming in VMware (2)
Teaming is configured at the vmnic level.
[screenshot slide: the vNIC side of the configuration is NOT Teaming]

Design Example: 2 NICs, VLAN 1 and 2, Active/Standby
[diagram: vSwitch0 with vmnic0 and vmnic1 on 802.1q trunks carrying VLANs 1 and 2; Port-Group 1 (VLAN 2) hosts VM1 and VM2, Port-Group 2 (VLAN 1) hosts the Service Console]

Beacon Probing
Beacon probing attempts to detect failures which don't result in a link-state failure of the NIC.
Broadcast frames are sent from each NIC and should be seen by the other NICs in the team.
Beacons are sent on each VLAN in use.

Active/Standby per Port-Group
[diagram: VMNIC0 uplinks to CBS-left, VMNIC1 to CBS-right; Port-Group 1 (VM5 .5, VM7 .7) and Port-Group 2 (VM4 .4, VM6 .6) use opposite active/standby orders on vSwitch0]

A Port-Group overrides the vSwitch global configuration.

Active/Active
[diagram: vmnic0 and vmnic1 both active for one Port-Group carrying VM1 through VM5]

Active/Active, IP-based load balancing
Works with channel-group mode ON; LACP is not supported. With LACP on the Cisco side the ports get suspended (a switch-side sketch follows this slide group):
9w0d: %LINK-3-UPDOWN: Interface GigabitEthernet1/0/14, changed state to up
9w0d: %EC-5-L3DONTBNDL2: Gi1/0/14 suspended: LACP currently not enabled on the remote port.
9w0d: %LINK-3-UPDOWN: Interface GigabitEthernet1/0/13, changed state to up
9w0d: %EC-5-L3DONTBNDL2: Gi1/0/13 suspended: LACP currently not enabled on the remote port.

Agenda checkpoint: vSwitch vs LAN Switch.

Rolling Failover (aka Preemption)
By default preemption is on (Rolling Failover = No).
[diagram: when the failed vmnic0 recovers, traffic moves back from vmnic1 to vmnic0]
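For the IP-hash teaming above, a hedged switch-side sketch (3750-style syntax with example ports; the channel must be mode ON precisely because the vSwitch speaks neither LACP nor PAgP):

    ! static EtherChannel toward the ESX IP-hash team
    interface range GigabitEthernet1/0/13 - 14
     switchport mode trunk
     channel-group 1 mode on
    !
    ! hash on source/destination IP to match the vSwitch policy
    port-channel load-balance src-dst-ip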
All links active, no Spanning-Tree: is there a loop?
No: a vSwitch never forwards traffic received on one uplink out another uplink, so it cannot create a loop.
[diagram: CBS-left and CBS-right, NIC1 through NIC4, Port-Group 1 (VM5 .5, VM7 .7) and Port-Group 2 (VM4 .4, VM6 .6) on vSwitch1]

Broadcast/Multicast/Unknown Unicast Forwarding in Active/Active (1)
[diagram: vSwitch0 with vmnic0 and vmnic1 on 802.1q trunks (VLANs 1, 2); Port-Group 1 on VLAN 2 with VM1 and VM2]

Broadcast/Multicast/Unknown Unicast Forwarding in Active/Active (2)
[diagram: ESX host vSwitch with NIC1 and NIC2, VM1 through VM3]

Can the vSwitch pass traffic through, e.g. HSRP?
No: traffic received on NIC1 is never bridged out NIC2, so HSRP hellos do not flow through the vSwitch.
[diagram: vSwitch with NIC1 and NIC2, VM1 and VM2]

Can VM1 talk to Server 3? (4 uplinks)
[diagram: vSwitch with Port-Group 1 (VLAN 2, VM1 and VM2) and Port-Group 2 (VLAN 1, Service Console); Server 3 sits on the physical LAN, 802.1q trunks carrying VLANs 1 and 2]

Can VM5 talk to VM4?
[diagram: Catalyst 1 and Catalyst 2 with all links active; ESX server 1 (vSwitch1, VM5 .5 and VM7 .7) and ESX server 2 (VM4 .4 and VM6 .6), 802.1q trunks]

Is this design possible?
[diagram: ESX server 1, vSwitch1 with VMNIC1 and VMNIC2 split across Catalyst 1 and Catalyst 2, both 802.1q]

vSwitch Security
Promiscuous Mode = Reject prevents a port from capturing traffic whose address is not the VM's address.
MAC Address Change = Reject prevents the VM from modifying its vNIC address.
Forged Transmits = Reject prevents the VM from sending out traffic with a different source MAC (e.g. NLB).

vSwitch vs LAN Switch
Similarly to a LAN switch:
Forwarding is based on MAC address, and VM-to-VM traffic stays local.
vSwitches TAG traffic with an 802.1q VLAN ID and are 802.1q capable.
vSwitches can create EtherChannels.
Differently from a LAN switch:
No Spanning-Tree protocol, and no learning.
No 802.3ad LACP and no dynamic trunk negotiation (DTP).
Certain designs can isolate VMs.
Preemption configuration (similar to Flexlinks, but with no preemption delay).
Beaconing doesn't seem to add much value.
No SPAN/mirroring capabilities: traffic capturing is by far not the equivalent of SPAN.
Port security is very limited.
The vSwitch doesn't have the equivalent of UPLINK TRACKING, and two EtherChannels backing each other up is not possible.

Agenda checkpoint: Migration, HA, DRS.

VMotion Migration Requirements
[section title slide]

The VMKernel network can be routed
[diagram: VMKernel network, management network and production network all hang off one ESX Server host]

VMotion L2 Design
[diagram: Rack 1 and Rack 10; ESX Host 1 and ESX Host 2 each with vSwitch0 (Service Console), vSwitch1 and vSwitch2 (vmkernel); VM4 through VM6]

VM Migration: SPECweb99
SPECweb99 is the SPEC benchmark for evaluating the performance of WWW servers.
The Standard Performance Evaluation Corporation (SPEC) is a non-profit corporation formed to establish, maintain and endorse a standardized set of relevant benchmarks that can be applied to the newest generation of high-performance computers.
[benchmark chart; one series is labeled Xen]

HA clustering (1)
EMC/Legato AAM based; the HA agent runs in every host.
Heartbeats are unicast UDP, port ~8042 (4 UDP ports opened), and run on the Service Console ONLY (a quick check follows this slide).
When a failure occurs, the ESX host pings the gateway (again on the Service Console only) to verify network connectivity.
If the ESX host is isolated, it shuts down the VMs, thus releasing the locks on the SAN.
Recommendations: have 2 Service Consoles on redundant paths; avoid losing SAN access (e.g. via iSCSI); make sure you know beforehand whether DRS is activated too!
Caveat: losing Production VLAN connectivity only isolates the VMs, and there's no equivalent of uplink tracking on the vswitch. Solution: NIC teaming.
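A quick, hedged way to eyeball those heartbeat sockets from the Service Console (the COS is essentially RHEL, so standard Linux tools apply; the exact port list can vary by release):

    netstat -anu | grep 804    # the HA (AAM) agent should show a handful of UDP sockets around 8042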
HA clustering (2)
[diagram: ESX1 and ESX2 server hosts with three networks: COS 10.0.2.0, Prod 10.0.100.0, and iSCSI access/VMkernel 10.0.200.0]

Questions
What is a Network Label?
What does the vNIC vendor driver installation do?
Do vSwitches always have vmnics? Yes/No
Can 2 Port-Groups be in the same VLAN on the same vSwitch?
Does NIC teaming require NIC driver installation in the VM? Yes/No
Does a VM MAC address change during a migration?
What does a VM MAC attach to? A vswitch? A VLAN?

Agenda checkpoint: Cisco/VMware DC Designs.

vSwitch and NIC Teaming Best Practices
Q: Should I use multiple vSwitches or multiple Port-Groups to isolate traffic?
A: We didn't see any advantage in using multiple vSwitches; multiple Port-Groups with different VLANs give you enough flexibility to isolate servers.
Q: Which NIC Teaming configuration should I use?
A: Active/Active, Virtual Port-ID based.
Q: Do I have to attach all NICs in the team to the same switch or to different switches?
A: With Active/Active Virtual Port-ID based, it doesn't matter.
Q: Should I use EST or VST?
A: Always use VST, i.e. assign the VLAN from the vSwitch.
Q: Can I use the native VLAN for VMs?
A: Yes you can, but to keep it simple don't. If you do, do not TAG the VMs with the native VLAN.
Q: Should I use Beaconing?
A: No.
Q: Should I use Rolling Failover (i.e. no preemption)?
A: No, the default is good; just enable trunkfast on the Cisco switch.

NIC Team Across Hardware
[section title slide]

Cisco Switchport Configuration
Make it a trunk and enable trunkfast.
Can the native VLAN be used for VMs? Yes, but IF you do, you have 2 options.
Option 1 (preferred): configure VLAN ID = 0 for the VMs that are going to use the native VLAN:

interface GigabitEthernetX/X
 description <<** VM Port **>>
 no ip address
 switchport
 switchport trunk encapsulation dot1q
 switchport trunk native vlan <id>
 switchport trunk allowed vlan xx,yy-zz
 switchport mode trunk
 switchport nonegotiate
 no cdp enable
 spanning-tree portfast trunk
!
Option 2: configure "vlan dot1q tag native" on the 6k (not recommended).
Do not enable port security (see the next slide).
Make sure that "teamed" NICs are in the same Layer 2 domain.
Provide a redundant Layer 2 path.
Typically: SC, VMKernel and VM production.

Port Security and VMware: Incompatible
http://www.cisco.com/en/US/partner/products/hw/switches/ps5023/products_configuration_guide_chapter09186a00808b0210.html#wp1170581
3750-STACK-top-R1(config-if)# switchport port-security maximum <number> vlan <vlan_number>
How many MACs do you have to count? SC + SC iSCSI + VMKernel + VMotion = 4, plus 1 MAC per VM, plus the BIA MAC (e.g. maximum 5, violation restrict).
If a MAC moves (i.e. a VMotion migration or a NIC Teaming failover):
9w0d: %PORT_SECURITY-2-PSECURE_VIOLATION: Security violation occurred, caused...
and the port goes down or traffic is dropped.

Configuration with 2 NICs: SC, VMKernel and production share the NICs
802.1q trunks carry the production VLANs, Service Console and VMKernel; global teaming is Active/Active (all links used), VST.
Port-Group 1 (production VMs): Active/Active.
Port-Group 2 (Service Console): Active/Standby vmnic1/vmnic2.
Port-Group 3 (VMKernel): Active/Standby vmnic2/vmnic1.
Result: redundant SC and VMKernel connectivity and redundant production; SC and VMKernel share the NICs with production traffic.

Configuration with 2 NICs: dedicated NIC for SC and VMKernel, separate NIC for production
802.1q trunks as above; global teaming Active/Standby vmnic1/vmnic2 (all links used), VST.
Service Console and VMKernel Port-Groups: Active/Standby vmnic2/vmnic1.
Result: redundant SC and VMKernel connectivity and redundant production; in normal conditions SC and production ride different NICs.
Network Attachment (1)
[diagram: Catalyst 1 (root) and Catalyst 2 (secondary root), Rapid PVST+, trunkfast and BPDU guard on the ESX-facing ports; 802.1q trunks carrying production, SC and VMKernel to ESX server 1 and ESX server 2]
No blocked port, no loop; all NICs are used and traffic is distributed on all links.

Network Attachment (2)
Typical Spanning-Tree V-shape topology (root and secondary root, Rapid PVST+, trunkfast and BPDU guard; 802.1q trunks carrying production, SC and VMKernel).
All NICs are used and traffic is distributed on all links.

Configuration with 4 NICs: dedicated NICs for SC and VMKernel
One dedicated NIC for the SC, one dedicated NIC for the VMKernel, redundant production (Active/Active vmnic1/vmnic2).
How good is this design?
Losing the SC NIC isolates management access: VirtualCenter cannot control the ESX host and management access is lost.
Losing the VMKernel NIC isolates the VMKernel: iSCSI access is lost and VMotion can't run; if you use iSCSI this is the worst possible failure, and it is very complicated to recover from.
If the host is part of an HA cluster, the VMs are powered down; if it is part of a DRS cluster, automatic migration is prevented. The VMs become completely isolated.

Configuration with 4 NICs: redundant SC and VMKernel connectivity
Redundant production VLANs and redundant SC/VMKernel VLANs; all links used.
HA is augmented by teaming across different NIC chipsets: production and management each span chipset 1 and chipset 2.
Production: Active/Active vmnic1/vmnic3.
"Dedicated NICs" for SC (Active/Standby vmnic4/vmnic2) and VMKernel (Active/Standby vmnic2/vmnic4).
On a failure the SC swaps to vmnic4, the VMKernel swaps to vmnic2, production traffic continues on vmnic1 or moves to vmnic3, and VC can still control the ESX host.

Network Attachment (1), 4 NICs
No blocked port, no loop: Catalyst 1 carries the 802.1q production trunks, Catalyst 2 carries the 802.1q SC and VMKernel trunks (Rapid PVST+, root and secondary root, trunkfast and BPDU guard).
[diagram: ESX server 1 and ESX server 2, ports 1 through 8]
Network Attachment (2), 4 NICs
Typical Spanning-Tree V-shape topology: Catalyst 1 carries the 802.1q production trunks, Catalyst 2 the 802.1q SC and VMKernel trunks (root and secondary root, Rapid PVST+, trunkfast and BPDU guard).

How about this one?
If production on ESX server 1 is completely isolated, HA doesn't do anything for ESX1: the VMs stay up but isolated.
If management and VMKernel are isolated and you use an HA cluster, chances are that the VMs are powered off on ESX2 and restarted on ESX1!!!!!
And if you use iSCSI, this is not easy to recover from.

4 NICs with EtherChannel
"Clustered" switches: one EtherChannel carries the 802.1q production trunk, the other carries the 802.1q SC and VMKernel trunk.
[diagram: ESX server 1 and ESX server 2, ports 1 through 8]

Typical ESX HA/DRS Cluster Design
Maintenance mode in an HA cluster leverages VMotion migration.
A DRS cluster may require VMotion migration if you want VMs to automatically move to the host with more memory and CPU.
[diagram: DC core, aggregation and access layers; all VM production VLANs trunked to the ESX servers; a VMware "cluster" is typically ~10-20 servers]

VMotion Migration Example
[diagram: ESX Host 1 in Rack 1 and ESX Host 2 in Rack 10, each with vmnic0 and vmnic1 into its vSwitch; VM1 through VM6]

Network Label and VMotion
VMs moving from one ESX server to a different one look for the same Network Label.
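Since a migration just needs the same Network Label on the target host plus a reachable VMkernel network, here is a minimal service-console sketch, assuming ESX 3.x tools and example names/addresses (the 10.10.3.x values echo the routed-design slide below):

    esxcfg-vswitch -A "VMkernel" vSwitch2                        # same Network Label on every host in the cluster
    esxcfg-vmknic -a -i 10.10.3.41 -n 255.255.255.0 "VMkernel"   # VMkernel port for VMotion
    esxcfg-route 10.10.3.1                                       # VMkernel default gateway, if you route it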
VMotion Best Practices
Datapoints:
Migration only happens within a VM HA/DRS cluster and/or within a datacenter.
VMotion looks for the Network Label to be available on the target ESX host.
The VM MAC doesn't change during the migration.
Best practices:
Enable the "Notify Switch" option on the vswitch, so that the target vswitch sends out a RARP to update the MAC-forwarding tables.
At most the Layer 2 domain needs to encompass ~10-20 machines; set the Layer 2 boundary within the data center accordingly.
Make the VMkernel network routed, and extend the Layer 2 domain only for the VM production traffic.

Experimental: Routed Network for Powered-off Migration or HA Cluster
[diagram: a routed network joining VLANs 2 through 5; ESX Host 1 vmkernel 10.10.5.41 and ESX Host 2 vmkernel 10.10.3.41; Service Consoles on VLAN 2 (10.10.2.x) and VLAN 4 (10.10.4.x)]

Agenda checkpoint: Blade Server Designs.

Handy Features for Large Scale Deployments: Flexlinks
Flexlinks keep one set of ports in forwarding state and a backup set of ports non-forwarding for the same set of VLANs.
You can have half the VLANs active on one set of links and half active on the other set.
Failover is < 100 ms, and no Spanning-Tree is involved, so it's very lightweight on the control plane.
Preemption is configurable (off / forced / bandwidth) and so is the preemption delay (a Flexlinks sketch follows the link-state tracking example below).
[diagram: uplinks to core routers]

Design with the Integrated Switch: Uplink Tracking / Trunk Resiliency
Using integrated Ethernet switches (a blade server chassis with integrated L2 switches uplinked to L3 switches), Layer 2 trunk failover ties the downstream server ports to the upstream EtherChannels:

switch(config)# link state track 1
switch(config)# interface range port-channel 1 - 2
switch(config-if-range)# link state group 1 upstream
switch(config-if-range)# interface range gig 0/1 - 16
switch(config-if-range)# link state group 1 downstream
switch(config-if-range)# end

Note: PO1 is composed of gig ports 21 and 22, PO2 of gig ports 23 and 24. These EtherChannels must be created separately, prior to configuring the Layer 2 trunk failover feature.
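And the Flexlinks feature itself, as a hedged IOS sketch (interface names are examples; the single switchport backup line is what pairs the active and backup uplinks):

    interface GigabitEthernet0/1
     ! make Gi0/2 the non-forwarding backup of Gi0/1
     switchport backup interface GigabitEthernet0/2
     ! optional: preempt back to Gi0/1 when it returns, after a delay
     switchport backup interface GigabitEthernet0/2 preemption mode forced
     switchport backup interface GigabitEthernet0/2 preemption delay 60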
HP Blade Server + VM
[diagram: blade server enclosure with CBS-left and CBS-right integrated switches, 8 uplinks each on 802.1q trunks; ESX server blades (vSwitch0 with VM1, VM2, Service Console and VMKernel) and the management module]

Tracking on the VM Network
[screenshot slide]

Agenda checkpoint: Storage Implications of Server Virtualization.

It's Just Another SAN/LAN-Attached Host…
LAN side: NIC teaming, 802.1q trunks, virtual switches, VLAN setup, VS port groups, IP addressing, port-group uniformity, multiple virtual NICs.
SAN side: multipathing, HBAs, LUN mapping, volume management.

VMware ESX Storage Options
80%+ of the install base uses FC storage.
iSCSI is popular in the SMB market.
DAS is not popular because it prohibits VMotion.

ESX Networked Storage Support

Type        | Boot ESX Server | VMotion | VMFS | RDM | MSCS Support | VMware HA & DRS
FC          | Yes             | Yes     | Yes  | Yes | Yes          | Yes
NFS         | Yes             | Yes     | No   | No  | No           | Yes
iSCSI (HW)* | Yes             | Yes     | Yes  | Yes | No           | Yes
iSCSI (SW)  | No              | Yes     | Yes  | Yes | No           | Yes

ESX FC Data Flow
1. The virtual machine guest OS issues a read/write to disk.
2. The OS device driver sends the request to the virtual SCSI controller.
3. The virtual SCSI controller forwards the command to the VMkernel.
4. The VMkernel locates the VM's file on VMFS, maps virtual to physical blocks, and sends the request to the physical HBA driver.
5. The HBA sends the FCP operations out on the wire.
All storage shows up on the virtual SCSI controller and appears as a SCSI drive, regardless of the storage source.
Virtual Servers Share a Physical HBA
A zone includes the physical HBA and the storage array: a single login on a single point-to-point connection.
Access control is delegated to the storage array: "LUN masking and mapping" is based on the physical HBA pWWN and is the same for all VMs.
The hypervisor is in charge of the mapping, so errors may be disastrous.
[diagram: virtual servers, hypervisor with pWWN-P, MDS9000 zone, storage array (mapping and masking), FC name server]

What Is a Datastore?
A datastore is simply a pool of storage, internal or networked.
It can be VMFS-based or raw-mapped.
With networked storage, a datastore is a cluster resource available to all ESX hosts.
To enable VMotion, a datastore must be available to both the source and destination ESX hosts.
Multiple datastores can be defined within a cluster.
[diagram: VM1 through VM4 on Datastore1 and Datastore2, both VMFS]

Raw Device Mapping (RDM)
RDM allows direct read/write access to a disk; the block mapping is still maintained within a VMFS file.
Rarely used, but important for clustering (MSCS is supported).
Used with NPIV environments.

Storage Multi-Pathing
No storage load balancing, strictly failover; two modes of operation dictate the behavior (see the sketch after the reference documents):
Fixed mode: allows the definition of preferred paths; if the preferred path fails a secondary path is used, and if the preferred path reappears it fails back.
Most Recently Used: if the current path fails a secondary path is used, and if the previous path reappears the current path is still used.

ESX Storage Reference Documents
ESX SAN Compatibility Guide: http://www.vmware.com/pdf/vi3_san_guide.pdf
VMware SAN Storage Design Guide: http://www.vmware.com/pdf/vi3_san_design_deploy.pdf
iSCSI Configuration Guide: http://www.vmware.com/pdf/vi3_iscsi_cfg.pdf
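To tie the RDM and multipathing slides to the command line, a hedged ESX 3.x service-console sketch (the device and datastore names are examples):

    # create a virtual-compatibility RDM pointer file on a VMFS datastore
    vmkfstools -r /vmfs/devices/disks/vmhba1:0:3:0 /vmfs/volumes/Datastore1/rdm1.vmdk
    # list LUN paths and the active policy (Fixed or MRU)
    esxcfg-mpath -l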
Agenda checkpoint: VMware Storage Design Aspects.

Zoning for VMotion
All physical interfaces within a cluster MUST have access to all of the disks to support VMotion.
SMB customers may use a permit default zone; enterprise customers will ideally use a many-to-many zone (a zoning sketch follows the NPIV slide below).

Oversubscription Challenges
Traditional MDS port-group usage vs. virtual port-group usage: many customers target low-I/O servers for VM consolidation, but the aggregation of multiple VMs on a single HBA increases the bandwidth requirements on a per-port basis.

Virtual Server Using NPIV and Storage Device Mapping
Virtual HBAs can be zoned individually: multiple logins on a single point-to-point connection.
"LUN masking and mapping" is based on the virtual HBA pWWN of each VM.
Very safe with respect to configuration errors.
Only supports RDM; available in ESX 3.5.
[diagram: hypervisor with physical pWWN-P plus virtual pWWN-1 through pWWN-4, each mapped individually on the MDS9000 and the storage array; FC name server]
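For the many-to-many zoning above, a hedged MDS CLI sketch (the VSAN number and pWWNs are invented examples; the first two members stand for the ESX hosts' HBA pWWNs, the last for a storage array port, and with NPIV you would list each VM's virtual pWWN instead of the physical one):

    mds(config)# zone name ESX-CLUSTER vsan 10
    mds(config-zone)# member pwwn 21:00:00:e0:8b:00:00:01
    mds(config-zone)# member pwwn 21:00:00:e0:8b:00:00:02
    mds(config-zone)# member pwwn 50:06:01:60:00:00:00:01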