Tuesday, 26 November 2013

How It Works: Site Recovery Manager Recovery Plan


Site Recovery Manager Recovery Plan: Step by Step.

 

Hi All,

I want to share something really interesting that will show you how SRM works when a site goes down and how it recovers to another site in a different location.



Thanks!



Saturday, 23 November 2013

Didn't Get Your Soft and Hard Copy of Your Certification from VMware?


Hi All,

I am sharing this post to help others avoid the difficulties I faced. After completing your certification, you need to follow a few steps to receive the certificate on your VMware account portal and at your given home address. Let's see what we forget to do and what we have to do now.

Step 1: Log in to the myLearn portal (http://mylearn.vmware.com/portals/certification/) with your credentials.

Step 2: After a successful login you will be redirected to the dashboard. Select "My Enrollment" in the right-side menu.

Step 3: If the training plan is still listed there and you have already passed the exam, click on it.
 
Step 4: You will be redirected to the selected training plan. Some options will be shown with a green check mark; select them and fill them in one by one.

Step 5: Within about an hour of completing this task, you will receive a confirmation mail from VMware that your certificate has been released, and you will find it in your transcript.


Thanks for reading this article.




Wednesday, 30 October 2013

VMFS vs. RDM


I was researching which storage option works best when you build a cluster server on virtual machines, so I compared the two candidates to pick the better one: Virtual Machine File System (VMFS) or Raw Device Mapping (RDM).

Raw Device Mapping 

With RDM, the VMkernel doesn't format the LUN; instead, the VM guest OS formats the LUN. Each RDM is a single VM hard disk and is usually attached to a single VM. An RDM takes the place of a VMDK file for a VM. This is where the VM's disk contents are stored. But this is not where the files that make up the VM are stored: These files need to be stored on a data store separate from the RDM.

RDMs are sometimes deployed based on the belief that they offer better performance since there is less file system overhead than VMDK files on VMFS. But, in some uses, RDMs are a little slower than VMFS. If a VM needs top disk performance, then dedicate a data store to the VMDK file.

The biggest limitation of RDMs is that one LUN holds exactly one VM disk, whereas a data store on the same LUN could hold 20 VM disks in VMDK files. RDMs can be very limiting, since an ESXi server can only handle 255 LUNs and the whole DRS and HA cluster should see the same LUNs.
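To make the difference concrete, here is a minimal pyVmomi sketch that walks a VM's virtual disks and reports whether each one is backed by an RDM or by a VMDK on a data store. The vCenter address, credentials and VM name ("db01") are placeholder assumptions; adjust them for your environment.

```python
# Minimal sketch: flag each disk of a VM as RDM or VMDK (pyVmomi).
# Host, credentials and VM name below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()   # lab use only; validate certs in production
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="changeme", sslContext=ctx)
content = si.RetrieveContent()

# Find the VM by name anywhere in the inventory
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "db01")   # placeholder VM name

for dev in vm.config.hardware.device:
    if isinstance(dev, vim.vm.device.VirtualDisk):
        if isinstance(dev.backing,
                      vim.vm.device.VirtualDisk.RawDiskMappingVer1BackingInfo):
            kind = "RDM ({} mode)".format(dev.backing.compatibilityMode)
        else:
            kind = "VMDK on a data store"
        print("{}: {} -> {}".format(dev.deviceInfo.label, kind,
                                    dev.backing.fileName))

Disconnect(si)
```

Note that even for an RDM, the fileName printed above points at a small mapping file that lives on a VMFS data store, which is exactly the point made earlier: the rest of the VM's files still need a data store.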

Virtual Machine File System (VMFS)

A VMFS data store is the default way for the VMkernel to handle disks; the disk is partitioned and formatted by the VMkernel and nothing but the VMkernel can read the disk, now called a data store. The advantage of VMFS is that a single disk -- logical unit number(LUN) in storage-area network (SAN) terms -- can hold multiple virtual machines.

How many virtual machines (VMs) to assign per LUN is an age-old debate, but an average number would be a dozen VMs sharing one data store. Essentially, a data store can hold multiple VMs and can hold all of the files that make up each VM. These files include the VMX file that lists the VM hardware configuration, the VMDK files that are the VM's hard disks and the other sundry files that make up the VM.
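To see how the VMs-per-LUN ratio looks in your own environment, the following short sketch (same placeholder connection details as the RDM example above) prints each data store with the number of VMs on it and its free space.

```python
# Sketch: how many VMs share each data store (pyVmomi).
# Connection details below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="changeme", sslContext=ctx)
content = si.RetrieveContent()

ds_view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.Datastore], True)
for ds in ds_view.view:
    # Datastore.vm is the list of VMs with files on this data store
    print("{:<24} {:>3} VMs  {:>8.1f} GB free".format(
        ds.name, len(ds.vm), ds.summary.freeSpace / 1024**3))

Disconnect(si)
```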

How to choose between VMFS and RDM

There are a few things that require RDMs in vSphere:

1.      Microsoft Failover Cluster Services. MSCS uses shared disks to build a cluster out of VMs on different ESXi hosts. The shared disks cannot be VMDK files; RDMs are required if your storage is Fibre Channel. Check VMware's guidance on MSCS in VMs since it can be tricky to configure. Also, be sure you really need to use MSCS when vSphere HA isn't enough.

2.      Storage-area network Quality of Service. For the SAN fabric to apply QoS to traffic from one VM -- not the whole ESXi server -- the VM must use a unique Fibre Channel ID via a feature called N_Port ID Virtualization (NPIV). NPIV only applies when the VM disk is an RDM.

3.      Managing some Fibre Channel storage from a VM. Some storage arrays are controlled using LUNs over the Fibre Channel network. To run the configuration software inside a VM, these control LUNs must be presented to the VM as RDMs. (This is not common; I've seen it only on high-end EMC storage.)

4.      Big VM disks. The largest VMDK file you can create is 2TB, but a single RDM can be up to 64TB. You need to decide if a VM with a huge disk is a good choice when you factor in the backup size and how long a restore would take.
     
Using all RDMs means there is only room for 254 RDM VM disks, plus one data store for the VM files. With VMFS data stores, the 255 LUNs could hold thousands of VM disks.
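The arithmetic behind that comparison is easy to show. A minimal sketch, assuming the "average dozen" VMDKs per data store mentioned earlier in this post:

```python
# Back-of-the-envelope disk counts for the two designs.
MAX_LUNS = 255                   # per-host LUN/device limit

rdm_disks = MAX_LUNS - 1         # keep one LUN as a data store for the VMs' files
vmfs_disks = MAX_LUNS * 12       # assumption: ~a dozen VMDKs per data store

print("All-RDM design:  {} VM disks".format(rdm_disks))    # 254
print("All-VMFS design: {} VM disks".format(vmfs_disks))   # 3060
```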

The option to use an RDM may be necessary in some situations, but your default choice when possible should be to use VMFS and store VM disks in VMDK files.


Thanks for reading.

Sunday, 22 September 2013


Do It on Your Own: VMware Fault Tolerance Simulator


Hi Techies,

I am sharing a really interesting simulator that can make you perfect at configuring Fault Tolerance in your environment. Practise here as many times as you want; it feels like the real thing. Do share your feedback. Thanks.




What's New in VMware vSphere 5.5: One-Page Quick Reference

Provided by: VMware

Hi VMware Techies,

I was searching for all the new things introduced in VMware vSphere 5.5, which was released recently, and luckily found a one-page reference from VMware. Many new improvements related to performance and hardware capacity have been introduced this time. Enjoy the read; I hope you like it.

VMware vSphere 5.5 release improvements


Summary of new features and capabilities available in vSphere 5.5
  • Doubled Host-Level Configuration Maximums – vSphere 5.5 is capable of hosting any size workload, a fact punctuated by the doubling of several host-level configuration maximums. The maximum number of logical CPUs has doubled from 160 to 320, the number of NUMA nodes has doubled from 8 to 16, the number of virtual CPUs has doubled from 2048 to 4096, and the amount of RAM has also doubled from 2TB to 4TB. There is virtually no workload that is too big for vSphere 5.5!
  • Hot-pluggable PCIe SSD Devices – vSphere 5.5 provides the ability to perform hot-add and remove of SSD devices to/from a vSphere 5.5 host. With the increased adoption of SSD, the ability to perform both orderly and unplanned SSD hot-add/remove operations is essential to protecting against downtime and improving host resiliency.
  • Improved Power Management – ESXi 5.5 provides additional power savings by leveraging deep CPU power states (C-states). By leveraging the deeper CPU sleep states, ESXi can minimize the amount of power consumed by idle CPUs during periods of inactivity. Along with the improved power savings comes an additional performance boost on Intel chipsets, as turbo mode frequencies can be reached more quickly when CPU cores are in a deep C-state.
  • Virtual Machine Compatibility ESXi 5.5 (aka Virtual Hardware 10) – ESXi 5.5 provides a new virtual machine compatibility level that includes support for a new virtual SATA Advanced Host Controller Interface (AHCI) with support for up to 120 virtual disk and CD-ROM devices per virtual machine. This new controller is of particular benefit when virtualizing Mac OS X, as it allows you to present a SATA-based CD-ROM device to the guest.
  • VM Latency Sensitivity – included with the new virtual machine compatibility level comes a new “Latency Sensitivity” setting that can be tuned to help reduce virtual machine latency. When the latency sensitivity is set to high, the hypervisor will try to reduce latency in the virtual machine by reserving memory, dedicating CPU cores and disabling network features that are prone to high latency.
  • Expanded vGPU Support – vSphere 5.5 extends VMware’s hardware-accelerated virtual 3D graphics support (vSGA) to include GPUs from AMD. The multi-vendor approach provides customers with more flexibility in the data center for Horizon View virtual desktop workloads. In addition, vSphere 5.5 enhances “Automatic” rendering by enabling the migration of virtual machines with 3D graphics enabled between hosts running GPUs from different hardware vendors as well as between hosts that are limited to software-backed graphics rendering.
  • Graphics Acceleration for Linux Guests – vSphere 5.5 also provides out-of-the-box graphics acceleration for modern GNU/Linux distributions that include VMware’s guest driver stack, which was developed by VMware and made available to all Linux vendors at no additional cost.
  • vCenter Single Sign-On (SSO) – in vSphere 5.5 SSO comes with many improvements. There is no longer an external database required for the SSO server, which, together with the vastly improved installation experience, helps to simplify the deployment of SSO for both new installations and upgrades from earlier versions. This latest release of SSO provides enhanced Active Directory integration, including support for multiple forests as well as one-way and two-way trusts. In addition, a new multi-master architecture provides built-in availability that helps not only improve resiliency for the authentication service but also simplify the overall SSO architecture.
  • vSphere Web Client – the web client in vSphere 5.5 also comes with several notable enhancements. The web client is now supported on Mac OS X, including the ability to access virtual machine consoles, attach client devices and deploy OVF templates. In addition, there have been several usability improvements, including support for drag-and-drop operations, improved filters to help refine search criteria and make it easy to find objects, and a new “Recent Items” icon that makes it easier to navigate between commonly used views.
  • vCenter Server Appliance – with vSphere 5.5 the vCenter Server Appliance (VCSA) now uses a reengineered, embedded vPostgres database that offers improved scalability. I wasn’t able to officially confirm the maximum number of hosts and VMs that will be supported with the embedded DB. They are targeting 100 hosts and 3,000 VMs, but we’ll need to wait until 5.5 releases to confirm these numbers. However, regardless of what the final numbers are, with this improved scalability the VCSA is a very attractive alternative for folks who may be looking to move away from a Windows-based vCenter.
  • vSphere App HA – App HA brings application awareness to vSphere HA, helping to further improve application uptime. vSphere App HA works together with VMware vFabric Hyperic Server to monitor application services running inside the virtual machine and, when issues are detected, performs restart actions as defined by the administrator in the vSphere App HA policy.
  • vSphere HA Compatibility with DRS Anti-Affinity Rules – vSphere HA will now honor DRS anti-affinity rules when restarting virtual machines. If you have anti-affinity rules defined in DRS that keep selected virtual machines on separate hosts, VMware HA will now honor those rules when restarting virtual machines following a host failure.
  • vSphere Big Data Extensions (BDE) – Big Data Extensions is a new addition to the VMware vSphere Enterprise and Enterprise Plus editions. BDE is a vSphere plug-in that enables administrators to deploy and manage Hadoop clusters on vSphere using the vSphere Web Client.
  • Support for 62TB VMDK – vSphere 5.5 increases the maximum size of a virtual machine disk file (VMDK) to 62TB (note that the maximum VMFS volume size is 64TB, whereas the maximum VMDK file size is 62TB). The maximum size for a Raw Device Mapping (RDM) has also been increased to 62TB.
  • Microsoft Cluster Server (MSCS) Updates – MSCS clusters running on vSphere 5.5 now support Microsoft Windows 2012, round-robin path policy for shared storage, and iSCSI and Fibre Channel over Ethernet (FCoE) for shared storage.
  • 16Gb End-to-End Support – in vSphere 5.5, 16Gb end-to-end FC support is now available. Both the HBAs and array controllers can run at 16Gb as long as the FC switch between the initiator and target supports it.
  • Auto Remove of Devices on PDL – this feature automatically removes a device from a host when it enters a Permanent Device Loss (PDL) state. Each vSphere host is limited to 255 disk devices; removing devices that are in a PDL state prevents failed devices from occupying a device slot.
  • VAAI UNMAP Improvements – vSphere 5.5 provides a new “esxcli storage vmfs unmap” command with the ability to specify the reclaim size in blocks, as opposed to just a percentage, along with the ability to reclaim space in increments rather than all at once (see the sketch after this list).
  • VMFS Heap Improvements – vSphere 5.5 introduces a much improved heap eviction process, which eliminates the need for large heap sizes.  With vSphere 5.5 a maximum of 256MB of heap is needed to enable vSphere hosts to access the entire address space of a 64TB VMFS.
  • vSphere Flash Read Cache – a new flash-based storage solution that enables the pooling of multiple flash-based devices into a single consumable vSphere construct called a vSphere Flash Resource, which can be used to enhance virtual machine performance by accelerating read-intensive workloads.
  • Link Aggregation Control Protocol (LACP) Enhancements – with the vSphere Distributed Switch in vSphere 5.5 LACP now supports 22 new hashing algorithms, support for up to 64 Link Aggregation Groups (LAGs), and new workflows to help configure LACP across large numbers of hosts.
  • Traffic Filtering Enhancements – the vSphere Distributed Switch now supports packet classification and filtering based on MAC SA and DA qualifiers, traffic type qualifiers (i.e. vMotion, Management, FT), and IP qualifiers (i.e. protocol, IP SA, IP DA, and port number).
  • Quality of Service Tagging – vSphere 5.5 adds support for Differentiated Services Code Point (DSCP) marking. DSCP marking support enables users to insert tags in the IP header, which helps in layer 3 environments where physical routers function better with an IP header tag than with an Ethernet header tag.
  • Single-Root I/O Virtualization (SR-IOV) Enhancements – vSphere 5.5 provides improved workflows for configuring SR-IOV as well as the ability to propagate port group properties to the virtual functions.
  • Enhanced Host-Level Packet Capture – vSphere 5.5 provides an enhanced host-level packet capture tool that is equivalent to the command-line tcpdump tool available on the Linux platform.
  • 40Gb NIC Support – vSphere 5.5 provides support for 40Gb NICs.  In 5.5 the functionality is limited to the Mellanox ConnectX-3 VPI adapters configured in Ethernet mode.
  • vSphere Data Protection (VDP) – VDP has also been updated in 5.5 with several great improvements, including the ability to replicate backup data to EMC Avamar, direct-to-host emergency restore, the ability to back up and restore individual .vmdk files, more granular scheduling for backup and replication jobs, and the ability to mount existing VDP backup data partitions when deploying a new VDP appliance.
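Picking up the VAAI UNMAP bullet above: here is a minimal sketch of the new space-reclaim workflow. It simply shells out to the esxcli command named in that bullet, so it must run on the ESXi 5.5 host itself; the volume label and reclaim unit are placeholder assumptions.

```python
# Sketch: reclaim dead space on a VMFS volume in increments (vSphere 5.5).
# Run on the ESXi host; volume label and reclaim unit are placeholders.
import subprocess

DATASTORE = "Datastore01"   # assumption: label of your VMFS volume
RECLAIM_UNIT = "200"        # blocks reclaimed per pass, not a percentage

subprocess.check_call([
    "esxcli", "storage", "vmfs", "unmap",
    "--volume-label", DATASTORE,
    "--reclaim-unit", RECLAIM_UNIT,
])
```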

Thursday, 11 July 2013

Installing vCenter Server 5.1




Hi Guys,

For many days I had been thinking about showing you how to install vCenter Server 5.1. I recorded a video to walk you through it.





Monday, 10 June 2013

Interview Cracker Questions: VMware vSphere



Hi All,

For the last few days I have been thinking about uploading something really interesting and useful for everyone.
Today I came up with something meant for people who want to crack interviews or enhance their knowledge so they can use it to troubleshoot their operations.

I collected some .htm files with step-by-step troubleshooting guides from the VMware Knowledge Base. Just try them once, and I am sure you are going to love them. ;)

Common Licensing issues in VMware Infrastructure

Common Fault issues in VMware Infrastructure

Common system management issues in VMware Infrastructure

(Note: please save these files on your local system so you can use them frequently.)

This is some of my collection for facing real-time scenario issues and problems. I hope it will help you. Thanks.