Tuesday 26 November 2013

How it works: Site Recovery Manager Recovery Plan


Site Recovery Manager Recovery Plan: Step by Step.

 

Hi All,

I want to share something really interesting that shows how SRM actually works when a site goes down and how it recovers the workloads to another site in a different location.



Thanks !!



Saturday 23 November 2013

Didn't get your soft and hard copy of your certification from VMware?


Hi All,

I am sharing this post to help others avoid the difficulties I faced. After completing your certification you need to follow a few steps to receive the certificate on your VMware account portal and at your home address. Let's see what we tend to forget and what we have to do now.

Step 1: Go to the myLearn portal (http://mylearn.vmware.com/portals/certification/ ) and provide your credentials to log in.

Step 2: After a successful login you will be redirected to the dashboard. Now select "My Enrollments" in the right-side menu.

Step 3: If you see the training plan still listed there and you have already passed the exam, click on it.

Step 4: You will be redirected to the selected training plan, where some options will be shown with a green check mark. You need to select them and fill them in one by one.

Step 5: Within 1 hour of completing this task you will receive a confirmation mail from VMware that your certificate has been released, and you can find it in your transcript.


Thanks for reading this article.




Wednesday 30 October 2013

VMFS vs. RDM


I was researching which storage option works best when you build a clustered server on virtual machines, so I decided to compare the two choices and pick the best one: Virtual Machine File System (VMFS) or Raw Device Mapping (RDM).

Raw Device Mapping 

With RDM, the VMkernel doesn't format the LUN; instead, the VM guest OS formats the LUN. Each RDM is a single VM hard disk and is usually attached to a single VM. An RDM takes the place of a VMDK file for a VM. This is where the VM's disk contents are stored. But this is not where the files that make up the VM are stored: These files need to be stored on a data store separate from the RDM.

RDMs are sometimes deployed based on the belief that they offer better performance since there is less file system overhead than VMDK files on VMFS. But, in some uses, RDMs are a little slower than VMFS. If a VM needs top disk performance, then dedicate a data store to the VMDK file.

The biggest limitation with RDMs is that the one LUN is only one VM disk. With a data store, the LUN could hold 20 VM disks in VMDK files. RDM can be very limiting, since an ESXi server can only handle 255 LUNs and the whole DRS and HA cluster should see the same LUNs.
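For anyone who wants to see what an RDM looks like in practice, here is a rough sketch of creating one from the ESXi shell with vmkfstools; the LUN device path, datastore and VM names below are placeholders for illustration, not values from this article:

    # Create a virtual-compatibility RDM mapping file that points at a raw LUN
    vmkfstools -r /vmfs/devices/disks/naa.60a98000486e5874 /vmfs/volumes/Datastore1/MyVM/MyVM_rdm.vmdk

    # Use -z instead of -r for a physical-compatibility (pass-through) RDM
    vmkfstools -z /vmfs/devices/disks/naa.60a98000486e5874 /vmfs/volumes/Datastore1/MyVM/MyVM_rdm.vmdk

The small mapping file lives on a VMFS datastore and is attached to the VM like any other disk, while the data itself stays on the raw LUN.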

Virtual Machine File System (VMFS)

A VMFS data store is the default way for the VMkernel to handle disks; the disk is partitioned and formatted by the VMkernel and nothing but the VMkernel can read the disk, now called a data store. The advantage of VMFS is that a single disk -- logical unit number(LUN) in storage-area network (SAN) terms -- can hold multiple virtual machines.

How many virtual machines (VMs) to assign per LUN is an age-old debate, but an average number would be a dozen VMs sharing one data store. Essentially, a data store can hold multiple VMs and can hold all of the files that make up each VM. These files include the VMX file that lists the VM hardware configuration, the VMDK files that are the VM's hard disks and the other sundry files that make up the VM.
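To make this concrete, here is a quick look from the ESXi shell; the datastore and VM names are placeholders used only for illustration:

    # List the file systems (including VMFS datastores) the host can see
    esxcli storage filesystem list

    # A typical set of files that make up one VM on a datastore
    ls /vmfs/volumes/Datastore1/MyVM/
    # MyVM.vmx  MyVM.vmdk  MyVM-flat.vmdk  MyVM.nvram  MyVM.vmsd  vmware.log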

How to choose between VMFS and RDM

There are a few things that require RDMs in vSphere:

1.      Microsoft Failover Cluster Services. MSCS uses shared disks to build a cluster out of VMs on different ESXi hosts. The shared disks cannot be VMDK files; RDMs are required if your storage is Fibre Channel. Check VMware's guidance on MSCS in VMs since it can be tricky to configure. Also, be sure you really need to use MSCS when vSphere HA isn't enough.

2.      Storage-area network Quality of Service. For the SAN fabric to apply QoS to traffic from one VM -- not the ESXi server -- the VM must use a unique Fibre Channel ID using a feature called N_Port Identity Virtualization (NPIV). NPIV only applies when the VM disk is an RDM.

3.      Managing some Fibre Channel storage from a VM. Some storage arrays are controlled using LUNs over the Fibre Channel network. To run the configuration software inside a VM, these control LUNs must be presented to the VM as RDMs. (This is not common; I've seen it only on high-end EMC storage.)

4.      Big VM disks. The largest VMDK file you can create is 2TB, but a single RDM can be up to 64TB. You need to decide if a VM with a huge disk is a good choice when you factor in the backup size and how long it would take to do a restore.
     
Using all RDMs means there is only room for 254 RDM VM disks, plus one data store for the VM files. With VMFS data stores, the 255 LUNs could hold thousands of VM disks.

The option to use an RDM may be necessary in some situations, but your default choice when possible should be to use VMFS and store VM disks in VMDK files.


Thanks for reading.

Sunday 22 September 2013


Do It on Your Own: VMware Fault Tolerance Simulator


Hi Techies,

I am sharing this really interesting simulator, which lets you practice configuring Fault Tolerance in your environment. Try it here as many times as you want; it feels like the real thing. Do share your feedback. Thanks.




What's New in VMware vSphere 5.5 One Page QuickReference

Provided by: VMware

Hi VMware Techies,

I was searching for all the new things introduced in the recently released VMware vSphere 5.5, and I luckily found a one-page reference from VMware. A lot of new performance and hardware-capacity improvements have been introduced this time, so enjoy reading it. I hope you will like it.

VMware vSphere 5.5 release improvements


Summary of new features and capabilities available in vSphere 5.5
  • Doubled Host-Level Configuration Maximums – vSphere 5.5 is capable of hosting any size workload; a fact that is punctuated by the doubling of several host-level configuration maximums.  The maximum number of logical CPUs has doubled from 160 to 320, the number of NUMA nodes doubled from 8 to 16, the number of virtual CPUs has doubled from 2048 to 4096, and the amount of RAM has also doubled from 2TB to 4TB. There is virtually no workload that is too big for vSphere 5.5!
  • Hot-pluggable PCIe SSD Devices – vSphere 5.5 provides the ability to perform hot-add and remove of SSD devices to/from a vSphere 5.5 host.  With the increased adoption of SSD, having the ability to perform both orderly as well as unplanned SSD hot-add/remove operations is essential to protecting against downtime and improving host resiliency.
  • Improved Power Management – ESXi 5.5 provides additional power savings by leveraging CPU deep processor power states (C-states).  By leveraging the deeper CPU sleep states, ESXi can minimize the amount of power consumed by idle CPUs during periods of inactivity.  Along with the improved power savings comes an additional performance boost on Intel chipsets, as turbo mode frequencies can be reached more quickly when CPU cores are in a deep C-state.
  • Virtual Machine Compatibility ESXi 5.5 (aka Virtual Hardware 10) – ESXi 5.5 provides a new Virtual Machine Compatibility level that includes support for a new virtual-SATA Advance Host Controller Interface (AHCI) with support for up to 120 virtual disk and CD-ROM devices per virtual machine.   This new controller is of particular benefit when virtualizing Mac OS X as it allows you to present a SCSI based CD-ROM device to the guest.
  • VM Latency Sensitivity – included with the new virtual machine compatibility level comes a new “Latency Sensitivity” setting that can be tuned to help reduce virtual machine latency.  When the Latency sensitivity is set to high the hypervisor will try to reduce latency in the virtual machine by reserving memory, dedicating CPU cores and disabling network features that are prone to high latency.
  • Expanded vGPU Support – vSphere 5.5 extends VMware’s hardware-accelerated virtual 3D graphics support (vSGA) to include GPUs from AMD.  The multi-vendor approach provides customers with more flexibility in the data center for Horizon View virtual desktop workloads.  In addition 5.5 enhances the “Automatic” rendering by enabling the migration of virtual machines with 3D graphics enabled between hosts running GPUs from different hardware vendors as well as between hosts that are limited to software backed graphics rendering.
  • Graphics Acceleration for Linux Guests – vSphere 5.5 also provides out of the box graphics acceleration for modern GNU/Linux distributions that include VMware’s guest driver stack, which was developed by VMware and made available to all Linux vendors at no additional cost.
  • vCenter Single Sign-On (SSO) – in vSphere 5.5 SSO comes with many improvements.   There is no longer an external database required for the SSO server, which together with the vastly improved installation experience helps to simplify the deployment of SSO for both new installations as well as upgrades from earlier versions.   This latest release of SSO provides enhanced active directory integration to include support for multiple forest as well as one-way and two-way trusts.  In addition, a new multi-master architecture provides built in availability that helps not only improve resiliency for the authentication service, but also helps to simplify the overall SSO architecture.
  • vSphere Web Client – the web client in vSphere 5.5 also comes with several notable enhancements.  The web client is now supported on Mac OS X, to include the ability to access virtual machine consoles, attach client devices and deploy OVF templates.  In addition there have been several usability improvements to include support for drag and drop operations, improved filters to help refine search criteria and make it easy to find objects, and the introduction of a new “Recent Items” icon that makes it easier to navigate between commonly used views.
  • vCenter Server Appliance – with vSphere 5.5 the vCenter Server Appliance (VCSA) now uses a reengineered, embedded vPostgres database that offers improved scalability.  I wasn’t able to officially confirm the max number of hosts and VMs that will be supported with the embedded DB.  They are targeting 100 hosts and 3,000 VMs, but we’ll need to wait until 5.5 releases to confirm these numbers.  However, regardless of what the final numbers are, with this improved scalability the VCSA is a very attractive alternative for folks who may be looking to move away from a Windows-based vCenter.
  • vSphere App HA – App HA brings application awareness to vSphere HA helping to further improve application uptime.  vSphere App HA works together with VMware vFabric Hyperic Server to monitor application services running inside the virtual machine, and when issues are detected perform restart actions as defined by the administrator in the vSphere App HA Policy.
  • vSphere HA Compatibility with DRS Anti-Affinity Rules –vSphere HA will now honor DRS anti-affinity rules when restarting virtual machines.  If you have anti-affinity rules defined in DRS that keep selected virtual machines on separate hosts, VMware HA will now honor those rules when restarting virtual machines following a host failure.
  •  vSphere Big Data Extensions(BDE) – Big Data Extensions is a new addition to the VMware vSphere Enterprise and Enterprise Plus editions.  BDE is a vSphere plug-in that enables administrators to deploy and manage Hadoop clusters on vSphere using the vSphere web client.
  • Support for 62TB VMDK – vSphere 5.5 increases the maximum size of a virtual machine disk file (VMDK) to 62TB (note the maximum VMFS volume size is 64TB where the max VMDK file size is 62TB).  The maximum size for a Raw Device Mapping (RDM) has also been increased to 62TB.
  • Microsoft Cluster Server (MSCS) Updates – MSCS clusters running on vSphere 5.5 now support Microsoft Windows 2012, the round-robin path policy for shared storage, and iSCSI and Fibre Channel over Ethernet (FCoE) for shared storage.
  • 16Gb End-to-End Support – In vSphere 5.5, 16Gb end-to-end FC support is now available.  Both the HBAs and array controllers can run at 16Gb as long as the FC switch between the initiator and target supports it.
  • Auto Remove of Devices on PDL – This feature automatically removes a device from a host when it enters a Permanent Device Loss (PDL) state.  Each vSphere host is limited to 255 disk devices, removing devices that are in a PDL state prevents failed devices from occupying a device slot.
  • VAAI UNMAP Improvements – vSphere 5.5 provides a new “esxcli storage vmfs unmap” command with the ability to specify the reclaim size in blocks, as opposed to just a percentage, along with the ability to reclaim space in increments rather than all at once (see the example command after this list).
  • VMFS Heap Improvements – vSphere 5.5 introduces a much improved heap eviction process, which eliminates the need for large heap sizes.  With vSphere 5.5 a maximum of 256MB of heap is needed to enable vSphere hosts to access the entire address space of a 64TB VMFS.
  • vSphere Flash Read Cache – a new flash-based storage solution that enables the pooling of multiple flash-based devices into a single consumable vSphere construct called a vSphere Flash Resource, which can be used to enhance virtual machine performance by accelerating read-intensive workloads.
  • Link Aggregation Control Protocol (LACP) Enhancements – with the vSphere Distributed Switch in vSphere 5.5 LACP now supports 22 new hashing algorithms, support for up to 64 Link Aggregation Groups (LAGs), and new workflows to help configure LACP across large numbers of hosts.
  • Traffic Filtering Enhancements – the vSphere Distributed Switch now supports packet classification and filtering based on MAC SA and DA qualifiers, traffic type qualifiers (i.e. vMotion, Management, FT), and IP qualifiers (i.e. protocol, IP SA, IP DA, and port number).
  • Quality of Service Tagging – vSphere 5.5 adds support for Differentiated Services Code Point (DSCP) marking.  DSCP marking support enables users to insert tags in the IP header, which helps in layer 3 environments where physical routers function better with an IP header tag than with an Ethernet header tag.
  • Single-Root I/O Virtualization (SR-IOV) Enhancements – vSphere 5.5 provides improved workflows for configuring SR-IOV as well as the ability to propagate port group properties to the virtual functions.
  • Enhanced Host-Level Packet Capture – vSphere 5.5 provides an enhanced host-level packet capture tool that is equivalent to the command-line tcpdump tool available on the Linux platform.
  • 40Gb NIC Support – vSphere 5.5 provides support for 40Gb NICs.  In 5.5 the functionality is limited to the Mellanox ConnectX-3 VPI adapters configured in Ethernet mode.
  • vSphere Data Protection (VDP) – VDP has also been updated in 5.5 with several great improvements, including the ability to replicate backup data to EMC Avamar, direct-to-host emergency restore, the ability to back up and restore individual .vmdk files, more granular scheduling for backup and replication jobs, and the ability to mount existing VDP backup data partitions when deploying a new VDP appliance.
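As a small illustration of the VAAI UNMAP improvement mentioned above, the reclaim can now be run from the ESXi shell with a block count; the datastore label and reclaim unit below are placeholder values:

    # Reclaim dead space on a VMFS-5 datastore, 200 blocks at a time
    esxcli storage vmfs unmap --volume-label=Datastore1 --reclaim-unit=200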

Thursday 11 July 2013

Installing vCenter Server 5.1


                      Installing vCenter Server 5.1


Hi Guys,

I had been thinking for many days about showing you how to install vCenter Server 5.1, so I recorded a video to walk you through it.





Monday 10 June 2013

Interview Cracker Questions: VMware vSphere



Hi All,

I had been thinking for the last few days about uploading something really interesting and useful for everyone.
Today I came up with something that is really meant for people who want to crack interviews or enhance their knowledge so they can use it to troubleshoot their operations.

I collected some .htm files that contain step-by-step troubleshooting guides from the VMware Knowledge Base. Just try them once, and I am sure you are going to love them. ;)

Common Licensing issues in VMware Infrastructure

Common Fault issues in VMware Infrastructure

Common system management issues in VMware Infrastructure

(Note: Please save these files on your local system so you can use them whenever you need to.)

This is part of my collection for facing real-time issues and problems. I hope it will help you. Thanks.



Thursday 6 June 2013

VMware introduced the "edit file", a second .vmx file, in VMware vSphere 5.1: an enhanced precaution feature.


Hi Guys,

Today I want to share a really interesting feature introduced in VMware vSphere 5.1. Let's dig into it.

In vSphere 5.1 there is now an additional VMX file, ending with a ~ (tilde), found within the virtual machine's configuration directory of a powered-on machine.
 

This extra .vmx file is the edit file. The edit file is a copy of the original VMX file, and when changes are required they are applied to the edit file first. Once the changes are complete, the edit file is atomically swapped with the original VMX file, which helps prevent potential VMX file corruption. In the worst case, where the original VMX file is somehow corrupted, the virtual machine can be restored using the edit file.
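If you browse a powered-on VM's directory from the ESXi shell you can see both files side by side; the datastore and VM names here are only placeholders:

    ls /vmfs/volumes/Datastore1/MyVM/*.vmx*
    # MyVM.vmx
    # MyVM.vmx~   <- the "edit file" copy described above
    # MyVM.vmxf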

This is another reason why you should not be manually editing a virtual machine's VMX file, especially when it is still powered on. 

Hope you all find it as interesting as I did.

Thanks for reading...

Wednesday 29 May 2013

All About : Fault Tolerance



                                 Fault Tolerance

What is VMware Fault Tolerance?

VMware Fault Tolerance is a component of VMware vSphere and it provides continuous availability to applications by preventing downtime and data loss of Virtual machines in the event of ESXi server failures.


What is the name of the technology used by VMware FT?

VMware FT uses a technology called vLockstep.

What are the requirements and limitations for ESX hosts and infrastructure components to run FT-protected virtual machines in vSphere 4 and 4.1?

1. VMware FT is available from the vSphere Advanced edition onwards (Advanced, Enterprise, Enterprise Plus).

2. Hardware virtualization should be enabled in the BIOS.

3. The CPU should be compatible with FT. Please refer to the VMware site for supported processors.

4. FT-enabled virtual machines should be placed on shared storage (FC, iSCSI or NFS).

5. FT virtual machines should be placed in an HA-enabled cluster.

6. FT cannot be used with DRS in vSphere 4.0, but FT is fully integrated with DRS from vSphere 4.1.

7. In vSphere 4.0, the primary and secondary ESX hosts should be at the same ESX version and patch level. This limitation is removed from vSphere 4.1: the primary and secondary ESX hosts need not be at the same build and patch level because FT has its own version associated with it.

8. Only 4 FT-protected virtual machines are allowed per ESX/ESXi host.

9. vMotion and FT Logging should be enabled on VMkernel port groups of the ESX hosts (separate NICs for vMotion and FT logging are recommended, along with NIC teaming).

10. Host certificate checking should be enabled (it is enabled by default).

11. A dedicated 10 Gb Ethernet card between ESX servers gives the best performance results.

12. FT ports 8100 and 8200 (outgoing TCP, incoming and outgoing UDP) must be open if any firewall exists between the ESX hosts.

13. A minimum of 3 hosts in the HA-enabled cluster is recommended for running FT-protected virtual machines; 2 hosts is the hard requirement.

14. FT virtual machines cannot be backed up using backup technologies that rely on the snapshot feature (such as VCB or VMware Data Recovery).

15. NPIV (N_Port ID Virtualization) is not supported with VMware FT.

16. Use redundancy at all layers (NIC teaming, multiple network switches, storage multipathing) to fully utilize the FT features.

17. MSCS clustering is not supported with VMware Fault Tolerance.

18. In vSphere 4.0 manual vMotion is allowed, but automatic load balancing of FT VMs using vMotion by DRS is fully supported only from vSphere 4.1.

19. We cannot use Storage vMotion to migrate FT-protected virtual machines from one datastore to another.

20. EVC (Enhanced vMotion Compatibility) should be enabled in the DRS cluster to utilize the automatic load balancing provided by DRS for FT-protected virtual machines.



What are the requirements and limitations for a virtual machine to enable FT?

1. The FT-protected virtual machine should be running a supported guest operating system.

2. The FT-protected virtual machine's guest operating system and processor combination must be supported by Fault Tolerance. Please refer to the VMware site for supported guest OS and CPU combinations.

3. Physical-mode RDMs are not supported for FT-protected virtual machines, but virtual-mode RDMs are supported.

4. The FT-protected virtual machine should have eagerzeroed thick disks. A virtual machine with thin-provisioned disks will automatically be converted to thick disks while FT is enabled for it, so make sure enough free space is available in the datastore for this operation (see the example after this list).

5. SMP (symmetric multiprocessing) is not supported. Only 1 vCPU per virtual machine is allowed.

6. A maximum of 64 GB of RAM is allowed for FT VMs.

7. Hot add and remove of devices is not allowed for FT-protected VMs.

8. NPIV is not supported for FT VMs.

9. USB passthrough and VMDirectPath should not be enabled for FT VMs; they are not supported.

10. USB and sound devices are not supported for FT VMs.

11. Virtual machine snapshots are not supported for FT-protected VMs, so FT virtual machines cannot be backed up using backup technologies that rely on the snapshot feature (such as VCB or VMware Data Recovery).

12. The virtual machine hardware version should be 7 or above.

13. Paravirtualized guest OSes and paravirtualized SCSI adapters are not supported for FT-protected virtual machines.

14. A Windows guest OS should not be using MSCS (Microsoft Cluster Services) if the virtual machine is protected by FT.

15. FT-protected virtual machines should not have HA disabled by virtual-machine-level HA settings.

16. FT-protected virtual machines cannot be migrated using Storage vMotion. If you want to migrate an FT-protected virtual machine, disable FT on the VM, migrate it using Storage vMotion and re-enable FT.

17. IPv6 is not supported by VMware HA, so it is not supported for FT.
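Related to point 4 above, a thin disk can also be inflated to eagerzeroed thick ahead of time from the ESXi shell so that enabling FT does not have to do the conversion. This is only a sketch, with the VM powered off and the path used as a placeholder:

    # Inflate a thin-provisioned disk to eagerzeroed thick
    vmkfstools --inflatedisk /vmfs/volumes/Datastore1/MyVM/MyVM.vmdk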


What is FT Logging Traffic?

FT logging is an option on a VMkernel port, similar to the option that enables vMotion on a VMkernel port. When FT is enabled for a virtual machine, all the inputs of the primary virtual machine (disk reads, writes, etc.) are recorded and sent to the secondary VM over the FT-logging-enabled VMkernel port.
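On newer hosts (ESXi 5.1 and later) the same setting can also be applied from the command line by tagging the VMkernel interface. This is only a sketch: the interface name vmk1 is a placeholder and the exact tag name should be verified on your build:

    # Tag an existing VMkernel interface for FT logging
    esxcli network ip interface tag add --interface-name=vmk1 --tagname=faultToleranceLogging

On vSphere 4.x the equivalent is simply ticking the "Fault Tolerance Logging" checkbox on the VMkernel port in the vSphere Client.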

Tuesday 28 May 2013

All About vMotion !!!



vMotion

1. What is vMotion?

Live migration of a running virtual machine from one ESX server to another with zero downtime is called vMotion. The VM's disk files stay where they are (on shared storage).


2. What are the use cases of vMotion?

   Balance the load on ESX servers (DRS)
   Save power by shutting down ESX hosts using DPM
   Perform patching and maintenance on ESX servers (Update Manager or hardware maintenance)



3. What are the pre-requisites for vMotion to work?

    ESX hosts must be licensed for vMotion.
    ESX servers must be configured with vMotion-enabled VMkernel ports (see the quick check after this list).
    ESX servers must have compatible CPUs for vMotion to work.
    ESX servers should have shared storage (FC, iSCSI or NFS), and the VMs should be stored on that storage.
    ESX servers should have identical networks and network names.
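A quick way to sanity-check these prerequisites from the ESXi shell (the IP address below is a placeholder for the destination host's vMotion VMkernel address):

    # List the VMkernel interfaces on this host
    esxcli network ip interface list

    # Test connectivity to the destination host's vMotion VMkernel IP
    vmkping 192.168.10.22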


4. What are the Limitations of vMotion?

      A virtual machine cannot be migrated with vMotion unless the destination swapfile location is the same as the source swapfile location. As a best practice, place the virtual machine swap files with the virtual machine configuration file.
      Virtual machines configured with Raw Device Mappings (RDMs) for clustering features cannot be migrated using vMotion.
      A VM cannot be connected to a CD-ROM or floppy drive that is using an ISO or floppy image stored on a drive that is local to the host server. The device should be disconnected before initiating the vMotion.
      Virtual machine CPU affinity must not be set (i.e., the VM must not be bound to physical CPUs).


5. What are the steps involved in VMware vMotion?

1. A request is made that VM-1 should be migrated (or "vMotioned") from ESX A to ESX B.
2. VM-1's memory is pre-copied from ESX A to ESX B while ongoing changes are written to a memory bitmap on ESX A.
3. VM-1 is quiesced on ESX A and VM-1's memory bitmap is copied to ESX B.
4. VM-1 is started on ESX B and all access to VM-1 is now directed to the copy running on ESX B.
5. The rest of VM-1's memory is copied from ESX A, all the while memory is being read from and written to VM-1 on ESX A as applications attempt to access that memory on VM-1 on ESX B.
6. If the migration is successful, VM-1 is unregistered on ESX A.


So that's all about vMotion. Hope you find this blog beneficial. Thanks!!


Monday 27 May 2013

Troubleshooting while doing Physical to Virtual conversion (P2V)

Troubleshooting while doing Physical to Virtual conversion

Conversions sometimes fail no matter how careful you are preparing the server. The failure can occur at various stages in the conversion process; these stages are based on the task bar percent and are estimated values.

1.      Creation of the target virtual machine (VM) (0%-5%)
2.      Preparing to Clone the Disk (5%-6%)
3.      Cloning (6%-95%)
4.      Post-cloning (95%-97%)
5.      Customization/Reconfig (97%-99%)
6.      Install Tools/Power On (99%-100%)

The conversion process may fail at any stage, but if it's going to fail, it will typically fail at 97%. Converter creates a detailed log file during the conversion process which will contain exact errors pertaining to why the conversion failed. This log file is located on the server you are converting that is running the Converter agent, and is usually named vmware-converter-0.log and is located in the C:\Windows\temp\vmware-temp directory. Open this log file and scroll towards the bottom and look for failure errors. Once the process fails, Converter will destroy the VM that it created automatically.
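As a quick, hedged example, you can pull the interesting lines out of that log from a command prompt on the source server (the path is the default one mentioned above; adjust it if your Windows directory differs):

    findstr /i "error fail" C:\Windows\temp\vmware-temp\vmware-converter-0.log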

One clue to determine which stage it failed at is how fast it gets to 97%. If it jumps to 97% quickly and fails, this usually indicates a problem with network ports, DNS resolution or a required Windows service that is not running. Here are some things to try to resolve these types of problems.

1.      If you used a hostname to choose your VC/ESX server destination make sure you can resolve it on your source server. Also try using the FQDN of the server instead of the short name.


2.      On the source server make sure the Workstation, Server, TCP/IP NetBIOS Helper and VMware Converter services are running. On Windows XP and 2003 servers make sure the Volume Shadow Copy service is not disabled, by default it should be set to Manual. This service does not need to be running for Converter to work.


3.      Use telnet to see if you can connect to the required ports on the VC/ESX servers. From the source server type "telnet <VC/ESX server> 902". You should get a response back from the VC/ESX server; also do this on port 443.


4.      Try rebooting the source server, this is a requirement for Windows NT and 2000 servers.


If it takes a long time to get to 97%, then typically the clone failed during the data cloning process or the post-cloning procedures. Some possible causes of these types of failures can be lost network connectivity between the servers, excessive network errors and source disk problems. Here are some steps to try to resolve these types of problems.

1.      Verify network speed/duplex settings match on your source server's NIC and the physical switch port it is connected to.


2.      If you have OS mirroring enabled, break the mirrors.


3.      Clean-up your boot.ini file and make sure it is correct.


4.      Make sure you are using the latest version of Converter. Earlier versions fail if the source server has dynamic disks.


5.      Run chkdsk on your source server to verify file system integrity.


6.      Ensure you have at least 200 MB of free disk on the source server.


7.      If your source server has more than two serial (COM) ports, edit the registry, look for HKLM\HARDWARE\DEVICEMAP\SERIALCOMM and remove any ports above serial port 2. You can export the key before you do this and re-import it after the conversion is completed if needed.


Finally, if your conversion completes successfully but your server will not boot (or boots to a blue screen) you can try the following things to fix it.

1.      Edit the boot.ini on the newly created VM to make sure the disks are in the proper order (a typical boot.ini layout is shown after this list). Sometimes the boot disk will not be listed as the first partition. To fix this, simply use a working VM as a helper and add the newly created VM's disk file to it as an additional virtual hard disk. You can then browse that disk and edit the boot.ini file. When complete, remove the disk from the helper VM. You can also try running Converter again, selecting "Configure Machine" and selecting your newly created VM; run through the wizard and, when complete, try powering it on again.


2.      Verify you are using the proper SCSI controller for your virtual disk (BusLogic or LSI Logic).


3.      Boot the VM in safe mode to see if any hardware specific services/drivers are loading.
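For reference, and as mentioned in step 1 above, a typical Windows Server 2003 boot.ini looks like the sketch below; the disk(), rdisk() and partition() values are illustrative and must point at the partition that actually holds the OS on the new virtual disk:

    [boot loader]
    timeout=30
    default=multi(0)disk(0)rdisk(0)partition(1)\WINDOWS
    [operating systems]
    multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Windows Server 2003" /fastdetect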

Enhancing performance in a new virtual machine
When your conversion completes, there are several steps you should take to clean up your new VM so it will perform better.

       

1.      Edit the VM's hardware. Remove all unnecessary hardware, including floppy drives and serial, parallel and USB ports. You should only give the VM as much RAM as it needs, so reduce it if you can. Most VMs run better with one vCPU, so consider reducing the number of CPUs if you came from an SMP physical server.


2.      Power on the VM, wait a few minutes to let it discover all its new hardware and then reboot it.


3.      Check the server HAL: if you came from a multi-CPU physical system and now have a single-vCPU VM, you need to go into Device Manager and edit the CPU (Computer). Select Update Driver, say No to Windows Update, select Install from List, select Don't Search and select ACPI Uniprocessor instead of ACPI Multiprocessor.


4.      Remove any hardware specific applications and drivers.


5.      Finally, my most important tip: remove all non-present hardware devices. These are hardware devices that were removed from the system without being uninstalled and are a by-product of the conversion. These devices are not physically present in the system anymore, but Windows treats them as if they were still there and devotes system resources to them. They can also cause conflicts when you try to set your new network adapter's IP address to the same address as the source server.
The reason for this is that the old NIC still exists as non-present hardware with an IP address. There will be dozens of non-present hardware devices left after the conversion. To remove them all, simply go to a CMD prompt and type SET DEVMGR_SHOW_NONPRESENT_DEVICES=1. Then, in the same CMD window, type DEVMGMT.MSC and select Show Hidden Devices when the Device Manager window opens. As you expand each hardware category you will see lots of non-present devices, indicated by grayed-out icons. Right-click on each and select Uninstall. Reboot once you have removed them all.
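The same sequence as a copy-and-paste snippet, run from a command prompt on the converted VM (exactly the commands described above):

    set devmgr_show_nonpresent_devices=1
    devmgmt.msc

Remember to launch devmgmt.msc from the same command prompt window so the environment variable is picked up, then enable Show hidden devices from the View menu.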

Hopefully the information in these articles will help you in converting your physical servers to virtual ones.


Thursday 23 May 2013

Favorite question of interviewers: Difference between ESXi 4 & ESXi 5


Hi Guys,

A really interesting topic, and one I searched around a lot for. Of course, this is the favourite question of VMware interviewers nowadays and a terror for candidates, so I have brought together some points here which will surely help you.


Features | vSphere 4.1 | vSphere 5.0
Hypervisor | ESX & ESXi | ESXi only
vMA | Yes (vMA 4.1) | Yes (vMA 5)
HA agent | AAM (Automatic Availability Manager) | FDM (Fault Domain Manager)
HA host approach | Primary & secondary | Master & slave
HA failure detection | Management network | Management network and storage communication
HA log file | /etc/opt/vmware/AAM | /etc/opt/vmware/FDM
Dependent on DNS | Yes | No
Host UEFI boot support | No | Yes (boot from hard drives, CD/DVD drives or USB media)
Storage DRS | Not available | Yes
VM affinity & anti-affinity | Available | Available
VMDK affinity & anti-affinity | Not available | Available
Profile-driven storage | Not available | Available
VMFS version | VMFS-3 | VMFS-5
vSphere Storage Appliance | Not available | Available
iSCSI port binding | Can only be done via CLI using ESXCLI | Configure dependent hardware iSCSI and software iSCSI adapters, along with the network configuration and port binding, in a single dialog box using the vSphere Client
Storage I/O Control | Fibre Channel only | Fibre Channel & NFS
Storage vMotion snapshot support | A VM with a snapshot cannot be migrated using Storage vMotion | A VM with a snapshot can be migrated using Storage vMotion
Swap to SSD | No | Yes
Network I/O Control | Yes | Yes, with enhancements
ESXi firewall | Not available | Yes
vCenter Linux support | Not available | vCenter Server Virtual Appliance
vSphere full client | Yes | Yes
vSphere Web Client | Yes | Yes, with a lot of improvements
VM hardware version | 7 | 8
Virtual CPUs per VM | 8 vCPUs | 32 vCPUs
Virtual machine RAM | 255 GB | 1 TB of vRAM
VM swapfile size | 255 GB | 1 TB
Support for client-connected USB | Not available | Yes
Non-hardware-accelerated 3D graphics support | Not available | Yes
UEFI virtual BIOS | Not available | Yes
VMware Tools version | 4.1 | 5
Multicore vCPU | Not available | Yes, configured in the VM settings
Mac OS guest support | Not available | Apple Mac OS X Server 10.6
Smart card reader support for VMs | Not available | Yes
Auto Deploy | Not available | Yes
Image Builder | Not available | Yes
VMs per host | 320 | 512
Max logical CPUs per host | 160 | 160
RAM per host | 1 TB | 2 TB
Max RAM for Service Console | 800 MB | Not applicable (no Service Console)
LUNs per server | 256 | 256
Metro vMotion | Round-trip latencies of up to 5 milliseconds | Round-trip latencies of up to 10 milliseconds, which gives better performance over long-latency networks
Storage vMotion | Moves VM files using dirty block tracking | Moves VM files using I/O mirroring, with better enhancements
Virtual Distributed Switch | Yes | Yes, with more enhancements such as a deeper view into virtual machine traffic through NetFlow and enhanced monitoring and troubleshooting through SPAN and LLDP
USB 3.0 support | No | Yes
Hosts per vCenter | 1000 | 1000
Powered-on virtual machines per vCenter Server | 10000 | 10000
VMkernel | 64-bit | 64-bit
Service Console | 64-bit | Not applicable (no Service Console)