ESXi, Nexenta 4, round robin, iops=1, no Hardware Accelerated Locking


Nexenta 4 (CE) on ESXi (5/6) sort of fails when you have Hardware Accelerated Locking enabled. You will see a ton of errors in your vmkernel log about this once you activate your iSCSI.

To get it all going again, here is a quick snippet.

esxcli system settings advanced set -i 0 -o /VMFS3/HardwareAcceleratedLocking

esxcfg-rescan vmhba32

for i in `esxcfg-scsidevs -c |awk '{print $1}' | grep naa.600`; do esxcli storage nmp device set -d $i --psp VMW_PSP_RR;done

for i in `esxcfg-scsidevs -c |awk '{print $1}' | grep naa.600`; do esxcli storage nmp psp roundrobin deviceconfig set --type=iops --iops=1 --device=$i; done

The first line disables HW accelerated locking, i.e. back to basics. Then we rescan vmhba32 (the software iSCSI adapter), push all naa.600-prefixed disks to VMW_PSP_RR and set the IOPS limit to 1 for optimal path distribution.
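
To double-check that everything took, something like this works from the ESXi shell (a minimal sketch; the naa.600 prefix and vmhba32 are simply carried over from the snippet above):

# ATS locking should now be off, i.e. the value comes back as 0
esxcli system settings advanced list -o /VMFS3/HardwareAcceleratedLocking

# each device should now show VMW_PSP_RR with iops=1 in its device config
esxcli storage nmp device list | grep -A8 naa.600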

C’est ça.

VirSH with ESXi for Ubuntu 14.04 LTS (and MAAS)


Why oh why is the latest release of virsh with ESX support not in APT?

Anyway, let’s hack away.

First of all, get your libvirt source tarball (virsh ships with libvirt); I used 1.2.18 today…

Now apt-get install the crapload of dependencies needed to compile the source: the usual GNU toolchain, the XML and GnuTLS dev packages, and the other bits to make things work.

Then run ./configure (with --with-esx=yes) and make:
apt-get install gcc make pkg-config libxml2-dev libgnutls-dev libdevmapper-dev libcurl4-gnutls-dev python-dev libpciaccess-dev libxen-dev libnl-dev uuid-dev xsltproc
wget http://libvirt.org/sources/libvirt-1.2.18.tar.gz
tar -zxvf libvirt-1.2.18.tar.gz
cd libvirt-1.2.18/
./configure --prefix=/usr --localstatedir=/var --sysconfdir=/etc --with-esx=yes
make
make install

Now try virsh -v; it should come back with 1.2.18 or whatever version you installed,
and virsh -c esx://root@<your.esxi.server>?no_verify=1 should allow you to control your ESXi host. Wonder 🙂 we can stop and start ESXi VMs now, MAAS will be so happy 🙂
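
For example (a rough sketch; node01 is just a placeholder VM name):

virsh -c 'esx://root@<your.esxi.server>?no_verify=1' list --all
virsh -c 'esx://root@<your.esxi.server>?no_verify=1' start node01
virsh -c 'esx://root@<your.esxi.server>?no_verify=1' shutdown node01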

Oh, if you want to use MAAS to power-control your VMs, use the Virsh power type with esx://root@<your.esxi.server>/system?no_verify=1 as the power address, and don’t forget to create your /etc/libvirt/auth.conf file for the authentication. It looks a bit like this:

[credentials-esx]
username=root
password=somethingsecret
[auth-esx-hostname1]
credentials=esx
[auth-esx-hostname2]
credentials=esx
[auth-esx-hostname3]
credentials=esx

hostnameX is your ESXi host, of course.
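
With that file in place virsh shouldn’t prompt for a password any more, which is exactly what MAAS needs. A quick test (hostname1 standing in for your real host):

virsh -c 'esx://root@hostname1/system?no_verify=1' list --all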

C’est ça.

Setting IOPS for HP/EVA devices, the dirty ESXCLI way.


Stumbled upon “Best practices for HP EVA, vSphere 4 and Round Robin multi-pathing” by Ivo Beerens and the VMware community article “Very slow performance on EVA4400”, and wondered how to hack this into an ESXi box without a service console (I tried Ivo’s solution in the ‘Engineering’ SSH shell, but for some reason it failed on the grep command). Of course you can use the horrid Windows power(s)hell and fiddle around with that, or, for dinosaurs like me who live in old DOS/command-line worlds, download the VMware ESXCLI package for windoze (or Linux, but then this script won’t work) and copy and paste this script into a .cmd file:

@echo off
esxcli --server <hostname> --username=root --password=*** nmp device list | find "HP Fibre Channel Disk" >dev.lst
for /F "tokens=1,2 delims=()" %%G IN (dev.lst) DO esxcli --server <hostname> --username=root --password=Nipples@Sandpaper nmp roundrobin setconfig --type "iops" --iops=1 --device=%%H
del dev.lst

Remember to replace the obvious username, password and servername values and off you go!

Of course you can get creative and make a loop around this to go through your servers automatically, but I had only 8 to worry about so I didn’t bother.
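
If you do want the loop, a rough bash equivalent for the Linux vCLI would look something like this (untested sketch; hosts.txt is an assumed file with one ESXi hostname per line, and *** is your root password):

#!/bin/bash
# for each ESXi host, set Round Robin with iops=1 on every HP Fibre Channel disk
while read HOST; do
  esxcli --server "$HOST" --username root --password '***' nmp device list \
    | grep "HP Fibre Channel Disk" \
    | sed 's/.*(\(naa\.[^)]*\)).*/\1/' \
    | while read DEV; do
        esxcli --server "$HOST" --username root --password '***' nmp roundrobin setconfig \
          --type "iops" --iops=1 --device="$DEV"
      done
done < hosts.txt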

Good luck,

— Fault

Notes for the VCP-510 exam


Auto Deploy

  • vSphere Auto Deploy installs the ESXi image directly into Host memory.

  • By default, hosts deployed with VMware Auto Deploy store logs in memory.

  • When deploying hosts with VMware Auto Deploy, Host Profiles is the recommended method to configure ESXi once it has been installed.

  • Benefits of auto deploy are:

  • Decouples the VMware ESXi host from the physical server
    Eliminates the boot disk
    Eliminates configuration drift
    Simplifies patching and updating

  • VMware Auto Deploy Installation is the quickest possible way to deploy > 10 ESXi hosts.

  • Interactive Installation is recommended install method to evaluate vSphere 5 on a small ESXi host setup.

  • The vSphere PowerCLI Image Builder cmdlets define the image profiles used with Auto Deploy.

  • Three ways that vSphere Auto Deploy can access the answer file are:

  • CIFS.
    SFTP.
    HTTP.

vCenter Server /ESXi Upgrades

  • vSphere 5.0 supports the following upgrade scenarios.
  • You can perform in-place upgrades on 64-bit systems from vCenter Server 4.x to vCenter Server 5.0.
    You cannot upgrade an instance of vCenter Server 4.0.x that is running on Windows XP Professional x64 Edition.

  • You can upgrade VirtualCenter 2.5 Update 6 and later and vCenter Server 4.x to vCenter Server 5.0 by installing vCenter Server 5.0 on a new 64-bit operating system and migrating the existing database. This upgrade method makes it possible to upgrade from a 32-bit system to a 64-bit system.

  • vCenter Server 5.0 can manage ESXi 5.0 hosts in the same cluster with ESX/ESXi 4.x and ESX/ESXi 3.5 hosts. It can also manage ESX/ESXi 3.5 hosts in the same cluster with ESX/ESXi 4.x hosts.

  • vCenter Server 5.0 cannot manage ESX 2.x or 3.0.x hosts.

  • vSphere 5.0 provides the following tools for upgrading ESX/ESXi hosts.
  • vSphere Update Manager. If your site uses vCenter Server, use vSphere Update Manager to perform an orchestrated host upgrade or an orchestrated VM upgrade.

  • Upgrade interactively using an ESXi installer ISO image on CD-ROM or DVD. This method is appropriate for upgrading a small number of hosts.

  • Perform a scripted upgrade. You can upgrade or migrate from ESXi/ESX 4.x hosts to ESXi 5.0 by running an update script, which provides an efficient, unattended upgrade. Scripted upgrades also provide an efficient way to deploy multiple hosts. You can use a script to upgrade ESXi from a CD-ROM or DVD drive, or by PXE-booting the installer.

HA

  • In an HA cluster, after an initial election process, a host is either a Master or a Slave.

  • An HA Slot is a logical representation of the memory and CPU resources that satisfy the requirements for any/largest powered-on VM in the cluster.

  • The four VM Restart Priority options available on an HA cluster are:

  • Disabled
    Low
    Medium
    High

  • The three Host Isolation Response options available on an HA cluster are:

  • Shut down

  • Power off

  • Leave powered on

  • If the Admission Control option Disable (allows VM power on operations that violate availability constraints) is selected, then only VMs with a high restart priority are restarted on surviving ESXi hosts.

Licensing

  • VMware vSphere can be evaluated for 60 days prior to purchase.

  • The free vSphere Hypervisor is limited to 32GB of physical RAM per host

  • If more vRAM is allocated than licensed for, new VMs cannot be powered on

  • Licensing – Entitlements per CPU license

  • 32GB vRAM, 8-way vCPU for Essentials, Essentials Plus, Standard
    64GB vRAM, 8-way vCPU for Enterprise
    96GB vRAM, 32-way vCPU for Enterprise Plus

  • Licensing – Features

  • Essentials Plus: High Availability, Data Recovery, vMotion
    Standard: as above
    Enterprise: + Virtual Serial Port Concentrator, Hot Add, vShield Zones, Fault Tolerance, Storage APIs for Array Integration, Storage vMotion, Distributed Resource Scheduler & Distributed Power Management
    Enterprise Plus: + Distributed Switch, I/O Controls (Network and Storage), Host Profiles, Auto Deploy, Profile-Driven Storage, Storage DRS

Memory

  • VMX Swap can be used to reduce VM memory overhead.

  • VMX Swap Files
    VM executable (VMX) swap files allow the host to greatly reduce the amount of overhead memory reserved for the VMX process. Note: VMX swap files are not related to the swap to host cache feature or to regular host-level swap files. ESXi reserves memory per VM for a variety of purposes. Memory for the needs of certain components, such as the VM monitor (VMM) and virtual devices, is fully reserved when a VM is powered on. However, some of the overhead memory that is reserved for the VMX process can be swapped. The VMX swap feature reduces the VMX memory reservation significantly (for example, from about 50MB or more per VM to about 10MB per VM). This allows the remaining memory to be swapped out when host memory is overcommitted, reducing overhead memory reservation for each VM. The host creates VMX swap files automatically.

  • Memory allocation – memory limit is the amount of VM memory that will always be composed of disk pages.

  • (v)NUMA is (virtual) Non-Uniform Memory Access (a computer memory design used in multiprocessors where the memory access time depends on the memory location relative to a processor.)

  • vNUMA is enabled by default when a VM has more than 8 vCPUs.

  • Disabling transparent memory page sharing increases resource contention.

  • For maximum performance benefit from vNUMA, it is recommended that clusters are composed entirely of hosts with matching NUMA architecture.

  • Memory reservation is the amount of physical memory that is guaranteed to the VM.

  • Resource Allocation tab definitions:

  • Host memory usage is the amount of physical host memory allocated to a guest (includes virtualisation overhead.)

    Guest memory usage is the amount of memory actively used by a guest operating system and its applications.

  • Three metrics to diagnose a memory bottleneck at the ESXi host level:
  • MEMSZ
    MEMCTL
    SWAP

  • VM Memory Overhead is determined by Configured Memory and Number of vCPUs.

Miscellaneous

  • New features made available with vSphere 5 are:

  • Storage Distributed Resource Scheduler (Storage DRS), which performs automatic storage I/O load balancing

    Virtual NUMA, allowing guests to make efficient use of hardware NUMA architecture

    Memory compression, which can reduce the need for host-level swapping

    Swap to host cache, which can dramatically reduce the impact of host-level swapping

    SplitRx mode, which improves network performance for certain workloads

    VMX swap, which reduces per-VM memory reservation

    Multiple vMotion vmknics, allowing for more and faster vMotion operations

  • Via the Direct Console, it is possible to:

    Configure Host DNS
    Shutdown host
    View host logs (DCUI)

  • Via the Direct Console, it is NOT possible to: Enter host into Maintenance Mode
  • Image Builder is used to create ESXi installation images with a customized set of updates, patches, and drivers.

  • Packaging format used by the VMware ESXi Image Builder is VIB

  • By default, the Administrator role at the ESX Host Server level is assigned to root and vpxuser

  • Distributed Power Management (DPM) requires Wake On LAN (WOL) technology on host NIC

  • ESXi 5.0 supports only CPUs with the LAHF and SAHF instructions (no old Xeons)

  • The three default roles provided on an ESXi host are:

  • No Access
    Read Only
    Administrator

  • ESXi Dump Collector is a new feature of vSphere 5

  • The three DRS Automation Levels are:
  • Manual
    Partially Automated
    Fully Automated

  • ESXi 5.0 introduces Virtual Hardware VM Version 8

  • To disable alarm actions for a DRS cluster while maintenance is taking place:

    ‘Right-Click the DRS cluster, select Alarm → ‘Disable Alarm Actions.’

  • Three valid objects to place in a vApp

  • Resource pools
    vApps
    VMs

  • Required settings for an ESXi host upgrade script file: root password & IP address.

  • vMotion needs RDM boot mapping files to be placed on the same datastore

  • Storage vMotion cannot be used with RDMs using NPIV

  • Conditions that would stop a VM restarting in the event of a host failure in an HA cluster

  • Anti-affinity rule configured where restarting the VMs would place them on the same host.
    The VMs on the failed host are HA disabled.

  • The VMkernel is secured by the features

  • memory hardening
    kernel module integrity

  • VMware vCloud Director pools virtual infrastructure resources in datacenters and delivers them to users as a catalog-based service.

  • Two ways to enable remote tech support mode (SSH) on an ESXi 5.x host:

  • Through the Security Profile pane of the Configuration tab in the vSphere Client
    Through the Direct Console User Interface (DCUI)

  • %WAIT metric is checked to determine if CPU contention exists on an ESXi 5.x host.

  • Quiescing VM snapshot operation:

  • Requires VMware tools.
    Ensures that the snapshot includes a power state.
    May alter the behaviour of applications within the VM.
    Ensures all pending disk I/O operations are written to disk.

  • Image Profile Acceptance Levels

  • Community Supported
    Partner Supported
    VMware Accepted
    VMware Certified (most stringent requirements)

  • Each VMware Data Recovery (VDR) appliance can have no more than two dedupe destinations, and it is recommended that each dedupe destination is no more than 1TB in size when using virtual disks, and no more than 500GB in size when using a CIFS network share.

Networking – General

  • ESX 4.X to ESXi 5.0 upgrade removes the “Service Console” port group because ESXi 5.0 has no Service Console.

  • SplitRX can be used to increase network throughput to VMs.

  • The default security policy settings on a Port Group are:

  • Promiscuous Mode – Reject
    MAC Address Changes – Accept
    Forged Transmits – Accept

  • ESX 4.X to ESXi 5.0 upgrade process migrates all vswif interfaces to vmk interfaces.

  • SSH configuration is not migrated for ESX/ESXi 4.x hosts (SSH access is disabled during the migration or upgrade process.)

  • Custom ports that were opened by using the ESX/ESXi 4.1 esxcfg-firewall command do not remain open after upgrade to ESXi 5.0.

  • A firewall has been added to ESXi 5.0 to improve security (a quick CLI sketch follows below).
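
  • For reference, the new firewall can be poked at from the ESXi shell; a minimal sketch (sshServer is one of the default ruleset names):

    esxcli network firewall ruleset list
    esxcli network firewall ruleset set --ruleset-id=sshServer --enabled=true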

  • vSphere Standard Switch Traffic Shaping settings:

  • Status – Disabled/Enabled
    Average Bandwidth (Kbit/sec)
    Peak Bandwidth (Kbit/sec)
    Burst Size (Kbytes)

  • To relieve a network bottleneck caused by a VM with occasional high outbound network activity, apply traffic shaping to the port group that contains the VM.

  • NIC Teaming policy: Notify Switches → the physical switch is notified when the location of a virtual NIC changes.

  • A remote SSH connection to a newly installed ESXi 5.x host fails, possible causes:

  • The SSH service is disabled on the host by default.
    The ESXi firewall blocks the SSH protocol by default.

  • Forged transmits: allows packets to be created by a VM with a different source MAC address.

  • When a new uplink is added to a vSphere Standard Switch configured with IP-based load balancing, by default the uplink is considered active/active but it will not participate in the active NIC team until assigned to a port group.

  • To verify all IP storage VMkernel interfaces are configured for jumbo frames, either (see the sketch after this list):

  • esxcli network ip interface list.
    View the VMkernel interface properties in the vSphere Client.
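
  • For example (a sketch; vmk1 is an assumed VMkernel interface name), the MTU column of the list output should read 9000, and it can be set like this:

    esxcli network ip interface list
    esxcli network ip interface set --interface-name=vmk1 --mtu=9000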

  • Map view indicates vMotion is disabled => vMotion has not been enabled on a VMkernel port group.

  • ESXi Host → Configuration Tab → Network Adapters : Headings =

  • Device, Speed, Configured, Switch, MAC Address, Observed IP, WOL Supported

  • If you create a portgroup and assign it to VLAN 4095, it will have access to all the VLANs that are exposed to the physical NIC (a special driver is needed within the VM that can properly tag VLANs.)

Networking – vDS

  • Port binding options:

  • Static binding – assign a port to a VM when the VM connects to the distributed port group. This option is not available when the vSphere Client is connected directly to ESXi.
    Dynamic binding – assign a port to a VM the first time the VM powers on after it is connected to the distributed port group. Dynamic binding is deprecated in ESXi 5.0.
    Ephemeral for no port binding. This option is not available when the vSphere Client is connected directly to ESXi.

  • Network Load Balancing policies for a vSphere Distributed Switch (vDS):

  • Route based on originating virtual port;
    Route based on IP hash (good for etherchannel)
    Route based on source MAC hash;
    Route based on physical NIC load (only vDS)
    Use explicit failover order

  • Requirements for a collector VM to analyze traffic from a vDS:

  • The source and target VMs must both be on a vNetwork Switch, but can be on any vDS in the datacenter.
    The distributed port group or distributed port must have NetFlow enabled.

  • Two methods to migrate a VM from a VSS to a vDS

  • Migrate the port group containing the VM from a vNetwork Standard Switch using the Migrate VM Networking option.
    Edit the Network Adapter settings for the VM and select a dvPortgroup from the list.

  • Features only available when using a vDS:

  • NetFlow monitoring.

  • Network I/O control.
    Egress and ingress traffic shaping.
    New feature: vDS – improves visibility of virtual-machine traffic through NetFlow and enhances monitoring and troubleshooting through Switched Port Analyzer (SPAN) and Link Layer Discovery Protocol (LLDP) support.

Networking – Ports

  • 22: SSH

  • 80: vCenter Server requires port 80 for direct HTTP connections. Port 80 redirects requests to HTTPS port 443

  • 389: This port must be open on the local and all remote instances of vCenter Server. This is the LDAP port number for the Directory Services for the vCenter Server group. The vCenter Server system needs to bind to port 389, even if you are not joining this vCenter Server instance to a Linked Mode group.

  • 443: The default port that the vCenter Server system uses to listen for connections from the vSphere Client. The vCenter Server system also uses port 443 to monitor data transfer from SDK clients.

  • 636: For vCenter Server Linked Mode, this is the SSL port of the local instance.

  • 902: The default port that the vCenter Server system uses to send data to managed hosts. Managed hosts also send a regular heartbeat over UDP port 902 to the vCenter Server system. This port must NOT be blocked by firewalls between the server and the hosts or between hosts. Also must NOT be blocked between the vSphere Client and the hosts. The vSphere Client uses this port to display VM consoles.

  • 8080: Web Services HTTP. Used for the VMware VirtualCenter Management Web Services.

  • 8443: Web Services HTTPS. Used for the VMware VirtualCenter Management Web Services.

  • 10109: vCenter Inventory Service Service Management

  • 10111: vCenter Inventory Service Linked Mode Communication

  • 10443: vCenter Inventory Service HTTPS

  • 60099: Web Service change service notification port

Storage

  • vStorage Thin Provisioning feature provides dynamic allocation of storage capacity

  • Upgrade from VMFS-3 to VMFS-5 requires no downtime

  • VMFS-5 is introduced by vSphere 5

  • The globally unique identifier assigned to each Fibre Channel port is the World Wide Name (WWN)

  • The NFS protocol is used by an ESXi host to communicate with NAS devices

  • It is now possible in vSphere 5 to Storage vMotion VMs that have snapshots

  • Two iSCSI discovery methods supported by an ESXi host =

  • Static Discovery,

  • SendTargets

  • VMFS-5 upgraded from VMFS-3 continues to use the previous file block size, which may be larger than the unified 1MB file block size

  • Shared local storage is not a supported location for a host diagnostic partition

  • Create diagnostics partition:

esxcli [connection_options] system coredump partition set --partition="/vmfs/devices/disks/mpx.vmhba2:C0:T0:L0:7"
esxcli [connection_options] system coredump partition set --enable=true

  • To automatically select and activate an accessible diagnostic partition, use the command:
    esxcli [connection_options] system coredump partition set --enable=true --smart
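
  • For completeness, the matching get subcommand shows which partition ended up configured and active:

    esxcli [connection_options] system coredump partition get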

  • The VMware HCL lists the correct MPP (Multipathing Plugin) to use with a storage array

  • To guarantee a certain level of capacity, performance, availability, and redundancy for a VM's storage, use the Profile-Driven Storage feature of vSphere 5

  • Use sDRS (storage DRS) to ensure storage is utilized evenly

  • If an ESXi 5.x host is configured to boot from the software iSCSI adapter and the administrator disables the iSCSI software adapter, then it will be disabled but re-enabled the next time the host boots up.

  • An array that supports vStorage APIs for Array Integration (VAAI) can directly perform:

  • Cloning VMs and templates

  • Migrating VMs using Storage vMotion.

  • The VAAI thin provisioning dead space reclamation feature can reclaim blocks on a thin-provisioned LUN array:

  • When a VM is migrated to a different datastore

  • When a virtual disk is deleted

  • Manage Paths can disable a path by right-clicking it and selecting Disable.
    A preferred path selection can only be made with the Fixed ‘Path Selection’ type (not possible with ‘Round Robin’ or ‘Most Recently Used’ types.)

  • Information about a VMFS datastore available via the Storage Views tab includes Multipathing Status, Space Used, Snapshot Space

  • Two benefits of virtual compatibility mode RDMs vs physical compatibility mode RDMs:

  • Allows for cloning.

  • Allows for template creation of the related VM.

  • To uplink a Hardware FCoE Adapter, create a vSphere Standard Switch and add the FCoE Adapter as an uplink.

  • Three Storage I/O Control conditions that might trigger the ‘non-VI workload detected on the datastore’ alarm:

  • The datastore is on an array that is performing system tasks such as replication.

  • The datastore is utilizing active/passive multipathing or NMP (Native Multi-Pathing.)

  • The datastore is storing VMs with one or more snapshots.

  • The Software iSCSI Adapter and Dependent Hardware iSCSI Adapter require one or more VMkernel ports.

  • Unplanned Device Loss in a vSphere 5 environment is a condition where an ESXi host determines a device loss has occurred that was not planned. To resolve unplanned device loss:

  • Unmount any related datastores, and

  • Perform a storage rescan to remove persistent information related to the device.

  • Manual storage rescan:

  • Perform the manual rescan each time you make one of the following changes.

  • Zone a new disk array on a SAN.

  • Create new LUNs on a SAN.

  • Change the path masking on a host.

  • Reconnect a cable.

  • Change CHAP settings (iSCSI only).

  • Add or remove discovery or static addresses (iSCSI only).

  • Add a single host to the vCenter Server after you have edited or removed from the vCenter Server a datastore shared by the vCenter Server hosts and the single host.

  • To convert a thin-provisioned disk to thick, either use the Inflate option in the Datastore Browser or use Storage vMotion and change the disk type to Thick (a CLI sketch follows below).
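
  • From the ESXi shell, vmkfstools can also inflate a thin disk in place; a rough sketch (the datastore path is made up):

    vmkfstools --inflatedisk /vmfs/volumes/datastore1/myvm/myvm.vmdk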

  • To manage storage placement by using VM profiles with a storage array that supports vSphere Storage APIs (Storage Awareness):

  • Create user-defined storage capabilities.

  • Associate user-defined storage capabilities with datastores.

  • Enable VM storage profiles for a host or cluster.

  • Create VM storage profiles by defining the storage capabilities that an application running on a VM requires.

  • Associate a VM storage profile with the VM files or virtual disks.

  • Verify that VMs and virtual disks use datastores that are compliant with their associated VM storage profile

Update Manager

  • Default Hosts upgrade baselines included with vSphere Update Manager:

  • Critical Host Patches,

  • Non-Critical Host Patches.

  • Default VMs/VAs upgrade baselines included with vSphere Update Manager:

  • VMware Tools Upgrade to Match Host,

  • VM Hardware Upgrade to Match Host,

  • VA Upgrade to Latest.

  • vSphere 5 vCenter Update Manager cannot update VM hardware when running against legacy hosts (old versions).

  • Update Manager cannot update the vCenter Server Appliance.

vCenter

  • vCenter Server 5 requires a 64 Bit DSN.

  • vCenter Heartbeat product provides high availability for vCenter server.

  • vCenter requires a valid (internal) domain name system (DNS) registration.

  • vCenter 4.1 and vCenter 5.0 cannot be joined with Linked-Mode.

  • The VMware vSphere Storage Appliance manager (VSA Manager) is installed on the vSphere 5 vCenter Server system.

  • Optional components that can be installed from the VMware vSphere 5.0 vCenter Installer (Product Installers):

  • vSphere Client,

  • VMware vSphere Web Client (Server),

  • VMware vSphere Update Manager,

  • VMware ESXi Dump Collector,

  • VMware Syslog Collector,

  • VMware Auto Deploy,

  • VMware vSphere Authentication Proxy,

  • vCenter Host Agent Pre-Upgrade Checker.

  • Predefined vCenter Server roles are:

  • No access,

  • Read only,

  • Administrator (there are also 6 sample roles.)

  • To export ESXi 5.x host diagnostic information / logs from a host managed by a vCenter Server instance using the vSphere Client:

  • Home → Administration → System Logs → Export System Logs → Source: select the ESXi host → Select System Logs: Select all → Select a Download Location → Finish.

  • Under ‘Hosts and Clusters’ view select the ESXi host → File → Export → Export System Logs → Select System Logs: Select all → Select a Download Location → Finish.

  • vCenter Server Sizing

  • Medium deployment of up to 50 hosts and 500 powered-on VMs: 2 cores, 4GB RAM, 5GB disk

  • Large deployment of up to 300 hosts and 3000 powered-on VMs: 4 cores, 8GB RAM, 10GB disk

  • Extra-Large deployment of up to 1000 hosts and 10,000 powered-on VMs: 8 cores, 16GB RAM, 10GB disk

Configuration Maximums

1: vSphere 5 Compute Configuration Maximums

1 = Maximum amount of virtual CPUs per Fault Tolerance protected VM

4 = Maximum Fault Tolerance protected VMs per ESXi host

16 = Maximum amount of virtual disks per Fault Tolerance protected VM

25 = Maximum virtual CPUs per core

160 = Maximum logical CPUs per host

512 = Maximum VMs per host

2048 = Maximum virtual CPUs per host

64GB = Maximum amount of RAM per Fault Tolerance protected VM

2: vSphere 5 Memory Configuration Maximums

1 = Maximum number of swap files per VM

1TB = Maximum swap file size

2TB = Maximum RAM per host

3: vSphere 5 Networking Configuration Maximums

2 = Maximum forcedeth 1Gb Ethernet ports (NVIDIA) per host

4 = Maximum concurrent vMotion operations per host (1Gb/s network)

8 = Maximum concurrent vMotion operations per host (10Gb/s network)

8 = Maximum VMDirectPath PCI/PCIe devices per host

8 = Maximum nx_nic 10Gb Ethernet ports (NetXen) per host

8 = Maximum ixgbe 10Gb Ethernet ports (Intel) per host

8 = Maximum be2net 10Gb Ethernet ports (Emulex) per host

8 = Maximum bnx2x 10Gb Ethernet ports (Broadcom) per host

16 = Maximum bnx2 1Gb Ethernet ports (Broadcom) per host

16 = Maximum igb 1Gb Ethernet ports (Intel) per host

24 = Maximum e1000e 1Gb Ethernet ports (Intel PCI-e) per host

32 = Maximum tg3 1Gb Ethernet ports (Broadcom) per host

32 = Maximum e1000 1Gb Ethernet ports (Intel PCI-x) per host

32 = Maximum distributed switches (VDS) per vCenter

256 = Maximum Port Groups per Standard Switch (VSS)

256 = Maximum ephemeral port groups per vCenter

350 = Maximum hosts per VDS

1016 = Maximum active ports per host (VSS and VDS ports)

4088 = Maximum virtual network switch creation ports per standard switch (VSS)

4096 = Maximum total virtual network switch ports per host (VSS and VDS ports)

5000 = Maximum static port groups per vCenter

30000 = Maximum distributed virtual network switch ports per vCenter

6x10Gb + 4x1Gb = Maximum combination of 10Gb and 1Gb Ethernet ports per host

4: vSphere 5 Orchestrator Configuration Maximums

10 = Maximum vCenter server systems connected to vCenter Orchestrator

100 = Maximum hosts connected to vCenter Orchestrator

150 = Maximum concurrent running workflows

15000 = Maximum VMs connected to vCenter Orchestrator

5: vSphere 5 Storage Configuration Maximums

2 = Maximum concurrent Storage vMotion operations per host

4 = Maximum Qlogic 1Gb iSCSI HBA initiator ports per server

4 = Maximum Broadcom 1Gb iSCSI HBA initiator ports per server

4 = Maximum Broadcom 10Gb iSCSI HBA initiator ports per server

4 = Maximum software FCoE adapters

8 = Maximum non-vMotion provisioning operations per host

8 = Maximum concurrent Storage vMotion operations per datastore

8 = Maximum number of paths to a LUN (software iSCSI and hardware iSCSI)

8 = Maximum NICs that can be associated or port bound with the software iSCSI stack per server

8 = Maximum number of FC HBAs of any type

10 = Maximum VASA (vSphere storage APIs – Storage Awareness) storage providers

16 = Maximum FC HBA ports

32 = Maximum number of paths to a FC LUN

32 = Maximum datastores per datastore cluster

62 = Maximum Qlogic iSCSI: static targets per adapter port

64 = Maximum Qlogic iSCSI: dynamic targets per adapter port

64 = Maximum hosts per VMFS volume

64 = Maximum Broadcom 10Gb iSCSI dynamic targets per adapter port

128 = Maximum Broadcom 10Gb iSCSI static targets per adapter port

128 = Maximum concurrent vMotion operations per datastore

255 = Maximum FC LUN Ids

256 = Maximum VMFS volumes per host

256 = Maximum datastores per vCenter

256 = Maximum targets per FC HBA

256 = Maximum iSCSI LUNs per host

256 = Maximum FC LUNs per host

256 = Maximum NFS mounts per host

256 = Maximum software iSCSI targets

1024 = Maximum number of total iSCSI paths on a server

1024 = Maximum number of total FC paths on a server

2048 = Maximum Powered-On VMs per VMFS volume

2048 = Maximum virtual disks per host

9000 = Maximum virtual disks per datastore cluster

30,720 = Maximum files per VMFS-3 volume

130,690 = Maximum files per VMFS-5 volume

1MB = Maximum VMFS-5 block size (non upgraded VMFS-3 volume)

8MB = Maximum VMFS-3 block size

256GB = Maximum file size (1MB VMFS-3 block size)

512GB = Maximum file size (2MB VMFS-3 block size)

1TB = Maximum file size (4MB VMFS-3 block size)

2TB – 512 bytes = Maximum file size (8MB VMFS-3 block size)

2TB – 512 bytes = Maximum VMFS-3 RDM size

2TB – 512 bytes = Maximum VMFS-5 RDM size (virtual compatibility)

64TB = Maximum VMFS-3 volume size

64TB = Maximum FC LUN size

64TB = Maximum VMFS-5 RDM size (physical compatibility)

64TB = Maximum VMFS-5 volume size

6: vSphere 5 Update Manager Configuration Maximums

1 = Maximum ESXi host upgrades per cluster

24 = Maximum VMware tools upgrades per ESXi host

24 = Maximum VMs hardware upgrades per host

70 = Maximum VUM Cisco VDS updates and deployments

71 = Maximum ESXi host remediations per VUM server

71 = Maximum ESXi host upgrades per VUM server

75 = Maximum VMs hardware scans per VUM server

75 = Maximum VM hardware upgrades per VUM server

75 = Maximum VMware Tools scans per VUM server

75 = Maximum VMware Tools upgrades per VUM server

75 = Maximum ESXi host scans per VUM server

90 = Maximum VMware Tools scans per ESXi host

90 = Maximum VMs hardware scans per host

1000 = Maximum VUM host scans in a single vCenter server

10000 = Maximum VUM VMs scans in a single vCenter server

7: vSphere 5 vCenter Server, and Cluster and Resource Pool Configuration Maximums

100% = Maximum failover as percentage of cluster

8 = Maximum resource pool tree depth

32 = Maximum concurrent host HA failover

32 = Maximum hosts per cluster

512 = Maximum VMs per host

1024 = Maximum children per resource pool

1600 = Maximum resource pool per host

1600 = Maximum resource pool per cluster

3000 = Maximum VMs per cluster

8: vSphere 5 VM Configuration Maximums

1 = Maximum IDE controllers per VM

1 = Maximum USB 3.0 devices per VM

1 = Maximum USB controllers per VM

1 = Maximum Floppy controllers per VM

2 = Maximum Floppy devices per VM

3 = Maximum Parallel ports per VM

4 = Maximum IDE devices per VM

4 = Maximum Virtual SCSI adapters per VM

4 = Maximum Serial ports per VM

4 = Maximum VMDirectPath PCI/PCIe devices per VM (or 6 if 2 of them are Teradici devices)

10 = Maximum Virtual NICs per VM

15 = Maximum Virtual SCSI targets per virtual SCSI adapter

20 = Maximum xHCI USB controllers per VM

20 = Maximum USB device connected to a VM

32 = Maximum Virtual CPUs per VM (Virtual SMP)

40 = Maximum concurrent remote console connections to a VM

60 = Maximum Virtual SCSI targets per VM

60 = Maximum Virtual Disks per VM (PVSCSI)

128MB = Maximum Video memory per VM

1TB = Maximum VM swap file size

1TB = Maximum RAM per VM

2TB – 512B = Maximum VM Disk Size
