Sunday, 18 December 2016

Differences from vSphere 5.5 to 6.5

| Feature | vSphere 5.5 | vSphere 6.0 | vSphere 6.5 |
|---|---|---|---|
| Release date | September 2013 | March 2015 | November 2016 |
| Physical CPUs per host | 320 | 480 | 576 |
| Physical RAM per host | 4 TB | 12 TB | 12 TB |
| VMs per host | 512 | 1,024 | 1,024 |
| vCPUs per VM | 64 | 128 | 128 |
| vRAM per VM | 1 TB | 4 TB | 6 TB |
| VMDK size | 62 TB | 62 TB | 62 TB |
| Cluster size | 32 hosts | 64 hosts | 64 hosts |
| High Availability | Reactive HA | Reactive HA | Proactive HA |
| vSphere Integrated Containers | No | No | Yes |
| VM hardware version | 10 | 11 | 13 |
| VMFS version | 5.6 | 5.61 | 6.81 |
| Management | vSphere Web Client (C#) | vSphere Web Client (C#) | vSphere Web Client and HTML5 client |
| Authentication management | Single Sign-On 5.5 | Platform Services Controller | Platform Services Controller |
| vMotion | Restricted to the Datacenter object | Across vCenters, across vSwitches | Across vCenters, across vSwitches, Cross-Cloud vMotion |
| vMotion network support | L2 network, max. 10 ms RTT | L3 network, max. 100 ms RTT | L3 network, max. 100 ms RTT |
| Windows to vCSA migration | vCenter 5.5 to vCSA 6.0 | vCenter 5.5 to vCSA 6.0 | vCenter 5.5 or 6.0 to vCSA 6.5 |
| REST API | No | No | Yes |
| Content Library | No | Yes | Yes |
| VM encryption | No | No | Yes |
| Certificate Authority (VMCA) | No | Yes | Yes |
| Virtual Volumes | No | Virtual Volumes 1.0 | Virtual Volumes 2.0 |
| Virtual SAN | VSAN 5.5 | VSAN 6.0 / 6.1 / 6.2 | VSAN 6.5 |
| vCenter type | Windows or Linux vCSA | Windows or Linux vCSA | Windows or Linux vCSA |
| vCenter HA | No | No | Yes |
| vCenter native backup | No | No | Yes |
| vCSA scale (vPostgres) | 100 hosts / 3,000 VMs | 1,000 hosts / 10,000 VMs | 2,000 hosts / 25,000 VMs |
| vCSA operating system | SUSE Linux Enterprise | SUSE Linux Enterprise | Photon OS |
| vCenter Linked Mode | Windows only (Microsoft ADAM replication) | Windows & vCSA (native replication) | Windows & vCSA (native replication) |
| All-Flash VSAN | No | Yes | Yes |
| VSAN scale | 32 nodes | 64 nodes | 64 nodes |
| VSAN fault domains | No | Yes | Yes |
| FT supported features | HA, DPM, SRM, VDS | HA, DPM, SRM, VDS, hot configure FT, hardware virtualization, snapshots, paravirtual devices, storage redundancy | All of the 6.0 list, plus improved DRS integration, host-level network latency reduction, multiple NIC aggregation |
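If you need to confirm which of these versions a given host and datastore are actually running, the ESXi shell gives a quick check. A minimal sketch (datastore1 is just a placeholder name):

vmware -v                                  # prints the ESXi version and build number
vmkfstools -Ph /vmfs/volumes/datastore1    # the first line of output shows the VMFS version of that datastore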

Saturday, 10 September 2016

VMWARE VSPHERE 6.X – DEPLOYMENT ARCHITECTURE POINTS

The first thing to do in a vSphere 6.x deployment is to understand the new deployment architecture options available on the vSphere 6.0 platform, which differ somewhat from previous versions of vSphere. The points below highlight the key information but are not a complete guide to all the changes; for that, I’d advise you to refer to the official vSphere documentation (found here).

Deployment Architecture

The deployment architecture for vSphere 6 is somewhat different from the legacy versions. I’m not going to document all of the architectural differences (please refer to the VMware product documentation for vSphere 6), but I will mention a few of the key ones which I think are important, in the bullet points below.
  • vCenter Server – Consists of 2 key components
    • Platform Service Controller (PSC)
      • The PSC includes the following components
        • SSO
        • vSphere Licensing Server
        • VMCA – VMware Certificate Authority (a built-in SSL certificate authority to simplify certificate provisioning to all VMware products including vCenter, ESXi, vRealize Automation….etc. The idea is that you associate this with your existing enterprise root CA or a subordinate CA, such as a Microsoft CA, and point all VMware components at this.)
      • PSC can be deployed as an appliance or on a windows machine
    • vCenter Server
      • Appliance (vCSA) – Includes the following services (a quick way to list these on a deployed appliance is shown after the note below)
        • vCenter Inventory server
        • PostgreSQL
        • vSphere Web Client
        • vSphere ESXi Dump collector
        • Syslog collector
        • Syslog Service
        • Auto Deploy
      • A Windows version is also available.
Note: ESXi remains the same as before without any significant changes to its core architecture or the installation process.
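On a deployed vCSA or PSC appliance you can see which of these services exist and whether they are running. A minimal sketch from the appliance shell (service names differ slightly between 6.0 and 6.5):

service-control --status --all        # lists all VMware services on the appliance and their current state
service-control --start vmware-vpxd   # example: start the vCenter Server (vpxd) service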

Deployment Options

Noted below are the deployment options that I will be using in the subsequent sections to deploy vSphere 6 U1, as they represent the choices most commonly adopted in enterprise deployments.
  • Platform Services Controller Deployment
    • Option 1 – Embedded with vCenter
      • Only suitable for small deployments
    • Option 2 – External – Dedicated separate deployment of PSC to which external vCenter(s) will connect
      • Single PSC instance or a clustered PSC deployment consisting of multiple instances is supported
      • 2 options supported here.
        • Deploy an external PSC on Windows
        • Deploy an external PSC using the Linux based appliance (note that this option involves deploying the same vCSA appliance but during deployment, select the PSC mode rather than vCenter)
    • The PSC needs to be deployed first, followed by the vCenter deployment, as concurrent deployment of both is NOT supported!
  • vCenter Server Deployment – The vCenter deployment architecture consists of 2 choices
    • Windows deployment
      • Option 1: with a built-in PostgreSQL database
        • Only supported for small – medium sized environments (20 hosts or 200 VMs)
      • Option 2: with an external database system
        • Only external database system supported is Oracle (no more SQL databases for vCenter)
      • This effectively means that you are now advised (indirectly, in my view) to always deploy the vCSA version as opposed to the Windows version of vCenter, especially since the feature gap between the vCSA and Windows vCenter versions has now been bridged
    • vCSA (appliance) deployment
      • Option 1: with a built-in PostgreSQL DB (a CLI installer sketch for this route is shown after this list)
        • Supported for up to 1000 hosts and 10,000 VMs (This I reckon would be the most common deployment model now for vCSA due to the supported scalability and the simplicity)
      • Option 2: with an external database system
        • As with the Windows version, only Oracle is supported as an external DB system
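For the appliance route, both the PSC and the vCSA can also be deployed unattended with the CLI installer that ships on the vCSA ISO. The sketch below is illustrative only: my-psc.json is a placeholder file name, the JSON template keys differ between 6.0 and 6.5, and additional flags (for example around CEIP acknowledgement or SSL verification) may be required depending on the version, so start from the sample templates in the vcsa-cli-installer directory on the ISO.

vcsa-deploy install --accept-eula my-psc.json    # deploys the PSC (or vCSA) appliance described by the JSON template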

PSC and vCenter deployment topologies

Certificate Concerns

  • VMCA is a complete certificate authority for all vSphere and related components, and the vSphere certificate issuing process is automated (it happens automatically when vCenter servers are added to the PSC and when ESXi hosts are added to vCenter).
  • For those who already have a Microsoft CA or a similar enterprise CA, the recommendation is to make the VMCA a subordinate CA so that all certificates allocated by the VMCA to the vSphere components carry the full certificate chain, all the way from your Microsoft root CA (i.e. Microsoft root CA cert -> subordinate CA cert -> VMCA root CA cert -> allocated cert for the vSphere components).
  • In order to achieve this, the following steps need to be followed in the listed order.
    • Install the PSC / Deploy the PSC appliance first
    • Use an existing root / enterprise CA (i.e. Microsoft CA) to generate a subordinate CA certificate for the VMCA and replace the default VMCA root certificate on the PSC.
      • To achieve this, follow the VMware KB articles listed here.
      • Once the certificate replacement is complete on the PSC, do follow the “Task 0” outlined here to ensure that the vSphere service registrations with the VMware Lookup Service are also updated. If not, you’ll have to follow “Task 1 – 4” to manually update the sslTrust parameter value for each service registration using the ls_update_certs.py script (available on the PSC appliance). Validating this here can save you a lot of headache down the line (a quick way to inspect the chain the PSC presents is shown after this list).
    • Now Install vCenter & point at the PSC for SSO (VMCA will automatically allocate appropriate certificates)
    • Add ESXi hosts (VMCA will automatically allocate appropriate certificates)
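A quick way to sanity-check the result is to look at the certificate chain the PSC actually presents. This is a minimal sketch and psc01.example.local is a placeholder host name; the menu-driven /usr/lib/vmware-vmca/bin/certificate-manager utility on the PSC is the tool intended for replacing the VMCA root with the subordinate (custom signing) certificate itself.

openssl s_client -connect psc01.example.local:443 -showcerts < /dev/null    # shows the full certificate chain presented by the PSC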

Key System Requirements

  • ESXi system requirements
    • Physical components
      • Need a minimum of 2 CPU cores per host (a quick way to check the physical specs from the ESXi shell is shown at the end of this section)
      • HCL compatibility (CPUs released after September 2006 only)
      • NX/XD bit enabled in the BIOS
      • Intel VT-x enabled
      • SATA disks will be considered remote (meaning, no scratch partition on SATA)
    • Booting
      • Booting from UEFI is supported
      • But no auto deploy or network booting with UEFI
    • Local Storage
      • Disks
        • The recommended size for booting from a local disk is 5.2GB (for VMFS and the 4GB scratch partition)
        • Supported minimum is 1GB
          • Scratch partition created on another local disk or RAMDISK (/tmp/ramdisk) – Not recommended to be left on ramdisk for performance & memory optimisation
      • USB / SD
        • Installer DOES NOT create scratch on these drives
        • Either creates the scratch partition on another local disk or ramdisk
        • 4GB or larger recommended (though min supported is 1GB)
          • Additional space used for the core dump
        • 16GB or larger is highly recommended
          • Prolongs the flash cell life
  • vCenter Server System Requirements
    • Windows version
      • Must be connected to a domain
      • Hardware
        • PSC – 2 CPUs / 2GB RAM
        • Tiny environment (10 hosts / 100 VMs) – 2 CPUs / 8GB RAM
        • Small (100 hosts / 1,000 VMs) – 4 CPUs / 16GB RAM
        • Medium (400 hosts / 4,000 VMs) – 8 CPUs / 24GB RAM
        • Large (1,000 hosts / 10,000 VMs) – 16 CPUs / 32GB RAM
    • Appliance version
      • Virtual Hardware
        • PSC – 2 CPUs / 2GB RAM
        • Tiny environment (10 hosts / 100 VMs) – 2 CPUs / 8GB RAM
        • Small (100 hosts / 1,000 VMs) – 4 CPUs / 16GB RAM
        • Medium (400 hosts / 4,000 VMs) – 8 CPUs / 24GB RAM
        • Large (1,000 hosts / 10,000 VMs) – 16 CPUs / 32GB RAM
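On an existing host, the physical side of these requirements (core count, installed memory) can be confirmed directly from the ESXi shell, for example:

esxcli hardware cpu global get    # reports CPU packages, cores and threads
esxcli hardware memory get        # reports the installed physical memory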

Saturday, 2 April 2016

VLAN tagging in VMware vSphere

Introduction
In a physical environment, all servers have dedicated physical NICs connected to a physical switch. VLANs in the physical world are usually controlled by setting the VLAN ID on the physical switch port and then setting the server’s IP address to correspond to that NIC’s VLAN.
But in a virtual environment, dedicating a physical NIC (pNIC) to each VM that resides on the host is not possible. In reality, a physical NIC of the ESXi host services many VMs, and these VMs may need to be connected to different VLANs. So the method of setting a VLAN ID on the physical switch port doesn’t work.
To counter this issue, 802.1Q VLAN tagging comes into the picture in the virtual environment.
Before digging deep into 802.1Q VLAN tagging, let’s understand how networking works in a virtual environment.
An ESXi host typically has more than one physical network adapter for redundancy, load balancing and segregation. The physical NICs (pNICs) are connected to physical switches, and these pNICs are in turn assigned to vSwitches that are created on each ESXi host. Connecting pNICs to vSwitches is referred to as an uplink connection. On the vSwitch we create different port groups, which are connected to the virtual NICs (vNICs) assigned to each VM on the host. Virtual machines can use any pNIC connected to a vSwitch, and this is determined by the load balancing policies, which define how pNICs are selected when routing traffic to and from a VM.
Shown below is a typical network in a virtual environment.
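On a real host you can see this wiring from the ESXi shell, for example:

esxcli network nic list               # lists the physical NICs (vmnics) in the host
esxcli network vswitch standard list  # shows each standard vSwitch with its uplinks and port groups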


Using the traditional VLAN method of assigning a single VLAN ID to a physical NIC does not work very well in virtual environments, because with this method all the VMs on a vSwitch would have to use the same VLAN ID. In most cases you need to route different VMs through different VLANs, so the traditional VLAN method is of little use in this scenario.
Another method you could use is to create a separate vSwitch for each VLAN, but if you had many VLANs you would need a great number of pNICs, and even modern servers come with a limited number of physical network adapters.
To overcome this situation, 802.1Q VLAN tagging is used.
How 802.1Q VLAN tagging for vSphere VLANs works
802.1Q VLAN tagging allows use of multiple VLANs on a single physical NIC. This capability can greatly reduce the number of pNICs needed in the host. Instead of having a separate pNIC for each VLAN, you can use a single NIC to connect to multiple VLANs. Tagging works by applying tags to all network frames to identify them as belonging to a particular VLAN.
Types of 802.1Q VLAN tagging in VMware vSphere
There are several methods for tagging vSphere VLANs, differentiated by where the tags are applied. Basically there are 3 types of tagging methods available in VMware vSphere. These are explained below:
1: Virtual Machine Guest Tagging (VGT) – With this mode, the 802.1Q VLAN trunking driver is installed inside the virtual machine. All the VLAN tagging is performed by the virtual machine using the trunking driver in the guest OS. Tags are preserved between the virtual machine networking stack and the external switch as frames are passed to and from the virtual switch. The vSwitch only forwards the packets from the virtual machine to the physical switch and does not perform any tagging operation.
Prerequisite for configuring VGT
1) The port group of the virtual machine should be configured with VLAN ID 4095.
2) The physical switch port connecting the uplink from the ESXi server should be configured as a trunk port.
How to configure VGT
To configure VGT, log in to your guest OS and select the network adapter for which you want to configure tagging. Open the properties of this adapter and click Configure in the pop-up window that opens. In the next window, select the Advanced tab, select VLAN from the list of configurable options, and specify the VLAN ID through which the traffic of this adapter needs to pass.
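On the vSphere side, the only change VGT needs is the port group VLAN ID of 4095 so that tags pass through to the guest untouched. A minimal sketch from the ESXi shell, with a placeholder port group name:

esxcli network vswitch standard portgroup set --portgroup-name="VGT-Trunk" --vlan-id=4095    # 4095 passes all VLAN tags through to the VM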






2: External Switch Tagging (EST) – In this mode, the physical switch does the VLAN tagging. The tag is appended when a packet arrives at a switch port and stripped away when a packet leaves a switch port toward the server.
Since the tagging is done at the physical switch, the virtual switch has no knowledge of it and you do not need to configure any VLAN at the port group level. The VM network packet is delivered to the physical switch without any tagging operation performed at the virtual switch level.





Note: There is one caveat with this approach. You can only have as many VLANs as there are physical NICs present/connected to your ESXi host.
Prerequisites for Configuring EST
1) Number of physical NICs = number of VLANs connected to the ESXi host
2) The physical switch port connecting the uplink from the ESXi host should be configured as an access port assigned to the specific VLAN.
3: Virtual Switch Tagging (VST) – In this mode, VLANs are configured on the port groups of the virtual switch. The vNIC of the virtual machine is then connected to the appropriate port group. The virtual switch port group tags all outbound frames and removes the tags from all inbound frames.
This approach reduces the number of physical NICs on the server by running all the VLANs over one physical NIC. Since fewer physical NICs are used, it also reduces the number of cables from the ESXi host to the physical switch.
Best practice is to use NIC teaming and keep 2 NICs for redundancy.
Prerequisite for configuring VST
The physical switch port connecting the uplink from the ESXi host should be configured as a trunk port.
VST mode is the one that is most commonly used for configuring VLANs in vSphere because it’s easier to configure and manage. It also eliminates the need to install a specific VLAN driver inside a virtual machine, and there is almost no performance impact from doing the tagging inside the virtual switches.
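For example, on a standard vSwitch the VLAN ID is set per port group, and this can be done from the ESXi shell as well as the client. The port group name and VLAN ID below are placeholders:

esxcli network vswitch standard portgroup set --portgroup-name="Production" --vlan-id=100    # tag the Production port group with VLAN 100
esxcli network vswitch standard portgroup list                                               # confirm the VLAN ID assigned to each port group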
You can consult the below table to determine which will be the best tagging policy in your environment



VMware NIC Teaming and Load Balancing Policies in virtual switch

NIC Teaming
In its simplest terms, NIC teaming means that we are taking multiple physical NICs on a given ESXi host and combining them into a single logical link that provides bandwidth aggregation and redundancy to a vSwitch. NIC teaming can be used to distribute load among the available uplinks of the team. A NIC teaming configuration can look as shown in the screenshot below:


There are several load balancing policies available for the virtual switch. These are discussed below:
1: Route Based on Originating virtual Port-ID: This is the default load balancing policy for a vSS or vDS. This policy doesn’t require any special configuration to be done at virtual switch level or physical switch level.
With this policy, when a NIC is added to a VM or a new VM is provisioned with a NIC and comes online, the VMkernel assigns a port ID to the virtual NIC of the VM. The vSwitch then decides which uplink (physical adapter) of the team will carry the outgoing traffic of that VM NIC by using a modulo function: the port ID of the VM NIC is divided by the total number of uplinks present in the team, and the remainder determines which uplink is used.
At a given time, a VM NIC can use only one uplink to send out its traffic. If that uplink fails, the traffic of the VM NIC is rerouted (failed over) to one of the remaining uplinks in the team. The uplink selected for a VM NIC can change if the VM changes its power state or is migrated using vMotion.
For better understanding consider the below example:
We have a virtual switch with a port group named Production. We have 4 virtual machines connected to this port group and 3 physical NICs connected to the virtual switch.


This policy works somewhat like round robin. VM-A will use vmnic1, VM-B will use vmnic2, VM-C will use vmnic3 and VM-D will use vmnic1 again. The virtual machines’ traffic is simply distributed over the available physical NICs.
2: Route Based on Source MAC Hash: This policy is similar to Route Based on Originating Port ID, with the difference that the vSwitch uses the MAC address of the VM NIC to determine the uplink responsible for carrying that VM NIC’s outgoing traffic.
With this policy too, a VM NIC can be assigned only one uplink to send traffic out at a given time, but failover is supported in case that uplink fails. This policy is available on both vSS and vDS.

3: Route Based on IP Hash: This is the only load balancing policy in which a VM NIC can send out traffic through more than one uplink at a given time. This policy requires a special configuration, i.e. an EtherChannel or port channel, on the physical switch.
There is one caveat with this policy: a VM NIC can utilise more than one uplink for outgoing traffic only when it is communicating with more than one destination IP. If a VM is doing one-to-one communication, i.e. communicating with only one destination IP, the traffic will not be shared among the uplinks and only one of them will be used to send the traffic out.

4: Route Based on Physical NIC Load: This load balancing policy is only available with a vDS and is by far the most intelligent policy for distributing load among the uplinks in a teamed environment.
The assignment of uplinks to VM NICs is still based on the originating port ID, but before assigning any uplink the vDS looks at the load on the physical adapters. The least loaded adapter is assigned to the VM NIC for sending out traffic. If an adapter that was previously lightly utilised suddenly becomes busy due to heavy network activity from a VM NIC, that VM NIC is moved to a different physical adapter so as to keep the load balanced across all uplinks as well as possible.
This load balancing policy uses an algorithm that performs a regular inspection of the load on the physical NICs every 30 seconds. When the utilisation of a particular physical uplink exceeds 75% over 30 seconds, the hypervisor moves VM traffic to another uplink adapter. This policy doesn’t require any additional configuration at the physical switch level.

Graphic Thanks to VMwareArena.Com
5: Use Explicit Failover Order: This policy really doesn’t do any sort of load balancing. Instead, the first active NIC on the list is used to route the outgoing traffic for all VMs. If that one fails, the next active NIC on the list is used, and so on, until you reach the standby NICs.
Note: With the Explicit Failover option, even if you have a vSwitch with many uplinks, only one of them will be actively used at any given time.
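The active policy can be viewed and changed per standard vSwitch (or per port group) from the ESXi shell. A minimal sketch, assuming a vSwitch named vSwitch0 with uplinks vmnic0 and vmnic1:

esxcli network vswitch standard policy failover get --vswitch-name=vSwitch0
esxcli network vswitch standard policy failover set --vswitch-name=vSwitch0 --load-balancing=iphash --active-uplinks=vmnic0,vmnic1    # IP hash also needs an EtherChannel on the physical switch

Valid values for --load-balancing are portid, mac, iphash and explicit; Route Based on Physical NIC Load is a vDS-only setting and is configured through the vSphere Web Client instead.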


Friday, 22 January 2016

Checking ports and killing or stopping listening processes

To list open network ports with netstat, you can use this command:

netstat -a

You can add the -n option to netstat to get port numbers instead of having the utility try to provide names for services:

netstat -an

We can also use the netstat command-line tool to see which ports are in use, together with a special flag that tells us which Windows process identifier (PID) each port is assigned to. Then we can use that number to look up exactly which process it is.

netstat -ab | more

This will immediately show you a list, although it may be a little complicated. You’ll see the process name in the list, and you can search for it. You can also use this other method, which takes an extra step but makes it easier to locate the actual process:

netstat -aon | more

If you look on the right-hand side, you’ll see where I’ve highlighted the list of PIDs, or Process Identifiers. Find the one that’s bound to the port that you’re trying to troubleshoot—for this example, you’ll see that 0.0.0.0:80, or port 80, is in use by PID 4708.
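You can also filter the output down to a single port and resolve the owning process directly from the command line. Using the port and PID from the example above:

netstat -aon | findstr :80

This shows only the lines for port 80, with the PID in the last column. You can then look up the process name for that PID with:

tasklist /FI "PID eq 4708"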





Now you can simply open up Task Manager—you might have to use the option to Show Processes for All Users, and then you’ll be able to find the PID in the list. Once you’re there, you can use the End Process, Open File Location, or Go to Service(s) options to control the process or stop it.




If the port is used by an important Windows service that cannot safely be killed (port 80, for example), you can temporarily stop it from listening without killing any process:

NET stop HTTP
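If the listener is an ordinary process rather than a protected Windows service, you can also end it directly from the command line using the PID found earlier:

taskkill /PID 4708 /F

And once you have finished troubleshooting, the HTTP service can be started again with:

NET start HTTP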