With the release of VMware Cloud Foundation (VCF) 3.0, major changes were made to the deployment and architecture of the platform, and as a result there are new prerequisites that must be met before bring-up can occur.
1. Physical prerequisites
This is where the architecture changes come into play. VCF 3.0 includes a new Bring Your Own Network design methodology that is drastically different from the very prescribed networking in previous versions. This change came from direct feedback from customers, and it shows that VMware is listening and willing to make changes based on that feedback.
Racking and cabling of the hosts is still the same, but there are no longer any requirements around how the networking is cabled or configured, as long as VLAN tagging is enabled and the ports are available. This opens up VCF to architectures that previously would not have been supported, and it provides a great deal of flexibility in switch vendors as well as throughput choices. The distinct downside is that you lose the switch configuration automation from the pre-3.x builds. That automation was a gift and a curse: it limited your switch and firmware options, but it also allowed large-scale network changes to happen seamlessly and quickly.
2. Host Imaging
In VCF 2.x, the VIA appliance was used to image and configure the switches and hosts in a rack; the resulting configuration file was imported into SDDC Manager, and the hosts would be available for use. While this sounds great, during our 2.2 deployment we ran into a number of issues with the process, and it led to a longer-than-expected deployment. With VCF 3.x, the VIA imaging process is still available, but no longer required. Customers have developed various ways to deploy their hosts over the years, and the VIA process was very different from their standard approach. VCF 3.0 introduces the Cloud Builder VM, which performs the tasks previously handled during the first-rack imaging process in 2.x. It can be deployed into your environment, but it must be on the same network or VLAN as your new hosts' vmk0 ports. We chose to forgo the VIA process and manually installed ESXi on all of the hosts, as VIA does not support UEFI boot and we wanted to enable Secure Boot on our hosts. This is also a great opportunity to make sure your hosts are on the right firmware for VSAN compatibility.
Here are the new base prerequisites for a new host image:
1. The Management network and the base VM Network port group must be configured with the same VLAN.
2. NTP and DNS must be configured the same way on ESXi as they are on the Cloud Builder VM.
3. The SSH and NTP services must be enabled and set to "Start and stop with host".
4. The Management interface (vmk0) must have a static IP.
5. The default standard switch (vSwitch0) must have only one vmnic assigned.
Seems simple, right? I've created a script that can help with mass configuration, along with a template file to put your host names in.
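The script itself isn't reproduced here, but the per-host work boils down to a handful of esxcli/vim-cmd commands. Here's a minimal sketch that prints those commands for review; the hostnames, IPs, netmask, and DNS/NTP server addresses are all placeholder assumptions you would replace with your own values (and the chkconfig/ntp.conf approach assumes ESXi 6.x-style NTP service management):

```shell
#!/bin/sh
# Placeholder values -- DNS and NTP must match the Cloud Builder VM's settings.
DNS_SERVER="10.0.0.53"
NTP_SERVER="10.0.0.54"
NETMASK="255.255.255.0"

# Print the commands to run on one host (e.g. piped over SSH later).
# $1 = hostname, $2 = management IP
emit_config() {
  echo "# --- $1 ---"
  # Prereq 4: static IP on the management interface (vmk0)
  echo "esxcli network ip interface ipv4 set --interface-name=vmk0 --type=static --ipv4=$2 --netmask=$NETMASK"
  # Prereq 2: DNS configured to match the Cloud Builder VM
  echo "esxcli network ip dns server add --server=$DNS_SERVER"
  # Prereqs 2 and 3: NTP server set, service enabled to start with the host
  echo "echo 'server $NTP_SERVER' >> /etc/ntp.conf"
  echo "chkconfig ntpd on && /etc/init.d/ntpd start"
  # Prereq 3: SSH enabled ("Start and stop with host") and started
  echo "vim-cmd hostsvc/enable_ssh"
  echo "vim-cmd hostsvc/start_ssh"
}

# One "hostname ip" pair per line, as in the template file.
printf '%s\n' "esx01 10.0.0.11" "esx02 10.0.0.12" |
while read -r name ip; do
  emit_config "$name" "$ip"
done
```

Printing the commands first (rather than executing them) makes it easy to eyeball the output against the spreadsheet before pushing anything to the hosts.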
3. External prerequisites
There was a lot of under-the-covers automation in VCF 2.x that was eliminated in favor of a more flexible platform. One of those pieces was DNS registration. In VCF 2.x, SDDC Manager ran its own DNS server that controlled a delegated subzone, which led to host names like r1n0.subdomain.domain.com. That eliminated trying to integrate with every DNS vendor in the world, but it also meant you had no control over hostnames. VCF 3.x removes the automation, which means you're free to name your vCenter, NSX Manager, vRealize Log Insight and ESXi hosts whatever you'd like… as long as you create the forward and reverse lookup entries in DNS for them. It's one of those set-it-and-forget-it steps: DNS entries aren't going to change often, so it's not so bad to do it once.
You will also need some VLANs. Specifically:
- Management
- vMotion
- VSAN
- VXLAN
The last three should have jumbo frames enabled if possible.
You'll also need to collect the following license keys from MyVMware:
- vCenter
- ESXi
- NSX
- vRealize Log Insight
- SDDC Manager
4. The Deployment Spreadsheet
You may be wondering why we're collecting all of this information. VCF 2.x had quite the Word document's worth of information, but it all had to be input by hand. With the flexibility of the naming schemes, passwords, DNS entries and IP addresses, there needed to be a better way to do the initial configuration. Enter the deployment spreadsheet.
In the spreadsheet there are a number of tabs:
- The Prerequisite Checklist
contains a much shorter version of what I’ve listed here.
- The Management Workloads tab
has a list of the products that will be deployed and it’s where you’ll put
the license key information we collected earlier.
- The Users and Groups tab is
where you’ll set the passwords for a number of service accounts across the
SDDC. Security protip: You should probably enter the passwords into a
password safe of some sort and delete the passwords from this sheet when
the deployment is completed.
- The Hosts and Networks tab is
  where you'll set the network information for the management, vMotion,
  VSAN, and VXLAN networks, as well as input the IP addresses for the
  management hosts and the Cloud Builder VM.
- The Deploy Parameters page is a
bit of an eye chart, but it’s where you’ll input the IP information for
the PSCs, management vCenter, management NSX Manager, Log Insight and SDDC
Manager VMs. You’ll also put in some vCenter object names like the
datacenter and cluster.
All of the tabs (especially the Deploy Parameters tab) have a bit of error checking built in. The Deploy Parameters tab checks against the Hosts and Networks tab so that you're not configuring 10.0.0.0/24 on the management network while trying to IP your new vCenter at 192.168.100.10.
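That cross-tab check is just subnet membership: mask both addresses and compare. A minimal shell sketch of the idea (a hypothetical helper, not the spreadsheet's actual formula):

```shell
#!/bin/sh
# Convert a dotted-quad IPv4 address to a 32-bit integer.
ip_to_int() {
  IFS=. read -r a b c d <<EOF
$1
EOF
  echo $(( (a << 24) + (b << 16) + (c << 8) + d ))
}

# ip_in_subnet IP CIDR -> exit 0 if IP falls inside CIDR, nonzero otherwise.
ip_in_subnet() {
  ip=$(ip_to_int "$1")
  net=$(ip_to_int "${2%/*}")
  bits=${2#*/}
  mask=$(( (0xFFFFFFFF << (32 - bits)) & 0xFFFFFFFF ))
  [ $(( ip & mask )) -eq $(( net & mask )) ]
}

# The article's example: 192.168.100.10 is not inside 10.0.0.0/24,
# so the spreadsheet would flag this combination.
ip_in_subnet 192.168.100.10 10.0.0.0/24 || echo "vCenter IP outside management subnet"
```

Running the same comparison against every VM IP in the Deploy Parameters tab is all the spreadsheet needs to catch a mismatched management network before you ever reach Cloud Builder.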
Once you've filled out the Deployment Spreadsheet to the best of your abilities, you can log in to the Cloud Builder VM and upload the file for validation. At this point the hosts should already be imaged and online, as the validation will attempt to log in to them and check their configuration against what you've put into the spreadsheet.
A few notes:
- The root password for the host *must* contain lower case, upper case, a number, and a special character, and it *must* be between 8 and 12 characters or the validation will complain. Yes, you can set an ESXi password longer than that, but the validation expects 8 to 12 characters.
- vSAN capacity disks may show up as not eligible. This is due to a known issue with the validation. Run "esxcli storage core device list" and look for your capacity disks to get the correct size value, then run the following, where "Size: " contains the size of your capacity disks:

  esxcli storage core device list | grep -B 3 -e "Size: 3662830" | grep ^naa > /tmp/capacitydisks
  for i in $(cat /tmp/capacitydisks); do
    esxcli vsan storage tag add -d $i -t capacityFlash
    vdq -q -d $i
  done
This seems like a lot to do, but once you’re done, the standup
of the environment should go very smoothly, and you’ll be on your way to
creating workload domains in no time.