VMworld 2018 — Books to Look For

Every year at VMworld VMware runs a store featuring relevant books. This year, as always, it opens first thing Sunday.

Yes, the store also has clothes and other swag bearing the VMware logo (just in case you want your kids in VMware logoed blues and greens), but VMware staff curate a selection of books that are worth a look. Supplies of the hottest titles run out early in the week, so stop in early.

I’ve got no inside knowledge whatsoever about what is or is not going to be in the store this year, but here’s a list of books I’m either familiar with (because I have owned/read them) or that I’m looking to pick up. Again, not all these titles will be at VMworld (some aren’t even off the presses), but look for any of these and I don’t think you’ll be disappointed.

VDI Design Guide: A comprehensive guide to help you design VMware Horizon, based on modern standards
Johan van Amersfoort

Aside from the rad cover design, this is at the top of my list for a reason. I’m really excited to crack into this book, which covers every element of VDI design with careful attention to security, storage, application delivery strategies, and much more–all from the “VCDX mindset” approach.

VMware vSphere 6.7 Clustering Deepdive
Frank Denneman, Duncan Epping, Niels Hagoort

This is another top pick. This is an all-star team with guidance that combines technical depth with operational experience. I hear there may be some free copies going out, just like last year’s book…

VMware vSphere 6.5 Host Resources Deep Dive
Frank Denneman, Niels Hagoort

This book was released a year ago and got a lot of hype for good reason. As a systems designer, I have found myself turning to this for reference, but also sometimes just cracking the book to random spots, paging around for context, and then plowing in just for the joy of it–it’s well written, thorough, and a solid resource.

Mastering VMware vSphere 6.7
Nick Marshall

This one isn’t out yet, as far as I know, but it’s an important update to the series. It’s particularly useful if you’re digging into vSphere for the first time or shooting for the VCP6.5-DCV–note that there’s no 6.7-level certification (and probably won’t be).

IT Architect Series: Designing Risk in IT Infrastructure

I’m really interested in this book. Hoping to leave with this one in my bags.

IT Architect: Foundation in the Art of Infrastructure Design: A Practical Guide for IT Architects
John Yani Arrasjid (VCDX 001),  Mark Gabryjelski (VCDX 023), Chris McCain (VCDX 079)

This is a great guide on how architects think and work. The exact technical examples may seem a little out of date (for instance, plenty of small data systems are now making the jump beyond 10Gb, and the storage technology may seem a generation or two back), but the principles still apply. The examples are not given as a blueprint for how YOUR system should be architected; instead, they are concrete vehicles for understanding the principles by which you absolutely SHOULD architect a data system.

IT Architect Series: The Journey: A Guidebook for Anyone Interested in IT Architecture
Melissa Palmer (VCDX 236)

I’ve not read this, but I’d like to review it soon. The Journey is a fresh-off-the-presses book on becoming an esteemed, well-rounded architect across the various areas where most people specialize.

Kubernetes: Up and Running: Dive into the Future of Infrastructure
Brendan Burns, Kelsey Hightower, Joe Beda

Looking for a solid intro into Kubernetes? Want to go deeper than blog posts? Here you go. This one is on my “want-to-read” list.
The authors of this book are some of the original developers of Kubernetes, and you can hear Joe Beda (now running Heptio) talk about the book in passing on the Kubernetes Podcast episode “Kubernetes Origins”–recommended listening in any case.

Site Reliability Engineering: How Google Runs Production Systems
Edited by Betsy Beyer, Chris Jones, Jennifer Petoff and Niall Richard Murphy

This one is on my need-to-read list. I’m more interested in the processes than the specifics of the SRE role. “Members of the SRE team explain how their engagement with the entire software lifecycle has enabled Google to build, deploy, monitor, and maintain some of the largest software systems in the world.”

Radical Candor: Be a Kick-Ass Boss Without Losing Your Humanity
Kim Scott

I read this earlier in the year, and while it has nothing to do with technology, it’s full of excellent content about management, straight out of the trenches of some of the top tech companies like Apple, Google, and Twitter.

Technology is only as good as its delivery, and the culture of the company delivering the tech has everything to do with success. Read this book, even if you’re not in management.

Bear in mind that VMware does put in an effort to make these books available at reasonable cost. Books may or may not be less expensive on Amazon–in my experience, it varies considerably. If you don’t mind lugging your new books back through airports, buy at the show; otherwise, your online vendor of choice may be your best bet.

Kubernetes 101: Introduction

If you’ve been listening to any of the conversations around containers, you’ve heard of Kubernetes. The name, which my kids apparently love saying, comes from a Greek word meaning “helmsman” or “pilot.” That’s not a bad name for the product: Kubernetes automates and orchestrates (or “pilots”) the deployment and scaling of containers. We’ll get to what that means shortly.

I plan to make this the first of a series of articles introducing Kubernetes. Developers who work with it daily probably understand the basics without this kind of article. This summary is written more for you if you have cloud platform experience but don’t write software. If you want an understanding of how Kubernetes works and where it fits, then I hope this can help.

A Quick History

Kubernetes was developed internally by Google and released a little over 3 years ago (2015). The project was donated to the Cloud Native Computing Foundation (CNCF) and is currently listed as its only “graduated” project–presumably meaning it’s now production-ready. Several interesting things are coming if you check the “Incubating Projects” list on the CNCF site.

Cloud Native Computing Foundation (CNCF) and Kubernetes

Kubernetes is the gorilla in the park. That doesn’t mean the alternatives should be neglected, but it’s the big player, not just because Google developed it, but because of the depth of features and scope of support. Lots of third-party companies out there are getting into the Kubernetes management services business because of another thing: It’s complex. The variants are worth their own discussion. Hang in there and we’ll get to it.

What is Kubernetes For?

So let’s get back to what it means to “automate and orchestrate containers” and why it matters.

I recall doing web design and development in the early Web 2.0 days, when the whole nature of website functionality changed. No longer was a website just for disseminating information or buying a product. Everyone was launching sites that could make your life easier by delivering some kind of service through the website. “Write-boards” come to mind.

Facebook and then Twitter were born in those days, and social media came into its own. All these applications were written on technologies designed for fast development and fast server-side processing. PHP and Ruby on Rails were getting a lot of buzz, and I was part of those tribes for about 5 years.

The ideas were simple: create small, lean applications built with frameworks that let you skeleton out an application rapidly. Through good APIs, developers could connect multiple small applications to deliver the parts of a larger package. One tiny application might handle the user signup process, another handles message delivery, while yet another runs and secures the database.

But what happens when you need to scale the whole thing up? Or what happens when your whole code base gets an upgrade and you need to coordinate upgrading that whole stack? The problem shifted quickly from “How do I manage my consumed hardware resources?” to “How do I manage my micro-services regardless of my hardware resources?”

Enter Kubernetes. And Nomad, and Docker Swarm, and others. In fact, all three just named saw initial release in 2015 because developers everywhere saw the need.  We’ll talk about the alternatives to Kubernetes and why they may well be worth your time, but the thing to realize is this:

Container automation/orchestration provides an important function between the container/application itself and the resources that the software consumes.

So whether it’s Kubernetes or another good option, developers of cloud native applications need that layer of administration to deal with all the tasks related to keeping the applications up and running on their resources. The resources–the CPU/Memory/Disks/Network–also need to be interchangeable. Kubernetes does all this and a lot more.
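To make that concrete, here’s a minimal sketch of how you declare an application to Kubernetes. The names and numbers below are hypothetical placeholders: you state how many copies you want and what resources each needs, and Kubernetes keeps that many running on whatever hardware is available.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: signup-service        # hypothetical microservice from the example above
spec:
  replicas: 3                 # run three copies; Kubernetes replaces any that die
  selector:
    matchLabels:
      app: signup
  template:
    metadata:
      labels:
        app: signup
    spec:
      containers:
      - name: signup
        image: example/signup:1.0   # placeholder image name
        resources:
          requests:
            cpu: "250m"             # resources are declared, not tied to a host
            memory: "128Mi"
```

Scaling up then becomes a one-line change to `replicas` (or a `kubectl scale` command) instead of a hardware project.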

The Alternatives

As with any good solution, there are contenders. Kubernetes is probably the most complicated, but arguably the most feature-rich, solution for the need. Others actually carve their niche by being much simpler and quicker to deploy.

Nomad: Nomad is a leaner, simpler-to-deploy solution to many of the same problems. In fact, it’s so much simpler that it’s worth some attention. Another factor is that Nomad fits into a framework of 3 other products that are equally worthwhile, helping with provisioning, securing, and connecting applications in its own way.

I love the ideas behind Nomad, and I’ve had long-term respect for Mitchell Hashimoto, who wrote and maintained projects like Vagrant back when I used it as a Ruby gem. That product and others like it have since been rolled into HashiCorp.

Docker Swarm: This is a mode that gets set up via additional configuration right from your existing Docker environment. If you don’t know what Docker is, then you should probably do some more reading before the next post in this series.

Docker may be the largest container manager out there; it lets you build applications into tiny, isolated environments that can be relocated or reproduced easily. In a way, Docker does for applications what VMware did for Windows with the SDDC.

To help compare, here’s a chart:

If you think I’ve missed a contender and you want to see something added to the chart, reach out on Twitter. I originally included Apache Mesos: Marathon, but decided to drop it.

Container Orchestration/Automation Solutions

                     Kubernetes                        Nomad                              Docker Swarm
Differentiator(s)    The leader: deep feature set;     Lightweight & simple deployment    Grow a Docker deployment to
                     advanced self-healing                                                include Swarm mode
Downsides            Learning curve                    Adoption not as broad; not baked   Docker only, obviously
                                                       into major clouds
Other Features       Configurable priority of          Self-healing; multi-datacenter/
                     containers                        region aware
Service Discovery    Y                                 Y                                  Y
Load Balancing       Y                                 Y                                  Y
Multi-Cloud          Y                                 Y                                  Not native
Hybrid Cloud         Y                                 Y                                  Not native
Released             July 2015                         September 2015                     August 2015
Written In           Go                                Go                                 Go

The more I look at Nomad, the more I want to give it some attention. Maybe I’ll get around to an article series on that one day.

I’m writing about Kubernetes not because it’s the best solution, but because it’s mainstream.

Where Are We Going Here?

In the next article we’ll get our hands dirty; I hope to get that out shortly, so stay tuned. First, though, I’d like to talk a bit about the ways you can deploy and use K8s (as Kubernetes is often shortened):

  • In a Cloud (Easiest)
    Currently available as services from the big three: AWS, Google Cloud Platform, Azure
  • In a Local/Private Environment (Hardest)
    This is if you want to run K8s in a test/dev environment or home lab
  • On your laptop (Not very hard, but a little bit of setup)
    Don’t expect to run a lot of services or redundancy, but Minikube will let you get your hands into K8s while you’re on a flight without WiFi.

I’d like to walk through all three of these. But there’s a problem with what to deploy. The whole idea of K8s is to take complex microservices and simplify administering their parts. The idea of an introduction is to keep it simple.

No, we’re not going to revert to “hello world” applications. I have a few things in mind, but I think I’m going to go with something that might just be useful if you’re hands-on enough to give K8s a spin: the Ubiquiti Unifi controller. We’ll see where that goes. An alternative might be a logging application, like Splunk.
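To preview where this is headed, here’s a rough, untested sketch of what the controller might look like as a K8s Deployment. The image name is a placeholder (there are community-maintained Unifi controller images out there), and a real deployment would also need persistent storage and a Service in front of it:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: unifi-controller
spec:
  replicas: 1                      # the controller isn't stateless, so one copy
  selector:
    matchLabels:
      app: unifi
  template:
    metadata:
      labels:
        app: unifi
    spec:
      containers:
      - name: unifi
        image: example/unifi-controller:latest  # placeholder; pick a maintained image
        ports:
        - containerPort: 8443                   # assumed Unifi web UI port
```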

Finally, before you build anything with K8s, you need a solid idea of what it is you’re building. I’ll lay this architecture out in the next post, and my goal is to push through a GCP-deployed cloud application that uses some of the K8s functionality. Then we’ll run the same thing on a laptop with Minikube. Then we’ll do it again in a local environment.

So if you want to get your hands dirty, stay tuned. In the meantime, if you don’t have a Google Cloud Platform account, go create one. I hear they’re giving away $300 of credit for the 1-year trial period when you open a new account, so that could make it worth your while. You are going to be consuming real resources, so that’s a good deal.

We could use AWS or Azure, but Google created Kubernetes, so it only seems appropriate. Want to see the same kind of workflow on AWS and Azure as followup posts? Let me know on Twitter.

VCSA: Deployed with no external DNS?

External DNS is a documented requirement for vCenter in a supported deployment model [read the documentation]. However, there are times when you want to set up vCenter without any DNS.

When on earth would you want that?

  • Small lab or POC
  • Tiny Environments for an initial standup prior to providing DNS
  • DR scenarios: when services (including DNS) are down, you may need to manage multiple hosts and simply get recovery VMs running (this can be avoided through good design, but it can happen)
  • Home Labs, of course

When do you NOT want vCenter without DNS?

  • Anytime you need to count on VCSA long-term. Just don’t.
  • Any deployment of vCSA with an external PSC or any of the extended features like Linked Mode/Enhanced Linked Mode, Stretched Clusters, etc.
  • Also, don’t expect to get LDAP integration working.

So you still want to deploy vCSA and don’t want to depend on DNS forward/reverse lookups? Read on… it’s pretty easy.

The Key Thing…

The bottom line here is that there are a few configuration points where you have to diverge from the norm. It’s simplest to explain this from the scripted installer configuration, because these variables are clearly laid out. (William Lam writes up the whole process over at Virtually Ghetto.)

It’s also a good bit easier to deploy with the scripted installer if your skills permit it. Again, see William Lam’s posts for a primer.

The JSON looks like this and we’re after the network settings.
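Here’s a sketch of the relevant network section, with placeholder addresses. The key names follow the vCSA CLI installer template; double-check against the template JSON that ships with your installer ISO:

```json
{
  "new_vcsa": {
    "network": {
      "ip_family": "ipv4",
      "mode": "static",
      "ip": "192.168.1.50",
      "dns_servers": ["192.168.1.50"],
      "prefix": "24",
      "gateway": "192.168.1.1",
      "system_name": "192.168.1.50"
    }
  }
}
```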

If you read carefully you will notice a few things that are unusual.

  • The DNS server (only one) is the same as the system IP
  • The System Name is also using this same address

Both these are deliberate and required for a no-external-DNS VCSA.

What’s happening here? This is declaring that VCSA is going to be its own DNS server, and that seems to work just fine.

UI Installer

Stage 1 of the installer is straightforward. In my screenshots I’m deploying an embedded PSC model, and the only critical, unusual piece here is to keep the System Name, IP address, and DNS Servers all at the same address you’re deploying vCSA to.






Once you’ve deployed Stage 1, Stage 2 is straightforward, but Stage 2 WILL NOT proceed without a functioning network. So if you blew it in Stage 1, you’ll know it right at the start of Stage 2.

In this example I’ve deployed VCSA 6.7, the latest and greatest at the time of writing. However, I’ve done this with success all the way back to VCSA 6.0.


Don’t deploy a no-DNS VCSA if you expect it to scale or you want to grow the environment.

Don’t deploy a no-DNS VCSA if you expect it to be supported. VMware will point you to the documentation and likely leave you alone with your problems.

But do you want to stand up a VCSA for a nested ESXi or other lab use with minimal effort? Give this a spin, and reach out on Twitter if you have trouble–I’d be interested to hear.


Upgrading vSphere 5.5: what version should I settle on?

In April, the VMware blog posted an article for those who need to upgrade from 5.5 before end of support (documented as September 19, 2018). The article pushed users to consider:

Should I upgrade to 6.5 or 6.7?

The VMware blog article says this: consider the features and value of 6.7–do you need the new gadgets, or should you stick with the more time-tested 6.5 release? (Those are my words.)

What else to consider?

There’s a good bit more to consider than just features and value. Those are great starting points and will help initiate the project, but there are constraints that administrators had better consider before the upgrade. The cost of up-fitting for these constraints could push the upgrade to a lower version than the latest, shiniest 6.7.


Hardware Compatibility

Do we even need to say this? If you haven’t kept up with the major changes, it could be a surprise that several changes in 6.7 may not even be compatible with your infrastructure:

  • The release notes give us a list of CPUs you cannot use with 6.7, found under the ominous heading “Upgrades and Installations Disallowed for Unsupported CPUs”
    (see another article on CPU compatibility)
  • There are similar CPU lists for vSphere 6.5, so awareness of hardware compatibility is key. You may need to upgrade only to 6.0 while you figure out your hardware plan.
  • There are many storage changes in vSphere 6.7. I’m seeing that while hardware may be compatible, it can require updated firmware. Storage controllers are particularly likely to need a firmware update. vSAN makes storage controller firmware updates easy for a lot of controllers, but if you’re not using vSAN it may take more effort.

Other VMware Products

  • VMware Horizon 7.4 is incompatible with vSphere 6.7. Using Horizon 7.4 or earlier? Stick with vSphere 6.5. The upgrade to Horizon 7.5 warrants its own write-up.
  • VMware NSX for vSphere 6.4 is also incompatible with vSphere 6.7. The HCL makes this clear, and the Product Interoperability Matrices help here.


Third-Party Solutions

How about your backup/DR/replication solution? Many don’t support vSphere 6.7 yet, and who wants to be the one who upgraded but now has no current backups?

For backups, the variety of solutions is worth considering. A solution that runs on the hypervisor typically has to be updated when the hypervisor gets a major release. On the other hand, a solution that runs within the guest operating system using an agent or other mechanism can typically receive updates outside the hypervisor update cycle.

Other constraints:

There’s more to consider than just what’s here, and that’s close to the point: a thorough review of hardware, firmware, other VMware solutions, and third-party solutions needs to happen before upgrading from 5.5, and upgrading to 6.0.x may be the path forward depending on your environment.

Plan and Strategize

Another strategy worth considering: it is normal to have ESXi and vCenter versions out of sync as part of a larger effort. Most of the constraints above are related to ESXi, not vCenter. Horizon and NSX can be exceptions, of course, depending on the versions.

If you’re moving from vCenter on Windows 5.5.x and making the jump to vCSA, there are paths forward, and you can usually bump vCenter up to your target version before touching ESXi. In any case, vCenter always gets the upgrade before ESXi.

Here’s a simple example. Say you have ESXi 5.5 / vCenter 5.5 and hardware that is–without some overhaul–incompatible with vSphere 6.5. You could plan your upgrade to run roughly this way:

  1. Upgrade vCenter to 6.5U2
  2. Upgrade ESXi to 6.0U3a.
  3. Plan your hardware upgrades to achieve compatibility to ESXi 6.5.

This would get you off 5.5 and could have you moving forward quickly.

Again, spend some quality time with the VMware Product Interoperability Matrices to validate where you want to go and your best path may make itself clear.

The growing list of unsupported CPUs

VMware has disgruntled some customers by releasing ESXi versions that no longer support some previously-supported CPUs. I think this is an inevitable change: if the hypervisor is going to keep evolving, then it’s going to have to stop supporting old CPUs with limited features at some point. Some may argue this is too soon, but you can’t please everyone.

One thing to be clear about: this is a change for ESXi, the hypervisor, as a part of vSphere–not vCenter. vCenter is generally the first thing you upgrade in an environment, and you can run an updated vCenter Server with backwards compatibility for older versions of ESXi. There are all sorts of reasons you might hold off on ESXi but go ahead and upgrade vCenter, especially with the benefits of the UI upgrades we’ve been seeing of late.

VMware’s language from the release notes is striking and makes me smile a little. Here’s the ominous heading:
Upgrades and Installations Disallowed for Unsupported CPUs 

Under that heading there is a list of unsupported CPUs right in the release notes, parallel to what you find in the HCL. (VMware is clearly trying to be upfront.) Here are the lists for vSphere 6.5 and 6.7:

vSphere 6.7 (release notes) – CPUs newly unsupported since 6.5

  • AMD Opteron 13xx Series
  • AMD Opteron 23xx Series
  • AMD Opteron 24xx Series
  • AMD Opteron 41xx Series
  • AMD Opteron 61xx Series
  • AMD Opteron 83xx Series
  • AMD Opteron 84xx Series
  • Intel Core i7-620LE Processor
  • Intel i3/i5 Clarkdale Series
  • Intel Xeon 31xx Series
  • Intel Xeon 33xx Series
  • Intel Xeon 34xx Clarkdale Series
  • Intel Xeon 34xx Lynnfield Series
  • Intel Xeon 35xx Series
  • Intel Xeon 36xx Series
  • Intel Xeon 52xx Series
  • Intel Xeon 54xx Series
  • Intel Xeon 55xx Series
  • Intel Xeon 56xx Series
  • Intel Xeon 65xx Series
  • Intel Xeon 74xx Series
  • Intel Xeon 75xx Series

vSphere 6.5 (release notes) – CPUs newly unsupported since 6.0

  • Intel Xeon 51xx Series
  • Intel Xeon 30xx Series
  • Intel Core 2 Duo 6xxx Series
  • Intel Xeon 32xx Series
  • Intel Core 2 Quad 6xxx Series
  • Intel Xeon 53xx Series
  • Intel Xeon 72xx/73xx Series

vSphere 6.0 does not define such a list, but points to the age-old hardware compatibility list (HCL). Look up your CPUs and validate.

The HCL should always be consulted, in every case. If you’re using VMware, then you value a resilient infrastructure. Take the time to check your hardware and make sure it’s compatible before you upgrade your hypervisor.



Upgrading to Horizon 7.4

Upgrading VMware Horizon components, and the order in which that’s done, has improved significantly in the last few years. At the same time, the related VDI components have changed, leaving an administrator like me going back to the documentation asking…

“How do I go about my upgrade again?”

Thankfully, VMware has had their View Upgrades guide published for some time, and it’s been updated for VMware Horizon 7.4. This is the go-to document if you’re moving from 5.x or 6.x with the latest patch releases. (It says as much right at the beginning.)

The online documentation: https://docs.vmware.com/en/VMware-Horizon-7/7.4/horizon-upgrades/GUID-E3607442-8936-49A8-97B4-722D012FDF1E.html

The PDF (for those of us who love trees so much we like to hold them in our hands…): https://docs.vmware.com/en/VMware-Horizon-7/7.4/horizon-upgrades.pdf

If you’re moving from View 7.x to 7.4, then there’s another guide that just covers patches that may be of help.

That said, I still like to review the whole scope of what needs to be touched when there’s a new release. Essentially, I want to know how much has changed in the whole stack of software. Release notes cover it, but I sometimes like to dig into the full documentation to understand more about how things have changed.

From a whole-system perspective, here’s the basic upgrade list from the documentation above:

  1. Back up View Composer & vCenter, halt some tasks per documentation, & upgrade Composer
    (Composer operations go down during an upgrade)
  2. Back up & upgrade the View Connection Server
    (non-reversible; pay attention to special ordering requirements if you’re still using security servers)
  3. Back up & upgrade Security Servers along with each one’s paired Connection Server
  4. Upgrade GPOs
  5. Upgrade vCenter (if needed), then vSphere
  6. Upgrade the Horizon Agents on RDS servers or virtual desktops/gold images
  7. Upgrade the Horizon Client

3 cheers for 7 easy steps to an upgrade.

But what about App Volumes, UEM, and other components that View works with? There are a few considerations I’d like to lay out in a future post. VMware sends you to the newly pluralized “Product Interoperability Matrices” and doesn’t clutter up the documentation with those other products, though it would be nice if there were more of an alert that they do have to be considered.

The bottom line is that for 7.4, it’s green lights for other products below at the versions specified:

  • UEM back to 9.0
  • App Volumes back to 2.12.0
  • Mirage back to 5.8.1
  • Horizon Clients back to 3.2.0
  • UAG back to 2.1
  • IDM back to 2.9.1
  • NSX back to 6.2.4
  • vRealize Operations Manager back to 6.3, or vRealize Operations for Horizon back to 6.5.0 (upgrade this at the same time if you’re using it)