
Building a NSX-T nested lab part 2


It’s been a while since we’ve written anything about NSX-T, and - if Hollywood is to be believed - every good story needs a sequel, preferably one that’s just a little less exciting than the first part. And do I have good news for you, because today we will be deploying a nested NSX-T lab environment based on NSX-T 2.1, and hopefully in the next part we’ll be expanding said lab to include an Arista leaf-spine topology using Arista vEOS.

Now, if you’ve read my previous NSX-T lab deployment blog post, you should know the basics of what NSX-T does and why it’s the best thing since mankind invented rocket surgery. In this post we won’t be going over the basics of NSX-T, but we’ll give you a brief overview of the steps to deploy NSX-T in your lab environment the easy way, because if there’s one thing that has a learning curve for people used to managing vSphere and NSX-V, it’s T. I’ll also briefly go over the differences with previous releases of NSX-T, and - if you’ve read my previous blog post - you’ll see why this story is going to be significantly less exciting. There are fewer convincing antagonists, the story is more straightforward and - most importantly - there’s no exciting plot twist at the end.

not-a-car
Remember, this is still applicable.

The lab overview #

We’ll start our initial deployment simple, partially because of a lack of time on my side and partially to reduce the complexity of the deployment. I know there are a large number of people out there who just want to play with NSX-T and don’t really care for the nested network environment.

lab–1-

Our lab looks something like the above. We have two vCenter servers and two ESXi clusters (these were all deployed through William Lam’s vGhetto script, thanks for that!). We are running vSphere 6.5 in a standard deployment. All of this, by the way, is deployed on a single Supermicro host, so if you want to deploy this yourself, don’t think you need additional network equipment. Obviously, you can start off with a single cluster as well. As you can see, the ESXi hosts have two interfaces: one for management and one for their respective VTEP network. The same applies to the EVE-NG host and the edge VMs, which are connected to both the management VLAN and the VLAN trunking port group.

The deployment #

After firing off the vSphere lab deployment scripts, there are a few changes you have to make to get NSX-T working. First off, you need to increase the ESXi disk from 2GB to at least 4GB. The reason for this is that - by default - the nested ESXi hosts are seriously constrained in terms of memory, which means there isn’t enough space in the ramdisk used for scratch storage. What I did was increase the disk space, create a VMDK and set a persistent scratch drive. The details on how to set the scratch location can be found at https://kb.vmware.com/s/article/1033696. Obviously, you could also temporarily increase the memory of the ESXi hosts, but that was something I only thought of later.
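If you want to script those changes, the PowerCLI sketch below shows one way to do it. It assumes the vGhetto naming convention used in the PowerShell snippet further down; the host and datastore names are placeholders, the disk is added through the physical vCenter, and the scratch setting is applied through the nested vCenter (or the host directly) after you’ve formatted the new disk as a small VMFS datastore.

# Hedged sketch - VM, host and datastore names are placeholders, adjust to your lab.
# Run the disk part against the physical vCenter hosting the nested ESXi VMs.
Get-VM | Where-Object { $_.Name -like "*vesxi65*nsxt-1-*" } |
    ForEach-Object { New-HardDisk -VM $_ -CapacityGB 4 -Confirm:$false | Out-Null }

# After formatting the new disk as a VMFS datastore inside the nested host, point the
# scratch location at it (see KB 1033696) and reboot the host. Run this against the
# nested vCenter.
$nestedHost = Get-VMHost -Name "vesxi65-nsxt-1-1.lab.local"   # placeholder host name
Get-AdvancedSetting -Entity $nestedHost -Name "ScratchConfig.ConfiguredScratchLocation" |
    Set-AdvancedSetting -Value "/vmfs/volumes/scratch/.locker" -Confirm:$false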

Also note that by default the vGhetto deployment scripts deploy a second NIC (intended for NSX-V), which we will use for NSX-T. By default this NIC is joined to the same port group as the management port group, but we want it on a different one. You can change the NIC in the VM settings, or you can be lazy and use PowerShell:

get-vm |? {$_.Name -like "*vesxi65*nsxt-1-*"} | Get-NetworkAdapter -name "Network Adapter 2" | Set-NetworkAdapter -NetworkName "VLAN15_VTEP" -Confirm:$false

Obviously, modify this for your own VM naming convention and port group names. You can verify the result with the snippet below.
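A quick way to confirm the adapters ended up on the right port group (same naming assumption as above):

# List every network adapter of the nested ESXi VMs and the port group it is attached to.
Get-VM | Where-Object { $_.Name -like "*vesxi65*nsxt-1-*" } |
    Get-NetworkAdapter |
    Select-Object Parent, Name, NetworkName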

Now that you have a running vSphere deployment and several cups of coffee, it’s time to deploy the NSX-T manager. This is still done the same way as it was in NSX-T 1.x; basically, perform the following steps:

  • Deploy the NSX-T manager (also known as the unified appliance) OVA on your physical host. You could also nest it on the nested ESXi host, but keep in mind that you’d need to seriously up the resources in that case, and there’s no significant benefit in doing so.
  • Next, deploy the controllers through the OVA, again on the physical host. You’ll need at least one controller, though three are required for production. I’ve deployed three, as I want to stay as close to production as possible in this lab.
  • Log in to the manager and each of the controllers over SSH (if you haven’t enabled SSH, you can do so by going to the console of each individual VM and running set service ssh start-on-boot followed by start service ssh).
  • Next, log in to the manager and run the following:
get certificate api thumbprint
  • Store this value somewhere.
  • Log in to each controller and run the following, one controller at a time (not simultaneously):
join management-plane <ip-of-nsx-manager> username admin thumbprint <thumbprint>
  • The thumbprint is the API thumbprint you just got from the manager, and the CLI only accepts IP addresses, not hostnames. Keep that in mind. Then, still on the controller, run:
get control-cluster certificate thumbprint
  • Again, store this thumbprint somewhere; you’ll need it when joining the control cluster later.
  • On the manager run the following:
get management-cluster status
  • Wait for all controllers to show up. Note that the control cluster status will show as unstable until it has been initialised.
  • Then, on the first controller run the following:
set control-cluster security-model shared-secret secret <some magical secret that is clearly not VMware1!>
initialize control-cluster
get control-cluster status verbose
  • Wait for the results to show up and validate that the cluster is running.
join control-cluster <nsx-controller2-ip> thumbprint <thumbprint>
  • Run the join on the first controller, using the control-cluster thumbprint of the controller you’re joining (make sure that controller has the same shared secret set), and then run the next two commands on the controller you just joined. Repeat this for every non-primary controller in your cluster; if you only have one controller, you can skip this.
activate control-cluster
get control-cluster status verbose
  • Again, wait until your control cluster is stable. You can also check this from the manager’s API, as in the sketch below.
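If you’d rather check from a script than from the CLI, the sketch below polls the manager’s REST API with PowerShell 7. The /api/v1/cluster/status path and the response fields are what I remember from the NSX-T 2.x manager API, so treat them as assumptions and verify them against the API guide for your build.

# Hedged sketch - the manager IP is a placeholder and the endpoint/fields are assumed
# from the NSX-T 2.x manager API; double-check against the API guide for your version.
$nsxManager = "192.168.110.10"    # placeholder NSX manager IP
$cred = Get-Credential -Message "NSX Manager admin account"

# Lab appliances use a self-signed certificate, hence -SkipCertificateCheck (PowerShell 7).
$status = Invoke-RestMethod -Uri "https://$nsxManager/api/v1/cluster/status" `
    -Credential $cred -Authentication Basic -SkipCertificateCheck

$status.mgmt_cluster_status       # management plane state
$status.control_cluster_status    # control cluster state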

The NSX-T configuration #

When you’ve deployed the manager, it’s time to log into the glorious HTML5 interface:

NSX-T interface
It’s so shiny!

When you’re done ogling the absolutely gorgeous NSX-T interface (I’m sorry, NSX-V, but Flash just doesn’t cut it anymore), we need to do some preparatory work.

First, open the fabric tab and then “Transport Zones”. Click Add and create a new transport zone using the following settings:

blog5

Names are up to you, of course. N-VDS mode should be set to Standard; ENS is the Enhanced Datapath mode aimed at NFV-style workloads, which we will not be touching upon. The traffic type should be set to Overlay, as VLAN transport zones are primarily used for T0 edges to be able to route into the physical world.
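If you prefer the API over the UI, a transport zone can also be created with a single POST. This reuses the $nsxManager and $cred variables from the earlier sketch; the /api/v1/transport-zones endpoint and the host_switch_name and transport_type fields are again my recollection of the 2.x manager API, so verify the payload before relying on it.

# Hedged sketch - names are placeholders, endpoint and fields assumed from the
# NSX-T 2.x manager API.
$body = @{
    display_name     = "TZ-Overlay-Pod1"    # placeholder transport zone name
    host_switch_name = "NVDS-Overlay"       # name of the N-VDS created on the hosts
    transport_type   = "OVERLAY"            # use "VLAN" for edge uplink zones
} | ConvertTo-Json

Invoke-RestMethod -Uri "https://$nsxManager/api/v1/transport-zones" -Method Post `
    -ContentType "application/json" -Body $body `
    -Credential $cred -Authentication Basic -SkipCertificateCheck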

Next, open Inventory, then Groups, and select “IP Pools” at the top. Add a new IP pool and add a subnet. Note that the CIDR field might be a bit confusing, as it is not actually asking for a subnet mask or classless notation but expects the full subnet in CIDR notation. Don’t fall into the same trap as I did, diving into the logs wondering if the UI is actually broken.

blog6
If you are familiar with NSX-V this should all be relatively comfortable to you, as the concepts are largely the same up to now.
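The IP pool from the screenshot above can also be created via the API. The addressing below is made up, and the /api/v1/pools/ip-pools endpoint and field names are assumed from the 2.x manager API - adjust the subnet to whatever your VTEP VLAN uses.

# Hedged sketch - addressing is a placeholder, endpoint and fields assumed from the
# NSX-T 2.x manager API. Note that cidr wants the full subnet, not a mask.
$body = @{
    display_name = "VTEP-Pool-Pod1"    # placeholder pool name
    subnets      = @(
        @{
            cidr              = "172.16.15.0/24"
            gateway_ip        = "172.16.15.1"
            allocation_ranges = @(@{ start = "172.16.15.10"; end = "172.16.15.50" })
        }
    )
} | ConvertTo-Json -Depth 5

Invoke-RestMethod -Uri "https://$nsxManager/api/v1/pools/ip-pools" -Method Post `
    -ContentType "application/json" -Body $body `
    -Credential $cred -Authentication Basic -SkipCertificateCheck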

When the IP pool and TZ have been created, open the Fabric tab and click on “Compute managers”. Then, add a new compute manager:

blog3-1

Fill out the details as you’re used to, and don’t worry about the SHA-256 thumbprint. If you leave it blank, NSX will ask you to confirm the vCenter thumbprint when saving, just like in your usual vSphere deployments.
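Adding the compute manager can be scripted as well. The /api/v1/fabric/compute-managers endpoint and the credential structure below are, once more, my recollection of the 2.x manager API; the vCenter address and credentials are placeholders.

# Hedged sketch - vCenter address and credentials are placeholders, endpoint and
# field names assumed from the NSX-T 2.x manager API.
$body = @{
    display_name = "vcenter-pod1"                # placeholder name
    server       = "vcenter-pod1.lab.local"      # placeholder vCenter FQDN
    origin_type  = "vCenter"
    credential   = @{
        credential_type = "UsernamePasswordLoginCredential"
        username        = "administrator@vsphere.local"
        password        = "<vcenter-password>"
        thumbprint      = "<vcenter-sha256-thumbprint>"
    }
} | ConvertTo-Json -Depth 5

Invoke-RestMethod -Uri "https://$nsxManager/api/v1/fabric/compute-managers" -Method Post `
    -ContentType "application/json" -Body $body `
    -Credential $cred -Authentication Basic -SkipCertificateCheck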

Once the compute manager is added, move over to “Nodes” (still under the Fabric grouping). By default you won’t see any nodes, as the dropdown is set to standalone hosts. Select your vCenter in the “Managed by” dropdown. Now, put a checkmark next to your cluster, click “Configure Cluster” and select the following options:

blog8

Enable “Automatically Install NSX” and “Automatically Create Transport Node”. Then use the transport zone you’ve just created and the default uplink profile (you can create your own uplink profile, which allows you to configure LAG, active/active setups, MTU, etc., but for this lab the default uplink profile is more than enough). In addition, select “Use IP Pool” for the IP assignment and select the IP pool you created.

For the Physical NICs field you’ll need to check what your vmnic is called in ESXi, but if you’ve used the vGhetto deployment scripts it should be vmnic1 (the second NIC).

Note that if you get an error here when installing the NSX kernel modules, go back and read what I said about scratch disks at the start of this article. If you get an error on configuring the transport nodes (it will say “partial success”), double-check that the vmnic name is actually correct.

Now, once the configuration is complete, you should see the vmnic assigned to an NSX-T N-VDS switch as displayed below:

blog9

Once this is all done, you can start creating switches in NSX-T, and they will automatically be created on your ESXi hosts.

blog10
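As a final check from the API side, you can list what the preparation created. The /api/v1/transport-nodes and /api/v1/logical-switches collections below are standard manager API paths as far as I recall, reusing the $nsxManager and $cred variables from the earlier sketches.

# Hedged sketch - lists the transport nodes and logical switches the manager knows about.
(Invoke-RestMethod -Uri "https://$nsxManager/api/v1/transport-nodes" `
    -Credential $cred -Authentication Basic -SkipCertificateCheck).results |
    Select-Object display_name, node_id

(Invoke-RestMethod -Uri "https://$nsxManager/api/v1/logical-switches" `
    -Credential $cred -Authentication Basic -SkipCertificateCheck).results |
    Select-Object display_name, vni, transport_zone_id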

Note that we have only deployed to a single vCenter so far. As I want to use this lab for a PKS deployment, we’ve created separate transport zones for the separate pods. If you want to deploy both clusters to the same transport zone, just repeat the above steps, except for the TZ and IP pool creation. If you want disjoint networks, repeat all of the steps above.

And with that, we’re finishing off the first part of the deployment. By now, you should be able to provision VMs on your nested cluster and have them communicate within the same VNI. There’s no routing yet; that will be covered in a follow-up post, including external connectivity.

Also, remember how I mentioned that a sequel is always slightly less exciting than the original? If you go through the original post I wrote, you’ll notice that the steps to prepare the hosts and create the transport nodes alone were almost the length of this whole article: ESXi agent installation was manual per host, transport node configuration was manual, everything was a manual step. Now, with NSX-T 2.1, we simply add the vCenter and everything is done automatically, like we’ve become accustomed to with NSX for vSphere.

In the next post - no ETA yet, as I need to actually find time for these labs - we’ll be configuring the nested physical network, the edges and the uplink configuration. Until then, happy NSX’ing!