In this post, we will learn how to set up NSX-T Data Center v3.0 for Tanzu Kubernetes Grid Integrated Edition (TKGI).
To perform a new installation of NSX-T Data Center for Tanzu Kubernetes Grid Integrated Edition, complete the following steps in the order presented.
NSX Manager provides a graphical user interface (GUI) and REST APIs for creating, configuring, and monitoring NSX-T Data Center components such as logical switches, logical routers, and firewalls.
NSX Manager provides a system view and is the management component of NSX-T Data Center.
For high availability, NSX-T Data Center supports a management cluster of three NSX Managers. For a production environment, deploying a management cluster is recommended. For a proof-of-concept environment, you can deploy a single NSX Manager.
In a vSphere environment, the following functions are supported by NSX Manager:
- vCenter Server can use the vMotion function to live migrate NSX Manager across hosts and clusters.
- vCenter Server can use the Storage vMotion function to live migrate NSX Manager across hosts and clusters.
- vCenter Server can use the Distributed Resource Scheduler function to rebalance NSX Manager across hosts and clusters.
- vCenter Server can use anti-affinity rules to keep the NSX Manager VMs on separate hosts within a cluster.
Deploy NSX-T Manager
Deploy the NSX Manager OVA using the Deploy OVF Template wizard in vCenter. On the Customize Template page, fill out the required fields and enter passwords for all user types.
Once the deployment completes, log in to NSX Manager as admin at https://192.168.208.160/
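If you prefer to confirm the deployment from the REST API rather than the GUI, here is a minimal sketch using Python and the requests library against the Manager API cluster status endpoint. The IP is the lab value from this post; the admin password is a placeholder for whatever you set during deployment, and verify=False is used only because the appliance ships with a self-signed certificate.

import requests

NSX = "https://192.168.208.160"
AUTH = ("admin", "VMware1!VMware1!")  # placeholder admin credentials

# Query the overall cluster status via the Manager API.
r = requests.get(f"{NSX}/api/v1/cluster/status", auth=AUTH, verify=False)
r.raise_for_status()
# Expect "STABLE" once the manager (or a three-node cluster) is healthy.
print(r.json()["mgmt_cluster_status"]["status"])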
Register vCenter Server as a Compute Manager
On the NSX UI Home page, navigate to System > Configuration > Fabric > Compute Managers and click +ADD.
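The same registration can be scripted against the Manager API. The sketch below assumes a vCenter hostname of vcsa-01a.corp.local and placeholder credentials; the thumbprint is the SHA-256 fingerprint of the vCenter certificate, which NSX needs to trust the connection.

import requests

NSX = "https://192.168.208.160"
AUTH = ("admin", "VMware1!VMware1!")  # placeholder admin credentials

# Register vCenter as a compute manager (hostname and credentials assumed).
payload = {
    "display_name": "vcsa-01a",
    "server": "vcsa-01a.corp.local",
    "origin_type": "vCenter",
    "credential": {
        "credential_type": "UsernamePasswordLoginCredential",
        "username": "administrator@vsphere.local",
        "password": "<vCenter password>",
        "thumbprint": "<SHA-256 thumbprint of the vCenter certificate>",
    },
}
r = requests.post(f"{NSX}/api/v1/fabric/compute-managers",
                  json=payload, auth=AUTH, verify=False)
r.raise_for_status()
print(r.json()["id"])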
Enable the NSX-T Manager Interface
The NSX Management Console provides two user interfaces: Policy and Manager. TKGI requires the Manager interface for configuring its networking and security objects. Do NOT use the Policy interface for TKGI objects.
In NSX-T Manager GUI console go to System > User Interface Settings.
Set the toggle visibility to Visible to All Users and the default mode to Manager.
Click Save.
Refresh the NSX-T Manager Console
In the upper-right area of the console, verify that the Manager option is enabled.
Create Transport Zones
You need to create two transport zones: an Overlay TZ for Transport Nodes and a VLAN TZ for Edge Nodes.
On the NSX UI Home page, navigate to System > Configuration > Fabric > Transport Zones and click +ADD.
In the New Transport Zone window, create the overlay transport zone and click ADD.
Click +ADD again and create a VLAN-based transport zone to communicate with the non-overlay networks that are external to NSX-T Data Center.
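For reference, both transport zones can also be created through the Manager API. The zone and N-VDS names below (TZ-Overlay, TZ-VLAN, NVDS-1) are assumptions for illustration; use whatever names fit your environment.

import requests

NSX = "https://192.168.208.160"
AUTH = ("admin", "VMware1!VMware1!")  # placeholder admin credentials

# Create one overlay and one VLAN transport zone (names assumed).
for name, tz_type in [("TZ-Overlay", "OVERLAY"), ("TZ-VLAN", "VLAN")]:
    r = requests.post(f"{NSX}/api/v1/transport-zones",
                      json={"display_name": name,
                            "host_switch_name": "NVDS-1",  # assumed N-VDS name
                            "transport_type": tz_type},
                      auth=AUTH, verify=False)
    r.raise_for_status()
    print(name, r.json()["id"])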
Create IP Pool
You create an IP pool for assigning IP addresses to the NSX transport nodes.
Provide the configuration details in the ADD IP ADDRESS POOL window.
a. Enter VTEP-IP-Pool in the Name text box.
b. Enter IP Pool for ESXi, KVM, and Edge in the Description text box.
c. Click Set under Subnets and select ADD SUBNET > IP Ranges.
d. In the IP Ranges/Block text box, enter 192.168.208.190-192.168.208.200 and click Add item(s).
e. In the CIDR text box, enter 192.168.208.0/24.
f. In the Gateway IP text box, enter 192.168.208.1.
g. Click ADD on the ADD SUBNETS page.
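The equivalent Manager API call, using the exact values entered above, would look roughly like this sketch:

import requests

NSX = "https://192.168.208.160"
AUTH = ("admin", "VMware1!VMware1!")  # placeholder admin credentials

# Create the VTEP pool with the range, CIDR, and gateway used above.
payload = {
    "display_name": "VTEP-IP-Pool",
    "description": "IP Pool for ESXi, KVM, and Edge",
    "subnets": [{
        "allocation_ranges": [{"start": "192.168.208.190",
                               "end": "192.168.208.200"}],
        "cidr": "192.168.208.0/24",
        "gateway_ip": "192.168.208.1",
    }],
}
r = requests.post(f"{NSX}/api/v1/pools/ip-pools",
                  json=payload, auth=AUTH, verify=False)
r.raise_for_status()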
Prepare the ESXi Hosts
You prepare the ESXi hosts to participate in the virtual networking and security functions offered by NSX-T Data Center.
On the NSX UI Home page, navigate to System > Configuration > Fabric > Nodes > Host Transport Nodes.
In the NSX Installation window, click APPLY.
The automatic installation process starts and might take approximately five minutes to complete.
When the installation completes, verify that NSX is installed on the hosts and the status of the SA-Compute-01 cluster nodes is Up.
You might need to click REFRESH at the bottom to refresh the page.
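You can also poll the installation state over the API instead of clicking REFRESH. A minimal sketch:

import requests

NSX = "https://192.168.208.160"
AUTH = ("admin", "VMware1!VMware1!")  # placeholder admin credentials

# List all transport nodes and print each node's realization state.
nodes = requests.get(f"{NSX}/api/v1/transport-nodes",
                     auth=AUTH, verify=False).json()["results"]
for node in nodes:
    state = requests.get(f"{NSX}/api/v1/transport-nodes/{node['id']}/state",
                         auth=AUTH, verify=False).json()
    print(node["display_name"], state["state"])  # expect "success"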
Deploying and Configuring NSX Edge Node
NSX Edge Nodes provide the bridge between the virtual network environment implemented using NSX-T and the physical network. Edge Nodes for Tanzu Kubernetes Grid Integrated Edition run load balancers for TKGI API traffic, Kubernetes load balancer services, and ingress controllers.
On the NSX UI Home page, navigate to System > Configuration > Fabric > Nodes > Edge Transport Nodes.
Click +ADD EDGE VM.
Click NEXT.
Click NEXT.
Configure the node settings as shown below.
Configure the first NSX switch for the Edge Node
The deployment status displays various temporary values, for example, Node Not Ready.
Wait for the configuration state to appear as Success and the node status as Up.
You can click REFRESH occasionally.
On the NSX UI Home page, navigate to System > Configuration > Fabric > Nodes > Edge Transport Nodes, click +ADD EDGE VM, and provide the configuration details to deploy the second edge node.
In the Credentials window, enter VMware1!VMware1! as the CLI password and the system root password.
Click the Allow SSH Login and Allow Root SSH Login toggles to display Yes.
Click NEXT.
On the Configure Node Settings window, enter the details.
Click FINISH.
The Edge deployment might take several minutes to complete.
The deployment status displays various temporary values, for example, Node Not Ready. Wait for the configuration state to appear as Success and the node status as Up.
You can click REFRESH occasionally.
Verify that the two edge nodes are deployed and listed on the Edge VM list.
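The same check can be done over the API: edge VMs appear as transport nodes whose deployment info is of type EdgeNode, so a quick filter confirms both are present.

import requests

NSX = "https://192.168.208.160"
AUTH = ("admin", "VMware1!VMware1!")  # placeholder admin credentials

# List transport nodes and keep only the edge VMs.
nodes = requests.get(f"{NSX}/api/v1/transport-nodes",
                     auth=AUTH, verify=False).json()["results"]
edges = [n for n in nodes
         if n.get("node_deployment_info", {}).get("resource_type") == "EdgeNode"]
for edge in edges:
    print(edge["display_name"], edge["id"])  # expect sa-nsxedge-01 and -02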
Configure an Edge Cluster
You create an NSX Edge cluster and add the two NSX Edge nodes to the cluster.
On the NSX UI Home page, navigate to System > Configuration > Fabric > Nodes > Edge Clusters.
Click +ADD.
In the Available (2) pane, select both sa-nsxedge-01 and sa-nsxedge-02 and click the right arrow to move them to the Selected (0) pane.
Click ADD.
Verify that Edge-Cluster-01 appears in the Edge Cluster list. Click REFRESH if Edge-Cluster-01 does not appear after a few seconds.
Click 2 in the Edge Transport Nodes column and verify that sa-nsxedge-01 and sa-nsxedge-02 appear in the list.
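Here is a sketch of the same operation via the Manager API, looking up the two edge node IDs by the display names used in this post:

import requests

NSX = "https://192.168.208.160"
AUTH = ("admin", "VMware1!VMware1!")  # placeholder admin credentials

# Resolve the edge transport node IDs by display name.
nodes = requests.get(f"{NSX}/api/v1/transport-nodes",
                     auth=AUTH, verify=False).json()["results"]
ids = {n["display_name"]: n["id"] for n in nodes}

# Create the edge cluster with both edge nodes as members.
payload = {"display_name": "Edge-Cluster-01",
           "members": [{"transport_node_id": ids["sa-nsxedge-01"]},
                       {"transport_node_id": ids["sa-nsxedge-02"]}]}
r = requests.post(f"{NSX}/api/v1/edge-clusters",
                  json=payload, auth=AUTH, verify=False)
r.raise_for_status()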
Create Uplink Logical Switch
Create an uplink Logical Switch to be used for the Tier-0 Router.
At the upper right, select the Manager tab.
Go to Networking > Logical Switches.
Click +ADD.
Configure the Logical Switch as shown below.
Click ADD and verify that the uplink logical switch is created.
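Because this is a Manager-mode object, it can also be created with the logical switch Manager API. The switch name LS-Uplink and VLAN ID 0 below are assumptions; the VLAN transport zone is looked up from the zones created earlier.

import requests

NSX = "https://192.168.208.160"
AUTH = ("admin", "VMware1!VMware1!")  # placeholder admin credentials

# Find the VLAN transport zone created earlier.
tzs = requests.get(f"{NSX}/api/v1/transport-zones",
                   auth=AUTH, verify=False).json()["results"]
vlan_tz = next(t for t in tzs if t["transport_type"] == "VLAN")

# Create the VLAN-backed uplink logical switch (name and VLAN ID assumed).
payload = {"display_name": "LS-Uplink",
           "transport_zone_id": vlan_tz["id"],
           "admin_state": "UP",
           "vlan": 0}
r = requests.post(f"{NSX}/api/v1/logical-switches",
                  json=payload, auth=AUTH, verify=False)
r.raise_for_status()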
Create Tier-0 Router
You create the Tier-0 logical router, which uses the uplink logical switch to connect to the upstream physical router.
On the NSX UI Home page, with the Manager tab selected, navigate to Networking > Tier-0 Logical Router.
Click +ADD and provide the Tier-0 router details.
Save and verify.
Select the T0 router.
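The router itself can be sketched via the Manager API as below. Active-standby HA mode is used because an HA VIP (configured in the next step) requires it; the router name T0-Router is an assumption.

import requests

NSX = "https://192.168.208.160"
AUTH = ("admin", "VMware1!VMware1!")  # placeholder admin credentials

# Look up the edge cluster created earlier.
clusters = requests.get(f"{NSX}/api/v1/edge-clusters",
                        auth=AUTH, verify=False).json()["results"]
ec = next(c for c in clusters if c["display_name"] == "Edge-Cluster-01")

# Create the Tier-0 router in active-standby mode (required for HA VIP).
payload = {"display_name": "T0-Router",  # assumed name
           "router_type": "TIER0",
           "high_availability_mode": "ACTIVE_STANDBY",
           "edge_cluster_id": ec["id"]}
r = requests.post(f"{NSX}/api/v1/logical-routers",
                  json=payload, auth=AUTH, verify=False)
r.raise_for_status()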
Configure and Test the Tier-0 Router
Create an HA VIP for the T0 router, and a default route for the T0 router. Then test the T0 router.
Configure the HA VIP as shown below.
Click ADD and verify.
Create Static Routes
Go to Routing > Static Routes.
Click ADD and enter the static route details.
Click ADD and verify.
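As a sketch, the default route can also be added over the Manager API. Here the next hop is assumed to be the physical gateway 192.168.208.1 used earlier in this post.

import requests

NSX = "https://192.168.208.160"
AUTH = ("admin", "VMware1!VMware1!")  # placeholder admin credentials

# Find the Tier-0 router.
routers = requests.get(f"{NSX}/api/v1/logical-routers",
                       auth=AUTH, verify=False).json()["results"]
t0 = next(r for r in routers if r["router_type"] == "TIER0")

# Add a default route pointing at the physical gateway (assumed).
route = {"network": "0.0.0.0/0",
         "next_hops": [{"ip_address": "192.168.208.1"}]}
resp = requests.post(
    f"{NSX}/api/v1/logical-routers/{t0['id']}/routing/static-routes",
    json=route, auth=AUTH, verify=False)
resp.raise_for_status()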
Verify the Tier-0 router by making sure the T0 uplinks and HA VIP are reachable from your laptop. If an address does not respond (as 192.168.208.69 does not in the output below), recheck that uplink or the HA VIP configuration before proceeding.
ping 192.168.208.67
PING 192.168.208.67 (192.168.208.67) 56(84) bytes of data.
64 bytes from 192.168.208.67: icmp_seq=1 ttl=64 time=0.981 ms
64 bytes from 192.168.208.67: icmp_seq=2 ttl=64 time=0.568 ms
64 bytes from 192.168.208.67: icmp_seq=3 ttl=64 time=0.487 ms
64 bytes from 192.168.208.67: icmp_seq=4 ttl=64 time=0.895 ms
64 bytes from 192.168.208.67: icmp_seq=5 ttl=64 time=0.372 ms
64 bytes from 192.168.208.67: icmp_seq=6 ttl=64 time=0.386 ms
ping 192.168.208.68
PING 192.168.208.68 (192.168.208.68) 56(84) bytes of data.
64 bytes from 192.168.208.68: icmp_seq=1 ttl=64 time=1.26 ms
64 bytes from 192.168.208.68: icmp_seq=2 ttl=64 time=0.586 ms
64 bytes from 192.168.208.68: icmp_seq=3 ttl=64 time=0.651 ms
ping 192.168.208.69
PING 192.168.208.69 (192.168.208.69) 56(84) bytes of data.
From 192.168.208.165 icmp_seq=1 Destination Host Unreachable
From 192.168.208.165 icmp_seq=2 Destination Host Unreachable
From 192.168.208.165 icmp_seq=3 Destination Host Unreachable
From 192.168.208.165 icmp_seq=4 Destination Host Unreachable
Create IP Blocks and Pool for Compute Plane
TKGI requires a Floating IP Pool for NSX-T load balancer assignment and the following two IP blocks for Kubernetes pods and nodes:
- PKS-POD-IP-BLOCK: 172.18.0.0/16
- PKS-NODE-IP-BLOCK: 172.23.0.0/16
Go to Networking > IP Address Pools, select the IP Block tab, and click ADD.
Configure the Pod IP Block as follows:
Name: PKS-POD-IP-BLOCK
CIDR: 172.18.0.0/16
Click ADD and verify.
Configure the Node IP Block as follows:
Name: PKS-NODE-IP-BLOCK
CIDR: 172.23.0.0/16
Select the IP Pools tab.
Click +ADD.
Configure the Floating IP Pool as shown below.
Click ADD and verify.
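Scripted, the two IP blocks are one Manager API call each; the Floating IP Pool is created the same way as the VTEP pool shown earlier (POST /api/v1/pools/ip-pools).

import requests

NSX = "https://192.168.208.160"
AUTH = ("admin", "VMware1!VMware1!")  # placeholder admin credentials

# Create the pod and node IP blocks with the CIDRs from this post.
for name, cidr in [("PKS-POD-IP-BLOCK", "172.18.0.0/16"),
                   ("PKS-NODE-IP-BLOCK", "172.23.0.0/16")]:
    r = requests.post(f"{NSX}/api/v1/pools/ip-blocks",
                      json={"display_name": name, "cidr": cidr},
                      auth=AUTH, verify=False)
    r.raise_for_status()
    print(name, r.json()["id"])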
Create Management Plane
Create Tier-1 Router and Switch
On the NSX UI Home page, navigate to Networking > Logical Switches.
Click +ADD and provide the logical switch details.
Click ADD and verify.
On the NSX UI Home page, navigate to Networking > Tier-1 Logical Router.
Click +ADD and provide the Tier-1 router details.
Click ADD and verify.
Go to the T1 router > Configuration > Router Ports.
Click +ADD and provide the router port details.
Verify the router port.
Select the Routing tab.
Click EDIT under Route Advertisement and configure:
Status: Enabled
Advertise All Connected Routes: Yes
Save and verify.
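Here is a sketch of the Tier-1 creation and route advertisement over the Manager API. The router name T1-Mgmt is an assumption; the advertisement config is read first because the PUT must echo back the current _revision.

import requests

NSX = "https://192.168.208.160"
AUTH = ("admin", "VMware1!VMware1!")  # placeholder admin credentials

# Look up the edge cluster and create the Tier-1 router (name assumed).
clusters = requests.get(f"{NSX}/api/v1/edge-clusters",
                        auth=AUTH, verify=False).json()["results"]
ec = next(c for c in clusters if c["display_name"] == "Edge-Cluster-01")
t1 = requests.post(f"{NSX}/api/v1/logical-routers",
                   json={"display_name": "T1-Mgmt",
                         "router_type": "TIER1",
                         "edge_cluster_id": ec["id"]},
                   auth=AUTH, verify=False).json()

# Enable advertisement of all NSX-connected routes toward the Tier-0.
adv_url = f"{NSX}/api/v1/logical-routers/{t1['id']}/routing/advertisement"
adv = requests.get(adv_url, auth=AUTH, verify=False).json()
adv["enabled"] = True
adv["advertise_nsx_connected_routes"] = True
requests.put(adv_url, json=adv, auth=AUTH, verify=False).raise_for_status()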
Create NAT Rules
You should create DNAT and SNAT rules on the Tier-0 router for the TKGI management plane VMs: DNAT rules for inbound access to management components such as Ops Manager, and an SNAT rule for outbound traffic from the management network.
On the NSX UI Home page, navigate to Networking > NAT.
Add the DNAT rules.
Verify the creation of the DNAT rules.
Create the SNAT rule.
Verify the creation of the SNAT rule.
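For reference, NAT rules live under the Tier-0 router in the Manager API. The management subnet and external addresses below are illustrative placeholders, not values from this post; substitute your own.

import requests

NSX = "https://192.168.208.160"
AUTH = ("admin", "VMware1!VMware1!")  # placeholder admin credentials

# Find the Tier-0 router.
routers = requests.get(f"{NSX}/api/v1/logical-routers",
                       auth=AUTH, verify=False).json()["results"]
t0 = next(r for r in routers if r["router_type"] == "TIER0")

# SNAT the management subnet outbound; DNAT a routable IP inbound to
# Ops Manager. All addresses here are assumed placeholders.
rules = [
    {"action": "SNAT",
     "match_source_network": "172.31.0.0/24",
     "translated_network": "192.168.208.70"},
    {"action": "DNAT",
     "match_destination_network": "192.168.208.71",
     "translated_network": "172.31.0.2"},
]
for rule in rules:
    r = requests.post(f"{NSX}/api/v1/logical-routers/{t0['id']}/nat/rules",
                      json=rule, auth=AUTH, verify=False)
    r.raise_for_status()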
I hope you enjoyed reading this blog as much as I enjoyed writing it. Feel free to share it on social media if you found it worth sharing.