Nexus 1000v - Installation

In this blog post, I'm going to go through the installation of the Nexus 1000v on my ESXi host. The reason I'm installing the Nexus 1000v in my lab is so that I can tag vNIC traffic with Security Group Tags (SGTs) for later labs.

In order to install the Nexus 1000v in your lab environment, you will need to download and install vCenter before beginning the following steps. If this is only for a lab, I would recommend going to vmware.com and downloading an evaluation copy. I won't walk through the entire vCenter installation process, but if you would like to check out a blog that does, go here

Starting with the Nexus 1000v 5.2(1)SV3(1.1) release, the Nexus 1000v supports SGACLs, and later versions added additional features and improved scalability, such as:

  • SXP version 3 (SXPv3) support to transport IPv4 subnet-to-SGT bindings
  • SGT tagging based on subnet IP addresses - mapping an SGT to all host addresses of a specified subnet
  • 6000 IP-SGT mappings
  • 4000 IP-Subnet-SGT mappings
  • 128 SGACLs
  • 128 ACEs per SGACL 
  • 8 SXP peers

TrustSec uses the device and user information acquired during authentication to classify, or tag, packets as they enter the network. You can also statically tag a vEthernet port. Packets are tagged on ingress to the network so they can be identified for the purpose of applying security and other policy along the data path. I'll expand on this further in the next blog post.

Let's get started on the installation of the Nexus 1000v. You can download the Nexus 1000v files from Cisco.com as a .zip file. After you unzip the file, navigate to the Nexus1000v.5.2.x.x.x\VSM\Install\ folder and import the appropriate OVA to your ESXi host. During the import, it'll ask you to assign port groups to the interfaces and give the VSM a management IP address.
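
If you'd rather skip the import wizard, VMware's ovftool can deploy the OVA from the command line as well. A rough sketch - the OVA file name, datastore, port group, and ESXi address here are placeholders for my lab, not values from the download:

ovftool --name=N1kv-VSM --datastore=datastore1 --network="VM Network" nexus-1000v.ova vi://root@10.1.100.2/

You can still assign the remaining port groups and the management IP from the VM console afterward.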

Start the new virtual machine that was just created for the Virtual Supervisor Module (VSM) and walk through the setup:

conf t
setup
[standalone/primary/secondary]: standalone
Basic configuration dialog: yes
Configure another login account: no
Configure read-only SNMP community string: yes
Community string: ISEc0ld
Switch name: NX-Sw1
Configure out-of-band management? yes
IP version [V4/V6]: V4
Management IP address: 10.1.100.4
Subnet mask: 255.255.255.0
Configure default gateway? yes
Default gateway: 10.1.100.1
Advanced IP options? no
Enable the telnet service? no
Enable SSH? yes
SSH key type: rsa
RSA key bits: 1024
Enable the HTTP server? yes
Configure NTP? yes
NTP server IP: 10.1.100.1
Configure SVS domain parameters: yes
SVS control mode: L3
Edit the configuration? no
Save the configuration? yes
 

After this, I will create the server VLAN. I'm just going to use a single VLAN in my lab but you can put as many as you would like in. There's no special syntax to this at all:

vlan 100
name ServerVLAN

Then I will create a username and password to log in with in the future:

no password strength-check <- I would only disable this in a lab
username admin password networknode role network-admin

Open up a browser and navigate to the VSM IP address you just assigned. Download the cisco_nexus1000v_extension.xml file.


Open up your vSphere client and go to the Plug-ins on the top bar:


On the window that pops up, right-click in the whitespace and choose New Plug-in. From this page, choose the cisco_nexus1000v_extension.xml file that you previously downloaded and register the plug-in. Ignore the certificate warning:

Log in to the command line of the Nexus 1000v and enter the following:

svs connection SecurityLab
protocol vmware-vim
remote ip address 10.10.10.4 port 80 <- Address of vCenter
vmware dvs datacenter-name SecurityLab <- Must be the exact name of the data center in vCenter
max-ports 8192
connect

At this point, you should see the Nexus 1000v connecting in the Recent Tasks bar of the vSphere client. Issue the show svs connections command on the Nexus 1000v to verify; the operational status should read Connected.

Around this time, I usually like to configure my port profiles in the Nexus 1000v:

port-profile type vethernet ProductionServer
vmware port-group
switchport mode access
switchport access vlan 100
no shutdown
state enabled

port-profile type vethernet SecurityServer
vmware port-group
switchport mode access
switchport access vlan 100
no shutdown
state enabled

port-profile type vethernet ISEandAD
vmware port-group
switchport mode access
switchport access vlan 100
no shutdown
state enabled

port-profile type vethernet Trunk
vmware port-group
switchport mode trunk
switchport trunk allowed vlan 1-100
no shutdown
state enabled

port-profile type ethernet SYSTEM-UPLINK
vmware port-group SYSTEM-UPLINK
switchport mode trunk
switchport trunk allowed vlan 1-100
no shutdown
system vlan 1,100
state enabled

port-profile type vethernet n1kv-L3
switchport mode access
switchport access vlan 100
no shutdown
capability l3control
system vlan 100
state enabled
vmware port-group
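
A quick note on two of the commands above: system vlan marks VLANs that must keep forwarding even before the VEM has received its programming from the VSM, which is why the uplink and the L3 control profile both carry it, and capability l3control flags the port profile that will carry VSM-to-VEM control traffic over Layer 3. Before moving on to vCenter, you can sanity-check the profiles from the VSM:

show port-profile brief
show port-profile name n1kv-L3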

Gracefully shut down your host VMs and SSH into your ESXi host. If you have not already enabled the ESXi Shell and SSH access, console to your ESXi server from the UCS CIMC (or whichever server you're using), press F2, and enable them under Troubleshooting Options. After that's configured, SSH into the host.

After you are in the CLI of the ESXi host, enter into maintenance mode:
esxcli system maintenanceMode set --enable true

Use WinSCP to copy the Nexus 1000v VIB file to the /tmp/ directory on the ESXi host. After it is copied, install the VIB from the CLI using the following command:
esxcli software vib install -v /tmp/nexus1000v.vib

After it installs successfully, exit maintenance mode:
esxcli system maintenanceMode set --enable false
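
While you're still in the ESXi shell, the vem status command (installed as part of the VIB) is a quick way to confirm the VEM module actually loaded; in my lab it lists the VEM version and, once the host joins the DVS, the uplinks attached to it:

vem status -v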

In the vSphere client, navigate to Networking and view the Nexus 1000v. Highlight the Nexus 1000v switch and right-click. Choose Add Host.

In the Add Host screen, check the box next to your ESXi host and check the box next to the vmnic not currently being used by vSwitch0. Assign it to the SYSTEM-UPLINK profile from the drop-down and click Next:

Click Next twice until this is completed.
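
Back on the VSM, the ESXi host you just added should now show up as a module (the VSMs occupy slots 1 and 2, so the first VEM typically appears as module 3):

show module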

Navigate to Hosts and Clusters and highlight your ESXi host. On the right, go to Configuration > Networking and click the Add Networking link.

 

Add a VMkernel port with an IP address in the ESXi host's subnet:

Navigate back to Networking. Highlight the Nexus 1000v switch, right-click, and go to Manage Hosts:

On this window, check the box next to your ESXi host and click Next:

Click Next again. 

In the Network Connectivity screen, set the Destination port group for the new VMkernel port you created to the previously created n1kv-L3 port group, and click Next until the wizard is complete.

Now that you've migrated your ESXi host over, you can edit the settings of individual VMs under Hosts and Clusters, move them to the new port groups, and ensure that you have network connectivity:

Do this for all the VMs you want to add to the Nexus 1000v.
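
If you want to double-check which vEthernet interface each VM landed on, the VSM can list the virtual interfaces (output omitted here):

show interface virtual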

Now that the Nexus 1000v is running and is the virtual switch for our hypervisor, I'm going to add some basic configurations to it so we can start providing some basic information to ISE. Be sure to add the Nexus 1000v as a Network Access Device in ISE as well:

snmp-server community networknode ro
snmp-server host 10.1.100.21 version 2c networknode udp-port 20514
snmp-server host 10.1.100.21 traps version 2c networknode udp-port 20514
snmp-server host 10.1.100.21 informs version 2c networknode udp-port 20514
snmp-server source-interface informs mgmt0
snmp-server source-interface traps mgmt0
snmp-server enable traps link linkup
snmp-server enable traps link linkdown

logging event link-status default
logging server 10.1.100.21 use-vrf management
logging monitor 6

feature dot1x
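
Note that feature dot1x on its own doesn't point the switch at ISE. A minimal RADIUS configuration along these lines is also needed; the shared key and group name below are placeholders, and the exact commands may vary by NX-OS release:

radius-server host 10.1.100.21 key MySharedKey authentication accounting
aaa group server radius ISE-GROUP
server 10.1.100.21
use-vrf management
aaa authentication dot1x default group ISE-GROUP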

Congratulations. You have now installed the Nexus 1000v onto your ESXi host and migrated your virtual machines over to use the Nexus 1000v.