Lab-23: Getting to know Puppet

Puppet, like Ansible, Chef, and Salt, is a configuration management tool. Say you are a system administrator whose job is to manage company servers: you install new packages, add and delete users, and perform various other tasks. Puppet lets you automate these tasks and makes life easier.

You can run Puppet in standalone mode or in client-server mode. In client-server mode you install the Puppet agent on each machine that needs to be managed and the Puppet server on another machine. A single server can manage multiple agents.

Puppet works on a pull model: agents pull their configuration data from the server. This can happen on demand, by running 'puppet agent --test' on the client machine, or periodically; by default the client asks the server for configuration data every 30 minutes. As far as I know there is no push operation from the Puppet server to the client; it is always a pull from the client.
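
The interval is controlled by the agent's runinterval setting. A minimal sketch of the relevant puppet.conf section (the values shown are illustrative, not taken from this lab):

#/etc/puppetlabs/puppet/puppet.conf on the agent
[agent]
server      = puppet.sunny.net
runinterval = 30m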

Puppet works on desired state: it changes a resource from its current state to the desired state, and if the resource is already in the desired state, no action is performed. You specify the desired state in a Puppet program (called a manifest). Say you want to install a new package on a client machine: you write a Puppet program and specify the desired state 'ensure => present'. When the Puppet client pulls this configuration data (called a catalog) from the server, it checks the desired state for that particular package; if the package is already in the 'present' state (already installed), no action is performed. If the package is in a different state, the client brings it to the 'present' state (installs the package). If you need to delete a package, change the desired state to 'absent' and the client will remove it.
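
As a sketch, such a package resource could look like this (the package name 'htop' is purely illustrative, it is not used in this lab):

#desired state 'present' installs the package if it is missing
package { 'htop':
  ensure => 'present',   #change to 'absent' to have Puppet remove the package
}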

In this lab I will demonstrate how to install Puppet in a client-server configuration and how to create a simple Puppet manifest.

It's a good idea to know commonly used Puppet terminology before we start.

Puppet master:

The machine running the Puppet server is called the Puppet master. The master acts as a certificate authority (CA) to generate and sign certificates. I use master and server interchangeably.

Puppet agent:

The machine running the Puppet agent is called a Puppet agent, or simply an agent. I use agent and client interchangeably.

Puppet manifest:

Puppet programs are called manifests; they have a .pp extension. Manifests can be organized into modules.

Catalog:

A document the agent pulls from the master that contains the desired state of each resource. Any time you run 'puppet agent --test' the agent pulls the catalog from the server.

Resource:

A resource is something that you manage on the client machine; for example, file and package are resource types. Using the file resource you can create, update, or delete a file on the client.
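
You can also inspect the current state of a resource on a machine with the 'puppet resource' command; a quick sketch (the path /etc/hosts is just an example):

#print the current state of a file resource in manifest syntax
puppet resource file /etc/hosts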

Pre-condition:

A Linux machine. I am using Ubuntu 14.04 LTS. I have created this topology with a Linux bridge: three Ubuntu 14.04 VMs simulating the server and agents. They are all in the same subnet and I can ping from one VM to another through the bridge. Check out my Lab-15 for the Linux bridge setup. Note: if you have two machines you can use one as the server and another as the agent; there is no need to follow Lab-15.

[Figure: puppet_intro]

sjakhwal@rtxl3rld05:/etc/network$ sudo brctl show
[sudo] password for sjakhwal: 
bridge name    bridge id        STP enabled    interfaces
testBr        8000.000acd27b824    no        eth3
                            tap0
                            tap1
virbr0        8000.000000000000    yes

I have created two Ubuntu 14.04 VMs with hostnames puppet and puppet-client1.

On the VM with hostname puppet, install puppetserver.

wget https://apt.puppetlabs.com/puppetlabs-release-pc1-trusty.deb
sudo dpkg -i puppetlabs-release-pc1-trusty.deb
sudo apt-get update
sudo apt-get install puppetserver

labadmin@puppet:$ puppetserver --version
puppetserver version: 2.3.1
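
If your VM is short on RAM you may want to shrink the puppetserver JVM heap before starting it. A sketch, assuming the stock /etc/default/puppetserver file shipped with this package (2g is the default, 512m is just an illustrative value):

#/etc/default/puppetserver
JAVA_ARGS="-Xms512m -Xmx512m -XX:MaxPermSize=256m"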

On the VM with hostname puppet-client1, install the Puppet agent.

wget https://apt.puppetlabs.com/puppetlabs-release-pc1-trusty.deb
sudo dpkg -i puppetlabs-release-pc1-trusty.deb
sudo apt-get update
sudo apt-get install puppet-agent

labadmin@puppet-client1:~$ puppet --version
4.4.2
labadmin@puppet-client1:~$ 

This completes the installation part. Now let's set up server-to-agent communication.

By default, Puppet agents use the hostname 'puppet' to communicate with the server. I have already set the server hostname to 'puppet'; now we need to make sure it can be resolved from the agents. This can be done by updating the /etc/hosts file.

I have added this line to the /etc/hosts file on puppet-client1; 192.168.10.101 is the server IP address:

192.168.10.101  puppet.sunny.net puppet

I have added this line to the server's /etc/hosts file to resolve the client hostname:

192.168.10.102 puppet-client1.sunny.net puppet-client1

Set the search domain in the /etc/resolv.conf file on all machines (agents and server). Note: you will lose this setting when the machine reboots; a more permanent way is to add the domain in the interfaces file (see the sketch below).

search sunny.net
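
A minimal sketch of the interfaces-file approach, assuming the stock resolvconf integration on Ubuntu 14.04 and a static eth0 stanza (address and interface name are illustrative, adjust them per machine):

#/etc/network/interfaces
auto eth0
iface eth0 inet static
    address 192.168.10.102
    netmask 255.255.255.0
    dns-search sunny.net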

Try pinging between the agent and server machines using hostnames and make sure the pings succeed:

ping puppet
ping puppet-client1.sunny.net

Start the Puppet server. Check that the server is running and that port 8140 is listening; agents use port 8140 to connect to the master. Note: if your agent is unable to connect to the master, make sure port 8140 is not blocked by a firewall.

Note: make sure your user is part of the 'puppet' group.

#start puppet server
labadmin@puppet:/etc/puppet$ sudo service puppetserver start
labadmin@puppet:/etc/puppet$ ps -ef | grep puppet
avahi    10079     1  0 11:28 ?        00:00:00 avahi-daemon: running [puppet.local]
puppet   10577     1  0 11:46 ?        00:02:11 /usr/bin/java -XX:OnOutOfMemoryError=kill -9 %p 
-Djava.security.egd=/dev/urandom -Xms2g -Xmx2g -XX:MaxPermSize=256m -cp 
/opt/puppetlabs/server/apps/puppetserver/puppet-server-release.jar clojure.main 
-m puppetlabs.trapperkeeper.main --config /etc/puppetlabs/puppetserver/conf.d -b 
/etc/puppetlabs/puppetserver/bootstrap.cfg

labadmin@puppet:$ puppetserver --version
puppetserver version: 2.3.1
labadmin@puppet:$ sudo netstat -pan | grep 8140
[sudo] password for labadmin: 
tcp6       0      0 :::8140                 :::*                    LISTEN      10577/java      
labadmin@puppet:$ 

#Use this command to open port 8140 on the master if you think the port is blocked
$iptables -I INPUT -m state --state NEW -m tcp -p tcp --dport 8140 -j ACCEPT

Start puppet agent

labadmin@puppet-client1:~$ which puppet
/opt/puppetlabs/bin/puppet
labadmin@puppet-client1:~$ puppet --version
4.4.2
labadmin@puppet-client1:~$ sudo ps -ef | grep puppet
[sudo] password for labadmin: 
avahi     3483     1  0 Apr27 ?        00:00:00 avahi-daemon: running [puppet-client1.local]
root      4261     1  0 12:03 ?        00:00:02 /opt/puppetlabs/puppet/bin/ruby /opt/puppetlabs/puppet/bin/puppet agent
labadmin  6185     1  0 18:37 ?        00:00:03 /opt/puppetlabs/puppet/bin/ruby /opt/puppetlabs/bin/puppet agent --server puppet.sunny.net
labadmin  6838  4453  0 21:44 pts/0    00:00:00 grep --color=auto puppet
labadmin@puppet-client1:~$ 

#start puppet agent
labadmin@puppet-client1:$ sudo /opt/puppetlabs/bin/puppet resource service puppet ensure=running enable=true
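
To double-check which master the agent will contact, the 'puppet config print' command can be used; a quick sketch:

#show the server the agent is configured to use
/opt/puppetlabs/bin/puppet config print server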

#run this command on the agent to generate a certificate signing request
$puppet agent --test

Sign the certificate on the server.

#list certificates. Make sure you run it with 'sudo', otherwise you will not see
#anything. As you can see, the client1 certificate is waiting to be signed by the server
labadmin@puppet:$ sudo /opt/puppetlabs/bin/puppet cert list
  "puppet-client1.sunny.net" (SHA256) F1:54:9D:D0:15:E6:4C:A9:97:20:A0:A5:82:A1:82:EA:3F:0F:F8:56:36:72:EE:7E:6C:BC:5B:D3:BC:89:F3:AE

#sign certificate for client1
labadmin@puppet:$ sudo /opt/puppetlabs/bin/puppet cert sign puppet-client1.sunny.net
Notice: Signed certificate request for puppet-client1.sunny.net
Notice: Removing file Puppet::SSL::CertificateRequest puppet-client1.sunny.net at '/etc/puppetlabs/puppet/ssl/ca/requests/puppet-client1.sunny.net.pem'

#list all certificates (signed and unsigned); '+' means signed. The server certificate
#is signed by default
labadmin@puppet:$ sudo /opt/puppetlabs/bin/puppet cert list --all
+ "puppet-client1.sunny.net" (SHA256) 1D:D7:74:54:93:DE:A9:9E:95:B2:6A:83:F4:66:11:DE:BA:BA:98:70:02:50:5F:1F:66:FB:53:83:3E:67:30:9C
+ "puppet.sunny.net"         (SHA256) 50:1A:0C:74:45:98:9A:70:8E:33:AE:70:D8:FF:6F:06:E4:F5:22:8F:97:8F:8D:4A:40:66:DF:6B:13:4F:3A:CB (alt names: "DNS:puppet", "DNS:puppet.sunny.net")
labadmin@puppet:$

This completes the handshake between server and agent. We are now ready to write our first Puppet manifest. We will start with the default manifest 'site.pp' in the directory /etc/puppetlabs/code/environments/production/manifests.

Create the file site.pp with the following contents:

labadmin@puppet:/etc/puppetlabs/code/environments/production/manifests$ cat site.pp
node default {}
node "puppet-client1" {
  file { "/home/labadmin/hello-puppet.txt":
    ensure  => "file",
    owner   => "labadmin",
    content => "Congratulations! you have created a test file using puppet\n",
  }
}

This is a simple manifest with a file resource for client1 (hostname puppet-client1). The file resource describes the file 'hello-puppet.txt' under the directory /home/labadmin: we desire it to be a regular file, owned by 'labadmin', with the contents 'Congratulations! you have created a test file using puppet'. If the file is not present on the agent, Puppet will create it. If the file is present but its contents don't match, Puppet will update the contents.

Validate the manifest for syntax errors. Run this command on the server:

puppet parser validate site.pp

First make sure the resource on client1 is not already in the desired state:

#no file in /home/labadmin
labadmin@puppet-client1:~$ pwd
/home/labadmin
labadmin@puppet-client1:~$ ls
examples.desktop  puppetlabs-release-pc1-trusty.deb

Now let's pull the client1 configuration from the server and check the resource state. First do a dry run.

#do a dry run
labadmin@puppet-client1:~$ puppet agent --server=puppet --onetime --no-daemonize --verbose --noop
Info: Using configured environment 'production'
Info: Retrieving pluginfacts
Info: Retrieving plugin
Info: Applying configuration version '1462070893'
Notice: /Stage[main]/Main/Node[puppet-client1]/File[/home/labadmin/hello-puppet.txt]/ensure: current_value absent, should be file (noop)
Notice: Node[puppet-client1]: Would have triggered 'refresh' from 1 events
Notice: Class[Main]: Would have triggered 'refresh' from 1 events
Notice: Stage[main]: Would have triggered 'refresh' from 1 events
Notice: Applied catalog in 0.10 seconds
labadmin@puppet-client1:~$ 

#run puppet to pull the catalog and apply it
labadmin@puppet-client1:~$ puppet agent --test

#check that the file has been created with the right contents
labadmin@puppet-client1:~$ cat hello-puppet.txt 
Congratulations! you have created a test file using puppet
labadmin@puppet-client1:~$ 

Update the manifest to change the file mode.

#add mode field in the manifest file 'site.pp'
node default {}
node "puppet-client1" {
file {"/home/labadmin/hello-puppet.txt":
        ensure => "file",
        owner => "labadmin",
        mode => "666",
        content => "Congratulations! you have created a test file using puppet\n",
}
}

#do dry run on client1
labadmin@puppet-client1:~$ puppet agent --server=puppet --onetime --no-daemonize --verbose --noop
Info: Using configured environment 'production'
Info: Retrieving pluginfacts
Info: Retrieving plugin
Info: Applying configuration version '1462108972'
Notice: /Stage[main]/Main/Node[puppet-client1]/File[/home/labadmin/hello-puppet.txt]/mode: current_value 0644, should be 0666 (noop)
Notice: Node[puppet-client1]: Would have triggered 'refresh' from 1 events
Notice: Class[Main]: Would have triggered 'refresh' from 1 events
Notice: Stage[main]: Would have triggered 'refresh' from 1 events
Notice: Applied catalog in 0.04 seconds

#as you can see the file mode is currently '644'
labadmin@puppet-client1:~$ ls -la hello-puppet.txt 
-rw-r--r-- 1 labadmin labadmin 59 Apr 30 21:53 hello-puppet.txt

#pull configuration from server
labadmin@puppet-client1:~$ puppet agent --test
Info: Using configured environment 'production'
Info: Retrieving pluginfacts
Info: Retrieving plugin
Info: Caching catalog for puppet-client1.sunny.net
Info: Applying configuration version '1462109031'
Notice: /Stage[main]/Main/Node[puppet-client1]/File[/home/labadmin/hello-puppet.txt]/mode: mode changed '0644' to '0666'
Notice: Applied catalog in 0.03 seconds

#as you can see file mode changed to '666'
labadmin@puppet-client1:~$ ls -la hello-puppet.txt 
-rw-rw-rw- 1 labadmin labadmin 59 Apr 30 21:53 hello-puppet.txt
labadmin@puppet-client1:~$

Delete the file.

#update the desired state to 'absent'
node default {}
node "puppet-client1" {
file {"/home/labadmin/hello-puppet.txt":
        ensure => "absent",
        owner => "labadmin",
        mode => "666",
        content => "Congratulations! you have created a test file using puppet\n",
}
}

#file currently exists on client1
labadmin@puppet-client1:~$ ls -la hello-puppet.txt
-rw-rw-rw- 1 labadmin labadmin   59 Apr 30 21:53 hello-puppet.txt

#do a dry run
labadmin@puppet-client1:~$ puppet agent --server=puppet --onetime --no-daemonize --verbose --noop
Info: Using configured environment 'production'
Info: Retrieving pluginfacts
Info: Retrieving plugin
Info: Applying configuration version '1462196853'
Notice: /Stage[main]/Main/Node[puppet-client1]/File[/home/labadmin/hello-puppet.txt]/ensure: current_value file, should be absent (noop)
Notice: Node[puppet-client1]: Would have triggered 'refresh' from 1 events
Notice: Class[Main]: Would have triggered 'refresh' from 1 events
Notice: Stage[main]: Would have triggered 'refresh' from 1 events
Notice: Applied catalog in 0.04 seconds

#pull configuration from server
labadmin@puppet-client1:~$ puppet agent --test
Info: Using configured environment 'production'
Info: Retrieving pluginfacts
Info: Retrieving plugin
Info: Caching catalog for puppet-client1.sunny.net
Info: Applying configuration version '1462196909'
Info: Computing checksum on file /home/labadmin/hello-puppet.txt
Info: /Stage[main]/Main/Node[puppet-client1]/File[/home/labadmin/hello-puppet.txt]: Filebucketed /home/labadmin/hello-puppet.txt to puppet with sum 3b70fe8cd273dfaec69f4356bee53529
Notice: /Stage[main]/Main/Node[puppet-client1]/File[/home/labadmin/hello-puppet.txt]/ensure: removed
Notice: Applied catalog in 0.07 seconds

#as you can see file has been deleted from client1
labadmin@puppet-client1:~$ ls -l
total 24
drwxr-xr-x 2 labadmin labadmin 4096 Apr 29 17:15 Desktop
-rw-r--r-- 1 labadmin labadmin 8980 Apr 27 17:58 examples.desktop
-rw-r--r-- 1 root     root     5244 Apr 19 17:53 puppetlabs-release-pc1-trusty.deb
labadmin@puppet-client1:~$ 

 

References

https://www.digitalocean.com/community/tutorials/getting-started-with-puppet-code-manifests-and-modules

https://docs.puppet.com/puppet/latest/reference/type.html#package-attribute-root

Lab-21: Openstack configuration cleanup

This is a short lab to demonstrate how to clean up Openstack configuration using cli commands.

I have set up Openstack with two tenants and one instance in each tenant, plus a subnet and network per tenant and a public router with a gateway and tenant interfaces attached to it. There is a sequence you need to follow, i.e. you cannot delete a subnet or router before deleting the instances that use them. This sequence works for me (a summary script sketch follows the list):

  1. Delete instances
  2. Delete router interfaces
  3. Clear router gateway
  4. Delete router
  5. Delete tenant subnets
  6. Delete tenant networks
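
Purely as a summary, here is the same sequence as a small shell sketch using the resource names from this lab (names and IDs will differ in your environment):

#cleanup sequence, run from an admin shell
nova delete Tenant1_VM1
nova delete Tenant2_VM1
neutron router-interface-delete pub_router Tenant1_subnet
neutron router-interface-delete pub_router Tenant2_subnet
neutron router-gateway-clear pub_router
neutron router-delete pub_router
neutron subnet-delete Tenant1_subnet
neutron subnet-delete Tenant2_subnet
neutron net-delete Tenant1_net
neutron net-delete Tenant2_net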

Delete instances.

[root@localhost ~(keystone_admin)]# nova delete Tenant1_VM1
[root@localhost ~(keystone_admin)]# nova delete Tenant2_VM1

Delete the router interfaces. This step removes the router interfaces towards the tenants; you need to specify the router ID and the interface's subnet ID. In my case the router has two interfaces, one for each tenant.

[root@localhost ~(keystone_admin)]# neutron router-list
+--------------------------------------+------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------+-------+
| id                                   | name       | external_gateway_info                                                                                                                                                                      | distributed | ha    |
+--------------------------------------+------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------+-------+
| ee34dbdc-2368-4cb9-ba50-8f13e00ae389 | pub_router | {"network_id": "3ac45bab-e08b-47ff-b01e-5b0ddb9127ca", "enable_snat": true, "external_fixed_ips": [{"subnet_id": "684fa6ab-4fb9-406a-9264-2c53afa8d9ff", "ip_address": "167.254.209.87"}]} | False       | False |
+--------------------------------------+------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------+-------+
[root@localhost ~(keystone_admin)]# neutron router-show ee34dbdc-2368-4cb9-ba50-8f13e00ae389
+-----------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field                 | Value                                                                                                                                                                                      |
+-----------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| admin_state_up        | True                                                                                                                                                                                       |
| distributed           | False                                                                                                                                                                                      |
| external_gateway_info | {"network_id": "3ac45bab-e08b-47ff-b01e-5b0ddb9127ca", "enable_snat": true, "external_fixed_ips": [{"subnet_id": "684fa6ab-4fb9-406a-9264-2c53afa8d9ff", "ip_address": "167.254.209.87"}]} |
| ha                    | False                                                                                                                                                                                      |
| id                    | ee34dbdc-2368-4cb9-ba50-8f13e00ae389                                                                                                                                                       |
| name                  | pub_router                                                                                                                                                                                 |
| routes                |                                                                                                                                                                                            |
| status                | ACTIVE                                                                                                                                                                                     |
| tenant_id             | 5dc8330acb6f4fb8a91f2abb839f7773                                                                                                                                                           |
+-----------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
[root@localhost ~(keystone_admin)]#
[root@localhost ~(keystone_admin)]# neutron port-list
+--------------------------------------+------+-------------------+---------------------------------------------------------------------------------------+
| id                                   | name | mac_address       | fixed_ips                                                                             |
+--------------------------------------+------+-------------------+---------------------------------------------------------------------------------------+
| 2c7eaaed-88d7-47cb-99ac-740c691f488e |      | fa:16:3e:13:58:d5 | {"subnet_id": "2c3446dd-5c59-43a7-a067-4cb0f18511e6", "ip_address": "192.168.12.1"}   |
| 51a1f2ed-eef5-4527-bce9-153d6a7986cd |      | fa:16:3e:00:db:be | {"subnet_id": "395d7a7d-7479-4b6e-b184-c9638ff19beb", "ip_address": "192.168.11.1"}   |
| a22ca662-353b-4081-9754-1eb3a2e07ad8 |      | fa:16:3e:01:b9:2d | {"subnet_id": "c8a1061a-1ed9-43c8-a18f-684307644d68", "ip_address": "10.0.0.2"}       |
| c6076190-d44f-4601-8110-df3b6744ceb8 |      | fa:16:3e:67:5a:ca | {"subnet_id": "684fa6ab-4fb9-406a-9264-2c53afa8d9ff", "ip_address": "167.254.209.87"} |
| e85179c0-08a0-47bf-95ce-a7a59d526b78 |      | fa:16:3e:0e:3a:23 | {"subnet_id": "2c3446dd-5c59-43a7-a067-4cb0f18511e6", "ip_address": "192.168.12.2"}   |
| e962925b-c9ff-4d86-8761-6d88e04491fa |      | fa:16:3e:c9:36:69 | {"subnet_id": "395d7a7d-7479-4b6e-b184-c9638ff19beb", "ip_address": "192.168.11.2"}   |
+--------------------------------------+------+-------------------+---------------------------------------------------------------------------------------+
[root@localhost ~(keystone_admin)]# neutron router-interface-delete ee34dbdc-2368-4cb9-ba50-8f13e00ae389 2c3446dd-5c59-43a7-a067-4cb0f18511e6
Removed interface from router ee34dbdc-2368-4cb9-ba50-8f13e00ae389.
[root@localhost ~(keystone_admin)]# neutron router-interface-delete ee34dbdc-2368-4cb9-ba50-8f13e00ae389 395d7a7d-7479-4b6e-b184-c9638ff19beb
Removed interface from router ee34dbdc-2368-4cb9-ba50-8f13e00ae389.
[root@localhost ~(keystone_admin)]#
[root@localhost ~(keystone_admin)]# neutron router-list
+--------------------------------------+------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------+-------+
| id                                   | name       | external_gateway_info                                                                                                                                                                      | distributed | ha    |
+--------------------------------------+------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------+-------+
| ee34dbdc-2368-4cb9-ba50-8f13e00ae389 | pub_router | {"network_id": "3ac45bab-e08b-47ff-b01e-5b0ddb9127ca", "enable_snat": true, "external_fixed_ips": [{"subnet_id": "684fa6ab-4fb9-406a-9264-2c53afa8d9ff", "ip_address": "167.254.209.87"}]} | False       | False |
+--------------------------------------+------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------+-------+

Clear router gateway

[root@localhost ~(keystone_admin)]# neutron help | grep gateway
  gateway-device-create             Create a network gateway device.
  gateway-device-delete             Delete a given network gateway device.
  gateway-device-list               List network gateway devices for a given tenant.
  gateway-device-show               Show information for a given network gateway device.
  gateway-device-update             Update a network gateway device.
  net-gateway-connect               Add an internal network interface to a router.
  net-gateway-create                Create a network gateway.
  net-gateway-delete                Delete a given network gateway.
  net-gateway-disconnect            Remove a network from a network gateway.
  net-gateway-list                  List network gateways for a given tenant.
  net-gateway-show                  Show information of a given network gateway.
  net-gateway-update                Update the name for a network gateway.
  router-gateway-clear              Remove an external network gateway from a router.
  router-gateway-set                Set the external network gateway for a router.
[root@localhost ~(keystone_admin)]# neutron router-gateway-clear help
Unable to find router with name 'help'
[root@localhost ~(keystone_admin)]# neutron help router-gateway-clear
usage: neutron router-gateway-clear [-h] [--request-format {json,xml}] ROUTER

Remove an external network gateway from a router.

positional arguments:
  ROUTER                ID or name of the router.

optional arguments:
  -h, --help            show this help message and exit
  --request-format {json,xml}
                        The XML or JSON request format.
[root@localhost ~(keystone_admin)]# neutron router-gateway-clear pub_router
Removed gateway from router pub_router
[root@localhost ~(keystone_admin)]# neutron router-list
+--------------------------------------+------------+-----------------------+-------------+-------+
| id                                   | name       | external_gateway_info | distributed | ha    |
+--------------------------------------+------------+-----------------------+-------------+-------+
| ee34dbdc-2368-4cb9-ba50-8f13e00ae389 | pub_router | null                  | False       | False |
+--------------------------------------+------------+-----------------------+-------------+-------+
[root@localhost ~(keystone_admin)]#

Finally delete router

[root@localhost ~(keystone_admin)]# neutron router-delete pub_router
Deleted router: pub_router
[root@localhost ~(keystone_admin)]# neutron router-list

[root@localhost ~(keystone_admin)]#

Delete tenant subnets and networks.

[root@localhost ~(keystone_admin)]# neutron subnet-list
+--------------------------------------+----------------+------------------+------------------------------------------------------+
| id                                   | name           | cidr             | allocation_pools                                     |
+--------------------------------------+----------------+------------------+------------------------------------------------------+
| c8a1061a-1ed9-43c8-a18f-684307644d68 | private_subnet | 10.0.0.0/24      | {"start": "10.0.0.2", "end": "10.0.0.254"}           |
| 684fa6ab-4fb9-406a-9264-2c53afa8d9ff | public_subnet  | 167.254.209.0/24 | {"start": "167.254.209.87", "end": "167.254.209.95"} |
| 395d7a7d-7479-4b6e-b184-c9638ff19beb | Tenant1_subnet | 192.168.11.0/24  | {"start": "192.168.11.2", "end": "192.168.11.254"}   |
| 2c3446dd-5c59-43a7-a067-4cb0f18511e6 | Tenant2_subnet | 192.168.12.0/24  | {"start": "192.168.12.2", "end": "192.168.12.254"}   |
+--------------------------------------+----------------+------------------+------------------------------------------------------+
[root@localhost ~(keystone_admin)]# neutron help | grep subnet
  subnet-create                     Create a subnet for a given tenant.
  subnet-delete                     Delete a given subnet.
  subnet-list                       List subnets that belong to a given tenant.
  subnet-show                       Show information of a given subnet.
  subnet-update                     Update subnet's information.
  subnetpool-create                 Create a subnetpool for a given tenant.
  subnetpool-delete                 Delete a given subnetpool.
  subnetpool-list                   List subnetpools that belong to a given tenant.
  subnetpool-show                   Show information of a given subnetpool.
  subnetpool-update                 Update subnetpool's information.
[root@localhost ~(keystone_admin)]# neutron subnet delete private_subnet
Unknown command [u'subnet', u'delete', u'private_subnet']
[root@localhost ~(keystone_admin)]# neutron subnet-delete private_subnet
Deleted subnet: private_subnet
[root@localhost ~(keystone_admin)]# neutron subnet-delete public_subnet
Deleted subnet: public_subnet
[root@localhost ~(keystone_admin)]# neutron subnet-delete Tenant1_subnet
Deleted subnet: Tenant1_subnet
[root@localhost ~(keystone_admin)]# neutron subnet-delete Tenant2_subnet
Deleted subnet: Tenant2_subnet
[root@localhost ~(keystone_admin)]# neutron subnet-list

[root@localhost ~(keystone_admin)]#


[root@localhost ~(keystone_admin)]# neutron net-list
+--------------------------------------+-------------+---------+
| id                                   | name        | subnets |
+--------------------------------------+-------------+---------+
| c44c3620-122a-450f-99ab-839c7798084d | Tenant1_net |         |
| a289276d-15eb-4397-af1a-67313eb9fa99 | private     |         |
| 3ac45bab-e08b-47ff-b01e-5b0ddb9127ca | public      |         |
| ff9c3eb7-f88f-42bb-af5f-ea810dad7505 | Tenant2_net |         |
+--------------------------------------+-------------+---------+
[root@localhost ~(keystone_admin)]# neutron net-delete Tenant1_net
Deleted network: Tenant1_net
[root@localhost ~(keystone_admin)]# neutron net-delete Tenant2_net
Deleted network: Tenant2_net
[root@localhost ~(keystone_admin)]# neutron net-delete private
Deleted network: private
[root@localhost ~(keystone_admin)]# neutron net-delete public
Deleted network: public
[root@localhost ~(keystone_admin)]# neutron net-list

[root@localhost ~(keystone_admin)]#

Delete nova flavor and security group

[root@localhost ~(keystone_admin)]# nova flavor-list
+--------------------------------------+---------+-----------+------+-----------+------+-------+-------------+-----------+
| ID                                   | Name    | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+--------------------------------------+---------+-----------+------+-----------+------+-------+-------------+-----------+
| 81a85a3e-d809-4619-8ff7-f589936b1d20 | m2.nano | 128       | 1    | 0         |      | 1     | 1.0         | True      |
+--------------------------------------+---------+-----------+------+-----------+------+-------+-------------+-----------+
[root@localhost ~(keystone_admin)]# nova help | grep flavor
    flavor-access-add           Add flavor access for the given tenant.
    flavor-access-list          Print access information about the given
                                flavor.
    flavor-access-remove        Remove flavor access for the given tenant.
    flavor-create               Create a new flavor
    flavor-delete               Delete a specific flavor
    flavor-key                  Set or unset extra_spec for a flavor.
    flavor-list                 Print a list of available 'flavors' (sizes of
    flavor-show                 Show details about the given flavor.
[root@localhost ~(keystone_admin)]# nova flavor-delete m2.nano
+--------------------------------------+---------+-----------+------+-----------+------+-------+-------------+-----------+
| ID                                   | Name    | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+--------------------------------------+---------+-----------+------+-----------+------+-------+-------------+-----------+
| 81a85a3e-d809-4619-8ff7-f589936b1d20 | m2.nano | 128       | 1    | 0         |      | 1     | 1.0         | True      |
+--------------------------------------+---------+-----------+------+-----------+------+-------+-------------+-----------+
[root@localhost ~(keystone_admin)]#


[root@localhost ~(keystone_admin)]# neutron security-group-list
+--------------------------------------+---------+----------------------------------------------------------------------+
| id                                   | name    | security_group_rules                                                 |
+--------------------------------------+---------+----------------------------------------------------------------------+
| 0d4b02eb-c67c-49eb-b45f-d7038482f02f | default | egress, IPv4                                                         |
|                                      |         | egress, IPv6                                                         |
|                                      |         | ingress, IPv4, remote_group_id: 0d4b02eb-c67c-49eb-b45f-d7038482f02f |
|                                      |         | ingress, IPv6, remote_group_id: 0d4b02eb-c67c-49eb-b45f-d7038482f02f |
| 0db6e683-2aaf-4a8f-9513-e3e86e006457 | mysec   | egress, IPv4                                                         |
|                                      |         | egress, IPv6                                                         |
|                                      |         | ingress, IPv4, 22/tcp                                                |
|                                      |         | ingress, IPv4, icmp                                                  |
| 6af71703-55ac-4abc-9188-d212f12a8267 | default | egress, IPv4                                                         |
|                                      |         | egress, IPv6                                                         |
|                                      |         | ingress, IPv4, remote_group_id: 6af71703-55ac-4abc-9188-d212f12a8267 |
|                                      |         | ingress, IPv6, remote_group_id: 6af71703-55ac-4abc-9188-d212f12a8267 |
| ce23d4c6-23c7-4569-abb3-2da61db2ad9f | default | egress, IPv4                                                         |
|                                      |         | egress, IPv6                                                         |
|                                      |         | ingress, IPv4, remote_group_id: ce23d4c6-23c7-4569-abb3-2da61db2ad9f |
|                                      |         | ingress, IPv6, remote_group_id: ce23d4c6-23c7-4569-abb3-2da61db2ad9f |
+--------------------------------------+---------+----------------------------------------------------------------------+
[root@localhost ~(keystone_admin)]# neutron help | grep security-group
  security-group-create             Create a security group.
  security-group-delete             Delete a given security group.
  security-group-list               List security groups that belong to a given tenant.
  security-group-rule-create        Create a security group rule.
  security-group-rule-delete        Delete a given security group rule.
  security-group-rule-list          List security group rules that belong to a given tenant.
  security-group-rule-show          Show information of a given security group rule.
  security-group-show               Show information of a given security group.
  security-group-update             Update a given security group.
[root@localhost ~(keystone_admin)]# neutron security-group-delete mysec
Deleted security_group: mysec
[root@localhost ~(keystone_admin)]# neutron security-group-list
+--------------------------------------+---------+----------------------------------------------------------------------+
| id                                   | name    | security_group_rules                                                 |
+--------------------------------------+---------+----------------------------------------------------------------------+
| 0d4b02eb-c67c-49eb-b45f-d7038482f02f | default | egress, IPv4                                                         |
|                                      |         | egress, IPv6                                                         |
|                                      |         | ingress, IPv4, remote_group_id: 0d4b02eb-c67c-49eb-b45f-d7038482f02f |
|                                      |         | ingress, IPv6, remote_group_id: 0d4b02eb-c67c-49eb-b45f-d7038482f02f |
| 6af71703-55ac-4abc-9188-d212f12a8267 | default | egress, IPv4                                                         |
|                                      |         | egress, IPv6                                                         |
|                                      |         | ingress, IPv4, remote_group_id: 6af71703-55ac-4abc-9188-d212f12a8267 |
|                                      |         | ingress, IPv6, remote_group_id: 6af71703-55ac-4abc-9188-d212f12a8267 |
| ce23d4c6-23c7-4569-abb3-2da61db2ad9f | default | egress, IPv4                                                         |
|                                      |         | egress, IPv6                                                         |
|                                      |         | ingress, IPv4, remote_group_id: ce23d4c6-23c7-4569-abb3-2da61db2ad9f |
|                                      |         | ingress, IPv6, remote_group_id: ce23d4c6-23c7-4569-abb3-2da61db2ad9f |
+--------------------------------------+---------+----------------------------------------------------------------------+
[root@localhost ~(keystone_admin)]#

This completes the cleanup; let's do a final check.

[root@localhost ~(keystone_admin)]# nova list
+----+------+--------+------------+-------------+----------+
| ID | Name | Status | Task State | Power State | Networks |
+----+------+--------+------------+-------------+----------+
+----+------+--------+------------+-------------+----------+
[root@localhost ~(keystone_admin)]# neutron net-list

[root@localhost ~(keystone_admin)]# neutron subnet-list

[root@localhost ~(keystone_admin)]# ip netns
[root@localhost ~(keystone_admin)]# nova flavor-list
+----+------+-----------+------+-----------+------+-------+-------------+-----------+
| ID | Name | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+----+------+-----------+------+-----------+------+-------+-------------+-----------+
+----+------+-----------+------+-----------+------+-------+-------------+-----------+
[root@localhost ~(keystone_admin)]# neutron router-list

[root@localhost ~(keystone_admin)]#

What about the OVS bridges and flows?

The OVS bridges (br-int, br-tun, and br-ex) are still there, since they are not provisioned through the Openstack CLI. The flows related to instance VLAN-id tagging/stripping and VXLAN encap/decap have been deleted, but the default flows remain in the bridges.
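
If you also want to remove the bridges themselves, ovs-vsctl can delete them by hand; a sketch (only do this if you really want a clean slate, a redeploy will recreate them):

#remove the OVS bridges manually (not done in this lab)
ovs-vsctl del-br br-tun
ovs-vsctl del-br br-int
ovs-vsctl del-br br-ex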

[root@localhost ~(keystone_admin)]# neutron router-list

[root@localhost ~(keystone_admin)]# ovs-vsctl show
fa6cb700-bc18-4368-b333-38f5f857655a
    Bridge br-tun
        fail_mode: secure
        Port "vxlan-0a0a000a"
            Interface "vxlan-0a0a000a"
                type: vxlan
                options: {df_default="true", in_key=flow, local_ip="10.10.0.1", out_key=flow, remote_ip="10.10.0.10"}
        Port br-tun
            Interface br-tun
                type: internal
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
    Bridge br-int
        fail_mode: secure
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port br-int
            Interface br-int
                type: internal
    Bridge br-ex
        Port br-ex
            Interface br-ex
                type: internal
    ovs_version: "2.4.0"
[root@localhost ~(keystone_admin)]# ovs-ofctl dump-flows br-tun
NXST_FLOW reply (xid=0x4):
 cookie=0xaf13b266b8c0ad46, duration=572576.465s, table=0, n_packets=887, n_bytes=101738, idle_age=65534, hard_age=65534, priority=1,in_port=1 actions=resubmit(,2)
 cookie=0x0, duration=562185.792s, table=0, n_packets=1343, n_bytes=136697, idle_age=65534, hard_age=65534, priority=1,in_port=2 actions=resubmit(,4)
 cookie=0xaf13b266b8c0ad46, duration=572576.465s, table=0, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=0 actions=drop
 cookie=0xaf13b266b8c0ad46, duration=572576.464s, table=2, n_packets=869, n_bytes=100226, idle_age=65534, hard_age=65534, priority=0,dl_dst=00:00:00:00:00:00/01:00:00:00:00:00 actions=resubmit(,20)
 cookie=0xaf13b266b8c0ad46, duration=572576.464s, table=2, n_packets=18, n_bytes=1512, idle_age=65534, hard_age=65534, priority=0,dl_dst=01:00:00:00:00:00/01:00:00:00:00:00 actions=resubmit(,22)
 cookie=0xaf13b266b8c0ad46, duration=572576.464s, table=3, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=0 actions=drop
 cookie=0xaf13b266b8c0ad46, duration=572576.464s, table=4, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=0 actions=drop
 cookie=0xaf13b266b8c0ad46, duration=572576.463s, table=6, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=0 actions=drop
 cookie=0xaf13b266b8c0ad46, duration=572576.463s, table=10, n_packets=1343, n_bytes=136697, idle_age=65534, hard_age=65534, priority=1 actions=learn(table=20,hard_timeout=300,priority=1,cookie=0xaf13b266b8c0ad46,NXM_OF_VLAN_TCI[0..11],NXM_OF_ETH_DST[]=NXM_OF_ETH_SRC[],load:0->NXM_OF_VLAN_TCI[],load:NXM_NX_TUN_ID[]->NXM_NX_TUN_ID[],output:NXM_OF_IN_PORT[]),output:1
 cookie=0xaf13b266b8c0ad46, duration=572576.463s, table=20, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=0 actions=resubmit(,22)
 cookie=0xaf13b266b8c0ad46, duration=572576.445s, table=22, n_packets=18, n_bytes=1512, idle_age=65534, hard_age=65534, priority=0 actions=drop
[root@localhost ~(keystone_admin)]#
[root@localhost ~(keystone_admin)]# ovs-ofctl dump-flows br-int
NXST_FLOW reply (xid=0x4):
 cookie=0xaf13b266b8c0ad46, duration=572612.933s, table=0, n_packets=2230, n_bytes=238435, idle_age=65534, hard_age=65534, priority=0 actions=NORMAL
 cookie=0xaf13b266b8c0ad46, duration=572612.927s, table=23, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=0 actions=drop
 cookie=0xaf13b266b8c0ad46, duration=572612.921s, table=24, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=0 actions=drop
[root@localhost ~(keystone_admin)]#

***********************
compute node
***********************
[labadmin@localhost ~]$ su -
[root@localhost ~]# ovs-vsctl show
4973e933-214d-4d54-b241-db3b33e16526
    Bridge br-tun
        fail_mode: secure
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port "vxlan-0a0a0001"
            Interface "vxlan-0a0a0001"
                type: vxlan
                options: {df_default="true", in_key=flow, local_ip="10.10.0.10", out_key=flow, remote_ip="10.10.0.1"}
        Port br-tun
            Interface br-tun
                type: internal
    Bridge br-int
        fail_mode: secure
        Port br-int
            Interface br-int
                type: internal
        Port mmport
            Interface mmport
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
    ovs_version: "2.4.0"
[root@localhost ~]# ovs-ofctl dump-flows br-tun
NXST_FLOW reply (xid=0x4):
 cookie=0x9a20a7bf7b554be4, duration=831110.208s, table=0, n_packets=1559, n_bytes=163853, idle_age=65534, hard_age=65534, priority=1,in_port=1 actions=resubmit(,2)
 cookie=0x9a20a7bf7b554be4, duration=572879.553s, table=0, n_packets=869, n_bytes=100226, idle_age=65534, hard_age=65534, priority=1,in_port=2 actions=resubmit(,4)
 cookie=0x9a20a7bf7b554be4, duration=831110.208s, table=0, n_packets=8, n_bytes=648, idle_age=65534, hard_age=65534, priority=0 actions=drop
 cookie=0x9a20a7bf7b554be4, duration=831110.208s, table=2, n_packets=1205, n_bytes=120821, idle_age=65534, hard_age=65534, priority=0,dl_dst=00:00:00:00:00:00/01:00:00:00:00:00 actions=resubmit(,20)
 cookie=0x9a20a7bf7b554be4, duration=831110.208s, table=2, n_packets=354, n_bytes=43032, idle_age=65534, hard_age=65534, priority=0,dl_dst=01:00:00:00:00:00/01:00:00:00:00:00 actions=resubmit(,22)
 cookie=0x9a20a7bf7b554be4, duration=831110.208s, table=3, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=0 actions=drop
 cookie=0x9a20a7bf7b554be4, duration=831110.207s, table=4, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=0 actions=drop
 cookie=0x9a20a7bf7b554be4, duration=831110.207s, table=6, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=0 actions=drop
 cookie=0x9a20a7bf7b554be4, duration=831110.207s, table=10, n_packets=869, n_bytes=100226, idle_age=65534, hard_age=65534, priority=1 actions=learn(table=20,hard_timeout=300,priority=1,cookie=0x9a20a7bf7b554be4,NXM_OF_VLAN_TCI[0..11],NXM_OF_ETH_DST[]=NXM_OF_ETH_SRC[],load:0->NXM_OF_VLAN_TCI[],load:NXM_NX_TUN_ID[]->NXM_NX_TUN_ID[],output:NXM_OF_IN_PORT[]),output:1
 cookie=0x9a20a7bf7b554be4, duration=831110.207s, table=20, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=0 actions=resubmit(,22)
 cookie=0x9a20a7bf7b554be4, duration=831110.196s, table=22, n_packets=117, n_bytes=13334, idle_age=65534, hard_age=65534, priority=0 actions=drop
[root@localhost ~]#

Lab-20: Debugging Openstack Neutron

While I was working on Lab-19 I came across some Neutron issues. I learned a lot while debugging and resolving them, and in this lab I will show in detail how I did it.

Issue-1:

Immediately after installing, I checked the status of Openstack and found neutron-l3-agent 'inactive'. The L3 agent is required for router functions, so it definitely needs to be active. Use the command below to start it. Note: it is good practice to check agent status after an install.

[root@localhost ~(keystone_admin)]# openstack-status
== Nova services ==
openstack-nova-api:                     active
openstack-nova-compute:                 inactive  (disabled on boot)
openstack-nova-network:                 inactive  (disabled on boot)
openstack-nova-scheduler:               active
openstack-nova-cert:                    active
openstack-nova-conductor:               active
openstack-nova-console:                 inactive  (disabled on boot)
openstack-nova-consoleauth:             active
openstack-nova-xvpvncproxy:             inactive  (disabled on boot)
== Glance services ==
openstack-glance-api:                   active
openstack-glance-registry:              active
== Keystone service ==
openstack-keystone:                     inactive  (disabled on boot)
== Horizon service ==
openstack-dashboard:                    active
== neutron services ==
neutron-server:                         active
neutron-dhcp-agent:                     active
neutron-l3-agent:                       inactive    (disabled on boot)
neutron-metadata-agent:                 active
neutron-lbaas-agent:                    active
neutron-openvswitch-agent:              active
== Swift services ==
openstack-swift-proxy:                  active
openstack-swift-account:                active
openstack-swift-container:              active
openstack-swift-object:                 active
== Support services ==
mysqld:                                 inactive  (disabled on boot)
openvswitch:                            active
dbus:                                   active
rabbitmq-server:                        active
memcached:                              active
== Keystone users ==

[root@localhost network-scripts(keystone_admin)]# service neutron-l3-agent start

Note: I am not sure why the l3-agent shows 'disabled on boot' and came up as 'inactive'.
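
If you want the L3 agent to come up automatically after a reboot, enabling the service should help. A sketch, assuming a systemd-based host like the one used here:

#enable and start the L3 agent so it survives reboots
systemctl enable neutron-l3-agent
systemctl start neutron-l3-agent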

Issue-2:

My instance booted successfully. 'nova list' shows the instance as active with IP address 192.168.11.3 assigned to it, but pinging the instance failed. This could be either a network issue or a DHCP issue.

Note: the 'nova list' command shows the instance IP address, but that doesn't mean the instance actually got the IP.

I rebooted my instance (nova reboot Tenant1_VM1) and checked the DHCP interface counters to see whether it is receiving any packets. As you can see, the RX packet count is not incrementing, which means the instance's DHCP discover messages are not making it up to the DHCP server.

[root@localhost ~(keystone_admin)]# ip netns exec qdhcp-c44c3620-122a-450f-99ab-839c7798084d ifconfig
lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 0  (Local Loopback)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

tape962925b-c9: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.11.2  netmask 255.255.255.0  broadcast 192.168.11.255
        inet6 fe80::f816:3eff:fec9:3669  prefixlen 64  scopeid 0x20<link>
        ether fa:16:3e:c9:36:69  txqueuelen 0  (Ethernet)
        RX packets 41  bytes 1994 (1.9 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 8  bytes 648 (648.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[root@localhost ~(keystone_admin)]#
[root@localhost ~(keystone_admin)]# ip netns exec qdhcp-c44c3620-122a-450f-99ab-839c7798084d ifconfig
lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 0  (Local Loopback)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

tape962925b-c9: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.11.2  netmask 255.255.255.0  broadcast 192.168.11.255
        inet6 fe80::f816:3eff:fec9:3669  prefixlen 64  scopeid 0x20<link>
        ether fa:16:3e:c9:36:69  txqueuelen 0  (Ethernet)
        RX packets 41  bytes 1994 (1.9 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 8  bytes 648 (648.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
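
Another way to confirm that no DHCP requests are arriving is to capture inside the DHCP namespace itself; a sketch using the namespace and tap interface names shown above:

#watch for DHCP traffic on the dnsmasq tap interface
ip netns exec qdhcp-c44c3620-122a-450f-99ab-839c7798084d tcpdump -n -i tape962925b-c9 port 67 or port 68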

Check whether the DHCP process is running and restart the DHCP agent. In Neutron the DHCP function is provided by dnsmasq.

[root@localhost ~(keystone_admin)]# ps aux | grep dnsmasq
nobody    2615  0.0  0.0  15552   904 ?        S    Apr11   0:00 dnsmasq 
--no-hosts --no-resolv --strict-order --except-interface=lo 
--pid-file=/var/lib/neutron/dhcp/c44c3620-122a-450f-99ab-839c7798084d/pid --dhcp-hostsfile=/var/lib/neutron/dhcp/c44c3620-122a-450f-99ab-839c7798084d/host --addn-hosts=/var/lib/neutron/dhcp/c44c3620-122a-450f-99ab-839c7798084d/addn_hosts --dhcp-optsfile=/var/lib/neutron/dhcp/c44c3620-122a-450f-99ab-839c7798084d/opts --dhcp-leasefile=/var/lib/neutron/dhcp/c44c3620-122a-450f-99ab-839c7798084d/leases --dhcp-match=set:ipxe,175 --bind-interfaces --interface=tape962925b-c9 --dhcp-range=set:tag0,192.168.11.0,static,86400s --dhcp-lease-max=256 --conf-file=/etc/neutron/dnsmasq-neutron.conf --domain=openstacklocal
root     19160  0.0  0.0 112648   960 pts/0    S+   12:33   0:00 grep --color=auto dnsmasq
[root@localhost ~(keystone_admin)]#
[root@localhost ~(keystone_admin)]# service neutron-dhcp-agent restart

This didn't resolve the issue; I still can't ping my instance.

I wanted to make sure my instance actually got the IP address. The best way to check is the console-log command, which provides the detailed boot log of an instance, including its interface info. Run 'nova console-log <instance name>' and see whether the instance has an IP address.

[root@localhost ~(keystone_admin)]# nova list
+--------------------------------------+-------------+--------+------------+-------------+--------------------------+
| ID                                   | Name        | Status | Task State | Power State | Networks                 |
+--------------------------------------+-------------+--------+------------+-------------+--------------------------+
| a33591b6-c325-454d-a4b0-50ba82d0b257 | Tenant1_VM1 | ACTIVE | -          | Running     | Tenant1_net=192.168.11.3 |
+--------------------------------------+-------------+--------+------------+-------------+--------------------------+
[root@localhost ~(keystone_admin)]# ip netns
qrouter-ee34dbdc-2368-4cb9-ba50-8f13e00ae389
qdhcp-c44c3620-122a-450f-99ab-839c7798084d
[root@localhost ~(keystone_admin)]# ip netns exec qrouter-ee34dbdc-2368-4cb9-ba50-8f13e00ae389 ping 192.168.11.3
PING 192.168.11.3 (192.168.11.3) 56(84) bytes of data.
From 192.168.11.1 icmp_seq=1 Destination Host Unreachable
From 192.168.11.1 icmp_seq=2 Destination Host Unreachable
From 192.168.11.1 icmp_seq=3 Destination Host Unreachable
From 192.168.11.1 icmp_seq=4 Destination Host Unreachable
^C
--- 192.168.11.3 ping statistics ---
5 packets transmitted, 0 received, +4 errors, 100% packet loss, time 3999ms
pipe 4

#I am not showing the complete log, only the part I am interested in
[root@localhost ~(keystone_admin)]# nova console-log Tenant1_VM1
udhcpc (v1.20.1) started
Sending discover...
Sending discover...
Sending discover...
############ debug start ##############
### /etc/init.d/sshd start
Starting dropbear sshd: OK
route: fscanf
### ifconfig -a
eth0      Link encap:Ethernet  HWaddr FA:16:3E:DB:A1:50
          inet6 addr: fe80::f816:3eff:fedb:a150/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:16 errors:0 dropped:0 overruns:0 frame:0
          TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:1284 (1.2 KiB)  TX bytes:1132 (1.1 KiB)
lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
### route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
route: fscanf

As you can see from the console-log, my instance doesn't have an IP address: the eth0 interface has no address and the DHCP discover is timing out. This means DHCP packets are getting dropped somewhere in the network; something is not set up correctly.

I dumped the br-int and br-tun bridges on the compute and network nodes and noticed that the vxlan port was missing from the br-tun bridge on the compute node. The neutron-openvswitch agent is responsible for setting up br-tun and br-int, so I restarted it; that fixed the bridge and br-tun on the compute node now has a vxlan port. I rebooted the instance (nova reboot Tenant1_VM1) and checked the console-log. No change, the instance still doesn't have an IP address, bummer…

#vxlan port is missing from br-tun bridge on compute node
[root@localhost ~]# ovs-vsctl show
4973e933-214d-4d54-b241-db3b33e16526
    Bridge br-tun
        fail_mode: secure
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port br-tun
            Interface br-tun
                type: internal
    Bridge br-int
        fail_mode: secure
        Port br-int
            Interface br-int
                type: internal
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port "qvo8d2aef86-ef"
            tag: 1
            Interface "qvo8d2aef86-ef"
    ovs_version: "2.4.0"

# restart openvswitch-agent
[root@localhost ~(keystone_admin)]# service neutron-openvswitch-agent restart

#vxlan port created
[root@localhost ~]# ovs-vsctl show
4973e933-214d-4d54-b241-db3b33e16526
    Bridge br-tun
        fail_mode: secure
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port "vxlan-0a0a0001"
            Interface "vxlan-0a0a0001"
                type: vxlan
                options: {df_default="true", in_key=flow, local_ip="10.10.0.10", out_key=flow, remote_ip="10.10.0.1"}
        Port br-tun
            Interface br-tun
                type: internal
    Bridge br-int
        fail_mode: secure
        Port br-int
            Interface br-int
                type: internal
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port "qvo8d2aef86-ef"
            tag: 2
            Interface "qvo8d2aef86-ef"
    ovs_version: "2.4.0"
[root@localhost ~(keystone_admin)]# nova list
+--------------------------------------+-------------+--------+------------+-------------+--------------------------+
| ID                                   | Name        | Status | Task State | Power State | Networks                 |
+--------------------------------------+-------------+--------+------------+-------------+--------------------------+
| a33591b6-c325-454d-a4b0-50ba82d0b257 | Tenant1_VM1 | ACTIVE | -          | Running     | Tenant1_net=192.168.11.3 |
+--------------------------------------+-------------+--------+------------+-------------+--------------------------+
[root@localhost ~(keystone_admin)]# ip netns
qrouter-ee34dbdc-2368-4cb9-ba50-8f13e00ae389
qdhcp-c44c3620-122a-450f-99ab-839c7798084d
[root@localhost ~(keystone_admin)]# ip netns exec qrouter-ee34dbdc-2368-4cb9-ba50-8f13e00ae389 ping 192.168.11.3
PING 192.168.11.3 (192.168.11.3) 56(84) bytes of data.
From 192.168.11.1 icmp_seq=1 Destination Host Unreachable
From 192.168.11.1 icmp_seq=2 Destination Host Unreachable
From 192.168.11.1 icmp_seq=3 Destination Host Unreachable
From 192.168.11.1 icmp_seq=4 Destination Host Unreachable
^C
--- 192.168.11.3 ping statistics ---
5 packets transmitted, 0 received, +4 errors, 100% packet loss, time 3999ms
pipe 4
[root@localhost ~(keystone_admin)]# nova console-log Tenant1_VM1

 

Now it's personal. Bring it on, time to take out the big gun.. tcpdump

I started from the source of the problem, the tenant instance: I ran tcpdump on the Linux bridge interfaces (tap & qvb) and rebooted my instance.

neutron_debugging-1

[root@localhost ~]# tcpdump -i qvb90ebb2d6-19 udp
tcpdump: WARNING: qvb90ebb2d6-19: no IPv4 address assigned
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on qvb90ebb2d6-19, link-type EN10MB (Ethernet), capture size 65535 bytes
12:03:23.199952 IP 0.0.0.0.bootpc > 255.255.255.255.bootps: BOOTP/DHCP, Request from fa:16:3e:20:22:f0 (oui Unknown), length 295
12:03:23.201467 IP 0.0.0.0.bootpc > 255.255.255.255.bootps: BOOTP/DHCP, Request from fa:16:3e:20:22:f0 (oui Unknown), length 307

As you can see, dhcp discover packets are received on the bridge interfaces but there are no reply packets. So the Linux bridge is not the culprit.. move on

Next I ran tcpdump on the br-int interface qvo. This interface is also receiving dhcp packets.

That covers all the virtual interfaces on the compute node that tcpdump can attach to; the remaining interfaces on br-int and br-tun are internal OVS interfaces, so tcpdump will not work on them directly. This link shows a cool trick: create a mirror port for an internal bridge port and run tcpdump on the mirror.

Here I am creating a mirror port for the br-int internal port patch-tun, which is connected to the br-tun bridge. I will then run tcpdump on it.

#create a dummy port named mmport and set its state to UP
$ip link add name mmport type dummy
$ip link set dev mmport up

#Add device mmport to bridge br-int:
$ovs-vsctl add-port br-int mmport

[root@localhost ~]# ovs-vsctl show
4973e933-214d-4d54-b241-db3b33e16526
    Bridge br-tun
        fail_mode: secure
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port "vxlan-0a0a0001"
            Interface "vxlan-0a0a0001"
                type: vxlan
                options: {df_default="true", in_key=flow, local_ip="10.10.0.10", out_key=flow, remote_ip="10.10.0.1"}
        Port br-tun
            Interface br-tun
                type: internal
    Bridge br-int
        fail_mode: secure
        Port br-int
            Interface br-int
                type: internal
        Port "qvo90ebb2d6-19"
            tag: 4
            Interface "qvo90ebb2d6-19"
        Port mmport
            Interface mmport
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
    ovs_version: "2.4.0"
[root@localhost ~]#
#Create mirror of patch-tun to mmport. I don't understand the command but believe 
#me it works, just cut & paste
[root@localhost ~]# ovs-vsctl -- set Bridge br-int mirrors=@m  -- --id=@mmport \
> get Port mmport  -- --id=@patch-tun get Port patch-tun \
> -- --id=@m create Mirror name=mmirror select-dst-port=@patch-tun \
> select-src-port=@patch-tun output-port=@mmport select_all=1
c171aa59-313a-4e7f-b4ae-e0568fe6ab7a
[root@localhost ~]#
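Roughly speaking, each '--id=@name get Port <port>' clause stores the UUID of an existing port in a variable, and the final 'create Mirror' clause creates a mirror record on br-int that selects traffic to and from patch-tun (and, because select_all=1 is set, every packet on the bridge) and copies it to mmport. If you want to confirm the mirror record exists:

#list mirror records on the switch
$ovs-vsctl list Mirror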

 

Run tcpdump on the dummy mirror port; as you can see, it is receiving dhcp discover messages.

[root@localhost ~]# tcpdump -i mmport | grep DHCP
tcpdump: WARNING: mmport: no IPv4 address assigned
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on mmport, link-type EN10MB (Ethernet), capture size 65535 bytes
12:52:13.625834 IP 0.0.0.0.bootpc > 255.255.255.255.bootps: BOOTP/DHCP, Request from fa:16:3e:20:22:f0 (oui Unknown), length 295
12:52:13.627315 IP 0.0.0.0.bootpc > 255.255.255.255.bootps: BOOTP/DHCP, Request from fa:16:3e:20:22:f0 (oui Unknown), length 307

Try these commands to delete the mirror port

#ovs-vsctl clear Bridge br-int mirrors
#ovs-vsctl del-port br-int mmport
#ip link delete dev mmport

Next I tried the ens5 (physical) interface; the network and compute nodes are connected over this interface. It is also receiving dhcp messages, so the issue is not on the compute node.

The compute node seems to be behaving right, so I moved debugging to the network node. tcpdump on ens5 looks good. tcpdump on the qvo interface is not good, no dhcp messages are received on it. I created a mirror port for patch-tun on br-int and found that it is not receiving dhcp messages either. So something is wrong in br-tun on the network node. Looks like I have identified the culprit.

I closely analyzed the br-tun bridge ports and flows. After googling and learning about the OVS flow table, and comparing the flow table with the compute node br-tun (both nodes' tables should be identical), I found that a flow was missing in the network node br-tun.

I added the flow manually and rebooted my instance. This time the instance successfully fetched an IP address from the dhcp server and I am able to ping it.

[root@localhost ~(keystone_admin)]# ovs-ofctl add-flow br-tun "in_port=2 priority=1 table=0 actions=resubmit(,4)"
[root@localhost ~(keystone_admin)]# ovs-ofctl dump-flows br-tun
NXST_FLOW reply (xid=0x4):
 cookie=0xaf13b266b8c0ad46, duration=10418.686s, table=0, n_packets=0, n_bytes=0, idle_age=10418, priority=1,in_port=1 actions=resubmit(,2)
 cookie=0x0, duration=28.013s, table=0, n_packets=0, n_bytes=0, idle_age=28, priority=1,in_port=2 actions=resubmit(,4)
 cookie=0xaf13b266b8c0ad46, duration=10418.686s, table=0, n_packets=0, n_bytes=0, idle_age=10418, priority=0 actions=drop
 cookie=0xaf13b266b8c0ad46, duration=10418.685s, table=2, n_packets=0, n_bytes=0, idle_age=10418, priority=0,dl_dst=00:00:00:00:00:00/01:00:00:00:00:00 actions=resubmit(,20)
 cookie=0xaf13b266b8c0ad46, duration=10418.685s, table=2, n_packets=0, n_bytes=0, idle_age=10418, priority=0,dl_dst=01:00:00:00:00:00/01:00:00:00:00:00 actions=resubmit(,22)
 cookie=0xaf13b266b8c0ad46, duration=10418.685s, table=3, n_packets=0, n_bytes=0, idle_age=10418, priority=0 actions=drop
 cookie=0xaf13b266b8c0ad46, duration=10418.055s, table=4, n_packets=0, n_bytes=0, idle_age=10418, priority=1,tun_id=0x24 actions=mod_vlan_vid:3,resubmit(,10)
 cookie=0xaf13b266b8c0ad46, duration=10418.030s, table=4, n_packets=0, n_bytes=0, idle_age=10418, priority=1,tun_id=0x3e actions=mod_vlan_vid:4,resubmit(,10)
 cookie=0xaf13b266b8c0ad46, duration=10418.685s, table=4, n_packets=0, n_bytes=0, idle_age=10418, priority=0 actions=drop
 cookie=0xaf13b266b8c0ad46, duration=10418.684s, table=6, n_packets=0, n_bytes=0, idle_age=10418, priority=0 actions=drop
 cookie=0xaf13b266b8c0ad46, duration=10418.684s, table=10, n_packets=0, n_bytes=0, idle_age=10418, priority=1 actions=learn(table=20,hard_timeout=300,priority=1,cookie=0xaf13b266b8c0ad46,NXM_OF_VLAN_TCI[0..11],NXM_OF_ETH_DST[]=NXM_OF_ETH_SRC[],load:0->NXM_OF_VLAN_TCI[],load:NXM_NX_TUN_ID[]->NXM_NX_TUN_ID[],output:NXM_OF_IN_PORT[]),output:1
 cookie=0xaf13b266b8c0ad46, duration=10418.684s, table=20, n_packets=0, n_bytes=0, idle_age=10418, priority=0 actions=resubmit(,22)
 cookie=0xaf13b266b8c0ad46, duration=10418.666s, table=22, n_packets=0, n_bytes=0, idle_age=10418, priority=0 actions=drop
[root@localhost ~(keystone_admin)]#

 

Here is an overview of the br-tun flow table. This link explains it quite well. OVS flow tables are made of match criteria and actions. These are the tables in my br-tun bridge:

  1. Table-0: by default all packets land in table-0
    1. match: in_port
    2. actions: in_port=1 (traffic from br-int) resubmit the packet to table-2; in_port=2 (traffic from the remote br-tun) resubmit it to table-4
  2. Table-2
    1. match: unicast vs. broadcast/multicast destination MAC
    2. actions: unicast packets are resubmitted to table-20; broadcast or multicast packets are resubmitted to table-22
  3. Table-20
    1. actions: if the destination MAC has already been learned, send the packet out of the corresponding tunnel port; otherwise resubmit to table-22
  4. Table-22
    1. match: dl_vlan
    2. actions: strip the vlan tag, add the vxlan tunnel-id and send the packet out of port-2
  5. Table-4
    1. match: vxlan tunnel-id
    2. actions: add the corresponding dl_vlan and resubmit to table-10
  6. Table-10
    1. actions: learn the source MAC into table-20 (for return traffic) and send the packet to br-int via port-1
neutron_debugging-2
br-tun flow tables
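If you want to look at one stage at a time, ovs-ofctl can filter the dump by table; for example, on either node:

#show only table 4 (maps an incoming vxlan tunnel-id to a local vlan)
$ovs-ofctl dump-flows br-tun table=4
#show only table 22 (flood table for broadcast/multicast and unknown unicast)
$ovs-ofctl dump-flows br-tun table=22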

 

 

 

 

Lab-19:Openstack multi-node deployment using Packstack

I finally managed to get a second machine. In this lab I will demonstrate an Openstack deployment in a two node environment. I will set up the controller and network node on one machine and the compute node on another. As usual I will be using packstack.

Pre-condition:

For this lab I am using CentOS 7, installed on two machines. Both machines have two physical interfaces (enp1s0 & ens5). I am using enp1s0 for remote access to the machines and ens5 to connect them together; ens5 will be used for Openstack API and tunnel communication.

# cat /etc/*elease

CentOS Linux release 7.2.1511 (Core)
NAME="CentOS Linux"
VERSION="7 (Core)"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="7"
PRETTY_NAME="CentOS Linux 7 (Core)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:7"
HOME_URL="https://www.centos.org/"
BUG_REPORT_URL="https://bugs.centos.org/"

CENTOS_MANTISBT_PROJECT="CentOS-7"
CENTOS_MANTISBT_PROJECT_VERSION="7"
REDHAT_SUPPORT_PRODUCT="centos"
REDHAT_SUPPORT_PRODUCT_VERSION="7"

CentOS Linux release 7.2.1511 (Core)
CentOS Linux release 7.2.1511 (Core)
[root@controller ~]#

Follow these steps to prep for packstack installation

  • Install yum-utils on both nodes
    $yum install -y yum-utils
  • Set SELINUX in permissive mode on both nodes (vi /etc/selinux/config)
    SELINUX=permissive
  • disable Network Manager on both nodes
    $sudo systemctl disable NetworkManager
  • Disable firewall on both nodes

$systemctl disable firewalld

  • sudo yum update -y
  • reboot both nodes

I followed this link to load the latest Openstack release, in my case Liberty. Try the commands below on the controller node

#add these to your environment file
[root@localhost ~]# cat /etc/environment
LANG=en_US.utf-8
LC_ALL=en_US.utf-8

[root@localhost ~]# sudo yum install -y centos-release-openstack-liberty
[root@localhost ~]# sudo yum update -y
[root@localhost ~]# sudo yum install -y openstack-packstack

Update the IP address of the ens5 interface on the controller/network and compute nodes. I updated the file /etc/sysconfig/network-scripts/ifcfg-ens5 and then restarted the network service with ‘sudo service network restart’

controller/network node = 10.10.0.1
compute node = 10.10.0.10

#This is the example of my ens5 file in controller/network node
[root@localhost network-scripts(keystone_admin)]# cat ifcfg-ens5
HWADDR=00:0A:CD:2A:14:08
TYPE=Ethernet
BOOTPROTO=none
DEFROUTE=yes
PEERDNS=yes
NAME=ens5
UUID=2bb5b0d3-1369-4039-b07f-5deebfc25bd9
ONBOOT=yes
IPADDR=10.10.0.1
PREFIX=24
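For reference, the compute node uses the same kind of file with its own address. A minimal sketch (HWADDR and UUID are machine specific, so they are left out here):

#example ifcfg-ens5 on compute node (illustrative)
TYPE=Ethernet
BOOTPROTO=none
NAME=ens5
ONBOOT=yes
IPADDR=10.10.0.10
PREFIX=24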

Make sure you can ssh to the compute node from the controller node

[root@localhost network-scripts]# ssh -l labadmin 10.10.0.10
The authenticity of host '10.10.0.10 (10.10.0.10)' can't be established.
ECDSA key fingerprint is 48:06:a3:81:f4:62:4e:1e:3f:73:9f:34:12:1d:17:af.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '10.10.0.10' (ECDSA) to the list of known hosts.
labadmin@10.10.0.10's password:
Last login: Fri Apr  8 17:06:13 2016
[labadmin@localhost ~]$

Procedure:

  • Generate an answer-file and edit it to suit your topology. Below are the changes I made in my answer-file. You can find my answer file multi-node-answer-file-lab_19.txt
 $packstack --gen-answer-file=multi-node-answer-file-lab_19.txt

#these are the changes I made in my answer-file
CONFIG_SWIFT_INSTALL=n
CONFIG_CEILOMETER_INSTALL=n
CONFIG_CINDER_INSTALL=n
CONFIG_CONTROLLER_HOST=10.10.0.1
CONFIG_COMPUTE_HOSTS=10.10.0.10
CONFIG_NETWORK_HOSTS=10.10.0.1
CONFIG_LBAAS_INSTALL=y
CONFIG_NEUTRON_FWAAS=y
CONFIG_NEUTRON_VPNAAS=y
CONFIG_PROVISION_OVS_BRIDGE=y
CONFIG_NEUTRON_ML2_TYPE_DRIVERS=vxlan
CONFIG_NEUTRON_ML2_TENANT_NETWORK_TYPES=vxlan
CONFIG_NEUTRON_ML2_MECHANISM_DRIVERS=openvswitch
CONFIG_NEUTRON_L2_AGENT=openvswitch
CONFIG_NEUTRON_OVS_TUNNEL_IF=ens5
CONFIG_KEYSTONE_ADMIN_USERNAME=admin
CONFIG_KEYSTONE_ADMIN_PW=admin
  • Start Packstack with the newly created answer file
 $packstack --answer-file multi-node-answer-file-lab_19.txt

#after 10-15 min you will see this message
**** Installation completed successfully ******

Additional information:
 * Time synchronization installation was skipped. Please note that unsynchronized time on server instances might be problem for some OpenStack components.
 * File /root/keystonerc_admin has been created on OpenStack client host 10.10.0.1. To use the command line tools you need to source the file.
 * To access the OpenStack Dashboard browse to http://10.10.0.1/dashboard .
Please, find your login credentials stored in the keystonerc_admin in your home directory.
 * Because of the kernel update the host 10.10.0.1 requires reboot.
 * Because of the kernel update the host 10.10.0.10 requires reboot.
 * The installation log file is available at: /var/tmp/packstack/20160422-202525-FLQT1Q/openstack-setup.log
 * The generated manifests are available at: /var/tmp/packstack/20160422-202525-FLQT1Q/manifests
  • In a browser point to ‘http://10.10.0.1/dashboard’. Login with username:admin and password:admin. Note: check the admin user password in the ‘keystonerc_admin’ file located in the directory from where packstack was started. Delete the default routers, networks and flavors from the GUI
  • Source the admin credentials and create networks and tenants
[root@localhost ~]# . keystonerc_admin
[root@localhost ~(keystone_admin)]#
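If you are curious what was sourced, keystonerc_admin is just a short shell snippet exporting the admin credentials. An illustrative example (the exact contents depend on your answer-file and packstack version):

#illustrative keystonerc_admin contents
export OS_USERNAME=admin
export OS_PASSWORD=admin
export OS_AUTH_URL=http://10.10.0.1:5000/v2.0
export OS_TENANT_NAME=admin
export PS1='[\u@\h \W(keystone_admin)]\$ '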
  • Check Openstack status and make sure all required components are ‘active’. Note: for me neutron-l3-agent was ‘inactive’; I ran the command below to make it active
[root@localhost ~]#service neutron-l3-agent start
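If the openstack-utils package is installed (packstack normally pulls it in), openstack-status gives a one-shot summary of all services on the node; individual agents can also be checked with systemctl:

#summary of all OpenStack services on this node
$openstack-status
#or check a single agent
$systemctl status neutron-l3-agent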
  • Create a new flavor
[root@localhost ~(keystone_admin)]# nova flavor-create m2.nano auto 128 1 1
+--------------------------------------+---------+-----------+------+-----------+------+-------+-------------+-----------+
| ID                                   | Name    | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+--------------------------------------+---------+-----------+------+-----------+------+-------+-------------+-----------+
| 81a85a3e-d809-4619-8ff7-f589936b1d20 | m2.nano | 128       | 1    | 0         |      | 1     | 1.0         | True      |
+--------------------------------------+---------+-----------+------+-----------+------+-------+-------------+-----------+
[root@localhost ~(keystone_admin)]#
[root@localhost ~(keystone_admin)]# nova flavor-list
+--------------------------------------+---------+-----------+------+-----------+------+-------+-------------+-----------+
| ID                                   | Name    | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+--------------------------------------+---------+-----------+------+-----------+------+-------+-------------+-----------+
| 81a85a3e-d809-4619-8ff7-f589936b1d20 | m2.nano | 128       | 1    | 0         |      | 1     | 1.0         | True      |
+--------------------------------------+---------+-----------+------+-----------+------+-------+-------------+-----------+
[root@localhost ~(keystone_admin)]#
  • Create the public network & router
[root@localhost ~(keystone_admin)]# neutron net-create public --router:external=True
Created a new network:
+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | True                                 |
| id                        | 3ac45bab-e08b-47ff-b01e-5b0ddb9127ca |
| mtu                       | 0                                    |
| name                      | public                               |
| provider:network_type     | vxlan                                |
| provider:physical_network |                                      |
| provider:segmentation_id  | 30                                   |
| router:external           | True                                 |
| shared                    | False                                |
| status                    | ACTIVE                               |
| subnets                   |                                      |
| tenant_id                 | 5dc8330acb6f4fb8a91f2abb839f7773     |
+---------------------------+--------------------------------------+
[root@localhost ~(keystone_admin)]#
[root@localhost ~(keystone_admin)]# neutron subnet-create --disable-dhcp public 167.254.209.0/24 \
--name public_subnet --allocation-pool start=167.254.209.87,end=167.254.209.95 --gateway-ip 167.254.209.126

Created a new subnet:
+-------------------+------------------------------------------------------+
| Field             | Value                                                |
+-------------------+------------------------------------------------------+
| allocation_pools  | {"start": "167.254.209.87", "end": "167.254.209.95"} |
| cidr              | 167.254.209.0/24                                     |
| dns_nameservers   |                                                      |
| enable_dhcp       | False                                                |
| gateway_ip        | 167.254.209.126                                      |
| host_routes       |                                                      |
| id                | 684fa6ab-4fb9-406a-9264-2c53afa8d9ff                 |
| ip_version        | 4                                                    |
| ipv6_address_mode |                                                      |
| ipv6_ra_mode      |                                                      |
| name              | public_subnet                                        |
| network_id        | 3ac45bab-e08b-47ff-b01e-5b0ddb9127ca                 |
| subnetpool_id     |                                                      |
| tenant_id         | 5dc8330acb6f4fb8a91f2abb839f7773                     |
+-------------------+------------------------------------------------------+
[root@localhost ~(keystone_admin)]#
[root@localhost ~(keystone_admin)]# neutron router-create pub_router
Created a new router:
+-----------------------+--------------------------------------+
| Field                 | Value                                |
+-----------------------+--------------------------------------+
| admin_state_up        | True                                 |
| distributed           | False                                |
| external_gateway_info |                                      |
| ha                    | False                                |
| id                    | ee34dbdc-2368-4cb9-ba50-8f13e00ae389 |
| name                  | pub_router                           |
| routes                |                                      |
| status                | ACTIVE                               |
| tenant_id             | 5dc8330acb6f4fb8a91f2abb839f7773     |
+-----------------------+--------------------------------------+
#set gateway on public router
[root@localhost ~(keystone_admin)]# neutron router-gateway-set pub_router public
Set gateway for router pub_router
  • Create two tenants, Tenant1 & Tenant2
[root@localhost ~(keystone_admin)]# keystone tenant-create --name Tenant1
[root@localhost ~(keystone_admin)]# keystone tenant-create --name Tenant2
[root@localhost ~(keystone_admin)]# keystone tenant-list
+----------------------------------+----------+---------+
|                id                |   name   | enabled |
+----------------------------------+----------+---------+
| 34a95df6b5bf4744a3fdd9d9b433c8d0 | Tenant1  |   True  |
| b8e204f9e5c74ac387ff431972bfc9fb | Tenant2  |   True  |
| 5dc8330acb6f4fb8a91f2abb839f7773 |  admin   |   True  |
| 0eb0466edb0c4032985289299ba48455 |   demo   |   True  |
| 7250ab3844684a20ab654d38b353060b | services |   True  |
+----------------------------------+----------+---------+
  • Create the tenant network and attach it to the router interface
[root@localhost ~(keystone_admin)]# neutron net-create Tenant1_net
Created a new network:
+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | True                                 |
| id                        | c44c3620-122a-450f-99ab-839c7798084d |
| mtu                       | 0                                    |
| name                      | Tenant1_net                          |
| provider:network_type     | vxlan                                |
| provider:physical_network |                                      |
| provider:segmentation_id  | 36                                   |
| router:external           | False                                |
| shared                    | False                                |
| status                    | ACTIVE                               |
| subnets                   |                                      |
| tenant_id                 | 5dc8330acb6f4fb8a91f2abb839f7773     |
+---------------------------+--------------------------------------+

[root@localhost ~(keystone_admin)]# neutron subnet-create --name Tenant1_subnet \
>   --dns-nameserver 8.8.8.8 Tenant1_net 192.168.11.0/24
Created a new subnet:
+-------------------+----------------------------------------------------+
| Field             | Value                                              |
+-------------------+----------------------------------------------------+
| allocation_pools  | {"start": "192.168.11.2", "end": "192.168.11.254"} |
| cidr              | 192.168.11.0/24                                    |
| dns_nameservers   | 8.8.8.8                                            |
| enable_dhcp       | True                                               |
| gateway_ip        | 192.168.11.1                                       |
| host_routes       |                                                    |
| id                | 395d7a7d-7479-4b6e-b184-c9638ff19beb               |
| ip_version        | 4                                                  |
| ipv6_address_mode |                                                    |
| ipv6_ra_mode      |                                                    |
| name              | Tenant1_subnet                                     |
| network_id        | c44c3620-122a-450f-99ab-839c7798084d               |
| subnetpool_id     |                                                    |
| tenant_id         | 5dc8330acb6f4fb8a91f2abb839f7773                   |
+-------------------+----------------------------------------------------+
#add interface to public router
[root@localhost ~(keystone_admin)]# neutron router-interface-add pub_router Tenant1_subnet
Added interface 51a1f2ed-eef5-4527-bce9-153d6a7986cd to router pub_router.
[root@localhost ~(keystone_admin)]#
  • Create an ssh keypair and add it to nova
#create keypair for tenant1
[root@localhost ~(keystone_admin)]# ssh-keygen -f tenant1_rsa -t rsa -b 2048 -N ''
Generating public/private rsa key pair.
Your identification has been saved in tenant1_rsa.
Your public key has been saved in tenant1_rsa.pub.
The key fingerprint is:
f9:36:17:06:b1:ab:8f:11:ab:46:0e:37:ca:c8:29:0f root@localhost.localdomain
The key's randomart image is:
+--[ RSA 2048]----+
|          .      |
|           o     |
|          o      |
|         . o     |
|        S . o    |
|    . +  = . .   |
|E. + * .+ + .    |
|..+ o o. = o     |
| o.  .. . .      |
+-----------------+
[root@localhost ~(keystone_admin)]# nova keypair-add --pub-key tenant1_rsa.pub tenant1
[root@localhost ~(keystone_admin)]#
  • Create a new security group and rules to allow ssh and ICMP traffic to the instance
#create security group and add rule
[root@localhost ~(keystone_admin)]# neutron security-group-create mysec
[root@localhost ~(keystone_admin)]# neutron security-group-rule-create --protocol icmp mysec
[root@localhost ~(keystone_admin)]# neutron security-group-rule-create --protocol tcp \
  --port-range-min 22 --port-range-max 22 mysec
  • Boot instance for Tenant1
#boot instances
[root@localhost ~(keystone_admin)]# nova boot --poll --flavor m2.nano --image cirros \
   --nic net-id=c44c3620-122a-450f-99ab-839c7798084d --key-name tenant1 Tenant1_VM1 --security-groups mysec
+--------------------------------------+------------------------------------------------+
| Property                             | Value                                          |
+--------------------------------------+------------------------------------------------+
| OS-DCF:diskConfig                    | MANUAL                                         |
| OS-EXT-AZ:availability_zone          |                                                |
| OS-EXT-SRV-ATTR:host                 | -                                              |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | -                                              |
| OS-EXT-SRV-ATTR:instance_name        | instance-00000001                              |
| OS-EXT-STS:power_state               | 0                                              |
| OS-EXT-STS:task_state                | scheduling                                     |
| OS-EXT-STS:vm_state                  | building                                       |
| OS-SRV-USG:launched_at               | -                                              |
| OS-SRV-USG:terminated_at             | -                                              |
| accessIPv4                           |                                                |
| accessIPv6                           |                                                |
| adminPass                            | e7oXTwZCSiKA                                   |
| config_drive                         |                                                |
| created                              | 2016-04-11T17:41:09Z                           |
| flavor                               | m2.nano (81a85a3e-d809-4619-8ff7-f589936b1d20) |
| hostId                               |                                                |
| id                                   | a33591b6-c325-454d-a4b0-50ba82d0b257           |
| image                                | cirros (4dc2a2dc-3f23-406f-804a-964995930174)  |
| key_name                             | tenant1                                        |
| metadata                             | {}                                             |
| name                                 | Tenant1_VM1                                    |
| os-extended-volumes:volumes_attached | []                                             |
| progress                             | 0                                              |
| security_groups                      | mysec                                          |
| status                               | BUILD                                          |
| tenant_id                            | 5dc8330acb6f4fb8a91f2abb839f7773               |
| updated                              | 2016-04-11T17:41:10Z                           |
| user_id                              | 1e95e3d6d7a64dfc9f5548361b2b2ed7               |
+--------------------------------------+------------------------------------------------+

Server building... 100% complete
Finished
[root@localhost network-scripts(keystone_admin)]# nova list
+--------------------------------------+-------------+--------+------------+-------------+--------------------------+
| ID                                   | Name        | Status | Task State | Power State | Networks                 |
+--------------------------------------+-------------+--------+------------+-------------+--------------------------+
| 0b48cd50-04ef-40b1-a3a5-69e61bb2b2df | Tenant1_VM1 | ACTIVE | -          | Running     | Tenant1_net=192.168.11.5 |
+--------------------------------------+-------------+--------+------------+-------------+--------------------------+
[root@localhost network-scripts(keystone_admin)]#

At this point the instance started but could not be reached; ping from the router namespace to the VM (192.168.11.5) failed. I checked the console-log for the VM and found that it couldn’t get an IP from the DHCP server. The messages below from the console-log show that no dhcp offer was received

[root@localhost ~(keystone_admin)]# nova console-log Tenant1_VM1
Starting network...
udhcpc (v1.20.1) started
Sending discover...
Sending discover...
Sending discover...

I checked the configuration and found that the vxlan interface was missing from the br-tun bridge on the compute node. After restarting the openvswitch agent on the controller node the vxlan interface was created, but it didn’t resolve the dhcp issue; the VM still doesn’t have an IP address

$service neutron-openvswitch-agent restart

[root@localhost ~]# ovs-vsctl show
4973e933-214d-4d54-b241-db3b33e16526
    Bridge br-tun
        fail_mode: secure
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port br-tun
            Interface br-tun
                type: internal
    Bridge br-int
        fail_mode: secure
        Port br-int
            Interface br-int
                type: internal
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port "qvo8d2aef86-ef"
            tag: 1
            Interface "qvo8d2aef86-ef"
    ovs_version: "2.4.0"

# restarting the openvswitch agent fixed the missing vxlan port issue
[root@localhost ~(keystone_admin)]# service neutron-openvswitch-agent restart
[root@localhost ~]# ovs-vsctl show
4973e933-214d-4d54-b241-db3b33e16526
    Bridge br-tun
        fail_mode: secure
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port "vxlan-0a0a0001"
            Interface "vxlan-0a0a0001"
                type: vxlan
                options: {df_default="true", in_key=flow, local_ip="10.10.0.10", out_key=flow, remote_ip="10.10.0.1"}
        Port br-tun
            Interface br-tun
                type: internal
    Bridge br-int
        fail_mode: secure
        Port br-int
            Interface br-int
                type: internal
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port "qvo8d2aef86-ef"
            tag: 2
            Interface "qvo8d2aef86-ef"
    ovs_version: "2.4.0"

Upon further debugging I found that a flow was missing from the br-tun bridge on the network node. Restarting openvswitch didn’t resolve this issue so I manually created the flow. This resolved the dhcp issue and the VM successfully fetched an IP address from dhcp.

[root@localhost ~(keystone_admin)]# ovs-ofctl add-flow br-tun "in_port=2 priority=1 table=0 actions=resubmit(,4)"
[root@localhost ~(keystone_admin)]# ovs-ofctl dump-flows br-tun
NXST_FLOW reply (xid=0x4):
 cookie=0xaf13b266b8c0ad46, duration=10418.686s, table=0, n_packets=0, n_bytes=0, idle_age=10418, priority=1,in_port=1 actions=resubmit(,2)
 cookie=0x0, duration=28.013s, table=0, n_packets=0, n_bytes=0, idle_age=28, priority=1,in_port=2 actions=resubmit(,4)
 cookie=0xaf13b266b8c0ad46, duration=10418.686s, table=0, n_packets=0, n_bytes=0, idle_age=10418, priority=0 actions=drop
 cookie=0xaf13b266b8c0ad46, duration=10418.685s, table=2, n_packets=0, n_bytes=0, idle_age=10418, priority=0,dl_dst=00:00:00:00:00:00/01:00:00:00:00:00 actions=resubmit(,20)
 cookie=0xaf13b266b8c0ad46, duration=10418.685s, table=2, n_packets=0, n_bytes=0, idle_age=10418, priority=0,dl_dst=01:00:00:00:00:00/01:00:00:00:00:00 actions=resubmit(,22)
 cookie=0xaf13b266b8c0ad46, duration=10418.685s, table=3, n_packets=0, n_bytes=0, idle_age=10418, priority=0 actions=drop
 cookie=0xaf13b266b8c0ad46, duration=10418.055s, table=4, n_packets=0, n_bytes=0, idle_age=10418, priority=1,tun_id=0x24 actions=mod_vlan_vid:3,resubmit(,10)
 cookie=0xaf13b266b8c0ad46, duration=10418.030s, table=4, n_packets=0, n_bytes=0, idle_age=10418, priority=1,tun_id=0x3e actions=mod_vlan_vid:4,resubmit(,10)
 cookie=0xaf13b266b8c0ad46, duration=10418.685s, table=4, n_packets=0, n_bytes=0, idle_age=10418, priority=0 actions=drop
 cookie=0xaf13b266b8c0ad46, duration=10418.684s, table=6, n_packets=0, n_bytes=0, idle_age=10418, priority=0 actions=drop
 cookie=0xaf13b266b8c0ad46, duration=10418.684s, table=10, n_packets=0, n_bytes=0, idle_age=10418, priority=1 actions=learn(table=20,hard_timeout=300,priority=1,cookie=0xaf13b266b8c0ad46,NXM_OF_VLAN_TCI[0..11],NXM_OF_ETH_DST[]=NXM_OF_ETH_SRC[],load:0->NXM_OF_VLAN_TCI[],load:NXM_NX_TUN_ID[]->NXM_NX_TUN_ID[],output:NXM_OF_IN_PORT[]),output:1
 cookie=0xaf13b266b8c0ad46, duration=10418.684s, table=20, n_packets=0, n_bytes=0, idle_age=10418, priority=0 actions=resubmit(,22)
 cookie=0xaf13b266b8c0ad46, duration=10418.666s, table=22, n_packets=0, n_bytes=0, idle_age=10418, priority=0 actions=drop

#ping to VM is successful
[root@localhost ~(keystone_admin)]# ip netns  exec qdhcp-c44c3620-122a-450f-99ab-839c7798084d ping 192.168.11.5
PING 192.168.11.5 (192.168.11.5) 56(84) bytes of data.
64 bytes from 192.168.11.5: icmp_seq=1 ttl=64 time=1.89 ms
64 bytes from 192.168.11.5: icmp_seq=2 ttl=64 time=0.497 ms

Check out this link for details on how I resolved the dhcp issue in my setup

Try these commands to create the second tenant (Tenant2)

ssh-keygen -f tenant2_rsa -t rsa -b 2048 -N ''
nova keypair-add --pub-key tenant2_rsa.pub tenant2
neutron net-create Tenant2_net
neutron subnet-create --name Tenant2_subnet \
--dns-nameserver 8.8.8.8 Tenant2_net 192.168.12.0/24
 neutron router-interface-add pub_router Tenant2_subnet
 nova boot --poll --flavor m2.nano --image cirros \
   --nic net-id=ff9c3eb7-f88f-42bb-af5f-ea810dad7505 \
--key-name tenant2 Tenant2_VM1 --security-groups mysec
[root@localhost ~(keystone_admin)]# nova list
+--------------------------------------+-------------+--------+------------+-------------+--------------------------+
| ID                                   | Name        | Status | Task State | Power State | Networks                 |
+--------------------------------------+-------------+--------+------------+-------------+--------------------------+
| 0b48cd50-04ef-40b1-a3a5-69e61bb2b2df | Tenant1_VM1 | ACTIVE | -          | Running     | Tenant1_net=192.168.11.5 |
| b3a7d7e6-eb4b-4c21-9b9d-974680c35cd6 | Tenant2_VM1 | ACTIVE | -          | Running     | Tenant2_net=192.168.12.3 |
+--------------------------------------+-------------+--------+------------+-------------+--------------------------+
[root@localhost ~(keystone_admin)]# ip netns
qdhcp-ff9c3eb7-f88f-42bb-af5f-ea810dad7505
qrouter-ee34dbdc-2368-4cb9-ba50-8f13e00ae389
qdhcp-c44c3620-122a-450f-99ab-839c7798084d
[root@localhost ~(keystone_admin)]# ip netns exec qrouter-ee34dbdc-2368-4cb9-ba50-8f13e00ae389 192.168.12.3
exec of "192.168.12.3" failed: No such file or directory
[root@localhost ~(keystone_admin)]# ip netns exec qrouter-ee34dbdc-2368-4cb9-ba50-8f13e00ae389 ping 192.168.12.3
PING 192.168.12.3 (192.168.12.3) 56(84) bytes of data.
64 bytes from 192.168.12.3: icmp_seq=1 ttl=64 time=1.04 ms
64 bytes from 192.168.12.3: icmp_seq=2 ttl=64 time=0.341 ms
64 bytes from 192.168.12.3: icmp_seq=3 ttl=64 time=0.387 ms
64 bytes from 192.168.12.3: icmp_seq=4 ttl=64 time=0.332 ms
^C
--- 192.168.12.3 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3001ms

This is the topology shown in the Openstack Horizon GUI

openstack_multinode_3

This is the topology I drew

openstack_multinode_1
Two node topology
openstack_multinode_2
Two node topology with traffic flow

 

Observations:

If you get the error below while installing openstack-packstack, change the repos in /etc/yum.repos.d to use baseurl instead of mirrorlist on both nodes and try again

baseurl=http://mirror.centos.org/centos/7/cloud/$basearch/openstack-liberty/

Loaded plugins: fastestmirror
 Could not retrieve mirrorlist http://mirrorlist.centos.org/?release=7&arch=x86_64&repo=os error was
 14: curl#6 - "Could not resolve host: mirrorlist.centos.org; Unknown error"
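In practice, switching to baseurl means commenting out the mirrorlist line and adding/uncommenting a baseurl line in each section of the repo files under /etc/yum.repos.d. An illustrative snippet from CentOS-Base.repo:

[base]
name=CentOS-$releasever - Base
#mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=os
baseurl=http://mirror.centos.org/centos/$releasever/os/$basearch/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7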

You can also disable fastestmirror in this file /etc/yum/pluginconf.d/fastestmirror.conf

enabled=0

I encountered the error below on the compute node while running packstack. You need to upgrade lvm2 on the compute node to resolve it
$yum upgrade lvm2

 ---
 ERROR : Error appeared during Puppet run: 10.10.0.10_nova.pp
 Error: Execution of '/usr/bin/yum -d 0 -e 0 -y install openstack-nova-compute' returned 1: Transaction check error:
 You will find full trace in log /var/tmp/packstack/20160408-202825-TbrmD6/manifests/10.10.0.10_nova.pp.log

Lab-18:Load Balancer as a service (LBaas) using Openstack

In this lab I will demonstrate how to set up the Load Balancer (LB) function in Openstack Neutron. Openstack provides the LBaas service using HAproxy, an open source high availability load balancer for TCP and HTTP based applications.

This is a logical picture of LBaas

lbaas_5

VIP – Virtual IP address. We can call it the LB address

POOL – Logical grouping of members. A pool contains attributes like the load balancing method (Round Robin), the protocol to listen for (HTTP, TCP), etc.

Member – The actual servers for which load balancing is performed

Health Monitor – Monitors the health of pool members, either by pinging them or by sending HTTP GET requests. If a member fails to report its status or reports a failure it is removed from the pool
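This lab does not configure a health monitor, but for completeness, with the LBaaS v1 CLI it would look roughly like this (a sketch; run it after the pool exists and replace <health-monitor-id> with the id returned by the create command):

#create an HTTP health monitor (probe every 5s, 3s timeout, 3 retries)
neutron lb-healthmonitor-create --type HTTP --delay 5 --timeout 3 --max-retries 3
#attach it to the load balancer pool (lbaas_pool is created later in this lab)
neutron lb-healthmonitor-associate <health-monitor-id> lbaas_pool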

Pre-condition:

Install Openstack using packstack. Follow this link to install Openstack on one machine. Start packstack with lbaas enabled. You can use my earlier lab-13 to deploy Redhat Openstack

sudo packstack --allinone --os-neutron-lbaas-install=y --os-ceilometer-install=n --os-cinder-install=n --nagios-install=n

I have installed Openstack using packstack on my RHEL 7 machine. I created two private networks (192.168.11.0 & 192.168.12.0) and one public network (xxx.254.209.0). I have external network connectivity through the physical port enp1s0; no floating IP addresses are created yet. Each tenant has one instance. Please refer to the previous lab to set up this topology. This is a picture of my initial topology

lbaas_1

Procedure:

  • Delete firstTenant_firstVM

>nova delete firstTenant_firstVM

  • Add a security group rule to the default security-group to allow HTTP traffic (TCP port 80). This is important, otherwise the servers will not accept HTTP requests.

>neutron security-group-rule-create --protocol tcp --port-range-min 80 --port-range-max 80 default

  • Create the firstTenant instances. These VMs will serve as web servers
#create VMs for firstTenant
[root@localhost ~(keystone_tenant1)]#nova boot --poll --flavor m2.nano --image cirros \
  --nic net-id=d3c2f3a3-b2f8-4d9f-824d-cc20dd38fd92 --key-name firstTenant \
firstTenant_firstVM --security-groups default

[root@localhost ~(keystone_tenant1)]#nova boot --poll --flavor m2.nano --image cirros \
   --nic net-id=d3c2f3a3-b2f8-4d9f-824d-cc20dd38fd92 --key-name firstTenant \
   firstTenant_secondVM --security-groups default
[root@localhost ~(keystone_tenant1)]# nova list
+--------------------------------------+----------------------+--------+------------+-------------+------------------------------+
| ID                                   | Name                 | Status | Task State | Power State | Networks                     |
+--------------------------------------+----------------------+--------+------------+-------------+------------------------------+
| 363df149-a248-4c67-b3c5-2da0af96ccbe | firstTenant_firstVM  | ACTIVE | -          | Running     | firstTenant_net=192.168.11.5 |
| a4ec9ada-775e-44f0-93ae-846ab9b96364 | firstTenant_secondVM | ACTIVE | -          | Running     | firstTenant_net=192.168.11.6 |
+--------------------------------------+----------------------+--------+------------+-------------+------------------------------+
[root@localhost ~(keystone_tenant1)]#
  • Login to firstTenant VM1 & VM2 and start a poor man’s web server on them. Note: this is a very slow web server implementation; I didn’t have much choice, the cirros image doesn’t come with many goodies. Try this link if you are interested in experimenting with other simple web servers. My server uses the Linux Netcat utility; it is a very powerful tool, try this link to learn more about nc
#ssh to first tenant second VM and start command line web server
[root@localhost ~(keystone_tenant1)]# ip netns exec qrouter-fbdc95f9-aa97-4203-997b-c0cc09021a95 ssh -i tenant1_rsa cirros@192.168.11.6
The authenticity of host '192.168.11.6 (192.168.11.6)' can't be established.
RSA key fingerprint is 2e:5f:0f:53:61:e2:5e:ea:2a:d2:82:b2:98:67:fd:4b.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.11.6' (RSA) to the list of known hosts.
$
$ ifconfig
eth0      Link encap:Ethernet  HWaddr FA:16:3E:49:24:2A
          inet addr:192.168.11.6  Bcast:192.168.11.255  Mask:255.255.255.0
          inet6 addr: fe80::f816:3eff:fe49:242a/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1400  Metric:1
          RX packets:119 errors:0 dropped:0 overruns:0 frame:0
          TX packets:136 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:15801 (15.4 KiB)  TX bytes:14741 (14.3 KiB)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

#start command line web server. Edit the text depending on which VM command is executed,
#cut & paste below line. This will start nc process to listen on port 80 (HTTP)
$while true; do { echo -e 'HTTP/1.1 200 OK\r\n\r\n'; echo "This is Server-2";} | sudo nc -lp 80;sleep 1;  done

#ssh to first tenant first VM and start command line web server
[root@localhost ~(keystone_tenant1)]# ip netns
qdhcp-8460a127-ee67-474f-bbe2-9f5916097f2d
qdhcp-d3c2f3a3-b2f8-4d9f-824d-cc20dd38fd92
qrouter-fbdc95f9-aa97-4203-997b-c0cc09021a95
[root@localhost ~(keystone_tenant1)]# ip netns exec qrouter-fbdc95f9-aa97-4203-997b-c0cc09021a95 ssh -i tenant1_rsa cirros@192.168.11.5
$
$ ifconfig
eth0      Link encap:Ethernet  HWaddr FA:16:3E:D4:6B:DF
          inet addr:192.168.11.5  Bcast:192.168.11.255  Mask:255.255.255.0
          inet6 addr: fe80::f816:3eff:fed4:6bdf/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1400  Metric:1
          RX packets:7374 errors:0 dropped:0 overruns:0 frame:0
          TX packets:6895 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:542829 (530.1 KiB)  TX bytes:578034 (564.4 KiB)
lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

#start command line web server. Edit the text depending on which VM command is executed,
#cut & paste below line. This will start nc process to listen on port 80 (HTTP)
$while true; do { echo -e 'HTTP/1.1 200 OK\r\n\r\n'; echo "This is Server-1";} | sudo nc -lp 80;sleep 1;  done
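Before wiring up the load balancer it is worth sanity-checking each web server directly. One way (an illustration, using the router namespace id from this lab) is to curl the VMs from the router namespace on the host:

#quick check that both nc web servers answer on port 80
ip netns exec qrouter-fbdc95f9-aa97-4203-997b-c0cc09021a95 curl http://192.168.11.5
ip netns exec qrouter-fbdc95f9-aa97-4203-997b-c0cc09021a95 curl http://192.168.11.6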
  • Let’s create the load balancer (LB). First we need to create the load balancer pool. The pool contains attributes like the load balancing method, the protocol the LB should listen for, and the subnet-id of the pool. The pool should be on the same subnet as the servers, in my case 192.168.11.0. I am using the ROUND_ROBIN method and the HTTP protocol
[root@localhost ~(keystone_tenant1)]# neutron subnet-list
+--------------------------------------+---------------------+------------------+------------------------------------------------------+
| id                                   | name                | cidr             | allocation_pools                                     |
+--------------------------------------+---------------------+------------------+------------------------------------------------------+
| b7b1dcc6-0322-4fa5-b7aa-bb36c92b192d | public_subnet       | 167.254.209.0/24 | {"start": "167.254.209.86", "end": "167.254.209.88"} |
| 079e2bad-589f-456b-9fc9-81c04b925dd3 | firstTenant_subnet  | 192.168.11.0/24  | {"start": "192.168.11.2", "end": "192.168.11.254"}   |
| be4cd35d-4a9c-46b1-86df-5173c0263029 | secondTenant_subnet | 192.168.12.0/24  | {"start": "192.168.12.2", "end": "192.168.12.254"}   |
+--------------------------------------+---------------------+------------------+------------------------------------------------------+
[root@localhost ~(keystone_tenant1)]#
[root@localhost ~(keystone_tenant1)]# neutron lb-pool-create --lb-method ROUND_ROBIN --name lbaas_pool --protocol HTTP --subnet-id 079e2bad-589f-456b-9fc9-81c04b925dd3
Created a new pool:
+------------------------+--------------------------------------+
| Field                  | Value                                |
+------------------------+--------------------------------------+
| admin_state_up         | True                                 |
| description            |                                      |
| health_monitors        |                                      |
| health_monitors_status |                                      |
| id                     | 1ae0d964-1ce3-4664-85f3-55271251cd30 |
| lb_method              | ROUND_ROBIN                          |
| members                |                                      |
| name                   | lbaas_pool                           |
| protocol               | HTTP                                 |
| provider               | haproxy                              |
| status                 | PENDING_CREATE                       |
| status_description     |                                      |
| subnet_id              | 079e2bad-589f-456b-9fc9-81c04b925dd3 |
| tenant_id              | ad0e0f45e48045efba0e5d831222c30c     |
| vip_id                 |                                      |
+------------------------+--------------------------------------+
[root@localhost ~(keystone_tenant1)]#
[root@localhost ~(keystone_tenant1)]# neutron lb-pool-list
+--------------------------------------+------------+----------+-------------+----------+----------------+--------+
| id                                   | name       | provider | lb_method   | protocol | admin_state_up | status |
+--------------------------------------+------------+----------+-------------+----------+----------------+--------+
| 1ae0d964-1ce3-4664-85f3-55271251cd30 | lbaas_pool | haproxy  | ROUND_ROBIN | HTTP     | True           | ACTIVE |
+--------------------------------------+------------+----------+-------------+----------+----------------+--------+
[root@localhost ~(keystone_tenant1)]#
  • Add members to the load balancer pool. In my case I am adding the two firstTenant VMs
[root@localhost ~(keystone_tenant1)]# neutron lb-member-create --address 192.168.11.5 --protocol-port 80 lbaas_pool
Created a new member:
+--------------------+--------------------------------------+
| Field              | Value                                |
+--------------------+--------------------------------------+
| address            | 192.168.11.5                         |
| admin_state_up     | True                                 |
| id                 | 2eaeadda-4f62-4d4d-932a-ed44708370dd |
| pool_id            | 1ae0d964-1ce3-4664-85f3-55271251cd30 |
| protocol_port      | 80                                   |
| status             | PENDING_CREATE                       |
| status_description |                                      |
| tenant_id          | ad0e0f45e48045efba0e5d831222c30c     |
| weight             | 1                                    |
+--------------------+--------------------------------------+
[root@localhost ~(keystone_tenant1)]# neutron lb-member-create --address 192.168.11.6 --protocol-port 80 lbaas_pool
Created a new member:
+--------------------+--------------------------------------+
| Field              | Value                                |
+--------------------+--------------------------------------+
| address            | 192.168.11.6                         |
| admin_state_up     | True                                 |
| id                 | c7240126-88ba-4139-8e54-c43b4cafdd12 |
| pool_id            | 1ae0d964-1ce3-4664-85f3-55271251cd30 |
| protocol_port      | 80                                   |
| status             | PENDING_CREATE                       |
| status_description |                                      |
| tenant_id          | ad0e0f45e48045efba0e5d831222c30c     |
| weight             | 1                                    |
+--------------------+--------------------------------------+
[root@localhost ~(keystone_tenant1)]#
[root@localhost ~(keystone_tenant1)]# neutron lb-member-list
+--------------------------------------+--------------+---------------+--------+----------------+--------+
| id                                   | address      | protocol_port | weight | admin_state_up | status |
+--------------------------------------+--------------+---------------+--------+----------------+--------+
| 2eaeadda-4f62-4d4d-932a-ed44708370dd | 192.168.11.5 |            80 |      1 | True           | ACTIVE |
| c7240126-88ba-4139-8e54-c43b4cafdd12 | 192.168.11.6 |            80 |      1 | True           | ACTIVE |
+--------------------------------------+--------------+---------------+--------+----------------+--------+
[root@localhost ~(keystone_tenant1)]#
  • Create the Virtual IP (VIP). The VIP should be in the same subnet as the members, in my case 192.168.11.0/24. As you can see the VIP address is 192.168.11.7
[root@localhost ~(keystone_tenant1)]# neutron lb-vip-create --name lbaas_vip --protocol-port 80 --protocol HTTP --subnet-id  079e2bad-589f-456b-9fc9-81c04b925dd3 lbaas_pool
Created a new vip:
+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| address             | 192.168.11.7                         |
| admin_state_up      | True                                 |
| connection_limit    | -1                                   |
| description         |                                      |
| id                  | f6c83181-3c7c-44db-9f89-02c7baf34a4e |
| name                | lbaas_vip                            |
| pool_id             | 1ae0d964-1ce3-4664-85f3-55271251cd30 |
| port_id             | 5735947b-48d4-45a9-950a-be99fb60edeb |
| protocol            | HTTP                                 |
| protocol_port       | 80                                   |
| session_persistence |                                      |
| status              | PENDING_CREATE                       |
| status_description  |                                      |
| subnet_id           | 079e2bad-589f-456b-9fc9-81c04b925dd3 |
| tenant_id           | ad0e0f45e48045efba0e5d831222c30c     |
+---------------------+--------------------------------------+
[root@localhost ~(keystone_tenant1)]#
  • We need external connectivity to our LB. For that we need to create a floating IP address and attach it to the VIP. This will internally create a NAT rule for the VIP (192.168.11.7) in the router iptables
    • Create a floating IP address
    • Associate the floating IP address with the VIP
[root@localhost ~(keystone_tenant1)]# nova floating-ip-create public
+--------------------------------------+----------------+-----------+----------+--------+
| Id                                   | IP             | Server Id | Fixed IP | Pool   |
+--------------------------------------+----------------+-----------+----------+--------+
| 839d3bbf-6ba6-49ea-b0c3-ac14c73c437d | xxx.254.209.86 | -         | -        | public |
+--------------------------------------+----------------+-----------+----------+--------+
[root@localhost ~(keystone_tenant1)]#
[root@localhost ~(keystone_tenant1)]# neutron lb-vip-show f6c83181-3c7c-44db-9f89-02c7baf34a4e
+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| address             | 192.168.11.7                         |
| admin_state_up      | True                                 |
| connection_limit    | -1                                   |
| description         |                                      |
| id                  | f6c83181-3c7c-44db-9f89-02c7baf34a4e |
| name                | lbaas_vip                            |
| pool_id             | 1ae0d964-1ce3-4664-85f3-55271251cd30 |
| port_id             | 5735947b-48d4-45a9-950a-be99fb60edeb |
| protocol            | HTTP                                 |
| protocol_port       | 80                                   |
| session_persistence |                                      |
| status              | ACTIVE                               |
| status_description  |                                      |
| subnet_id           | 079e2bad-589f-456b-9fc9-81c04b925dd3 |
| tenant_id           | ad0e0f45e48045efba0e5d831222c30c     |
+---------------------+--------------------------------------+
[root@localhost ~(keystone_tenant1)]#
[root@localhost ~(keystone_tenant1)]# neutron help floatingip-associate
usage: neutron floatingip-associate [-h] [--request-format {json,xml}]
                                    [--fixed-ip-address FIXED_IP_ADDRESS]
                                    FLOATINGIP_ID PORT
Create a mapping between a floating IP and a fixed IP.
positional arguments:
  FLOATINGIP_ID         ID of the floating IP to associate.
  PORT                  ID or name of the port to be associated with the
                        floating IP.
optional arguments:
  -h, --help            show this help message and exit
  --request-format {json,xml}
                        The XML or JSON request format.
  --fixed-ip-address FIXED_IP_ADDRESS
                        IP address on the port (only required if port has
                        multiple IPs).
#This command takes floating IP id and VIP port-id. Check 'lb-vip-show' command for
#vip port-id
[root@localhost ~(keystone_tenant1)]# neutron floatingip-associate 839d3bbf-6ba6-49ea-b0c3-ac14c73c437d 5735947b-48d4-45a9-950a-be99fb60edeb
Associated floating IP 839d3bbf-6ba6-49ea-b0c3-ac14c73c437d
  • This completes the LB provisioning. Let’s check our configuration data
#Let's check router to make sure NAT rule for VIP is setup
[root@localhost ~(keystone_tenant1)]# ip netns exec qrouter-fbdc95f9-aa97-4203-997b-c0cc09021a95 iptables -S -t nat
-P PREROUTING ACCEPT
-P INPUT ACCEPT
-P OUTPUT ACCEPT
-P POSTROUTING ACCEPT
-N neutron-l3-agent-OUTPUT
-N neutron-l3-agent-POSTROUTING
-N neutron-l3-agent-PREROUTING
-N neutron-l3-agent-float-snat
-N neutron-l3-agent-snat
-N neutron-postrouting-bottom
-A PREROUTING -j neutron-l3-agent-PREROUTING
-A OUTPUT -j neutron-l3-agent-OUTPUT
-A POSTROUTING -j neutron-l3-agent-POSTROUTING
-A POSTROUTING -j neutron-postrouting-bottom
-A neutron-l3-agent-OUTPUT -d 167.254.209.86/32 -j DNAT --to-destination 192.168.11.7
-A neutron-l3-agent-POSTROUTING ! -i qg-fb0745d5-0f ! -o qg-fb0745d5-0f -m conntrack ! --ctstate DNAT -j ACCEPT
-A neutron-l3-agent-PREROUTING -d 169.254.169.254/32 -i qr-+ -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 9697
-A neutron-l3-agent-PREROUTING -d xxx.254.209.86/32 -j DNAT --to-destination 192.168.11.7
-A neutron-l3-agent-float-snat -s 192.168.11.7/32 -j SNAT --to-source xxx.254.209.86
-A neutron-l3-agent-snat -j neutron-l3-agent-float-snat
-A neutron-l3-agent-snat -o qg-fb0745d5-0f -j SNAT --to-source xxx.254.209.88
-A neutron-l3-agent-snat -m mark ! --mark 0x2 -m conntrack --ctstate DNAT -j SNAT --to-source xxx.254.209.88
-A neutron-postrouting-bottom -m comment --comment "Perform source NAT on outgoing traffic." -j neutron-l3-agent-snat
[root@localhost ~(keystone_tenant1)]#
#lbaas namespace created
[root@localhost ~(keystone_tenant1)]# ip netns
qlbaas-1ae0d964-1ce3-4664-85f3-55271251cd30
qdhcp-8460a127-ee67-474f-bbe2-9f5916097f2d
qdhcp-d3c2f3a3-b2f8-4d9f-824d-cc20dd38fd92
qrouter-fbdc95f9-aa97-4203-997b-c0cc09021a95
[root@localhost ~(keystone_tenant1)]#
[root@localhost ~(keystone_tenant1)]# ip netns exec qlbaas-1ae0d964-1ce3-4664-85f3-55271251cd30 ifconfig
lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 0  (Local Loopback)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
tap5735947b-48: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.11.7  netmask 255.255.255.0  broadcast 192.168.11.255
        inet6 fe80::f816:3eff:fe17:b98e  prefixlen 64  scopeid 0x20<link>
        ether fa:16:3e:17:b9:8e  txqueuelen 0  (Ethernet)
        RX packets 46  bytes 3718 (3.6 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 61  bytes 5594 (5.4 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
[root@localhost ~(keystone_tenant1)]# ip netns exec qlbaas-1ae0d964-1ce3-4664-85f3-55271251cd30 ip route
default via 192.168.11.1 dev tap5735947b-48
192.168.11.0/24 dev tap5735947b-48  proto kernel  scope link  src 192.168.11.7
[root@localhost ~(keystone_tenant1)]#
[root@localhost ~(keystone_tenant1)]# nova floating-ip-list
+--------------------------------------+----------------+--------------------------------------+--------------+--------+
| Id                                   | IP             | Server Id                            | Fixed IP     | Pool   |
+--------------------------------------+----------------+--------------------------------------+--------------+--------+
| 839d3bbf-6ba6-49ea-b0c3-ac14c73c437d | xxx.254.209.86 | 9865e801-38da-5791-ac1c-c10be03452c8 | 192.168.11.7 | public |
+--------------------------------------+----------------+--------------------------------------+--------------+--------+
[root@localhost ~(keystone_tenant1)]#
  • This is our new topology with load balancer configured
lbaas_2
Neutron with Load Balancer configured
lbaas_3
Traffic flow with LBaas

 

lbaas_4
Another view of traffic flow
  • Time to test our LB. Open a terminal on the local host or a remote host. First check that you have ping connectivity to the floating IP address. If ping is successful, run the curl commands below to test load balancing.
  • As you can see, the LB is doing its job: it is round robin load balancing between Server-1 & Server-2. Note: as I mentioned earlier our servers are very slow, so have some patience while running the curl commands; each one takes ~1-2 min to complete. A small loop sketch after the output below shows how to repeat the test.
[labadmin@localhost ~]$ curl --url http://167.254.209.86
This is Server-1
[labadmin@localhost ~]$ curl --url http://167.254.209.86
This is Server-2
[labadmin@localhost ~]$ curl --url http://167.254.209.86
This is Server-1
[labadmin@localhost ~]$ curl --url http://167.254.209.86
This is Server-2
[labadmin@localhost ~]$
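  • If you want to repeat the test many times and count how requests are distributed, a small shell loop works too (a sketch; substitute your own floating IP, and remember each request is slow):
#send 10 requests to the VIP's floating IP and count the answers per server
>for i in $(seq 1 10); do curl -s --url http://167.254.209.86; done | sort | uniq -c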
  • We can even test the LB from the second tenant VM. This method is useful if you don’t have external connectivity. SSH to the second tenant VM, make sure it has ping connectivity to the VIP address (192.168.11.7), then run the curl commands
[root@localhost ~(keystone_admin)]# ip netns
qlbaas-1ae0d964-1ce3-4664-85f3-55271251cd30
qdhcp-8460a127-ee67-474f-bbe2-9f5916097f2d
qdhcp-d3c2f3a3-b2f8-4d9f-824d-cc20dd38fd92
qrouter-fbdc95f9-aa97-4203-997b-c0cc09021a95
[root@localhost ~(keystone_admin)]# ip netns exec qrouter-fbdc95f9-aa97-4203-997b-c0cc09021a95 ssh -i tenant2_rsa cirros@192.168.12.5
$
$ ifconfig
eth0      Link encap:Ethernet  HWaddr FA:16:3E:EE:07:30
          inet addr:192.168.12.5  Bcast:192.168.12.255  Mask:255.255.255.0
          inet6 addr: fe80::f816:3eff:feee:730/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1400  Metric:1
          RX packets:692 errors:0 dropped:0 overruns:0 frame:0
          TX packets:672 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:60471 (59.0 KiB)  TX bytes:56456 (55.1 KiB)
lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
$ ping 192.168.11.7
PING 192.168.11.7 (192.168.11.7): 56 data bytes
64 bytes from 192.168.11.7: seq=0 ttl=63 time=0.614 ms
64 bytes from 192.168.11.7: seq=1 ttl=63 time=0.561 ms
64 bytes from 192.168.11.7: seq=2 ttl=63 time=0.410 ms
^C
--- 192.168.11.7 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.410/0.528/0.614 ms
$ curl --url http://192.168.11.7
This is Server-1
$ curl --url http://192.168.11.7
This is Server-2
$ curl --url http://192.168.11.7
This is Server-1

Health Monitoring function

  • LBaas provides a health monitoring function. The health monitor checks the health of pool members, either by sending a ping or an HTTP GET. If a member doesn’t reply within a set time period it is declared dead and removed from the LB algorithm. Since my servers are web based, I am using the HTTP method. Note: the health monitor didn’t work reliably for me because my web servers are slow, so members kept getting timed out and removed from the pool. The sketch after the commands below shows how to verify what the monitor is doing.
[root@localhost ~(keystone_tenant1)]# neutron lb-healthmonitor-create --delay 5 --type HTTP --max-retries 3 --timeout 2
Created a new health_monitor:
+----------------+--------------------------------------+
| Field          | Value                                |
+----------------+--------------------------------------+
| admin_state_up | True                                 |
| delay          | 5                                    |
| expected_codes | 200                                  |
| http_method    | GET                                  |
| id             | 97b68a9c-9aa3-4cdf-94e4-92396bd2f268 |
| max_retries    | 3                                    |
| pools          |                                      |
| tenant_id      | ad0e0f45e48045efba0e5d831222c30c     |
| timeout        | 2                                    |
| type           | HTTP                                 |
| url_path       | /                                    |
+----------------+--------------------------------------+
[root@localhost ~(keystone_tenant1)]#
[root@localhost ~(keystone_tenant1)]# neutron lb-healthmonitor-associate 97b68a9c-9aa3-4cdf-94e4-92396bd2f268 lbaas_pool
Associated health monitor 97b68a9c-9aa3-4cdf-94e4-92396bd2f268
[root@localhost ~(keystone_tenant1)]#
[root@localhost ~(keystone_tenant1)]# neutron lb-healthmonitor-show 97b68a9c-9aa3-4cdf-94e4-92396bd2f268
+----------------+-----------------------------------------------------------------------------------------------------+
| Field          | Value                                                                                               |
+----------------+-----------------------------------------------------------------------------------------------------+
| admin_state_up | True                                                                                                |
| delay          | 5                                                                                                   |
| expected_codes | 200                                                                                                 |
| http_method    | GET                                                                                                 |
| id             | 97b68a9c-9aa3-4cdf-94e4-92396bd2f268                                                                |
| max_retries    | 3                                                                                                   |
| pools          | {"status": "ACTIVE", "status_description": null, "pool_id": "1ae0d964-1ce3-4664-85f3-55271251cd30"} |
| tenant_id      | ad0e0f45e48045efba0e5d831222c30c                                                                    |
| timeout        | 2                                                                                                   |
| type           | HTTP                                                                                                |
| url_path       | /                                                                                                   |
+----------------+-----------------------------------------------------------------------------------------------------+
[root@localhost ~(keystone_tenant1)]#
[root@localhost ~(keystone_tenant1)]# neutron lb-healthmonitor-list
+--------------------------------------+------+----------------+
| id                                   | type | admin_state_up |
+--------------------------------------+------+----------------+
| 97b68a9c-9aa3-4cdf-94e4-92396bd2f268 | HTTP | True           |
+--------------------------------------+------+----------------+
[root@localhost ~(keystone_tenant1)]# neutron lb-healthmonitor-disassociate 97b68a9c-9aa3-4cdf-94e4-92396bd2f268 lbaas_pool
Disassociated health monitor 97b68a9c-9aa3-4cdf-94e4-92396bd2f268
[root@localhost ~(keystone_tenant1)]#
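  • To verify what the health monitor is doing, you can watch the member status and peek at the haproxy configuration the LBaaS agent generates for the pool (a sketch; the config path is an assumption for the LBaaS v1 haproxy namespace driver and may differ on your install):
#member status should flip from ACTIVE to INACTIVE when health checks keep failing
>neutron lb-member-list
#the monitor shows up as httpchk/check directives in the generated haproxy config
>cat /var/lib/neutron/lbaas/1ae0d964-1ce3-4664-85f3-55271251cd30/conf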
[root@localhost ~(keystone_admin)]# neutron help lb-healthmonitor-create
usage: neutron lb-healthmonitor-create [-h] [-f {shell,table,value}]
                                       [-c COLUMN] [--max-width <integer>]
                                       [--prefix PREFIX]
                                       [--request-format {json,xml}]
                                       [--tenant-id TENANT_ID]
                                       [--admin-state-down]
                                       [--expected-codes EXPECTED_CODES]
                                       [--http-method HTTP_METHOD]
                                       [--url-path URL_PATH] --delay DELAY
                                       --max-retries MAX_RETRIES --timeout
                                       TIMEOUT --type {PING,TCP,HTTP,HTTPS}

Create a health monitor.
optional arguments:
  -h, --help            show this help message and exit
  --request-format {json,xml}
                        The XML or JSON request format.
  --tenant-id TENANT_ID
                        The owner tenant ID.
  --admin-state-down    Set admin state up to false.
  --expected-codes EXPECTED_CODES
                        The list of HTTP status codes expected in response
                        from the member to declare it healthy. This attribute
                        can contain one value, or a list of values separated
                        by comma, or a range of values (e.g. "200-299"). If
                        this attribute is not specified, it defaults to "200".
  --http-method HTTP_METHOD
                        The HTTP method used for requests by the monitor of
                        type HTTP.
  --url-path URL_PATH   The HTTP path used in the HTTP request used by the
                        monitor to test a member health. This must be a string
                        beginning with a / (forward slash).
  --delay DELAY         The time in seconds between sending probes to members.
  --max-retries MAX_RETRIES
                        Number of permissible connection failures before
                        changing the member status to INACTIVE. [1..10]
  --timeout TIMEOUT     Maximum number of seconds for a monitor to wait for a
                        connection to be established before it times out. The
                        value must be less than the delay value.
  --type {PING,TCP,HTTP,HTTPS}
                        One of the predefined health monitor types.
output formatters:
  output formatter options
  -f {shell,table,value}, --format {shell,table,value}
                        the output format, defaults to table
  -c COLUMN, --column COLUMN
                        specify the column(s) to include, can be repeated
table formatter:
  --max-width <integer>
                        Maximum display width, 0 to disable
shell formatter:
  a format a UNIX shell can parse (variable="value")
  --prefix PREFIX       add a prefix to all variable names
[root@localhost ~(keystone_admin)]#

Lab-17:Openstack deep dive – Floating IP address

The goal of this lab is to take a deep dive into Openstack floating IP addresses. The purpose of a floating IP address is to provide external connectivity to an instance. By external I mean connectivity outside of the machine, to the physical network. Why is it called a floating IP? Because these IPs are not mapped to any virtual or physical interface. Floating IP addresses can be used on demand and, when no longer required, released back to the pool. A floating IP is used by the neutron router to perform the NAT function for an instance. Neutron supports two types of NAT:

  1. N:1 NAT without a floating IP address. In this case the router's external interface IP address is used for the NAT function. Neutron uses PAT to map traffic to individual VMs. This link provides information on various NAT types
  2. 1:1 NAT with a floating IP address. In this case each VM is assigned a public IP using a floating IP address
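
Both NAT variants show up in the neutron router’s iptables, which we will dump in full later in this lab. As a quick preview, the NAT rules can be pulled out like this (a sketch; <router-id> is your qrouter namespace ID). The 1:1 case appears as a DNAT/SNAT pair tied to a specific VM IP, the N:1 case as a single SNAT rule on the router’s qg- interface.

#preview the NAT rules inside the router namespace
>ip netns exec qrouter-<router-id> iptables -S -t nat | grep -E 'DNAT|SNAT'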

The actions below are required on the user's part to associate a floating IP with an instance:

  1. Create a pool of floating IP addresses
  2. Get a floating IP address from the pool
  3. Assign floating IP address to an instance

This picture shows the 1:1 NAT operation on the neutron router using a floating IP

openstack_floatingip
Neutron router with 1:1 NAT function using floating IP

Now let’s try this in the lab. I have Openstack installed on a machine. A physical interface (enp1s0) is mapped to the br-ex bridge, and enp1s0 is connected to the public network. The floating IP pool is created with public IP addresses.
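
If you want to confirm the bridge mapping on your own setup, Open vSwitch can be queried directly (a sketch; assumes packstack's default OVS setup with an external bridge named br-ex):

#verify enp1s0 is attached to the external bridge
>ovs-vsctl list-ports br-ex
#or look at the full bridge/port layout
>ovs-vsctl show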

#Here a pool of public IP xxx.254.209.86 to xxx.254.209.88 created. IP addresses
#from this pool will be allocated to floating IP and router interface facing 
#public network
[root@localhost ~(keystone_admin)]#neutron subnet-create --disable-dhcp public xxx.254.209.0/24 \
--name public_subnet --allocation-pool start=xxx.254.209.86,end=xxx.254.209.88
[root@localhost ~(keystone_admin)]# neutron subnet-list
+--------------------------------------+---------------------+------------------+------------------------------------------------------+
| id                                   | name                | cidr             | allocation_pools                                     |
+--------------------------------------+---------------------+------------------+------------------------------------------------------+
| b7b1dcc6-0322-4fa5-b7aa-bb36c92b192d | public_subnet       | xxx.254.209.0/24 | {"start": "xxx.254.209.86", "end": "xxx.254.209.88"} |

[root@localhost ~(keystone_admin)]# neutron subnet-show b7b1dcc6-0322-4fa5-b7aa-bb36c92b192d
+-------------------+------------------------------------------------------+
| Field             | Value                                                |
+-------------------+------------------------------------------------------+
| allocation_pools  | {"start": "xxx.254.209.86", "end": "xxx.254.209.88"} |
| cidr              | xxx.254.209.0/24                                     |
| dns_nameservers   |                                                      |
| enable_dhcp       | False                                                |
| gateway_ip        | xxx.254.209.126                                      |
| host_routes       |                                                      |
| id                | b7b1dcc6-0322-4fa5-b7aa-bb36c92b192d                 |
| ip_version        | 4                                                    |
| ipv6_address_mode |                                                      |
| ipv6_ra_mode      |                                                      |
| name              | public_subnet                                        |
| network_id        | 4fc7ff44-38f5-4895-856e-fad5b81f53b2                 |
| subnetpool_id     |                                                      |
| tenant_id         | e5b04b788a814a489a366eb91970512c                     |
+-------------------+------------------------------------------------------+
[root@localhost ~(keystone_admin)]#

The instance name is firstTenant_firstVM. Let’s create a floating IP address; this action will get a free IP address from the public subnet pool we created earlier.

#instance source address is 192.168.11.5, it is part of network 192.168.11.0/24
[root@localhost ~(keystone_tenant1)]# nova list
+--------------------------------------+---------------------+--------+------------+-------------+------------------------------+
| ID                                   | Name                | Status | Task State | Power State | Networks                     |
+--------------------------------------+---------------------+--------+------------+-------------+------------------------------+
| 363df149-a248-4c67-b3c5-2da0af96ccbe | firstTenant_firstVM | ACTIVE | -          | Running     | firstTenant_net=192.168.11.5 |
+--------------------------------------+---------------------+--------+------------+-------------+------------------------------+

#no floating IP address allocated yet
[root@localhost ~(keystone_tenant1)]# nova floating-ip-list
+----+----+-----------+----------+------+
| Id | IP | Server Id | Fixed IP | Pool |
+----+----+-----------+----------+------+
+----+----+-----------+----------+------+

#create a floating IP address. As you can see, xxx.254.209.87 is allocated from the pool
[root@localhost ~(keystone_tenant1)]# nova floating-ip-create public
+--------------------------------------+----------------+-----------+----------+--------+
| Id                                   | IP             | Server Id | Fixed IP | Pool   |
+--------------------------------------+----------------+-----------+----------+--------+
| 704c207d-dc2a-47ae-8c3f-a5c1a1f58ed0 | xxx.254.209.87 | -         | -        | public |
+--------------------------------------+----------------+-----------+----------+--------+
[root@localhost ~(keystone_tenant1)]#
[root@localhost ~(keystone_tenant1)]# nova floating-ip-list
+--------------------------------------+----------------+-----------+----------+--------+
| Id                                   | IP             | Server Id | Fixed IP | Pool   |
+--------------------------------------+----------------+-----------+----------+--------+
| 704c207d-dc2a-47ae-8c3f-a5c1a1f58ed0 | xxx.254.209.87 | -         | -        | public |
+--------------------------------------+----------------+-----------+----------+--------+
[root@localhost ~(keystone_tenant1)]#

The next step is to assign this floating IP to our tenant VM. This action creates NAT rules in the neutron router's iptables to translate the VM's internal IP address to the floating IP address and vice versa.

#iptables before floating IP associated with instance
[root@localhost ~(keystone_tenant1)]# ip netns exec qrouter-fbdc95f9-aa97-4203-997b-c0cc09021a95 iptables -S -t nat
-P PREROUTING ACCEPT
-P INPUT ACCEPT
-P OUTPUT ACCEPT
-P POSTROUTING ACCEPT
-N neutron-l3-agent-OUTPUT
-N neutron-l3-agent-POSTROUTING
-N neutron-l3-agent-PREROUTING
-N neutron-l3-agent-float-snat
-N neutron-l3-agent-snat
-N neutron-postrouting-bottom
-A PREROUTING -j neutron-l3-agent-PREROUTING
-A OUTPUT -j neutron-l3-agent-OUTPUT
-A POSTROUTING -j neutron-l3-agent-POSTROUTING
-A POSTROUTING -j neutron-postrouting-bottom
-A neutron-l3-agent-POSTROUTING ! -i qg-4d2c2605-5d ! -o qg-4d2c2605-5d -m conntrack ! --ctstate DNAT -j ACCEPT
-A neutron-l3-agent-PREROUTING -d 169.254.169.254/32 -i qr-+ -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 9697
-A neutron-l3-agent-snat -j neutron-l3-agent-float-snat
-A neutron-l3-agent-snat -o qg-4d2c2605-5d -j SNAT --to-source xxx.254.209.86
-A neutron-l3-agent-snat -m mark ! --mark 0x2 -m conntrack --ctstate DNAT -j SNAT --to-source xxx.254.209.86
-A neutron-postrouting-bottom -m comment --comment "Perform source NAT on outgoing traffic." -j neutron-l3-agent-snat
[root@localhost ~(keystone_tenant1)]#
#Associate the floating IP address with the VM. As you can see the instance now shows as
#part of two networks, 192.168.11.5 & xxx.254.209.87
[root@localhost ~(keystone_tenant1)]# nova add-floating-ip firstTenant_firstVM xxx.254.209.87 
[root@localhost ~(keystone_tenant1)]# nova list 
+--------------------------------------+---------------------+--------+------------+-------------+----------------------------------------------+ 
| ID                                   | Name                | Status | Task State | Power State | Networks                                     | 
+--------------------------------------+---------------------+--------+------------+-------------+----------------------------------------------+ 
| 363df149-a248-4c67-b3c5-2da0af96ccbe | firstTenant_firstVM | ACTIVE | -          | Running     | firstTenant_net=192.168.11.5, 167.254.209.87 |
 +--------------------------------------+---------------------+--------+------------+-------------+----------------------------------------------+ 

#iptables after floating IP associated to VM, as you can see SNAT & DNAT rules added
#into iptables 
[root@localhost ~(keystone_tenant1)]# ip netns exec qrouter-fbdc95f9-aa97-4203-997b-c0cc09021a95 iptables -S -t nat
-P PREROUTING ACCEPT
-P INPUT ACCEPT
-P OUTPUT ACCEPT
-P POSTROUTING ACCEPT
-N neutron-l3-agent-OUTPUT
-N neutron-l3-agent-POSTROUTING
-N neutron-l3-agent-PREROUTING
-N neutron-l3-agent-float-snat
-N neutron-l3-agent-snat
-N neutron-postrouting-bottom
-A PREROUTING -j neutron-l3-agent-PREROUTING
-A OUTPUT -j neutron-l3-agent-OUTPUT
-A POSTROUTING -j neutron-l3-agent-POSTROUTING
-A POSTROUTING -j neutron-postrouting-bottom
-A neutron-l3-agent-OUTPUT -d xxx.254.209.87/32 -j DNAT --to-destination 192.168.11.5
-A neutron-l3-agent-POSTROUTING ! -i qg-4d2c2605-5d ! -o qg-4d2c2605-5d -m conntrack ! --ctstate DNAT -j ACCEPT
-A neutron-l3-agent-PREROUTING -d 169.254.169.254/32 -i qr-+ -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 9697
-A neutron-l3-agent-PREROUTING -d xxx.254.209.87/32 -j DNAT --to-destination 192.168.11.5
-A neutron-l3-agent-float-snat -s 192.168.11.5/32 -j SNAT --to-source xxx.254.209.87
-A neutron-l3-agent-snat -j neutron-l3-agent-float-snat
-A neutron-l3-agent-snat -o qg-4d2c2605-5d -j SNAT --to-source xxx.254.209.86
-A neutron-l3-agent-snat -m mark ! --mark 0x2 -m conntrack --ctstate DNAT -j SNAT --to-source 167.254.209.86
-A neutron-postrouting-bottom -m comment --comment "Perform source NAT on outgoing traffic." -j neutron-l3-agent-snat

[root@localhost ~(keystone_tenant1)]# nova floating-ip-list 
+--------------------------------------+----------------+--------------------------------------+--------------+--------+ 
| Id                                   | IP             | Server Id                            | Fixed IP     | Pool   |
 +--------------------------------------+----------------+--------------------------------------+--------------+--------+ 
| 3f128a57-65d7-4677-ad9b-ef9a2ed8df4c | xxx.254.209.87 | 363df149-a248-4c67-b3c5-2da0af96ccbe | 192.168.11.5 | public | 
+--------------------------------------+----------------+--------------------------------------+--------------+--------+ 
[root@localhost ~(keystone_tenant1)]#

Let’s try a ping from the VM to an outside machine and see if it is successful

#login to Tenant instance using router namespace
[root@localhost ~(keystone_tenant1)]# ip netns exec qrouter-fbdc95f9-aa97-4203-997b-c0cc09021a95 ssh -i tenant1_rsa cirros@192.168.11.5
$
$
$ route
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
default         192.168.11.1    0.0.0.0         UG    0      0        0 eth0
192.168.11.0    *               255.255.255.0   U     0      0        0 eth0

#ping to external network gateway, as seen ping is successful
$ ping xxx.254.209.126
PING xxx.254.209.126 (xxx.254.209.126): 56 data bytes
64 bytes from xxx.254.209.126: seq=0 ttl=254 time=12.211 ms
64 bytes from xxx.254.209.126: seq=1 ttl=254 time=13.454 ms
64 bytes from xxx.254.209.126: seq=2 ttl=254 time=11.197 ms
^C
--- xxx.254.209.126 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 11.197/12.287/13.454 ms
$

Release the floating IP from the VM

[root@localhost ~(keystone_tenant1)]# nova remove-floating-ip firstTenant_firstVM xxx.254.209.87
[root@localhost ~(keystone_tenant1)]# nova list
+--------------------------------------+---------------------+--------+------------+-------------+------------------------------+
| ID                                   | Name                | Status | Task State | Power State | Networks                     |
+--------------------------------------+---------------------+--------+------------+-------------+------------------------------+
| 363df149-a248-4c67-b3c5-2da0af96ccbe | firstTenant_firstVM | ACTIVE | -          | Running     | firstTenant_net=192.168.11.5 |
+--------------------------------------+---------------------+--------+------------+-------------+------------------------------+
[root@localhost ~(keystone_tenant1)]# nova floating-ip-list
+--------------------------------------+----------------+-----------+----------+--------+
| Id                                   | IP             | Server Id | Fixed IP | Pool   |
+--------------------------------------+----------------+-----------+----------+--------+
| 3f128a57-65d7-4677-ad9b-ef9a2ed8df4c | xxx.254.209.87 | -         | -        | public |
+--------------------------------------+----------------+-----------+----------+--------+
[root@localhost ~(keystone_tenant1)]#
[root@localhost ~(keystone_tenant1)]# ip netns
qdhcp-8460a127-ee67-474f-bbe2-9f5916097f2d
qdhcp-d3c2f3a3-b2f8-4d9f-824d-cc20dd38fd92
qrouter-fbdc95f9-aa97-4203-997b-c0cc09021a95
[root@localhost ~(keystone_tenant1)]# ip netns exec qrouter-fbdc95f9-aa97-4203-997b-c0cc09021a95 iptables -S -t nat
-P PREROUTING ACCEPT
-P INPUT ACCEPT
-P OUTPUT ACCEPT
-P POSTROUTING ACCEPT
-N neutron-l3-agent-OUTPUT
-N neutron-l3-agent-POSTROUTING
-N neutron-l3-agent-PREROUTING
-N neutron-l3-agent-float-snat
-N neutron-l3-agent-snat
-N neutron-postrouting-bottom
-A PREROUTING -j neutron-l3-agent-PREROUTING
-A OUTPUT -j neutron-l3-agent-OUTPUT
-A POSTROUTING -j neutron-l3-agent-POSTROUTING
-A POSTROUTING -j neutron-postrouting-bottom
-A neutron-l3-agent-POSTROUTING ! -i qg-4d2c2605-5d ! -o qg-4d2c2605-5d -m conntrack ! --ctstate DNAT -j ACCEPT
-A neutron-l3-agent-PREROUTING -d 169.254.169.254/32 -i qr-+ -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 9697
-A neutron-l3-agent-snat -j neutron-l3-agent-float-snat
-A neutron-l3-agent-snat -o qg-4d2c2605-5d -j SNAT --to-source xxx.254.209.86
-A neutron-l3-agent-snat -m mark ! --mark 0x2 -m conntrack --ctstate DNAT -j SNAT --to-source xxx.254.209.86
-A neutron-postrouting-bottom -m comment --comment "Perform source NAT on outgoing traffic." -j neutron-l3-agent-snat
[root@localhost ~(keystone_tenant1)]#

N:1 NAT

The neutron router creates N:1 NAT rules after an external gateway is created on it. In this case no floating IP is required for a VM to communicate with the external network. The router uses its external interface IP to SNAT outgoing traffic. For incoming traffic the router maintains a VM IP, port# tuple to forward traffic to the right VM.

So the question is: why do we need floating IPs, why not just use N:1 NAT? The reason is that with N:1 NAT an external host has no visibility of the VM. Say an external host wants to talk to a VM; it can use the router's external interface IP as the destination, but if the VM hasn't initiated traffic yet there is no VM IP, port# tuple, and without it the router cannot forward the traffic to the right VM.
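
Before looking at the iptables rules, note that these per-VM IP, port# tuples live in the router namespace's connection tracking table. If the conntrack tool is installed you can inspect them directly (a sketch):

#list tracked connections for the VM sitting behind N:1 NAT
>ip netns exec qrouter-fbdc95f9-aa97-4203-997b-c0cc09021a95 conntrack -L | grep 192.168.11.5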

[root@localhost ~(keystone_tenant1)]# ip netns exec qrouter-fbdc95f9-aa97-4203-997b-c0cc09021a95 iptables -S -t nat
-P PREROUTING ACCEPT
-P INPUT ACCEPT
-P OUTPUT ACCEPT
-P POSTROUTING ACCEPT
-N neutron-l3-agent-OUTPUT
-N neutron-l3-agent-POSTROUTING
-N neutron-l3-agent-PREROUTING
-N neutron-l3-agent-float-snat
-N neutron-l3-agent-snat
-N neutron-postrouting-bottom
-A PREROUTING -j neutron-l3-agent-PREROUTING
-A OUTPUT -j neutron-l3-agent-OUTPUT
-A POSTROUTING -j neutron-l3-agent-POSTROUTING
-A POSTROUTING -j neutron-postrouting-bottom
-A neutron-l3-agent-POSTROUTING ! -i qg-4d2c2605-5d ! -o qg-4d2c2605-5d -m conntrack ! --ctstate DNAT -j ACCEPT
-A neutron-l3-agent-PREROUTING -d 169.254.169.254/32 -i qr-+ -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 9697
-A neutron-l3-agent-snat -j neutron-l3-agent-float-snat
-A neutron-l3-agent-snat -o qg-4d2c2605-5d -j SNAT --to-source xxx.254.209.86
-A neutron-l3-agent-snat -m mark ! --mark 0x2 -m conntrack --ctstate DNAT -j SNAT --to-source xxx.254.209.86
-A neutron-postrouting-bottom -m comment --comment "Perform source NAT on outgoing traffic." -j neutron-l3-agent-snat
[root@localhost ~(keystone_tenant1)]#
[root@localhost ~(keystone_admin)]# ip netns exec qrouter-fbdc95f9-aa97-4203-997b-c0cc09021a95 ifconfig
lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 0  (Local Loopback)
        RX packets 36  bytes 3492 (3.4 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 36  bytes 3492 (3.4 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

qg-6e984730-a6: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet xxx.254.209.86  netmask 255.255.255.0  broadcast xxx.254.209.255
        inet6 fe80::f816:3eff:fefc:a279  prefixlen 64  scopeid 0x20<link>
        ether fa:16:3e:fc:a2:79  txqueuelen 0  (Ethernet)
        RX packets 1044292781  bytes 908482799231 (846.0 GiB)
        RX errors 0  dropped 493  overruns 0  frame 0
        TX packets 859  bytes 80763 (78.8 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

qr-984d5059-79: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.12.1  netmask 255.255.255.0  broadcast 192.168.12.255
        inet6 fe80::f816:3eff:fe5a:1c13  prefixlen 64  scopeid 0x20<link>
        ether fa:16:3e:5a:1c:13  txqueuelen 0  (Ethernet)
        RX packets 2195  bytes 187021 (182.6 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 2437  bytes 202408 (197.6 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

qr-ebfe34b9-a8: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.11.1  netmask 255.255.255.0  broadcast 192.168.11.255
        inet6 fe80::f816:3eff:fe49:5524  prefixlen 64  scopeid 0x20<link>
        ether fa:16:3e:49:55:24  txqueuelen 0  (Ethernet)
        RX packets 7089  bytes 595194 (581.2 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 7472  bytes 545622 (532.8 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

#clear gateway on router. NAT rules are removed from iptables. ping from VM
#to public network now fails
[root@localhost ~(keystone_admin)]# neutron router-gateway-clear pub_router
Removed gateway from router pub_router
[root@localhost ~(keystone_admin)]# ip netns exec qrouter-fbdc95f9-aa97-4203-997b-c0cc09021a95 iptables -S -t nat
-P PREROUTING ACCEPT
-P INPUT ACCEPT
-P OUTPUT ACCEPT
-P POSTROUTING ACCEPT
-N neutron-l3-agent-OUTPUT
-N neutron-l3-agent-POSTROUTING
-N neutron-l3-agent-PREROUTING
-N neutron-l3-agent-float-snat
-N neutron-l3-agent-snat
-N neutron-postrouting-bottom
-A PREROUTING -j neutron-l3-agent-PREROUTING
-A OUTPUT -j neutron-l3-agent-OUTPUT
-A POSTROUTING -j neutron-l3-agent-POSTROUTING
-A POSTROUTING -j neutron-postrouting-bottom
-A neutron-l3-agent-PREROUTING -d 169.254.169.254/32 -i qr-+ -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 9697
-A neutron-l3-agent-snat -j neutron-l3-agent-float-snat
-A neutron-postrouting-bottom -m comment --comment "Perform source NAT on outgoing traffic." -j neutron-l3-agent-snat
[root@localhost ~(keystone_admin)]#

Lab-16:Firewall as a service (FWaas) using Openstack

In this lab I will demonstrate how to create a neutron firewall. Neutron realizes the firewall by adding firewall rules into the Openstack router's iptables. We will examine the router's iptables before and after the firewall has been created.
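
A convenient way to make that comparison is to save both dumps and diff them (a sketch; <router-id> is the pub_router namespace ID shown later in this lab):

#capture iptables before creating the firewall
>ip netns exec qrouter-<router-id> iptables -S > /tmp/iptables_before.txt
#...create the firewall, then capture again and compare
>ip netns exec qrouter-<router-id> iptables -S > /tmp/iptables_after.txt
>diff /tmp/iptables_before.txt /tmp/iptables_after.txt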

For this lab I am creating two tenants connected to the same public router. The tenants are in different subnets. I will use ping between the tenants to test the firewall. I am not using an external network, so floating IPs are not applied in this lab.

Pre-condition:

  • Machine with RHEL 7 installed. User subscribed to RedHat
  • Subscribe to Redhat for Enterprise Linux and also for Openstack 7.0
>sudo subscription-manager register
>sudo subscription-manager subscribe --auto
>sudo subscription-manager list --consumed
  • Subscribe required Repos
>sudo subscription-manager repos --disable=*
>sudo subscription-manager repos --enable=rhel-7-server-rpms --enable=rhel-7-server-optional-rpms --enable=rhel-7-server-extras-rpms --enable=rhel-7-server-openstack-7.0-rpms
  • Install the necessary yum packages, adjust the repository priority, and update
>yum install -y yum-utils
>yum update -y
  • Disable NetworkManager then reboot
>sudo systemctl disable NetworkManager
>sudo systemctl enable network
  • Set selinux to permissive mode. Edit config file (/etc/selinux/config)
    SELINUX=permissive
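  • Alternatively (a sketch), make the change from the command line; setenforce applies it immediately and sed makes it persistent across the reboot
>sudo setenforce 0
>sudo sed -i 's/^SELINUX=.*/SELINUX=permissive/' /etc/selinux/config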
  • Reboot machine
>reboot
  • Install the packStack.
>sudo yum install -y openstack-packstack

Procedure:

  • start packstack, create public network
sudo packstack --allinone --neutron-fwaas=y --os-ceilometer-install=n --os-cinder-install=n --nagios-install=n
  • Openstack creates default networks for you; I prefer to delete them before creating my own networks & router. The easiest way to do this is through the Openstack Horizon GUI.
  • In your browser point to ‘http://<controller ip>/dashboard’ and log in as username admin. For the password, cat the file keystonerc_admin and use the OS_PASSWORD value. Under ‘Routers’ & ‘Networks’ delete everything
  • Create public network, tenants and users in admin domain
#source admin resource file
> . /root/keystonerc_admin

#create a new flavor
>nova flavor-create m2.nano auto 128 1 1

#create public network
>neutron net-create public --router:external=True

#create public subnet
>neutron subnet-create --disable-dhcp public xxx.254.209.0/24 \
--name public_subnet --allocation-pool start=xxx.254.209.86,end=xxx.254.209.88 \
--gateway xxx.254.209.126
#create router & set gateway to router
>neutron router-create pub_router
>neutron router-gateway-set pub_router public
#create tenants
>keystone tenant-create --name firstTenant
>keystone tenant-create --name secondTenant
  • Create first tenant, tenant network, subnet and SSH keypair
#create private tenant network
>neutron net-create firstTenant_net

#create sub-network for tenant network
>neutron subnet-create --name firstTenant_subnet \
  --dns-nameserver 8.8.8.8 firstTenant_net 192.168.11.0/24

#Add tenant network to router interface
>neutron router-interface-add pub_router firstTenant_subnet
#create SSH keypair for tenant1 & add it to nova
>ssh-keygen -f tenant1_rsa -t rsa -b 2048 -N ''
>nova keypair-add --pub-key tenant1_rsa.pub firstTenant
#Make sure we allow ICMP and SSH traffic to instances 
>neutron security-group-create firstTenantSec
>neutron security-group-rule-create --protocol icmp firstTenantSec 
>neutron security-group-rule-create --protocol tcp \   
--port-range-min 22 --port-range-max 22 firstTenantSec
  • Create second tenant,tenant network,subnet and SSH keypair
 #create second tenant network 
>neutron net-create secondTenant_net 

#create sub-network for tenant network 
>neutron subnet-create --name secondTenant_subnet \
--dns-nameserver 8.8.8.8 secondTenant_net 192.168.12.0/24
#add tenant to router interface
>neutron router-interface-add pub_router secondTenant_subnet
#create SSH keypair for tenant2 and add it to nova 
>ssh-keygen -f tenant2_rsa -t rsa -b 2048 -N '' 
>nova keypair-add --pub-key tenant2_rsa.pub secondTenant
#Make sure we allow ICMP and SSH traffic to instances
>neutron security-group-create secondTenantSec
>neutron security-group-rule-create --protocol icmp secondTenantSec
>neutron security-group-rule-create --protocol tcp \
  --port-range-min 22 --port-range-max 22 secondTenantSec
  • check configuration to make sure networks are created
[root@localhost ~(keystone_admin)]# neutron net-list
+--------------------------------------+------------------+-------------------------------------------------------+
| id                                   | name             | subnets                                               |
+--------------------------------------+------------------+-------------------------------------------------------+
| 4fc7ff44-38f5-4895-856e-fad5b81f53b2 | public           | b7b1dcc6-0322-4fa5-b7aa-bb36c92b192d 167.254.209.0/24 |
| d3c2f3a3-b2f8-4d9f-824d-cc20dd38fd92 | firstTenant_net  | 079e2bad-589f-456b-9fc9-81c04b925dd3 192.168.11.0/24  |
| 8460a127-ee67-474f-bbe2-9f5916097f2d | secondTenant_net | be4cd35d-4a9c-46b1-86df-5173c0263029 192.168.12.0/24  |
+--------------------------------------+------------------+-------------------------------------------------------+

[root@localhost ~(keystone_admin)]# neutron port-list
+--------------------------------------+------+-------------------+---------------------------------------------------------------------------------------+
| id                                   | name | mac_address       | fixed_ips                                                                             |
+--------------------------------------+------+-------------------+---------------------------------------------------------------------------------------+
| 024ea0a9-fee3-45e9-af38-d946ad3870a8 |      | fa:16:3e:9b:e5:0d | {"subnet_id": "be4cd35d-4a9c-46b1-86df-5173c0263029", "ip_address": "192.168.12.2"}   |
| 309e3dc4-f7d0-4a8b-8ef8-1a05a6bfcfb8 |      | fa:16:3e:18:4d:1d | {"subnet_id": "079e2bad-589f-456b-9fc9-81c04b925dd3", "ip_address": "192.168.11.2"}   |
| 49b99e64-4c4a-4671-8236-2bbeab7fc8af |      | fa:16:3e:d4:6b:df | {"subnet_id": "079e2bad-589f-456b-9fc9-81c04b925dd3", "ip_address": "192.168.11.5"}   |
| 4d2c2605-5d59-4994-8eb1-efc6fed8eae1 |      | fa:16:3e:8b:f2:58 | {"subnet_id": "b7b1dcc6-0322-4fa5-b7aa-bb36c92b192d", "ip_address": "167.254.209.86"} |
| 984d5059-7911-4c43-888e-aa39970c574b |      | fa:16:3e:5a:1c:13 | {"subnet_id": "be4cd35d-4a9c-46b1-86df-5173c0263029", "ip_address": "192.168.12.1"}   |
| ebfe34b9-a820-46da-a339-f84691b67968 |      | fa:16:3e:49:55:24 | {"subnet_id": "079e2bad-589f-456b-9fc9-81c04b925dd3", "ip_address": "192.168.11.1"}   |
| f083a638-d47e-4365-b2b9-613c2e64c3d6 |      | fa:16:3e:ee:07:30 | {"subnet_id": "be4cd35d-4a9c-46b1-86df-5173c0263029", "ip_address": "192.168.12.5"}   |
+--------------------------------------+------+-------------------+---------------------------------------------------------------------------------------+
[root@localhost ~(keystone_admin)]# neutron router-list
+--------------------------------------+------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------+-------+
| id                                   | name       | external_gateway_info                                                                                                                                                                      | distributed | ha    |
+--------------------------------------+------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------+-------+
| fbdc95f9-aa97-4203-997b-c0cc09021a95 | pub_router | {"network_id": "4fc7ff44-38f5-4895-856e-fad5b81f53b2", "enable_snat": true, "external_fixed_ips": [{"subnet_id": "b7b1dcc6-0322-4fa5-b7aa-bb36c92b192d", "ip_address": "167.254.209.86"}]} | False       | False |
+--------------------------------------+------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------+-------+
[root@localhost ~(keystone_admin)]#
  • Launch instances for first and second tenant
#launch instance in first tenant
>nova boot --poll --flavor m2.nano --image cirros \
  --nic net-id=d3c2f3a3-b2f8-4d9f-824d-cc20dd38fd92 --key-name firstTenant \
firstTenant_firstVM --security-groups firstTenantSec

#launch instance in second tenant
>nova boot --poll --flavor m2.nano --image cirros \
  --nic net-id=8460a127-ee67-474f-bbe2-9f5916097f2d --key-name secondTenant \
secondTenant_firstVM --security-groups secondTenantSec
  • Create firewall to drop ICMP packets
#create firewall rule and policy
[root@localhost ~(keystone_admin)]# neutron firewall-rule-create --name fwaas-rule --protocol icmp  --action deny
[root@localhost ~(keystone_admin)]# neutron firewall-policy-create --firewall-rules fwaas-rule fwaas-policy

#create firewall with firewall policy uuid
>neutron firewall-create <firewall-policy-uuid> 
[root@localhost ~(keystone_admin)]# neutron firewall-create 5e496550-a4f6-4196-a553-569d04a5d2ca

#show firewall info
[root@localhost ~(keystone_admin)]# neutron firewall-list
[root@localhost ~(keystone_admin)]# neutron firewall-show 6c13e919-0244-42f0-aa10-ed46c9ad371f
+--------------------+--------------------------------------+
| Field              | Value                                |
+--------------------+--------------------------------------+
| admin_state_up     | True                                 |
| description        |                                      |
| firewall_policy_id | 5e496550-a4f6-4196-a553-569d04a5d2ca |
| id                 | 6c13e919-0244-42f0-aa10-ed46c9ad371f |
| name               |                                      |
| router_ids         | fbdc95f9-aa97-4203-997b-c0cc09021a95 |
| status             | ACTIVE                               |
| tenant_id          | e5b04b788a814a489a366eb91970512c     |
+--------------------+--------------------------------------+
[root@localhost ~(keystone_admin)]#
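
  • If you prefer not to copy UUIDs by hand, the policy ID can be captured into a shell variable first (a sketch; assumes the -f/-c output formatter options shown in the help output earlier are also accepted by the show commands)
#look up the policy UUID by name and create the firewall with it
>POLICY_ID=$(neutron firewall-policy-show fwaas-policy -f value -c id)
>neutron firewall-create $POLICY_ID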

  • This is our final topology with tenant VMs and IP addresses

openstack_fwaas

  • Check if our firewall is working
[root@localhost ~(keystone_admin)]# ip netns
qdhcp-8460a127-ee67-474f-bbe2-9f5916097f2d
qdhcp-d3c2f3a3-b2f8-4d9f-824d-cc20dd38fd92
qrouter-fbdc95f9-aa97-4203-997b-c0cc09021a95

#firewall in the openstack router is realized using iptables. As you can see from the
#output below, a rule has been added in the router to drop ICMP packets
[root@localhost ~(keystone_admin)]# ip netns exec qrouter-fbdc95f9-aa97-4203-997b-c0cc09021a95 iptables -S
-P INPUT ACCEPT
-P FORWARD ACCEPT
-P OUTPUT ACCEPT
-N neutron-filter-top
-N neutron-l3-agent-FORWARD
-N neutron-l3-agent-INPUT
-N neutron-l3-agent-OUTPUT
-N neutron-l3-agent-fwaas-defau
-N neutron-l3-agent-iv46c13e919
-N neutron-l3-agent-local
-N neutron-l3-agent-ov46c13e919
-A INPUT -j neutron-l3-agent-INPUT
-A FORWARD -j neutron-filter-top
-A FORWARD -j neutron-l3-agent-FORWARD
-A OUTPUT -j neutron-filter-top
-A OUTPUT -j neutron-l3-agent-OUTPUT
-A neutron-filter-top -j neutron-l3-agent-local
-A neutron-l3-agent-FORWARD -o qr-+ -j neutron-l3-agent-iv46c13e919
-A neutron-l3-agent-FORWARD -i qr-+ -j neutron-l3-agent-ov46c13e919
-A neutron-l3-agent-FORWARD -o qr-+ -j neutron-l3-agent-fwaas-defau
-A neutron-l3-agent-FORWARD -i qr-+ -j neutron-l3-agent-fwaas-defau
-A neutron-l3-agent-INPUT -m mark --mark 0x1 -j ACCEPT
-A neutron-l3-agent-INPUT -p tcp -m tcp --dport 9697 -j DROP
-A neutron-l3-agent-fwaas-defau -j DROP
-A neutron-l3-agent-iv46c13e919 -m state --state INVALID -j DROP
-A neutron-l3-agent-iv46c13e919 -m state --state RELATED,ESTABLISHED -j ACCEPT
-A neutron-l3-agent-iv46c13e919 -p icmp -j DROP
-A neutron-l3-agent-ov46c13e919 -m state --state INVALID -j DROP
-A neutron-l3-agent-ov46c13e919 -m state --state RELATED,ESTABLISHED -j ACCEPT
-A neutron-l3-agent-ov46c13e919 -p icmp -j DROP

#log in to the first tenant VM and ping the second tenant VM; ping fails due to the
#firewall rule
[root@localhost ~(keystone_admin)]# ip netns exec qrouter-fbdc95f9-aa97-4203-997b-c0cc09021a95 ssh -i tenant1_rsa cirros@192.168.11.5
$
$
$ ping 192.168.12.5
PING 192.168.12.5 (192.168.12.5): 56 data bytes
^C
--- 192.168.12.5 ping statistics ---
8 packets transmitted, 0 packets received, 100% packet loss

#ping to router interface for second tenant VM is ok
$ ping 192.168.12.1
PING 192.168.12.1 (192.168.12.1): 56 data bytes
64 bytes from 192.168.12.1: seq=0 ttl=64 time=0.206 ms
64 bytes from 192.168.12.1: seq=1 ttl=64 time=0.169 ms
^C
--- 192.168.12.1 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.169/0.187/0.206 ms

#ping to router interface for first tenant VM is ok
$ ping 192.168.11.1
PING 192.168.11.1 (192.168.11.1): 56 data bytes
64 bytes from 192.168.11.1: seq=0 ttl=64 time=0.169 ms
64 bytes from 192.168.11.1: seq=1 ttl=64 time=0.190 ms
64 bytes from 192.168.11.1: seq=2 ttl=64 time=0.278 ms
^C
--- 192.168.11.1 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.169/0.212/0.278 ms
$
  • Delete firewall rule and check ping connectivity again
[root@localhost ~(keystone_admin)]# neutron firewall-list
+--------------------------------------+------+--------------------------------------+
| id                                   | name | firewall_policy_id                   |
+--------------------------------------+------+--------------------------------------+
| 6c13e919-0244-42f0-aa10-ed46c9ad371f |      | 5e496550-a4f6-4196-a553-569d04a5d2ca |
+--------------------------------------+------+--------------------------------------+
[root@localhost ~(keystone_admin)]#
[root@localhost ~(keystone_admin)]#
[root@localhost ~(keystone_admin)]# neutron firewall-delete 6c13e919-0244-42f0-aa10-ed46c9ad371f
Deleted firewall: 6c13e919-0244-42f0-aa10-ed46c9ad371f
[root@localhost ~(keystone_admin)]# neutron firewall-list

[root@localhost ~(keystone_admin)]# ip netns
qdhcp-8460a127-ee67-474f-bbe2-9f5916097f2d
qdhcp-d3c2f3a3-b2f8-4d9f-824d-cc20dd38fd92
qrouter-fbdc95f9-aa97-4203-997b-c0cc09021a95

#the ICMP rules have been deleted from iptables
[root@localhost ~(keystone_admin)]# ip netns exec qrouter-fbdc95f9-aa97-4203-997b-c0cc09021a95 iptables -S
-P INPUT ACCEPT
-P FORWARD ACCEPT
-P OUTPUT ACCEPT
-N neutron-filter-top
-N neutron-l3-agent-FORWARD
-N neutron-l3-agent-INPUT
-N neutron-l3-agent-OUTPUT
-N neutron-l3-agent-local
-A INPUT -j neutron-l3-agent-INPUT
-A FORWARD -j neutron-filter-top
-A FORWARD -j neutron-l3-agent-FORWARD
-A OUTPUT -j neutron-filter-top
-A OUTPUT -j neutron-l3-agent-OUTPUT
-A neutron-filter-top -j neutron-l3-agent-local
-A neutron-l3-agent-INPUT -m mark --mark 0x1 -j ACCEPT
-A neutron-l3-agent-INPUT -p tcp -m tcp --dport 9697 -j DROP
[root@localhost ~(keystone_admin)]#

#ping from the first tenant VM to the second tenant VM now succeeds
$ ping 192.168.12.5
PING 192.168.12.5 (192.168.12.5): 56 data bytes
64 bytes from 192.168.12.5: seq=0 ttl=63 time=0.801 ms
64 bytes from 192.168.12.5: seq=1 ttl=63 time=0.403 ms
64 bytes from 192.168.12.5: seq=2 ttl=63 time=0.401 ms
64 bytes from 192.168.12.5: seq=3 ttl=63 time=0.284 ms
64 bytes from 192.168.12.5: seq=4 ttl=63 time=0.321 ms
^C
--- 192.168.12.5 ping statistics ---
5 packets transmitted, 5 packets received, 0% packet loss
round-trip min/avg/max = 0.284/0.442/0.801 ms
$