Highly available VIPs on OpenStack VMs with VRRP
by 谢先斌
Virtual IPs (VIPs) are widely used to enable high availability for IT infrastructure services. Although the use case is very common, it is surprisingly tricky to set up in an OpenStack environment without deeper knowledge of neutron and its port security mechanisms. In this blog post I will walk you through the creation of a pair of VMs that share a VIP via the Virtual Router Redundancy Protocol (VRRP) on an OpenStack cloud.
Neutron setup
My OpenStack project has an internal network (192.168.0.0/24) and a router connecting it to an external network (10.98.208.0/24). To set up an HA VIP I need three ports that I named vip-port, vm1-port and vm2-port. The VM ports will connect my virtual machines to the internal network; the VIP port is just a dummy port that allocates an internal IP address. Neutron will not instantiate this port, it's just a database entry that "blocks" an IP address that will later be used on the VM ports. I want both VMs to be able to communicate via their own IP and the VIP, but the neutron port security mechanisms will drop all traffic not coming from the IP and MAC address pair owned by the VM. Therefore I need to explicitly tell neutron to also allow the IP address of the VIP on the VM ports. This can be done with the --allowed-address-pair parameter of the neutron port-create command:
$ neutron port-create --name vip-port demo-net
Created a new port:
+-----------------------+-------------------------------------------------------------------------------------+
| Field | Value |
+-----------------------+-------------------------------------------------------------------------------------+
| admin_state_up | True |
| allowed_address_pairs | |
| binding:host_id | |
| binding:profile | {} |
| binding:vif_details | {} |
| binding:vif_type | unbound |
| binding:vnic_type | normal |
| device_id | |
| device_owner | |
| fixed_ips | {"subnet_id": "9bc8ad9b-fa50-41c3-8381-7b86e8fd554b", "ip_address": "192.168.0.14"} |
| id | ae020587-e870-4e38-b72a-6c8980bb92f6 |
| mac_address | fa:16:3e:03:84:81 |
| name | vip-port |
| network_id | f2592a01-e31d-4bdc-8aa7-ca66f938eb83 |
| security_groups | da933f24-4b8f-4478-909a-ca19aece379d |
| status | DOWN |
| tenant_id | 619a60d46b1349f7998116abec50b088 |
+-----------------------+-------------------------------------------------------------------------------------+
$ neutron port-create --name vm1-port --allowed-address-pair ip_address=192.168.0.14 demo-net
Created a new port:
+-----------------------+-------------------------------------------------------------------------------------+
| Field | Value |
+-----------------------+-------------------------------------------------------------------------------------+
| admin_state_up | True |
| allowed_address_pairs | {"ip_address": "192.168.0.14", "mac_address": "fa:16:3e:69:b2:d2"} |
| binding:host_id | |
| binding:profile | {} |
| binding:vif_details | {} |
| binding:vif_type | unbound |
| binding:vnic_type | normal |
| device_id | |
| device_owner | |
| fixed_ips | {"subnet_id": "9bc8ad9b-fa50-41c3-8381-7b86e8fd554b", "ip_address": "192.168.0.15"} |
| id | 9ef8a695-0409-43db-9878-bf6b555dcfee |
| mac_address | fa:16:3e:69:b2:d2 |
| name | vm1-port |
| network_id | f2592a01-e31d-4bdc-8aa7-ca66f938eb83 |
| security_groups | da933f24-4b8f-4478-909a-ca19aece379d |
| status | DOWN |
| tenant_id | 619a60d46b1349f7998116abec50b088 |
+-----------------------+-------------------------------------------------------------------------------------+
$ neutron port-create --name vm2-port --allowed-address-pair ip_address=192.168.0.14 demo-net
Created a new port:
+-----------------------+-------------------------------------------------------------------------------------+
| Field | Value |
+-----------------------+-------------------------------------------------------------------------------------+
| admin_state_up | True |
| allowed_address_pairs | {"ip_address": "192.168.0.14", "mac_address": "fa:16:3e:9e:8c:66"} |
| binding:host_id | |
| binding:profile | {} |
| binding:vif_details | {} |
| binding:vif_type | unbound |
| binding:vnic_type | normal |
| device_id | |
| device_owner | |
| fixed_ips | {"subnet_id": "9bc8ad9b-fa50-41c3-8381-7b86e8fd554b", "ip_address": "192.168.0.16"} |
| id | 928c8761-b98b-4c2f-be41-e4ab5ee82eab |
| mac_address | fa:16:3e:9e:8c:66 |
| name | vm2-port |
| network_id | f2592a01-e31d-4bdc-8aa7-ca66f938eb83 |
| security_groups | da933f24-4b8f-4478-909a-ca19aece379d |
| status | DOWN |
| tenant_id | 619a60d46b1349f7998116abec50b088 |
+-----------------------+-------------------------------------------------------------------------------------+
To allow VRRP traffic between the VMs I will now create a security group and assign it to the VM ports created above. VRRP is IP protocol 112, so I add a rule for that protocol; just for convenience I will also add rules allowing incoming SSH (for login) and ICMP (for ping) traffic.
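The group itself has to exist before rules can be added to it; a security group named vrrp, matching the rule commands below, can be created like this:
$ neutron security-group-create vrrp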
$ neutron security-group-rule-create --protocol 112 vrrp
Created a new security_group_rule:
+-------------------+--------------------------------------+
| Field | Value |
+-------------------+--------------------------------------+
| direction | ingress |
| ethertype | IPv4 |
| id | a6602f3b-9c5b-427d-a2b1-fa589e4cabd5 |
| port_range_max | |
| port_range_min | |
| protocol | 112 |
| remote_group_id | |
| remote_ip_prefix | |
| security_group_id | 171563a6-25ce-43dd-8b4f-4fbc5109330e |
| tenant_id | 619a60d46b1349f7998116abec50b088 |
+-------------------+--------------------------------------+
$ neutron security-group-rule-create --protocol tcp --port-range-min 22 --port-range-max 22 vrrp
Created a new security_group_rule:
+-------------------+--------------------------------------+
| Field | Value |
+-------------------+--------------------------------------+
| direction | ingress |
| ethertype | IPv4 |
| id | 717af275-c4bf-41de-9494-2942fa0eae2a |
| port_range_max | 22 |
| port_range_min | 22 |
| protocol | tcp |
| remote_group_id | |
| remote_ip_prefix | |
| security_group_id | 171563a6-25ce-43dd-8b4f-4fbc5109330e |
| tenant_id | 619a60d46b1349f7998116abec50b088 |
+-------------------+--------------------------------------+
$ neutron security-group-rule-create --protocol icmp vrrp
Created a new security_group_rule:
+-------------------+--------------------------------------+
| Field | Value |
+-------------------+--------------------------------------+
| direction | ingress |
| ethertype | IPv4 |
| id | 1bd25b24-33a1-4f13-be8a-361a9c3bff21 |
| port_range_max | |
| port_range_min | |
| protocol | icmp |
| remote_group_id | |
| remote_ip_prefix | |
| security_group_id | 171563a6-25ce-43dd-8b4f-4fbc5109330e |
| tenant_id | 619a60d46b1349f7998116abec50b088 |
+-------------------+--------------------------------------+
$ neutron port-update --security-group vrrp vm1-port
Updated port: vm1-port
$ neutron port-update --security-group vrrp vm2-port
Updated port: vm2-port
As I want to access the VMs and the VIP from outside the cloud I create three floating IPs from my external network called floating and assign them to the three ports.
$ neutron floatingip-create floating
Created a new floatingip:
+---------------------+--------------------------------------+
| Field | Value |
+---------------------+--------------------------------------+
| fixed_ip_address | |
| floating_ip_address | 10.98.208.217 |
| floating_network_id | 74621f62-b07b-4543-83e5-f6d8504b1aae |
| id | 55009db5-3720-4880-943e-266690356748 |
| port_id | |
| router_id | |
| status | DOWN |
| tenant_id | 619a60d46b1349f7998116abec50b088 |
+---------------------+--------------------------------------+
$ neutron floatingip-create floating
Created a new floatingip:
+---------------------+--------------------------------------+
| Field | Value |
+---------------------+--------------------------------------+
| fixed_ip_address | |
| floating_ip_address | 10.98.208.218 |
| floating_network_id | 74621f62-b07b-4543-83e5-f6d8504b1aae |
| id | 4377aa79-d2f4-4977-ad63-4d9f8a7f2a42 |
| port_id | |
| router_id | |
| status | DOWN |
| tenant_id | 619a60d46b1349f7998116abec50b088 |
+---------------------+--------------------------------------+
$ neutron floatingip-create floating
Created a new floatingip:
+---------------------+--------------------------------------+
| Field | Value |
+---------------------+--------------------------------------+
| fixed_ip_address | |
| floating_ip_address | 10.98.208.219 |
| floating_network_id | 74621f62-b07b-4543-83e5-f6d8504b1aae |
| id | 73dea280-4edf-49cc-8432-c3f56d87d531 |
| port_id | |
| router_id | |
| status | DOWN |
| tenant_id | 619a60d46b1349f7998116abec50b088 |
+---------------------+--------------------------------------+
$ neutron floatingip-associate 55009db5-3720-4880-943e-266690356748 ae020587-e870-4e38-b72a-6c8980bb92f6
Associated floating IP 55009db5-3720-4880-943e-266690356748
$ neutron floatingip-associate 4377aa79-d2f4-4977-ad63-4d9f8a7f2a42 9ef8a695-0409-43db-9878-bf6b555dcfee
Associated floating IP 4377aa79-d2f4-4977-ad63-4d9f8a7f2a42
$ neutron floatingip-associate 73dea280-4edf-49cc-8432-c3f56d87d531 928c8761-b98b-4c2f-be41-e4ab5ee82eab
Associated floating IP 73dea280-4edf-49cc-8432-c3f56d87d531
That is all that is needed within neutron.
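Before creating the VMs, a quick sanity check can confirm the setup: neutron port-show should list the VIP under allowed_address_pairs for both VM ports, and neutron floatingip-list should show all three floating IPs together with their fixed IP addresses.
$ neutron port-show vm1-port
$ neutron port-show vm2-port
$ neutron floatingip-list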
Create the VMs
Now I am able to create two Ubuntu VMs and connect them to the VM ports:
$ nova boot --flavor m1.small --image ec83428f-39e3-4675-a3c6-eff37238dbbe --key-name my_keypair --nic port-id=9ef8a695-0409-43db-9878-bf6b555dcfee vm1
$ nova boot --flavor m1.small --image ec83428f-39e3-4675-a3c6-eff37238dbbe --key-name my_keypair --nic port-id=928c8761-b98b-4c2f-be41-e4ab5ee82eab vm2
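The boot progress can be watched with nova list; once both VMs show the status ACTIVE they are ready to be logged in to.
$ nova list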
I wait until the VMs are running and log in to install keepalived:
$ ssh ubuntu@10.98.208.218
ubuntu@vm1:~$ sudo apt install -y keepalived
To configure keepalived I look up the name of the network interface on the VM:
ubuntu@vm1:~$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether fa:16:3e:69:b2:d2 brd ff:ff:ff:ff:ff:ff
inet 192.168.0.15/24 brd 192.168.0.255 scope global ens3
valid_lft forever preferred_lft forever
inet6 fe80::f816:3eff:fe69:b2d2/64 scope link
valid_lft forever preferred_lft forever
With the interface name and the IP of the vip-port I can now continue to configure keepalived by editing /etc/keepalived/keepalived.conf:
vrrp_instance VIP_1 {
    state MASTER
    interface ens3
    virtual_router_id 51
    priority 150
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass supersecretpassword
    }
    virtual_ipaddress {
        192.168.0.14
    }
}
Now I restart the keepalived service:
ubuntu@vm1:~$ sudo systemctl restart keepalived
And repeat those steps on the second VM.
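A minimal sketch of what the configuration on vm2 could look like, assuming vm1 is meant to be the preferred master: only state and priority change (BACKUP and a lower value), the rest stays identical. The interface name may differ on vm2 and should be checked with ip a there as well.
vrrp_instance VIP_1 {
    state BACKUP
    interface ens3
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass supersecretpassword
    }
    virtual_ipaddress {
        192.168.0.14
    }
}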
Checking the output of ip a on both VMs, I see that one of them has assigned the VIP 192.168.0.14 to its ens3 interface.
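A quick way to see which VM currently holds the VIP is to grep for it; the command prints an inet line on the active VM and nothing on the other:
ubuntu@vm1:~$ ip a show ens3 | grep 192.168.0.14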
Testing failover
I open a terminal and ping the floating IP associated with the VIP:
$ ping 10.98.208.217
PING 10.98.208.217 (10.98.208.217): 56 data bytes
64 bytes from 10.98.208.217: icmp_seq=0 ttl=59 time=23.294 ms
64 bytes from 10.98.208.217: icmp_seq=1 ttl=59 time=20.879 ms
64 bytes from 10.98.208.217: icmp_seq=2 ttl=59 time=20.152 ms
...
In a second terminal I log in to the VM that currently has the VIP assigned to its ens3 interface and stop keepalived:
ubuntu@vm1:~$ sudo service keepalived stop
If everything worked, the ping in my first terminal should keep running, and checking ip a on both VMs shows that the VIP has failed over to the other VM.
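To double-check, the grep from above can be repeated on vm2, and starting keepalived again on vm1 should move the VIP back, assuming vm1 keeps the higher priority as configured above:
ubuntu@vm2:~$ ip a show ens3 | grep 192.168.0.14
ubuntu@vm1:~$ sudo service keepalived start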