Almost all VPN tutorials on the Internet cover the simplest possible case: interconnecting two remote LANs. This is not really helpful, because real-world requirements are more complex than that. Usually there are multiple networks in each location (DMZ, LAN, MGMT, OPS, etc...) and more than just two locations.
These requirements call for multiple VPN tunnels, and this is when you quickly abandon policy-based VPNs in favor of route-based VPNs. TINC comes to mind and turns out to be a perfect fit.
UPDATE - 2015-03-29 - The corresponding Subnet = 172.16.0.XXX/32 was added to each host file. This allows the TINC daemons to advertise not only the networks behind them, but also their own VPN IPs. It is useful when one needs communication not only between the subnets, but also between the firewalls themselves. Thanks to Jordan Rinke for pointing this out.
IP Allocation
For our example, let's assume we have three locations: Dublin, Hamburg, and Boston. Each location has three networks: DMZ, LAN, and MGMT. The firewalls in each location have these public IPs:
- fw-dub - 11.11.11.11
- fw-ham - 22.22.22.22
- fw-bos - 33.33.33.33
To make things more interesting, let's assume there is no consistent IP allocation scheme: some networks use 10.x.x.x, some 192.168.x.x. It's all messy, just like in real life, but we have a cheat-sheet - a list of the existing networks:
- dub-dmz - 10.0.0.0/16
- dub-lan - 10.1.0.0/16
- dub-mgmt - 10.2.0.0/16
- ham-dmz - 10.10.0.0/24
- ham-lan - 10.10.2.0/24
- ham-mgmt - 10.10.255.0/24
- bos-dmz - 192.168.0.0/24
- bos-lan - 192.168.1.0/24
- bos-mgmt - 192.168.2.0/24
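With an allocation this messy, it is worth double-checking that none of the ranges overlap before we start routing between them. A quick sketch using Python's ipaddress module, with the subnets from the cheat-sheet above (plus the 172.16.0.0/24 transfer network introduced below):

```python
import ipaddress
from itertools import combinations

# Cheat-sheet from above, plus the VPN transfer network
networks = {
    "dub-dmz":  "10.0.0.0/16",
    "dub-lan":  "10.1.0.0/16",
    "dub-mgmt": "10.2.0.0/16",
    "ham-dmz":  "10.10.0.0/24",
    "ham-lan":  "10.10.2.0/24",
    "ham-mgmt": "10.10.255.0/24",
    "bos-dmz":  "192.168.0.0/24",
    "bos-lan":  "192.168.1.0/24",
    "bos-mgmt": "192.168.2.0/24",
    "vpn":      "172.16.0.0/24",
}

nets = {name: ipaddress.ip_network(cidr) for name, cidr in networks.items()}

# Report any pair of ranges that overlap; routing would be ambiguous for them.
for (a, na), (b, nb) in combinations(nets.items(), 2):
    if na.overlaps(nb):
        print(f"conflict: {a} ({na}) overlaps {b} ({nb})")
```

Running this against our cheat-sheet prints nothing - despite the mess, the ranges are disjoint, so every subnet can be routed unambiguously.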
VPN Transfer Network
There are several ways to configure TINC; we are going to use it in "router mode" with a "VPN transfer network". This transfer network is nothing special - it's just an IP range from which we pick a different IP for each TINC daemon. The remote networks will be accessible via TINC, i.e. routed via these IPs.
- dub - 172.16.0.1/24
- ham - 172.16.0.2/24
- bos - 172.16.0.3/24
TINC Installation
Install TINC on each firewall
apt-get install tinc
TINC allows you to configure multiple VPNs on a single server (one for MySQL replication, one for the Billing Department, another for Counter-Strike players, etc...). In other words, it allows one server to belong to multiple VPNs at the same time. Each such VPN is configured in a separate directory: /etc/tinc/<vpn>.
We are going to configure only one VPN and will call it 'firma' (German for company). For this VPN we will need a directory.
On each firewall create an /etc/tinc/firma/hosts directory structure
mkdir -p /etc/tinc/firma/hosts
TINC Configuration
Each TINC daemon needs an RSA key pair. Let's generate one in each location.
tincd -n firma -K4096
That will create two files:
- /etc/tinc/firma/rsa_key.priv
- /etc/tinc/firma/rsa_key.pub
To configure TINC we need some additional configuration files:
- tinc.conf file
- host files (one for each location)
- hook-scripts (of which we will use only tinc-up)
These files will be located here:
- /etc/tinc/firma/tinc.conf
- /etc/tinc/firma/hosts/dub
- /etc/tinc/firma/hosts/ham
- /etc/tinc/firma/hosts/bos
- /etc/tinc/firma/tinc-up
TINC Configuration - Dublin
Let's start with Dublin:
/etc/tinc/firma/tinc.conf
Name = dub
ConnectTo = ham
Create /etc/tinc/firma/hosts/dub
Address = 11.11.11.11
Subnet = 172.16.0.1/32
Subnet = 10.0.0.0/16
Subnet = 10.1.0.0/16
Subnet = 10.2.0.0/16
then append the Dublin public key to it
cat /etc/tinc/firma/rsa_key.pub >> /etc/tinc/firma/hosts/dub
and copy it to Hamburg and Boston.
Create /etc/tinc/firma/tinc-up
#!/bin/bash
ip link set $INTERFACE up
ip addr add 172.16.0.1/24 dev $INTERFACE
ip route add 10.10.0.0/24 dev $INTERFACE     # ham-dmz
ip route add 10.10.2.0/24 dev $INTERFACE     # ham-lan
ip route add 10.10.255.0/24 dev $INTERFACE   # ham-mgmt
ip route add 192.168.0.0/24 dev $INTERFACE   # bos-dmz
ip route add 192.168.1.0/24 dev $INTERFACE   # bos-lan
ip route add 192.168.2.0/24 dev $INTERFACE   # bos-mgmt
and make it executable:
chmod 755 /etc/tinc/firma/tinc-up
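The same pattern repeats in Hamburg and Boston: each firewall brings the interface up, assigns its own transfer IP, and adds routes for every subnet it does not own. When there are more than a handful of sites, it is easy to let one route drift out of sync, so here is a hypothetical helper (the site names, transfer IPs, and subnets are the ones from the cheat-sheet above) that generates a site's tinc-up body from one shared table:

```python
# Sketch of a generator for the per-site tinc-up scripts.
# Each site maps to (transfer IP, list of (name, subnet) pairs it owns).
SITES = {
    "dub": ("172.16.0.1", [("dub-dmz", "10.0.0.0/16"),
                           ("dub-lan", "10.1.0.0/16"),
                           ("dub-mgmt", "10.2.0.0/16")]),
    "ham": ("172.16.0.2", [("ham-dmz", "10.10.0.0/24"),
                           ("ham-lan", "10.10.2.0/24"),
                           ("ham-mgmt", "10.10.255.0/24")]),
    "bos": ("172.16.0.3", [("bos-dmz", "192.168.0.0/24"),
                           ("bos-lan", "192.168.1.0/24"),
                           ("bos-mgmt", "192.168.2.0/24")]),
}

def tinc_up(site):
    """Return the tinc-up script for `site`, routing all remote subnets."""
    ip, _ = SITES[site]
    lines = ["#!/bin/bash",
             "ip link set $INTERFACE up",
             f"ip addr add {ip}/24 dev $INTERFACE"]
    for other, (_, subnets) in SITES.items():
        if other == site:
            continue  # local subnets are reached directly, not via the VPN
        for name, cidr in subnets:
            lines.append(f"ip route add {cidr} dev $INTERFACE  # {name}")
    return "\n".join(lines)

print(tinc_up("dub"))
```

This is only a sketch, not part of the original setup - but regenerating all three scripts from one table beats editing them by hand in three places.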
TINC Configuration - Hamburg
Now let's do the same in Hamburg:
/etc/tinc/firma/tinc.conf
Name = ham
Create /etc/tinc/firma/hosts/ham
Address = 22.22.22.22
Subnet = 172.16.0.2/32
Subnet = 10.10.0.0/24
Subnet = 10.10.2.0/24
Subnet = 10.10.255.0/24
then append the Hamburg public key to it
cat /etc/tinc/firma/rsa_key.pub >> /etc/tinc/firma/hosts/ham
and copy it to Dublin and Boston.
Create /etc/tinc/firma/tinc-up
#!/bin/bash
ip link set $INTERFACE up
ip addr add 172.16.0.2/24 dev $INTERFACE
ip route add 10.0.0.0/16 dev $INTERFACE      # dub-dmz
ip route add 10.1.0.0/16 dev $INTERFACE      # dub-lan
ip route add 10.2.0.0/16 dev $INTERFACE      # dub-mgmt
ip route add 192.168.0.0/24 dev $INTERFACE   # bos-dmz
ip route add 192.168.1.0/24 dev $INTERFACE   # bos-lan
ip route add 192.168.2.0/24 dev $INTERFACE   # bos-mgmt
and make it executable:
chmod 755 /etc/tinc/firma/tinc-up
TINC Configuration - Boston
And finally Boston
/etc/tinc/firma/tinc.conf
Name = bos
ConnectTo = dub
ConnectTo = ham
Create /etc/tinc/firma/hosts/bos
Address = 33.33.33.33
Subnet = 172.16.0.3/32
Subnet = 192.168.0.0/24
Subnet = 192.168.1.0/24
Subnet = 192.168.2.0/24
then append the Boston public key to it
cat /etc/tinc/firma/rsa_key.pub >> /etc/tinc/firma/hosts/bos
and copy it to Dublin and Hamburg.
Create /etc/tinc/firma/tinc-up
#!/bin/bash
ip link set $INTERFACE up
ip addr add 172.16.0.3/24 dev $INTERFACE
ip route add 10.0.0.0/16 dev $INTERFACE      # dub-dmz
ip route add 10.1.0.0/16 dev $INTERFACE      # dub-lan
ip route add 10.2.0.0/16 dev $INTERFACE      # dub-mgmt
ip route add 10.10.0.0/24 dev $INTERFACE     # ham-dmz
ip route add 10.10.2.0/24 dev $INTERFACE     # ham-lan
ip route add 10.10.255.0/24 dev $INTERFACE   # ham-mgmt
and make it executable:
chmod 755 /etc/tinc/firma/tinc-up
Final notes on TINC
TINC uses two types of connections: TCP "control" connections and UDP "data" connections.
The TCP "control" connections are configured with the ConnectTo directive in the tinc.conf file. A TINC daemon can connect to zero, one, or multiple other daemons. These connections are used for peer authentication, for exchanging encryption keys, and - this is unique to TINC - for exchanging routing information about all subnets available on the VPN.
This way all TINC daemons get to know all peers and their associated subnets, even if they are not directly connected.
The UDP "data" connections, on the other hand, are always created directly between the relevant endpoints, thus forming an on-demand full-mesh topology.
This is really cool, isn't it?
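Conceptually, each daemon forwards a packet to whichever peer advertises the most specific Subnet containing the destination - essentially longest-prefix matching over the host files. A rough Python model of that lookup (node names and subnets taken from the host files above; this only models the idea, not TINC's actual implementation):

```python
import ipaddress

# Subnets as advertised in the host files above, keyed by TINC node name.
ADVERTISED = {
    "dub": ["172.16.0.1/32", "10.0.0.0/16", "10.1.0.0/16", "10.2.0.0/16"],
    "ham": ["172.16.0.2/32", "10.10.0.0/24", "10.10.2.0/24", "10.10.255.0/24"],
    "bos": ["172.16.0.3/32", "192.168.0.0/24", "192.168.1.0/24", "192.168.2.0/24"],
}

def next_hop(dst):
    """Pick the node advertising the most specific subnet containing dst."""
    dst = ipaddress.ip_address(dst)
    best = None  # (node, network) with the longest matching prefix so far
    for node, subnets in ADVERTISED.items():
        for cidr in subnets:
            net = ipaddress.ip_network(cidr)
            if dst in net and (best is None or net.prefixlen > best[1].prefixlen):
                best = (node, net)
    return best[0] if best else None

print(next_hop("10.10.2.5"))   # a host in ham-lan -> "ham"
print(next_hop("172.16.0.3"))  # Boston's transfer IP -> "bos"
```

The /32 transfer-IP entries from the UPDATE above are what make the second lookup work: without them, traffic to a firewall's own VPN address would match no advertised subnet.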