Architecture

This document shows a reference architecture for ntc-rosetta-conf, although you are free to deploy it in any way that works for you.

The first thing to bear in mind when deploying ntc-rosetta-conf is that each instance represents a single device: if you have 100 routers you will need 100 instances. This might sound cumbersome, but it limits your blast radius if an instance fails for some reason, and it also helps scale out the solution by spreading the instances across many servers.

To avoid running all those instances manually, managing all the different ports to avoid collisions, and remembering which port belongs to which router, we recommend running them behind some dockerized solution and a load balancer.

Below you can see an example of such a deployment using docker-compose and haproxy.

haproxy

Let’s start by looking at the haproxy configuration file:

global
    maxconn 2048
    ulimit-n 51200
    tune.ssl.default-dh-param 2048

defaults
    timeout connect 5000
    timeout client 50000
    timeout server 50000
    option http-server-close
    mode http
    balance roundrobin

frontend https-in
    mode http
    # listen on port 65443, terminate TLS, request a client certificate (mTLS)
    # and enable HTTP/2 (with HTTP/1.1 as a fallback)
    bind 0.0.0.0:65443 ssl crt /etc/haproxy/rosetta.pem ca-file /etc/haproxy/ca.pem verify optional alpn h2,http/1.1

    # forward SSL headers to rosetta
    http-request set-header X-SSL                       %[ssl_fc]
    http-request set-header X-SSL-Client-Verify         %[ssl_c_verify]
    http-request set-header X-SSL-Client-DN             %{+Q}[ssl_c_s_dn]
    http-request set-header X-SSL-Client-CN             %{+Q}[ssl_c_s_dn(cn)]
    http-request set-header X-SSL-Issuer                %{+Q}[ssl_c_i_dn]
    http-request set-header X-SSL-Client-Not-Before     %{+Q}[ssl_c_notbefore]
    http-request set-header X-SSL-Client-Not-After      %{+Q}[ssl_c_notafter]

    # configure rules to forward requests to the different instances of rosetta
    use_backend rtr00 if { path -i -m beg /rtr00 }
    use_backend rtr01 if { path -i -m beg /rtr01 }


backend rtr00
    mode http

    # strip the /rtr00 prefix from the URL before forwarding
    http-request set-path %[path,regsub(^/rtr00/?,/)]
    server rtr00 172.21.33.100:8443 proto h2

backend rtr01
    mode http

    # strip the /rtr01 prefix from the URL before forwarding
    http-request set-path %[path,regsub(^/rtr01/?,/)]
    server rtr01 172.21.33.101:8443 proto h2

Let’s try to summarize what’s going on:

  1. First, we have some globals and defaults; we can ignore those for now.
  2. Next we define a frontend; this is what we are going to consume from the outside. The frontend is responsible for terminating TLS, enforcing mTLS and forwarding the SSL headers to the different instances of ntc-rosetta-conf (the sketch after this list shows how a backend can consume them). Finally, the frontend looks at the URL path, matches /rtr0{0,1} and forwards the request to the corresponding instance of ntc-rosetta-conf.
  3. Finally, we define one backend per instance of ntc-rosetta-conf; in this example we have two of them. Each backend specifies how to connect to its instance and strips the /rtr0x prefix from the URL, as that is not part of our service.
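
Note that the instances never see the TLS session itself, only the forwarded X-SSL-* headers. As a rough illustration of how any HTTP handler sitting behind this frontend could consume them (this is just a sketch, not ntc-rosetta-conf’s actual API):

def client_identity(headers):
    """Return the verified client-certificate CN, or None."""
    if headers.get("X-SSL") != "1":
        return None  # request did not arrive over TLS
    if headers.get("X-SSL-Client-Verify") != "0":
        return None  # no client certificate, or it failed validation
    # %{+Q} in the haproxy config wraps the header value in double quotes
    return headers.get("X-SSL-Client-CN", "").strip('"') or None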

docker-compose

docker-compose is going to be responsible for instantiating the load balancer and both instances of ntc-rosetta-conf. There isn’t a lot of magic here: mount the volume with the haproxy configuration and the data directories for each instance of ntc-rosetta-conf, and disable SSL on the instances as TLS is terminated on the load balancer:

---
version: '2.2'

services:
    loadbalancer:
        image: haproxy:2.0-alpine
        volumes:
            - ./haproxy:/etc/haproxy
        command: [
            "haproxy",
            "-f", "/etc/haproxy/haproxy.cfg",
        ]
        ports:
            - 65443:65443
        networks:
            net1:
                ipv4_address: 172.21.33.10
                ipv6_address: 2001:db8:33::10

    rtr00:
        build:
            context: ../../..
            dockerfile: Dockerfile
            args:
                PYTHON: 3.6
        networks:
            net1:
                ipv4_address: 172.21.33.100
                ipv6_address: 2001:db8:33::100
        volumes:
            - ./data/rtr00:/data
        command: [
            "ntc-rosetta-conf",
            "serve",
            "--datamodel", "openconfig",
            "--pid-file", "/tmp/ntc-rosetta-conf-demo.pid",
            "--log-level", "debug",
            "--data-file", "/data/data.json",
            "--port", "8443",
            "--disable-ssl",
        ]

    rtr01:
        build:
            context: ../../..
            dockerfile: Dockerfile
            args:
                PYTHON: 3.6
        networks:
            net1:
                ipv4_address: 172.21.33.101
                ipv6_address: 2001:db8:33::101
        volumes:
            - ./data/rtr01:/data
        command: [
            "ntc-rosetta-conf",
            "serve",
            "--datamodel", "openconfig",
            "--pid-file", "/tmp/ntc-rosetta-conf-demo.pid",
            "--log-level", "debug",
            "--data-file", "/data/data.json",
            "--port", "8443",
            "--disable-ssl",
        ]

networks:
    net1:
        driver: bridge
        enable_ipv6: true
        ipam:
            config:
            - subnet: 172.21.33.0/24
            - subnet: 2001:db8:33::/64

Once everything is up you should be able to access each instance via /rtr00 and /rtr01 respectively. For instance: https://rosetta:65443/rtr00/restconf/data/openconfig-interfaces:interfaces
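
If you want to verify the deployment from a script, here is a minimal sketch using the Python requests library. The certificate paths (client.pem, client.key, ca.pem) are placeholders for a client certificate signed by the CA haproxy trusts and for the CA that signed haproxy’s own certificate; requests speaks HTTP/1.1, which the frontend accepts alongside HTTP/2:

import requests

# Placeholder paths: adjust to your PKI
CLIENT_CERT = ("client.pem", "client.key")  # cert/key pair presented for mTLS
SERVER_CA = "ca.pem"                        # CA that signed haproxy's certificate

for instance in ("rtr00", "rtr01"):
    # haproxy terminates TLS, matches the /rtrXX prefix, strips it and
    # forwards the request to the corresponding instance
    url = f"https://rosetta:65443/{instance}/restconf/data/openconfig-interfaces:interfaces"
    response = requests.get(url, cert=CLIENT_CERT, verify=SERVER_CA)
    print(instance, response.status_code)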

This may look like a lot if you have hundreds or thousands of devices but, as you have probably figured out already, these two configuration files are very easy to template and automate.
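
For example, here is a minimal sketch that renders the per-router haproxy rules from an inventory with Jinja2 (the inventory here is a placeholder; in practice it would come from whatever source of truth you already have):

from jinja2 import Template

# Placeholder inventory mapping router names to the addresses of their
# ntc-rosetta-conf instances
ROUTERS = {
    "rtr00": "172.21.33.100",
    "rtr01": "172.21.33.101",
}

# Renders the frontend path-matching rules and one backend per router
HAPROXY_SNIPPET = Template(
    """\
{% for name in routers %}
    use_backend {{ name }} if { path -i -m beg /{{ name }} }
{% endfor %}
{% for name, ip in routers.items() %}
backend {{ name }}
    mode http

    # strip the /{{ name }} prefix from the URL before forwarding
    http-request set-path %[path,regsub(^/{{ name }}/?,/)]
    server {{ name }} {{ ip }}:8443 proto h2
{% endfor %}""",
    trim_blocks=True,
    lstrip_blocks=True,
)

print(HAPROXY_SNIPPET.render(routers=ROUTERS))

The same approach extends naturally to the docker-compose services.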