
Leaf-Spine design based on Clos architecture

As we move towards the cloud, and given the disruptive changes in the technology space, there have been a lot of improvements and changes in data center network architecture. Traditional data center architectures are based on a three-tier design which consists of Access → Distribution → Core switches. This worked well, and there have been a lot of advantages to this design.

The three-tier architecture has its own advantages:

  1. Failures can be easily isolated to pods.
  2. Security and performance issues can be isolated at the pod level and are easy to identify and troubleshoot.

Now, what changed was virtualization and cloud-scale architecture. East-west (E-W) traffic constitutes around 80% of data center traffic nowadays, which has increased the need for bandwidth inside the DC. The three-tier architecture was designed with north-south (N-S) traffic in mind, as was the case for older infrastructures. Servers connected to different pods have to go through more hops to reach their destination, thereby increasing overall traffic congestion.

Networks based on the Clos network architecture, also known as leaf-spine architecture, are specifically designed for large-scale computing needs.

This is a three-stage Clos switching architecture, invented by Edson Erwin and formalized by Charles Clos.

Coming to the DC network, the design looks like this.

Here the major problems of the three-tier architecture, hop count and scalability, have been addressed: servers connected to leaf switches are equidistant from one another, and all packets in E-W traffic go through the same hop count. Regarding scalability, we are free to scale horizontally; if we need to add more computers and more bandwidth, adding a leaf (and, if required, a spine) will do the job.
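To make the scaling math concrete, here is a minimal sketch in Python; the port counts and oversubscription ratio are hypothetical examples, not figures from this post:

# Rough leaf-spine capacity math; the numbers below are hypothetical examples.

def fabric_capacity(leaf_ports, spine_ports, oversubscription):
    # Each leaf splits its ports between server-facing downlinks and
    # spine-facing uplinks according to the oversubscription ratio
    # (downlink bandwidth : uplink bandwidth).
    uplinks = leaf_ports // (1 + oversubscription)  # ports toward spines
    downlinks = leaf_ports - uplinks                # ports toward servers
    return {
        'spines': uplinks,         # one uplink per spine
        'leaves': spine_ports,     # every leaf connects to every spine
        'servers': spine_ports * downlinks,
    }

# Example: 48-port leaves, 32-port spines, 3:1 oversubscription
print(fabric_capacity(leaf_ports=48, spine_ports=32, oversubscription=3))
# {'spines': 12, 'leaves': 32, 'servers': 1152}

Any server-to-server path across the fabric is always leaf-spine-leaf, which is exactly the fixed hop count property described above.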

Sources

https://www.nanog.org/sites/default/files/monday.general.hanks.multistage.10.pdf

https://lenovopress.com/lp0573.pdf

ONOS tutorial with mininet: Part 2

We have progressed through learning SDN essentials by installing ONOS and testing out a simple network topology with one switch and two hosts. Now it's time to take it to another level by adding routing in between.

Note that we won't be using any real or simulated router for this setup, as the intention here is to test the network topology with a router in between. The IP forwarding functionality of Linux, which is our base server for Mininet and ONOS, will be used for routing.

The initial setup and configuration involves writing a Python script to create the required topology.

#!/usr/bin/python

"""
linuxrouter.py: Example network with Linux IP router
This example converts a Node into a router using IP forwarding
already built into Linux.
The example topology creates a router and three IP subnets:
    - 192.168.1.0/24 (r0-eth1, IP: 192.168.1.1)
    - 172.16.0.0/12 (r0-eth2, IP: 172.16.0.1)
    - 10.0.0.0/8 (r0-eth3, IP: 10.0.0.1)
Each subnet consists of a single host connected to
a single switch:
    r0-eth1 - s1-eth1 - h1-eth0 (IP: 192.168.1.100)
    r0-eth2 - s2-eth1 - h2-eth0 (IP: 172.16.0.100)
    r0-eth3 - s3-eth1 - h3-eth0 (IP: 10.0.0.100)
The example relies on default routing entries that are
automatically created for each router interface, as well
as 'defaultRoute' parameters for the host interfaces.
Additional routes may be added to the router or hosts by
executing 'ip route' or 'route' commands on the router or hosts.
"""


from mininet.topo import Topo
from mininet.net import Mininet
from mininet.node import Node, OVSKernelSwitch, RemoteController
from mininet.log import setLogLevel, info
from mininet.cli import CLI


class LinuxRouter( Node ):
    "A Node with IP forwarding enabled."

    def config( self, **params ):
        super( LinuxRouter, self).config( **params )
        # Enable forwarding on the router
        self.cmd( 'sysctl net.ipv4.ip_forward=1' )

    def terminate( self ):
        self.cmd( 'sysctl net.ipv4.ip_forward=0' )
        super( LinuxRouter, self ).terminate()


class NetworkTopo( Topo ):
    "A LinuxRouter connecting three IP subnets"

    def build( self, **_opts ):
        defaultIP = '192.168.1.1/24'  # IP address for r0-eth1
        router = self.addNode( 'r0', cls=LinuxRouter, ip=defaultIP )

        s1, s2, s3 = [ self.addSwitch( s ) for s in ( 's1', 's2', 's3' ) ]

        self.addLink( s1, router, intfName2='r0-eth1',
                      params2={ 'ip' : defaultIP } )  # for clarity
        self.addLink( s2, router, intfName2='r0-eth2',
                      params2={ 'ip' : '172.16.0.1/12' } )
        self.addLink( s3, router, intfName2='r0-eth3',
                      params2={ 'ip' : '10.0.0.1/8' } )

        h1 = self.addHost( 'h1', ip='192.168.1.100/24',
                           defaultRoute='via 192.168.1.1' )
        h2 = self.addHost( 'h2', ip='172.16.0.100/12',
                           defaultRoute='via 172.16.0.1' )
        h3 = self.addHost( 'h3', ip='10.0.0.100/8',
                           defaultRoute='via 10.0.0.1' )

        for h, s in [ (h1, s1), (h2, s2), (h3, s3) ]:
            self.addLink( h, s )


def run():
    "Test linux router"
    topo = NetworkTopo()
    net = Mininet( topo=topo, controller=RemoteController, switch=OVSKernelSwitch )
    net.addController( 'c1', controller=RemoteController, ip='10.128.0.4' )
    net.start()
    info( '*** Routing Table on Router:\n' )
    info( net[ 'r0' ].cmd( 'route' ) )
    CLI( net )
    net.stop()

if __name__ == '__main__':
    setLogLevel( 'info' )
    run()

This will create the following topology:



Running the Python code will execute all the steps and create the above-mentioned topology.
We have specified our ONOS controller, installed on the same server, as the controller in the code.

root@master1:/home/sreejithkj52# python top1.py 
*** Creating network
*** Adding controller
*** Adding hosts:
h1 h2 h3 r0 
*** Adding switches:
s1 s2 s3 
*** Adding links:
(h1, s1) (h2, s2) (h3, s3) (s1, r0) (s2, r0) (s3, r0) 
*** Configuring hosts
h1 h2 h3 r0 
*** Starting controller
c0 c1 
*** Starting 3 switches
s1 s2 s3 ...
*** Routing Table on Router:
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
10.0.0.0        0.0.0.0         255.0.0.0       U     0      0        0 r0-eth3
172.16.0.0      0.0.0.0         255.240.0.0     U     0      0        0 r0-eth2
192.168.1.0     0.0.0.0         255.255.255.0   U     0      0        0 r0-eth1
*** Starting CLI:

Topology view in ONOS


From the Devices view we will be able to see the three configured switches and their details.


To view the hosts attached to the switches, click on the Hosts view.

After the configuration, all the hosts will be reachable from each other.

mininet> h1 ping h3
PING 10.0.0.100 (10.0.0.100) 56(84) bytes of data.
64 bytes from 10.0.0.100: icmp_seq=1 ttl=63 time=26.1 ms
64 bytes from 10.0.0.100: icmp_seq=2 ttl=63 time=0.285 ms
mininet> h1 ping h2
PING 172.16.0.100 (172.16.0.100) 56(84) bytes of data.
64 bytes from 172.16.0.100: icmp_seq=1 ttl=63 time=7.86 ms
64 bytes from 172.16.0.100: icmp_seq=2 ttl=63 time=0.240 ms
mininet> h3 ping h2
PING 172.16.0.100 (172.16.0.100) 56(84) bytes of data.
64 bytes from 172.16.0.100: icmp_seq=1 ttl=63 time=6.37 ms
64 bytes from 172.16.0.100: icmp_seq=2 ttl=63 time=0.233 ms

Note the time delay for the first packet: this is the time required to contact the SDN controller and get the flows installed. After this first packet, the flows are populated in all SDN-enabled switches, so there is no need to contact the controller anymore; further communication happens directly.
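We can also inspect the installed flows programmatically. Below is a minimal sketch against the ONOS REST API; port 8181 and the onos/rocks credentials are the ONOS defaults and may differ in your install:

# List the flows ONOS has installed on the switches; defaults assumed.
import requests

ONOS = 'http://10.128.0.4:8181/onos/v1'   # controller IP used in this setup
AUTH = ('onos', 'rocks')                  # default ONOS REST credentials

resp = requests.get(ONOS + '/flows', auth=AUTH)
resp.raise_for_status()

for flow in resp.json()['flows']:
    print(flow['deviceId'], flow['state'], flow['packets'], 'packets')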

mininet> h1 ifconfig
h1-eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.1.100  netmask 255.255.255.0  broadcast 192.168.1.255
        inet6 fe80::ac12:bbff:fe99:8b0d  prefixlen 64  scopeid 0x20<link>
        ether ae:12:bb:99:8b:0d  txqueuelen 1000  (Ethernet)
        RX packets 1189  bytes 96316 (94.0 KiB)
        RX errors 0  dropped 1182  overruns 0  frame 0
        TX packets 21  bytes 1642 (1.6 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

We will continue testing by starting a Python web server on h1 and accessing it from the other hosts.

mininet> h1 python -m SimpleHTTPServer 80 &
mininet> h2 wget -O - h1
--2018-02-23 12:46:49--  http://192.168.1.100/
Connecting to 192.168.1.100:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 604 [text/html]
Saving to: ‘STDOUT’

-                     0%[                    ]       0  --.-KB/s               <!DOCTYPE html PUBLIC "-//W3C//DTD HTML 3.2 Final//EN"><html>
<title>Directory listing for /</title>
<body>
<h2>Directory listing for /</h2>
<hr>
<ul>
<li><a href=".bash_history">.bash_history</a>
<li><a href=".bash_logout">.bash_logout</a>
<li><a href=".bashrc">.bashrc</a>
<li><a href=".profile">.profile</a>
<li><a href=".ssh/">.ssh/</a>
<li><a href=".viminfo">.viminfo</a>
<li><a href="customtopo.py">customtopo.py</a>
<li><a href="gitpulltest/">gitpulltest/</a>
<li><a href="gitsync/">gitsync/</a>
<li><a href="playbooks/">playbooks/</a>
<li><a href="top1.py">top1.py</a>
</ul>
<hr>
</body>
</html>
-                   100%[===================>]     604  --.-KB/s    in 0s      

2018-02-23 12:46:49 (191 MB/s) - written to stdout [604/604]

Accessing from h3

mininet> h3 wget -O - h1
--2018-02-23 12:48:09--  http://192.168.1.100/
Connecting to 192.168.1.100:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 604 [text/html]
Saving to: ‘STDOUT’

-                     0%[                    ]       0  --.-KB/s               <!DOCTYPE html PUBLIC "-//W3C//DTD HTML 3.2 Final//EN"><html>
<title>Directory listing for /</title>
<body>
<h2>Directory listing for /</h2>
<hr>
<ul>
<li><a href=".bash_history">.bash_history</a>
<li><a href=".bash_logout">.bash_logout</a>
<li><a href=".bashrc">.bashrc</a>
<li><a href=".profile">.profile</a>
<li><a href=".ssh/">.ssh/</a>
<li><a href=".viminfo">.viminfo</a>
<li><a href="customtopo.py">customtopo.py</a>
<li><a href="gitpulltest/">gitpulltest/</a>
<li><a href="gitsync/">gitsync/</a>
<li><a href="playbooks/">playbooks/</a>
<li><a href="top1.py">top1.py</a>
</ul>
<hr>
</body>
</html>
-                   100%[===================>]     604  --.-KB/s    in 0s      

2018-02-23 12:48:09 (202 MB/s) - written to stdout [604/604]

This tutorial has demonstrated how easy it is to set up a custom topology in Mininet and connect it to the ONOS controller.

ONOS tutorial with mininet: Part 1

ONOS

ONOS is an SDN controller specifically designed for service providers. The intention is to create a software-defined network operating system that integrates all network applications and functions in a virtualized format. The current ONOS version is 1.12.0.

Mininet

A network emulator which can create virtual switches and hosts and connect them to SDN controllers. Mininet can be installed on your laptop, and complex networking solutions and topologies can be tested out with ease.

Topology

S1 - switch which will be used to connect the two hosts

H1 - host 1

H2 - host 2

The topology we are attempting to create here is a single switch with two hosts connected to it. The SDN controller, ONOS, will control the traffic flows between the devices.
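For reference, the same topology can be written as a small Mininet script instead of the mn one-liner used below; this is a sketch, with the controller IP assumed to be the ONOS server used throughout this tutorial:

#!/usr/bin/python
# Single switch, two hosts, ONOS as the remote controller.
from mininet.net import Mininet
from mininet.node import RemoteController, OVSKernelSwitch
from mininet.cli import CLI
from mininet.log import setLogLevel


def run():
    net = Mininet( controller=RemoteController, switch=OVSKernelSwitch )
    net.addController( 'c0', controller=RemoteController, ip='10.128.0.4' )
    s1 = net.addSwitch( 's1' )
    h1 = net.addHost( 'h1', ip='10.0.0.1' )
    h2 = net.addHost( 'h2', ip='10.0.0.2' )
    net.addLink( h1, s1 )
    net.addLink( h2, s1 )
    net.start()
    CLI( net )
    net.stop()


if __name__ == '__main__':
    setLogLevel( 'info' )
    run()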

ONOS Installation

root@master1:~# wget -c http://downloads.onosproject.org/release/onos-1.12.0.tar.gz
root@master1:~# tar xzf onos-1.12.0.tar.gz
root@master1:~# mv onos-1.12.0 onos
root@master1:/opt# /opt/onos/bin/onos-service start
karaf: JAVA_HOME not set; results may vary
Welcome to Open Network Operating System (ONOS)!
     ____  _  ______  ____     
    / __ \/ |/ / __ \/ __/   
   / /_/ /    / /_/ /\ \     
   \____/_/|_/\____/___/     
                               
Documentation: wiki.onosproject.org      
Tutorials:     tutorials.onosproject.org 
Mailing lists: lists.onosproject.org     

Come help out! Find out how at: contribute.onosproject.org 

Hit '<tab>' for a list of available commands
and '[cmd] --help' for help on a specific command.
Hit '<ctrl-d>' or type 'system:shutdown' or 'logout' to shutdown ONOS.

onos> apps -s
onos> app download onos-appfwd
onos> feature:list | grep onos-app
onos> feature:install onos-app-fwd
onos> list | grep onos-*
onos> app activate org.onosproject.openflow
onos> apps -a -s
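The activated applications can also be verified over the ONOS REST API; a minimal sketch, again assuming the default port 8181 and onos/rocks credentials:

# Print the active ONOS applications; defaults assumed.
import requests

resp = requests.get('http://10.128.0.4:8181/onos/v1/applications',
                    auth=('onos', 'rocks'))
resp.raise_for_status()

for app in resp.json()['applications']:
    if app['state'] == 'ACTIVE':
        print(app['name'])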



We can also check the enabled applications in the ONOS GUI.

Mininet configuration

root@master1:/home/sreejithkj52# sudo mn --controller remote,ip=10.128.0.4
*** Creating network
*** Adding controller
*** Adding hosts:
h1 h2 
*** Adding switches:
s1 
*** Adding links:
(h1, s1) (h2, s1) 
*** Configuring hosts
h1 h2 
*** Starting controller
c0 
*** Starting 1 switches
s1 ...
*** Starting CLI:
mininet> h1 ping h2
PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=84.1 ms
64 bytes from 10.0.0.2: icmp_seq=2 ttl=64 time=0.284 ms
64 bytes from 10.0.0.2: icmp_seq=3 ttl=64 time=0.058 ms

Flows for the device

VMware NSX-T 2.1 adds support for Kubernetes

As noted by many industry veterans, there is a real need for a mature network virtualization product in the container space. There are many open-source projects which have seen tremendous success in recent years, like OpenDaylight, OpenContrail and ONOS. With all respect to the contributors to these solutions, there exists a big gap in product maturity. We still see hesitance from major enterprise players and telecom providers to go all-in on these solutions. Most of them are worried about product stability, support and other concerns such as integration with their existing environments.

At the same time, in a few years VMware NSX has become a highly successful product; so successful that it has given the long-time networking giant Cisco a run for its money through its innovative network virtualization solutions.

Given the adoption of DevOps in IT, automation is a critical piece which every infrastructure manager is trying to take head-on. For a true infrastructure automation setup, network virtualization is a must. Considering all these factors, VMware's decision to support and integrate with Kubernetes is a great move which will increase the NSX adoption rate among open-source projects.

I hope we will see continued support for this initiative from VMware.

I will be starting a new blog series for Kubernetes Integration with VMware soon. 


VMware NSX Controller is now Photon OS


A welcome improvement from VMware, as they have changed the controller OS to one based on Photon OS.

Photon OS is a lightweight Linux operating system. The need for such a lightweight system has been quite evident: as container technologies began to mature, more and more developers are now developing and building their applications in container formats like Docker, rkt etc.

When we say Docker, the main argument that comes along with it is that it is really the next step of evolution from virtual machines. The container craze is moving so fast that VMware finds itself in a defensive position at times.

Given the mature SDDC framework which VMware has, I believe VMware is in a great position to take advantage of these recent developments in the infrastructure space.

NSX-T is where VMware sees its future, and given the pace at which container technology and private cloud are growing, there is a well-defined space for a mature networking product. But as with other open-source technologies, developers would definitely love to see VMware products integrate seamlessly with other tools.

VMware slowly integrating Photon OS into some of its core offerings is a well-considered strategic move.

Creating private network in docker

Create a private network using the docker network command:

docker network create --subnet=172.18.0.0/16 kubenet

Assign the IP to the container using --ip:

docker run --net kubenet --ip 172.18.0.4 -it -d ubuntu
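The same setup can be scripted with the Docker SDK for Python (docker-py); this is a sketch assuming the SDK is installed (pip install docker) and the Docker daemon is reachable:

# Create the kubenet network and attach a container with a fixed IP.
import docker

client = docker.from_env()

ipam = docker.types.IPAMConfig(
    pool_configs=[docker.types.IPAMPool(subnet='172.18.0.0/16')])
net = client.networks.create('kubenet', driver='bridge', ipam=ipam)

# Create the container first, then connect it with the static address.
container = client.containers.create('ubuntu', command='sleep infinity',
                                     tty=True)
net.connect(container, ipv4_address='172.18.0.4')
container.start()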

Building modern web applications-Part 2

The nginx reverse proxy has been configured and the service is running after the successful configuration of nginx. You might get a "Bad Gateway" error if you try to connect to nginx using a web browser. This means packets are hitting the nginx reverse proxy, but since our backend server is not initialized and started yet, nginx returns a gateway error.

Checkpoint 1: Nginx web server configured

Configuring the app server in Google Cloud

Here we will configure a Node.js application as the app server for our testing. The application is simple: it listens on port 7555 for any incoming connections and shows a dialog box to create a new user. When we click on "Create User", a new user is created in the MySQL server, which we will be using as the backend service.

Node.js app (this part has been taken from https://hackernoon.com/setting-up-node-js-with-a-database-part-1-3f2461bdd77f; thanks to Robert Tod).

Creating and initializing the Node.js app:

Install Node.js
Install MySQL
Create an HTTP API for writing to the database
Create some HTML and JS to POST to the API
Use Knex migrations to create a user database schema

root@web01:~/tutorial_node_database# ls
index.js knexfile.js migrations node_modules package.json public store.js
root@web01:~/tutorial_node_database# cat index.js
const express = require('express')
const bodyParser = require('body-parser')
const store = require('./store')

const app = express()
app.use(express.static('public'))
app.use(bodyParser.json())

app.post('/createUser', (req, res) => {
  store
    .createUser({
      username: req.body.username,
      password: req.body.password
    })
    .then(() => res.sendStatus(200))
})

app.listen(7555, () => {
  console.log('Server running on http://localhost:7555')
})

root@web01:~/tutorial_node_database# cat knexfile.js
module.exports = {
  client: 'mysql',
  connection: {
    user: 'root',
    password: 'sree',
    database: 'tutorial_node_database'
  }
}
root@web01:~/tutorial_node_database# cat package.json
{
  "name": "tutorial_node_database",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "author": "",
  "license": "ISC",
  "dependencies": {
    "body-parser": "^1.18.1",
    "express": "^4.15.4",
    "knex": "^0.13.0",
    "mysql": "^2.14.1"
  }
}
root@web01:~/tutorial_node_database#
root@web01:~/tutorial_node_database# cat store.js
const knex = require('knex')(require('./knexfile'))

module.exports = {
  createUser ({ username, password }) {
    console.log(`Add user ${username} with password ${password}`)
    // Table name matches the 'user' table created by the migration below.
    return knex('user').insert({
      username,
      password
    })
  }
}
root@web01:~/tutorial_node_database#

root@web01:~/tutorial_node_database# cd public/
root@web01:~/tutorial_node_database/public# ls
app.js index.html
root@web01:~/tutorial_node_database/public# cat app.js
const CreateUser = document.querySelector('.CreateUser')

CreateUser.addEventListener('submit', (e) => {
  e.preventDefault()
  const username = CreateUser.querySelector('.username').value
  const password = CreateUser.querySelector('.password').value
  post('/createUser', { username, password })
})

function post (path, data) {
  return window.fetch(path, {
    method: 'POST',
    headers: {
      'Accept': 'application/json',
      'Content-Type': 'application/json'
    },
    body: JSON.stringify(data)
  })
}

root@web01:~/tutorial_node_database/public# cat index.html
<!DOCTYPE html>
<html>
<head>
  <title>Node database tutorial</title>
</head>
<body>
  <form class="CreateUser">
    <h1>Create a new user</h1>
    <input type="text" class="username" placeholder="username">
    <input type="password" class="password" placeholder="password">
    <input type="submit" value="Create user">
  </form>
  <script src="/app.js"></script>
</body>
</html>
root@web01:~/tutorial_node_database/public#

Configure the knexfile.js appropriately to connect to the database.
Use knex to create a migration for the user table:

root@web01:~/tutorial_node_database# knex migrate:make new_user_for_node
Created Migration: /root/tutorial_node_database/migrations/20170924060137_new_user_for_node.js
root@web01:~/tutorial_node_database#

Copy the below contents into the generated migration file:

exports.up = function (knex) {
  return knex.schema.createTable('user', function (t) {
    t.increments('id').primary()
    t.string('username').notNullable()
    t.string('password').notNullable()
    t.timestamps(false, true)
  })
}

exports.down = function (knex) {
  return knex.schema.dropTableIfExists('user')
}

Run the migration with knex migrate:latest to create the user table, then move to the working directory and start node:

root@e253b80241fc:/tutorial-node-database# node .
Server running on http://localhost:7555
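With the server up, the API can be sanity-checked directly before touching the HTML form; a minimal sketch using Python's requests library, with the host and port taken from index.js above:

# POST a test user straight to the /createUser endpoint.
import requests

resp = requests.post('http://localhost:7555/createUser',
                     json={'username': 'sree6', 'password': 'sree6'})
print(resp.status_code)  # expect 200 once MySQL is reachable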

Checkpoint 2

Installing and configuring the MySQL server.
Create an instance in Google Cloud, then install MySQL and create the tutorial_node_database database referenced in knexfile.js.

root@web01:/# sudo apt-get install mysql-server
root@web01:/# service mysql restart

We have configured three instances, and the application will be accessible through the nginx web server.

Testing the application

We can see from the live node console that the user got added:

root@e253b80241fc:/tutorial-node-database# node .
Server running on http://localhost:7555
Add user sree6 with password sree6

Checking the DB server, we can see the user got added.

mysql>
mysql> show databases;
+------------------------+
| Database               |
+------------------------+
| information_schema     |
| mysql                  |
| performance_schema     |
| sys                    |
| tutorial_node_database |
+------------------------+
5 rows in set (0.04 sec)
mysql> use tutorial_node_database
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A
Database changed
mysql>
mysql> select * from user;
+----+----------+----------+---------------------+---------------------+
| id | username | password | created_at          | updated_at          |
+----+----------+----------+---------------------+---------------------+
|  1 | sree6    | sree6    | 2017-09-24 11:51:37 | 2017-09-24 11:51:37 |
+----+----------+----------+---------------------+---------------------+
1 row in set (0.00 sec)
mysql> exit

Uploading code to GitHub

Initializing Git and adding a remote repository

git init
git add .
git commit -m "First commit"

git remote add origin <remote repository URL>
git remote -v
git push origin master

If the push does not work:

git fetch origin master
git push origin master --force

Building modern web applications-Part 1

Stage 1: Building a three-tier web application on Google Cloud


The intent of this tutorial is to get comfortable with the app dependency flows of modern applications in the cloud, and with how to migrate flawlessly to the latest container-based and serverless technologies.

The application has always been the focal point in the enterprise datacenter. Even with all the latest trending technologies gaining prominence on a daily basis, the end result expected from all tools and platforms is the same: building better applications and augmenting business strategies.

The three-tier app model has been around for long, and it has served applications really well.

In this tutorial, as part of Stage 1, the infrastructure setup has nginx configured as a reverse proxy, a Node.js application which will serve as the app server, and a MySQL database server.

Data flow happens like this.

[Nginx(Ubuntu 16.04) —>node.js (Ubuntu 16.04)—>mysql(Ubuntu 16.04)].

The entire setup will run on Google Cloud Compute Engine. Create the web server Ubuntu VMs using the Google Cloud management interface:

https://cloud.google.com/compute/docs/quickstart-linux

or using a simple gcloud command line.

gcloud compute instances create example-instance-1 example-instance-2 example-instance-3 --zone us-central1-a

Configuring nginx as a reverse proxy

root@web01:~#apt-get update
root@web01:~#apt-get install nginx

This will get nginx installed

root@webserver:~# service nginx status
● nginx.service -- A high-performance web server and a reverse proxy server
  Loaded: loaded (/lib/systemd/system/nginx.service; enabled; vendor preset: enabled)
  Active: active (running) since Sat 2017-09-23 11:30:28 UTC; 2min 23s ago
 Process: 1522 ExecStart=/usr/sbin/nginx -g daemon on; master_process on; (code=exited, status=0/SUCCESS)
 Process: 1398 ExecStartPre=/usr/sbin/nginx -t -q -g daemon on; master_process on; (code=exited, status=0/SUCCESS)
Main PID: 1542 (nginx)
   Tasks: 2
  Memory: 10.3M
     CPU: 27ms
  CGroup: /system.slice/nginx.service
          ├─1542 nginx: master process /usr/sbin/nginx -g daemon on; master_process on
          └─1545 nginx: worker process                           
Sep 23 11:30:27 webserver systemd[1]: Starting A high-performance web server and a reverse proxy server…
Sep 23 11:30:28 webserver systemd[1]: Started A high-performance web server and a reverse proxy server.

Create a reverseproxy.conf file

root@webserver:/etc/nginx/sites-available# touch reverseproxy.conf
root@webserver:/etc/nginx/sites-available# cat reverseproxy.conf
upstream backend {
    server 10.128.0.5:7555;
    server 192.168.1.21;
    server 192.0.0.1 backup;
}

server {
    listen 80;
    location / {
        proxy_pass http://backend;
    }
}
root@webserver:/etc/nginx/sites-available#

Configuration for a reverse proxy in nginx is quite simple: configure the backend servers, with the ports on which nginx has to connect to them, in the configuration file. Since files under sites-enabled are included inside the http context of nginx.conf, the file itself must not contain an http block.

Create a symlink in /etc/nginx/sites-enabled. This will make nginx read the configuration file and forward the traffic accordingly. You can validate the configuration with nginx -t before restarting.

root@webserver:/etc/nginx/sites-available# ln -s /etc/nginx/sites-available/reverseproxy.conf /etc/nginx/sites-enabled/reverseproxy.conf
root@webserver:/etc/nginx/sites-available#service nginx restart
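Until the backend app server is started, the proxy will answer with HTTP 502, the "Bad Gateway" error discussed in Part 2; a quick Python probe (assuming it is run on the nginx host) makes this visible:

# Probe the reverse proxy; 502 means nginx is up but no backend answered.
import requests

resp = requests.get('http://localhost/')
print(resp.status_code)  # 502 until a backend from the upstream block is up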


OpenStack DB Error: (pymysql.err.InternalError) (1071, 'Specified key was too long; max key length is 767 bytes')

OpenStack Liberty

DB Error: (pymysql.err.InternalError) (1071, 'Specified key was too long; max key length is 767 bytes')


1) Replace all instances of 'utf8mb4' with 'utf8' in /etc/mysql/mariadb.conf.d/*
2) Add the below to /etc/mysql/conf.d/mysqld_openstack.cnf:

cat /etc/mysql/conf.d/mysqld_openstack.cnf:
[client]
default-character-set = utf8

[mysqld]
bind-address = <<IP>>
default-storage-engine = innodb
innodb_file_per_table
collation-server = utf8_general_ci
init-connect = 'SET NAMES utf8'
character-set-server = utf8

[mysql]
default-character-set = utf8

3) Drop the keystone database.
4) Restart the mysql service.
5) Run 'keystone-manage db_sync'.

Modifying the files in /etc/mysql/mariadb.conf.d/ was also necessary to fix the issue.