
2017 International Conference on Electrical, Electronics, Communication, Computer and Optimization Techniques (ICEECCOT)

Experimenting with Scalability of Floodlight Controller in Software Defined Networks

Saleh Asadollahi
Computer Science, Saurashtra University, Rajkot, India
[email protected]

Bhargavi Goswami
Computer Science, Christ University, Bangalore, India
[email protected]

Abstract—Software Defined Networking is a booming area of research in the networking domain. With the growing number of devices connecting to the global village of the internet, it becomes inevitable to test the scalability of any new technology under dynamic circumstances before adopting it. While much research aims to overcome the limitations of the traditional network, the research community is called upon to test the applicability and fault tolerance of the proposed solutions in the form of SDN controllers. Among the multiple existing controllers providing SDN functionality to the network, one of the stellar controllers is the Floodlight Controller. This paper contributes a performance evaluation of the scalability of the Floodlight Controller by implementing multiple scenarios on a simulation testbed of Mininet, the Floodlight Controller and iPerf. The Floodlight Controller is tested in the simulation environment by observing its throughput and latency, and its performance is checked under dynamic networking conditions over a mesh topology by exponentially increasing the number of nodes. Keywords—Software Defined Networks (SDN), Mininet, OpenFlow, Floodlight, iPerf, gnuplot

I. INTRODUCTION

Computer networks performed suitably well on traditional network infrastructure and equipment, but circumstances changed as more and more devices connected to the internet. The number of users and devices embracing the internet grew gradually and progressively with time. Because of this, pointed problems arose, such as: a) lack of a global view of the entire network, b) configuration of each complex, individual piece of equipment (routers and switches) with pre-defined commands, c) implementation of high-level policy over the network gateways, d) configuration of equipment with low-level commands that was time and energy consuming, e) difficulty in recovering from network breakdowns, and, most considerably, f) the crisis of network programmability for upgrades [1]. Software Defined Networking emerges as a solution to those problems with its concept of separating the control plane from the data plane. By taking the brain of each device away in the form of a controller, complex equipment appears as a combination of ports and flow tables connected to the controller. An SDN controller such as POX [2], OpenDaylight [3], Beacon [4], Ryu [5], NOX [6], etc., dynamically configures and manages each switch according to the requirements. The authors have provided details on these controllers, with comparisons and differences between them, in [7]. Controllers may implement desired changes in the network by installing suitable forwarding rules through a southbound interface such as OpenFlow [8], OpFlex [9], NETCONF [10], ForCES [11], POF [12], etc. There are other options for southbound interface implementation, but OpenFlow is the first choice of researchers and the most famous, with a current version of 1.5. The northbound interface satisfies the requirement of implementing business policy over the application layer of controllers; widely used northbound interfaces serve the deployment of service policies to define traffic behavior. While SDN has been suitable for home [13], data center and enterprise networks [14] such as Google, Facebook, etc., separating the control plane and bringing it to a remote system raises questions about its scaling capabilities in different scenarios. To address this issue and to throw light upon the scalability of the controller, the authors present here experiments evaluating performance in diversified scenarios addressing scalability. The rest of the paper is organized as follows. Section II provides a helicopter view of the Floodlight controller. Section III details the simulation testbed set up to perform the scalability experiments under diversified networking conditions. Section IV provides the experimental results and evaluation of the performance statistics, followed by the conclusion and references.

978-1-5386-2361-9/17/$31.00 ©2017 IEEE

II. FLOODLIGHT SDN CONTROLLER

Floodlight [15] is an open source, Apache-licensed, Java-based OpenFlow controller and one of the momentous contributions from Big Switch Networks. At run time, the Floodlight Controller activates both the southbound and northbound interfaces among all the available configured module applications, making them available for experimentation. Applications interact with the controller to retrieve information and invoke services using HTTP REST commands. The Floodlight Controller architecture is shown in Figure 1. It contains Core, Internal and Utility services that include various modules, some of which are explained here. a) Topology Management is in charge of computing shortest paths using Dijkstra's algorithm, b) Link Discovery is responsible for maintaining link state information using LLDP packets, c) the Forwarding module provides flow commute through end-to-end routing, d) Device Manager keeps account of the nodes on the network and the Storage Source, e) Virtual Network generates layer-2 realms defined by MAC addresses.

Fig. 1. Floodlight SDN controller architecture

Further, f) Forwarding: the default packet-forwarding application, which supports topologies for its utilities. g) Static Flow Entry: an application, enabled by default, for installing specific flow entries that include match and action columns for a specific switch; through REST APIs we can add, remove and query flow entries. h) Firewall: an application that applies Access Control List rules to restrict specific traffic based on a specified match. i) Port Down Reconciliation: in the event of a port going down, it reconciles the flows across the network. j) Learning Switch: a common L2 learning switch. k) Hub: it always floods any incoming packet to all other active ports; not enabled by default. l) Virtual Network Filter: a simple MAC-based network isolation application compatible with OpenStack Essex. Thus, the Floodlight controller, part of the Floodlight project by Big Switch Networks, is a stellar controller and the choice of beginner to expert researchers in the SDN domain.

III. SIMULATION ENVIRONMENT

As an effort to implement and test the controller's performance, we created a custom topology with three different scenarios differing in the number of nodes. Mininet [16] is used as the simulator and Floodlight as the controller. Mininet comes with a built-in NOX controller that supports all the basic controller functionalities and allows multiple-controller implementations as well; in this paper, we do not use Mininet's default controller. The controller used in this experiment is Floodlight, an enterprise-edition, Java-based OpenFlow controller with an Apache license, supported by a large community of developers from Big Switch Networks; its base is the Beacon controller. We use the master version of Floodlight 1.2. Mininet is installed in a virtual machine which connects to a remotely located Floodlight controller installed in a separate virtual machine. Python scripting is done to override the default behavior of Mininet and customize the topological specification, instead of accepting the automatic assignment of hosts to switches. The Python script of the customized topology includes the specification of the host-to-switch, switch-to-switch and switch-to-Floodlight-controller connections. The switch used in this experiment is the OpenFlow kernel switch, also known as Open vSwitch (OVS) [17], with OpenFlow protocol mode enabled. OVS is known for its open source, distributed, virtual multilayer implementation over virtual machines, providing a soft-switch environment with multiple active standardized protocols across the layers. It creates transparent distribution of client-server architecture functionalities in a cross-layer environment, similar to real networks with Cisco Nexus 1000V switches. The open source OVS switch is accepted by multiple virtual machines as the default switch and can be ported to multiple platforms as and when required. To evaluate statistics related to the performance of the controller, a mesh topology is implemented over 6 switches with three different scenarios differing only in the number of hosts connected to each peripheral switch: Scenario A, 10 hosts connected to each of 5 switches (total of 50 hosts + 6 switches + 1 controller); Scenario B, 30 hosts connected to each of 5 switches (total of 150 hosts + 6 switches + 1 controller); Scenario C, 60 hosts connected to each of 5 switches (total of 300 hosts + 6 switches + 1 controller).
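The paper does not reproduce its Mininet script, but the custom-topology logic described above can be sketched as follows. This is a minimal, hedged sketch: the function name and the assumption of a full mesh among the six switches are illustrative, not taken from the authors' code.

```python
# Hypothetical sketch of the custom-topology logic; build_mesh_links
# is an illustrative name, not from the authors' script.
def build_mesh_links(hosts_per_switch):
    """Return (switch_links, host_links) for the 6-switch mesh topology."""
    switches = ["s%d" % i for i in range(1, 7)]  # s1 is the central switch
    # Full mesh among the six switches (assumed from Figure 2).
    switch_links = [(a, b) for i, a in enumerate(switches)
                    for b in switches[i + 1:]]
    # hosts_per_switch hosts attach to each of the five peripheral
    # switches s2..s6; the central switch s1 carries no hosts.
    host_links = [("h%d" % (i * hosts_per_switch + j + 1), sw)
                  for i, sw in enumerate(switches[1:])
                  for j in range(hosts_per_switch)]
    return switch_links, host_links

# Scenario A: 10 hosts per peripheral switch -> 50 hosts in total.
switch_links, host_links = build_mesh_links(10)
```

In an actual Mininet script, each pair would be registered inside a custom `Topo` subclass via `addSwitch`, `addHost` and `addLink`, and the network started with a `RemoteController` pointing at the Floodlight virtual machine.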

Fig. 2. Interconnectivity of six switches in mesh network

The aim of connecting the switches in a mesh topology was to develop a scenario that imposes minimum delay in the transmission of data packets, especially for UDP transmission. Figure 2 shows the interconnectivity of the six switches in the mesh network for the three scenarios, where the difference lies in the number of hosts connected to each switch.


The central switch is further connected only to the five other switches, and to no host directly. The purpose was to generate a bottleneck scenario for communication, which is the most practical issue in today's ISP networks. Performance is evaluated when TCP and UDP traffic is generated together and transmitted through the central switch. In this experiment, flow generation is done for both UDP and TCP by generating dynamic flows using iPerf. iPerf is used to actively measure parameters such as duration, queue behavior, bandwidth, protocol, packet delivery ratio, drop rates and much more; out of all the stated parameters, we have limited the analysis in this paper to throughput and latency. Here, the authors have evaluated TCP and UDP flow measurements for bandwidth and latency for all three scenarios, which are analyzed in the next section on performance statistics. As mentioned before, the experiment is designed in such a manner that a bottleneck situation is generated, but only up to the optimum level at which no loss is encountered. The flooding of packets in the network was limited such that no loss was encountered and the network was utilized optimally throughout the 100-second simulation run.
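The iPerf flow generation described above can be sketched as follows. This is a hedged sketch: `iperf_commands` is a hypothetical helper, and while `-c`, `-u`, `-l`, `-t` and `-i` are standard iPerf client options, the authors' exact invocations are not given in the paper.

```python
# Hypothetical helper building iPerf client commands for one TCP and
# one UDP flow toward the measurement server.
def iperf_commands(server_ip, duration=100):
    """Return (tcp_cmd, udp_cmd) argument lists for iPerf clients."""
    # TCP flow, reporting at 1-second intervals for the 100 s run.
    tcp = ["iperf", "-c", server_ip, "-t", str(duration), "-i", "1"]
    # UDP flow with 1470-byte datagrams, as used in this experiment.
    udp = ["iperf", "-c", server_ip, "-u", "-l", "1470",
           "-t", str(duration), "-i", "1"]
    return tcp, udp

tcp_cmd, udp_cmd = iperf_commands("10.0.0.1")
```

Inside Mininet, such commands would be launched on the first and last hosts (the server with `iperf -s` on one end, the clients on the other), so that the flows traverse the bottleneck central switch.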

For the first few seconds the network is in a warm-up state, and thus no tracing is considered for that duration. For the experiment with a fixed number of switches communicating with a different number of hosts per switch in each scenario, we simulated and logged the events occurring between the first and last hosts of the network, to observe the odds faced by the hosts located at the largest distance from each other. To utilize the bandwidth and queuing resources of the network to the maximum, in line with a practical approach, we transmitted UDP datagrams of 1470 bytes and used a TCP window size of 85.3 Kbytes when the bandwidth was 5.99 Gbits/sec. To obtain network performance statistics, iPerf was the right tool to provide the desired measurements; to present them graphically, gnuplot is used. Filtering is done using the 'grep' and 'awk' commands for pattern matching and isolation of the required parameters.

IV. PERFORMANCE ANALYSIS

The experimental testbed is prepared as stated in the previous section, and tests are executed on the simulation to investigate throughput and latency over bottleneck network traffic. This section exhibits the results obtained from the simulations, configured as stated in the previous section. The performance metrics used in this experiment are throughput and latency, as mentioned before, with a specific reason: an accurate measure of throughput can be obtained using TCP flows, while latency is best observed using UDP flows. As stated in the previous section, we executed simulations in an environment close to real-life scenarios of connectivity between ISP and internet user, client and server, wired and wireless networks, etc. The resultant throughput graph is shown in Figure 3, which shows the TCP flow for the entire network in all three scenarios. The first graph shows the central switch communicating with 50 hosts by means of 5 different intermediate switches connecting each host to the others. The statistics are logged and plotted using gnuplot for each scenario under low QoS and high throughput rate. It can be observed from the top graph of Figure 3 that the average throughput stays in the range of 575 to 775 Mbps; for the majority of the simulation time, the throughput remains around 731 Mbps.
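The grep/awk filtering step used before plotting with gnuplot can equivalently be sketched in Python. This is a minimal sketch: the sample log lines follow iPerf's usual interval-report format, but the authors' actual logs are not reproduced in the paper.

```python
import re

# Extract (interval_end_time, throughput_in_Mbits) pairs from iPerf
# interval report lines such as:
#   [  3]  1.0- 2.0 sec   87.1 MBytes   731 Mbits/sec
LINE = re.compile(
    r"\]\s+[\d.]+-\s*([\d.]+)\s+sec\s+[\d.]+\s+\w?Bytes\s+([\d.]+)\s+Mbits/sec")

def parse_iperf(lines):
    """Return a list of (time, Mbits/sec) points ready for gnuplot."""
    points = []
    for line in lines:
        m = LINE.search(line)
        if m:
            points.append((float(m.group(1)), float(m.group(2))))
    return points

sample = ["[  3]  0.0- 1.0 sec  89.5 MBytes   751 Mbits/sec",
          "[  3]  1.0- 2.0 sec  87.1 MBytes   731 Mbits/sec"]
# parse_iperf(sample) -> [(1.0, 751.0), (2.0, 731.0)]
```

Writing each pair as a whitespace-separated line to a data file reproduces what the grep/awk pipeline feeds to gnuplot.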

Fig. 3. Throughput for all the three scenarios.

These statistics are obtained when the network load is stable and the number of hosts is fixed at fifty. In fact, this can be considered a validation of the Floodlight controller used in the designed topology in the presence of a small number of hosts connecting to the network, such as a small-scale factory, business unit or small home network. It is to be noted that two-way communication is happening between the clients connected to the peripheral switches and the server connected to the controller. The middle graph of Figure 3 shows the same parameter configuration but with 150 hosts connecting to the network of 6 switches. As depicted in the middle graph showing scenario 2, Figure 3 shows the throughput obtained after simulating for about 100 seconds. It was observed from the plotted graph that the throughput stays between 450 and 600 Mbps; for the majority of the simulation duration the throughput remains stable at 575 Mbps, indicating that the throughput is stable and acceptable without doubt, again in the presence of a stable network load and a fixed number of nodes. The throughput provided in the bottom-most graph of Figure 3 is for scenario 3 and is observed to be in the range of 475 to 725 Mbps for the same simulation duration. Stability is observed in this graph similar to the previous two scenarios' throughput graphs, but with more variation; notice once more that for the majority of the simulation duration the throughput stays stable at 690 Mbps. It is observed that increasing the number of nodes connected to the switches imposes load on the network. The top graph of Figure 3 shows high throughput variations because the bandwidth is optimally utilized by fifty nodes communicating in the presence of both TCP and UDP flows. But throughput gets reduced when a higher number of nodes connect and communicate through the same number of switches, because the majority of the packets spend a long time in the pipeline to reach their destination due to the heavy network load. In Figure 4, the authors have plotted the latency observed for UDP flows communicating between the first and last nodes connected to the network through six switches in the three different scenarios. It was observed in the first graph of Figure 4 that the mean observed latency was between 0.01 and 0.02, except for a few instances of high latency for a few packets in the range of three to five. This indicates that the drop rate is not even 1 percent of all the packets traversing the network.
This observation was for the first scenario, where the number of communicating nodes is limited to fifty. But when the number of nodes increases with the same resources, the second and third graphs of Figure 4 show the behavioral variations. It was observed in the second graph of Figure 4 that the latency drastically reduces to less than 0.01, and the number of exceptional instances is only two. This seems to be an ideal situation in the presence of interference. But in the third and bottom-most graph of Figure 4, it was observed that the latency again increases and reaches the range of 0.01 to 0.02, the same as the first scenario. This is due to the optimum utilization of the bandwidth when connectionless communication is happening on the same network: being a mesh topology, a large number of alternate paths exist and no path gets congested during the entire communication, with an almost zero drop rate, which results in the lowest latency. In the last graph of Figure 4, it was also observed that when a large number of packets are generated by 300 communicating nodes, the latency for the first 3 seconds was very high, which is the time required for the initial burst of network traffic to settle.

Fig. 4. Latency for all the three scenarios

V. CONCLUSION AND FUTURE SCOPE

With this paper, the authors have made an attempt to address the scalability features of the Floodlight controller by implementing diverse scenarios in a simulated experimental environment. The authors have provided a clear idea of how to create an experimental testbed, with analysis of the obtained statistical results keeping performance as the central focus. We conclude this paper with a positive sign for researchers who are looking to implement their ideas on the Floodlight Controller in the domain of Software Defined Networks. The controller not only supports the simulation experimental testbed but also allows clear analysis of the statistics obtained after the experiments are simulated. The tools suggested, simulated and shown through figures and graphs will help the research community to further conduct such experiments in the future by implementing their desired parameters through these experiments. This paper also addresses programmers, developers and newcomers in the area of SDN who are looking forward to the practical aspects of SDN by following the implementation details provided in the paper. Further, the research team will come up with a few more papers on the implementation of other SDN controllers. The team also plans to compare the SDN controllers once all the stellar controllers have been implemented and experimented with.

REFERENCES

[1] Asadollahi, S., Goswami, B. (2017). Revolution in Existing Network under the Influence of Software Defined Network. Proceedings of the 11th INDIACom, Delhi, March 1-3, 2017. IEEE Conference ID: 40353.
[2] McCauley, M. (2012). POX, from http://www.noxrepo.org/
[3] OpenDaylight, Linux Foundation Collaborative Project, 2013, from http://www.opendaylight.org
[4] Erickson, D. (2013). The Beacon OpenFlow controller. Proceedings of the ACM SIGCOMM Workshop on Hot Topics in Software Defined Networking, pp. 13-18, 2013.
[5] Nippon Telegraph and Telephone Corporation, RYU network operating system, 2012, from http://osrg.github.com/ryu
[6] Gude, N., et al. (2008). NOX: Towards an operating system for networks. ACM SIGCOMM Computer Communication Review, vol. 38, no. 3, pp. 105-110.
[7] Asadollahi, S., G...
[8]
[9]
[10]
[11]
[12]
[13]
[14]
[15]
[16]
[17]

