- Research
- Open Access
Adaptive placement & chaining of virtual network functions with NFV-PEAR
- Gustavo Miotto^{1},
- Marcelo Caggiani Luizelli^{2},
- Weverton Luis da Costa Cordeiro^{1} and
- Luciano Paschoal Gaspary^{1}
https://doi.org/10.1186/s13174-019-0102-2
© The Author(s) 2019
- Received: 2 July 2018
- Accepted: 4 January 2019
- Published: 4 February 2019
Abstract
The design of flexible and efficient mechanisms for the proper placement and chaining of virtual network functions (VNFs) is key for the success of Network Function Virtualization (NFV). Most state-of-the-art solutions, however, consider fixed (and immutable) flow processing and bandwidth requirements when placing VNFs in the Network Points of Presence (N-PoPs). This limitation becomes critical in NFV-enabled networks having highly dynamic flow behavior, in which flow processing requirements and available N-PoP resources change constantly. To bridge this gap, we present NFV-PEAR, a framework for adaptive VNF placement and chaining. In NFV-PEAR, network operators may periodically (re)arrange the previously determined placement and chaining of VNFs, with the goal of maintaining acceptable end-to-end flow performance despite fluctuations in flow processing costs and requirements. In parallel, NFV-PEAR seeks to minimize network changes (e.g., reallocation of VNFs or network flows). The results obtained from an analytical and experimental evaluation provide evidence that NFV-PEAR has the potential to deliver more stable operation of network services, while significantly reducing the number of network changes required to ensure end-to-end flow performance.
Keywords
- NFV
- Placement
- Chaining
- Network functions
1 Introduction
Network Function Virtualization (NFV) is a relatively novel paradigm that aims at migrating network functions, such as routing and caching, from proprietary appliances (middleboxes) to software-centric solutions running on virtual machines. Such migration provides several benefits, e.g., reduced total cost of ownership and maintenance, cheaper network function updates (instead of expensive middlebox hardware upgrades), and more flexible placement and chaining of network functions in the infrastructure [1].
NFV has experienced advances on various fronts, from the design and deployment of virtual network functions (VNFs) [2, 3] to their operation and management [4, 5]. Despite the progress made, many research opportunities remain. One is related to the VNF placement and chaining problem. In summary, it involves determining where to place VNFs given a set of Network Points of Presence (N-PoPs), and how to steer network flows between them, as specified in Service Function Chains (SFCs). To materialize flow steering, Software Defined Networking (SDN) [6] is a convenient ally, as it enables VNFs to be placed and chained in a highly flexible way. The complexity of this problem comes, however, from the requirements and constraints that need to be satisfied upon placement and chaining, such as computing power (at the N-PoPs where functions will be placed), bandwidth (between N-PoPs), and end-to-end delay. This problem, which was proven to be NP-hard [7], has been widely studied, with several optimization objectives proposed (e.g., minimizing operational costs or network resource utilization [3, 8, 9]).
An important limitation of known approaches to VNF placement and chaining is that, when computing how to best deploy a set of SFCs, they consider VNF operating costs and the resources available at N-PoPs to be fixed and immutable. In real-world environments, however, both costs and available resources change dynamically [10], depending on the network load. As a result, flow processing requirements, as specified in the SFCs, can be violated during peak hours. A traditional approach to overcome this issue is to analyze the behavior of an individual VNF (a firewall, for example) and deploy more VNFs in response to increasing loads. Such an individualized search for local solutions, as in the firewall example, may not lead to a global optimum regarding the balance between supply and demand in function placement and flow chaining. More importantly, it can lead to resource waste, because it fails to exploit idle flow processing capacity in VNFs/N-PoPs.
To fill this gap, in a previous work we proposed NFV-PEAR, a framework for adaptive VNF orchestration [11]. Our goal was to enable the (re)arrangement of previously assigned network functions and flow chainings, in parallel with the instantiation of new SFCs, in order to deal with the dynamic behavior of flows and fluctuations in resource availability at N-PoPs. To this end, we seek to (re)chain flows through VNFs with available bandwidth and computing power, as well as to (re)organize VNFs into N-PoPs with more resources available. In this paper, we extend our previous work by providing: (i) a more detailed discussion of the formal model that ensures the best provisioning of SFCs in the face of dynamic changes in demand and/or in the costs associated with (virtual or physical) networking equipment; (ii) an overview of the reference architecture and application programming interface for the (re)design and deployment of SFCs, agnostic of virtualization and infrastructure technologies; (iii) a description of a proof-of-concept prototypical implementation of NFV-PEAR; and (iv) a more detailed evaluation of the efficacy and effectiveness of NFV-PEAR. From the results obtained via analytical and experimental evaluation, we observed that network resource consumption became evenly distributed among active VNFs, when compared to non-reconfigurable NFV environments. Furthermore, it became possible to reroute network flows with varying bandwidth/computing power demands, paving the way for minimizing flow requirement violations.
The remainder of this paper is organized as follows. In Section 2 we provide empirical evidence of how VNF performance is strongly influenced by varying network load. In Section 3 we present an Integer Linear Programming (ILP) model for adaptive VNF provisioning. In Section 4 we describe NFV-PEAR, our solution for adaptive VNF provisioning and orchestration. In Section 5 we focus on implementation aspects of a proof-of-concept prototype, and in Section 6 we discuss evaluation scenarios and the results achieved. In Section 7 we survey the most prominent related work. Finally, in Section 8 we close the paper with final considerations and perspectives for future work.
2 Impact of network load on VNF performance
In the context of an adaptive framework such as NFV-PEAR, it is imperative to understand how VNFs are performing. To motivate and illustrate how performance indicators are affected as network functions are subjected to different traffic patterns, we carried out a series of experiments in a typical NFV environment. Next, we present a summary of our key findings, considering CPU, throughput, and packet loss metrics.
The experiments were performed in a controlled environment composed of two servers, A and B, each equipped with one Intel Xeon E5-2420 processor (1.9 GHz, 12 cores, and 15 MB cache), 32 GB of RAM (1333 MHz), a 1 TB SAS hard drive, a 1 Gbps network interface card (NIC), and Fedora GNU/Linux 21 (kernel v3.17). The NIC on server A was directly connected to the NIC on server B. On server A, a KVM hypervisor and an Open vSwitch virtual switch were installed. On top of KVM we deployed a virtual machine with two logical Ethernet interfaces, 1 vCPU, and 1 GB of RAM. The virtual switch was connected to the physical NIC and to the virtual machine interfaces. On server B, two Docker containers were installed, each with a logical Ethernet interface, and an Open vSwitch connected the containers to the physical NIC.
The experiment scenario was set up as follows. On server B, the containers worked as Iperf client and server, configured in UDP mode; on server A, the KVM virtual machine forwarded packets between its interfaces. During the experiment, traffic originated by the Iperf client (running on B) went through the virtual machine (on A) and returned to the Iperf server (on B). This organization was chosen so that the cost of generating traffic would not interfere with the performance of the virtual machine, the actual target of our measurements. In addition, two distinct experiments were performed, with the following configurations: (i) virtual machine with a static routing table, and (ii) virtual machine with routing performed by a network function running on Click Router [12]. In each experiment, the measured CPU usage includes cycles spent by the host operating system, the virtual machine, and other processes involved in the experiment configuration. The results shown are an average of 30 runs.
It is important to mention that higher throughput could have been achieved with hardware acceleration technologies (such as Intel DPDK and SR-IOV). Such optimizations would certainly push the observable bottlenecks beyond the points plotted in the graphs, but bottlenecks would still inevitably occur. In summary, the results and discussion above reinforce the importance of the adaptive mechanism proposed in this work. More specifically, NFV-PEAR enables fine-tuning the provisioning of virtualized function chains to counteract VNF performance degradation or changes in the network traffic profile.
3 A formal model for dynamic VNF provisioning
To deal with the dynamic behavior of network flows and to reorganize the allocation of VNFs without wasting physical resources or degrading performance, it is necessary to revisit the VNF allocation models and heuristics available in the literature. To this end, we use an adapted version of the model proposed by Luizelli et al. [7, 13], which formalizes the static placement and chaining of virtual functions as a set of constraints in a linear system.
Glossary of symbols and functions related to the optimization model
Symbol | Formal specification | Definition |
---|---|---|
Superscripts and subscripts | ||
P | Physical infrastructure entity | |
S | SFC entity | |
Sets and set objects | ||
\(p \in \mathcal {P}\) | \(p = \left (N^{P}, L^{P}, E^{P}\right)\) | Physical infrastructure instance, composed of nodes and links |
i∈N^{P} | \(N^{P}=\{i\,\vert\,i\ \text{is an N-PoP}\}\) | Network points of presence (N-PoPs) in the physical infrastructure |
(i,j)∈L^{P} | \(L^{P} = \left \{(i,j)\,\vert \, i,j \in N^{P}\right \}\) | Unidirectional links connecting pairs of N-PoPs i and j |
〈i, r〉∈E^{P} | \(E^{P} = \{\langle i, r \rangle \,\vert \, i \in N^{P} \land r \in \mathbb {N}^{*}\}\) | Identifier r of the actual location of N-PoP i |
\(m \in \mathcal {F}\) | \(\mathcal {F} = \{m\,\vert \,m\,\text {is a function type}\,\} \) | Types of virtual network functions available |
\(j \in \mathcal {U}_{m}\) | \(\mathcal {U}_{m} = \{j\,\vert \,j\,\text {is an instance of}\, m \in \mathcal {F}\,\}\) | Instances of virtual network function m available |
\(\mathcal {Q}\) | Set of Service function chaining (SFC) requests to be deployed | |
\(q \in \mathcal {Q}\) | \(q = \left (N^{S}_{q}, L^{S}_{q}, E^{S}_{q}\right)\) | A single SFC request, composed of VNFs and their chainings |
\(i \in N^{S}_{q}\) | \(N^{S}_{q}=\{i\,\vert\,i\ \text{is a VNF instance or endpoint}\}\) | SFC nodes (either a network function instance or an endpoint) |
\((i,j) \in L^{S}_{q}\) | \(L^{S}_{q} = \left \{(i,j)\,\vert \, i,j \in N^{S}\right \}\) | Unidirectional links connecting SFC nodes |
\(\langle i, r \rangle \in E_{q}^{S}\) | \(E^{S}_{q} = \{\langle i, r \rangle \,\vert \, i \in N^{S} \land r \in \mathbb {N}^{*}\}\) | Required physical location r of SFC endpoint i |
\(H^{S}_{q}\) | Distinct forwarding paths (subgraphs) contained in a given SFC q | |
\(H^{H}_{q,i} \in H^{S}_{q}\) | \(H^{H}_{q,i} = \left (N^{H}_{q,i}, L^{H}_{q,i}\right)\) | A possible subgraph (with two endpoints only) of SFC q |
\(N^{H}_{q,i}\) | \(N^{H}_{q,i} \subseteq N^{S}_{q}\) | VNFs that compose the SFC subgraph \(H^{H}_{q,i}\) |
\(L^{H}_{q,i}\) | \(L^{H}_{q,i} \subseteq L^{S}_{q}\) | Links that compose the SFC subgraph \(H^{H}_{q,i}\) |
\(y^{\prime }_{i,m,j}\) | Denotes whether there was a previous VNF placement | |
\(\delta ^{\prime }_{i,q,j}\) | Denotes whether there was a previous assignment of flow to VNF | |
\(\lambda ^{\prime }_{i,j,q,k,l}\) | Denotes whether there was a previous flow chaining | |
Parameters | ||
ϕ | \(\phi \in \mathbb {R}_{+}, \phi \geq 0\) | Percentage of capacity of VNFs that can be violated. |
α, β, and γ | Weight factors of the ILP model. | |
\(C^{P}_{i} \in \mathbb {R}_{+}\) | Computing power capacity of N-PoP i | |
\(B^{P}_{i,j} \in \mathbb {R}_{+}\) | One-way link bandwidth between N-PoPs i and j | |
\(D^{P}_{i,j} \in \mathbb {R}_{+}\) | One-way link delay between N-PoPs i and j | |
\(C^{S}_{q,i} \in \mathbb {R}_{+}\) | Computing power required for network function i of SFC q | |
\(B^{S}_{q,i,j} \in \mathbb {R}_{+}\) | One-way link bandwidth required between nodes i and j of SFC q | |
\(D^{S}_{q} \in \mathbb {R}_{+}\) | Maximum tolerable end-to-end delay of SFC q | |
Functions | ||
\(f^{type}_{m}\) | \(f^{type} : N^{P} \cup N^{S} \rightarrow \mathcal {F}\) | Type of some given virtual network function (VNF) |
\(f^{cpu}_{m,j}\) | \(f^{cpu} : (\mathcal {F} \times \mathcal {U}_{m}) \rightarrow \mathbb {R}_{+}\) | Computing power associated to instance j of VNF type m |
\(f^{delay}_{m}\) | \(f^{delay} : \mathcal {F} \rightarrow \mathbb {R}_{+}\) | Processing delay associated to VNF type m |
Variables | ||
y_{i,m,j}∈Y | \(Y = \{\,y_{i,m,j}\,,\,\forall \,i \in N^{P}, m \in \mathcal {F}, j \in \mathcal {U}_{m}\,\}\) | VNF placement |
δ_{i,q,j}∈Δ | \(\Delta = \left \{\,\delta _{i,q,j}\,,\,\forall \,i \in N^{P}, q \in \mathcal {Q}, j \in N_{q}^{S}\,\right \}\) | Assignment of required network functions/endpoints |
λ_{i,j,q,k,l}∈Λ | \(\Lambda = \left \{\,\lambda _{i,j,q,k,l}\,,\,\forall \,(i,j) \in L^{P}, q \in \mathcal {Q}, (k,l) \in L_{q}^{S}\,\right \}\) | Chaining allocation |
\(\overline {y}_{i,m,j} \in \overline {Y}\) | \(\overline {Y} = \{\,\overline {y}_{i,m,j}\,,\,\forall \,i \in N^{P}, m \in \mathcal {F}, j \in \mathcal {U}_{m}\,\}\) | Denotes whether a VNF placement changes |
\(\overline {\delta }_{i,q,j} \in \overline {\Delta }\) | \(\overline {\Delta } = \left \{\,\overline {\delta }_{i,q,j}\,,\,\forall \,i \in N^{P}, q \in \mathcal {Q}, j \in N_{q}^{S}\,\right \}\) | Denotes whether an assignment of flow to VNF changes |
\(\overline {\lambda }_{i,j,q,k,l} \in \overline {\Lambda }\) | \(\overline {\Lambda } = \left \{\,\overline {\lambda }_{i,j,q,k,l}\,,\,\forall \,(i,j) \in L^{P}, q \in \mathcal {Q}, (k,l) \in L_{q}^{S}\,\right \}\) | Denotes whether a flow chaining changes |
3.1 Model notation and description
Model input. The model proposed by Luizelli et al. [13] considers as input a set of SFCs \(\mathcal {Q}\) and a physical infrastructure instance \(p \in \mathcal {P}\), the latter being a triple \(p = \left (N^{P}, L^{P}, E^{P}\right)\). N^{P} represents the set of nodes in the infrastructure (N-PoPs or routing devices), while pairs (i,j)∈L^{P} are unidirectional physical links. Bidirectional links are represented by two links in opposite directions (i.e., (i,j) and (j,i)). The set of tuples \(E^{P} = \left \{\langle i, r \rangle \,\vert \, i \in N^{P} \land r \in \mathbb {N}^{*} \right \}\) contains the location (represented as a unique numeric identifier) of each N-PoP. The proposed model captures the following constraints related to physical resources: computing power of N-PoPs (represented by \(C^{P}_{i}\)), bandwidth \(\left (B^{P}_{i,j}\right)\), and link delay \(D^{P}_{i,j}\). Note that our model captures packet loss indirectly, since such losses occur when the computing power capacity at N-PoPs is exhausted (as discussed in Section 2). Note, however, that packet loss may also occur due to factors not related to resource usage, such as software/hardware failure, misconfiguration, etc.
SFCs \(q \in \mathcal {Q}\) represent any forwarding topology. An SFC is represented by a triple \(q=\left (N^{S}_{q}, L^{S}_{q}, E^{S}_{q}\right)\). The set \(N^{S}_{q}\) represents the virtual nodes (i.e., endpoints and VNFs), while \(L^{S}_{q}\) represents the virtual links that connect them. Note that each SFC q has at least two endpoints, which are given in advance by \(E_{q}^{S} = \left \{\langle i, r \rangle \,\vert \, i \in N^{S}_{q} \land r \in \mathbb {N}^{*}\right \}\), where r is a numeric identifier for node \(i \in N^{S}_{q}\). In addition, each SFC captures the following requirements related to virtual resources: the computing power required by a VNF i (represented by \(C^{S}_{q,i}\)), the minimum bandwidth required for traffic between VNFs (or endpoints) i and j (represented by \(B^{S}_{q,i,j}\)), and the maximum tolerable latency between any pair of endpoints (represented by \(D^{S}_{q}\)).
For simplicity, we assume that each SFC q has a set of virtual paths^{2} represented by \(H^{S}_{q}\). Each element \(H^{H}_{q,i} \in H^{S}_{q}\) is a possible path in SFC q, with a source and a destination. Subsets \(N_{q,i}^{H} \subseteq N_{q}^{S}\) and \(L_{q,i}^{H} \subseteq L^{S}_{q}\) contain, respectively, the VNFs and the virtual links belonging to path \(H^{H}_{q,i}\).
The set \(\mathcal {F}\) denotes the types of VNFs available (firewall, proxy, etc.), and \(\mathcal {U}_{m}\) denotes the set of available instances of a VNF of type \(m \in \mathcal {F}\). We define \(f^{type} : N^{P} \cup N^{S} \rightarrow \mathcal {F}\) to obtain the type of a given VNF, which can be instantiated in an N-PoP or be part of a request. In addition, functions \(f^{cpu} : (\mathcal {F} \times \mathcal {U}_{m}) \rightarrow \mathbb {R}_{+}\) and \(f^{delay} : \mathcal {F} \rightarrow \mathbb {R}_{+}\) denote, respectively, the computing power and processing delay associated with a VNF. We assume that provisioned VNFs can face a higher demand than their pre-determined capacity (over-commitment). The parameter \(\phi \in \mathbb {R}_{+}\), \(\phi \geq 0\), defines the percentage by which the capacity of a VNF can be violated.
Model output. The model solution is expressed by sets of binary variables, described next. Variables \(Y = \left \{\,y_{i,m,j}\,,\,\forall \,i \in N^{P}, m \in \mathcal {F}, j \in \mathcal {U}_{m}\,\right \}\) indicate a VNF placement. In other words, they indicate if an instance j of a network function m is mapped to N-PoP i. Similarly, variables \(\overline {Y} = \left \{\,\overline {y}_{i,m,j}\,,\,\forall \,i \in N^{P}, m \in \mathcal {F}, j \in \mathcal {U}_{m}\,\right \}\) indicate if the current placement of a VNF j has changed in relation to its previous placement, given by \(y^{\prime }_{i,m,j}\).
Variables \(\Delta = \left \{\,\delta _{i,q,j}\,,\,\forall \,i \in N^{P}, q \in \mathcal {Q}, j \in N^{S}_{q}\,\right \}\) represent the assignment of a requested VNF (or a flow) to a provisioned VNF. That is, it indicates whether node j (being a VNF or an endpoint), required by SFC q, is assigned to the i-th (N-PoP) node. Similarly, variables \(\overline {\Delta } = \left \{\,\overline {\delta }_{i,q,j}\,,\,\forall \,i \in N^{P}, q \in \mathcal {Q}, j \in N^{S}_{q}\,\right \}\) indicate that VNF (or flow) j of SFC q remains allocated to the same instance, in relation to an earlier assignment given by \(\delta ^{\prime }_{i,q,j}\).
Finally, variables \(\Lambda = \left \{\,\lambda _{i,j,q,k,l}\,,\,\forall \,(i,j) \in L^{P}, q \in \mathcal {Q}, (k,l) \in L^{S}_{q}\,\right \}\) indicate a chaining provisioning in the physical infrastructure, i.e., the virtual link (k,l) from SFC q is assigned to the physical link (i,j). Variables \(\overline {\Lambda } = \left \{ \, \overline {\lambda }_{i,j,q,k,l}\,, \, \forall \,(i,j) \in L^{P}, q \in \mathcal {Q}, (k,l) \in L^{S}_{q}\, \right \}\) indicate, in turn, that the virtual link (k,l) of SFC q remains allocated to the physical link (i,j), in relation to an earlier assignment given by \(\lambda ^{\prime }_{i,j,q,k,l}\).
3.2 Model formulation
The proposed model considers a multi-objective function, which simultaneously minimizes (i) resources consumed in the infrastructure (i.e., in N-PoPs, VNFs, and physical links), and (ii) (possible) changes in mappings due to fluctuation of allocated demand (e.g., provisioning of new VNFs, SFC reassignments, and VNF flow reassignments).
The first part of the objective function minimizes network resource consumption; it is materialized by reducing the number of allocated VNFs (described by y), and length of flow chainings (described by λ). The second part of the equation refers to the changes made in the infrastructure, and is defined by three components. The first one refers to the minimization of changes in the placement of already allocated VNFs (described by \(\overline {y}\)); the second one refers to minimization of modifications of existing chaining (described by \(\overline {\lambda }\)); and the third one captures changes related to flows (or SFCs) (re)assignment to VNFs (described by \(\overline {\delta }\)). Each component is weighted, respectively, by α, β, and γ, according to defined priorities.
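The objective function itself is not reproduced in this excerpt. In schematic form, consistent with the variables defined in Section 3.1 (a reconstruction, not necessarily the authors' exact formulation), it reads:

\[ \min \;\; \sum_{i \in N^{P}} \sum_{m \in \mathcal{F}} \sum_{j \in \mathcal{U}_{m}} y_{i,m,j} \;+\; \sum_{(i,j) \in L^{P}} \sum_{q \in \mathcal{Q}} \sum_{(k,l) \in L^{S}_{q}} \lambda_{i,j,q,k,l} \;+\; \alpha \sum \overline{y}_{i,m,j} \;+\; \beta \sum \overline{\lambda}_{i,j,q,k,l} \;+\; \gamma \sum \overline{\delta}_{i,q,j} \]

where the last three sums range over the same index sets as their unbarred counterparts.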
The sets of constraints that make up the model are described below. The first three refer to resource limitations of the physical infrastructure. Constraint set (1) ensures that the sum of all instances of VNFs provisioned in a given N-PoP does not exceed the available computational capacity. Set (2) ensures that the demand required by the SFC flows does not exceed the provisioning capacity of the VNFs. Note that the provisioned capacity of the VNFs can be exceeded (during peak hours, for example) by a factor ϕ. Finally, set (3) ensures that the demands of the provisioned chains on a given physical link do not exceed the bandwidth available on the link.
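The constraint equations are omitted from this excerpt; schematic versions consistent with the notation of Section 3.1 (reconstructions, and in particular (2) is stated here at N-PoP granularity whereas the original model enforces it per VNF instance) are:

\[ \text{(1)} \quad \sum_{m \in \mathcal{F}} \sum_{j \in \mathcal{U}_{m}} f^{cpu}_{m,j} \, y_{i,m,j} \;\leq\; C^{P}_{i} \qquad \forall\, i \in N^{P} \]

\[ \text{(2)} \quad \sum_{q \in \mathcal{Q}} \sum_{k \in N^{S}_{q}} C^{S}_{q,k} \, \delta_{i,q,k} \;\leq\; (1+\phi) \sum_{m \in \mathcal{F}} \sum_{j \in \mathcal{U}_{m}} f^{cpu}_{m,j} \, y_{i,m,j} \qquad \forall\, i \in N^{P} \]

\[ \text{(3)} \quad \sum_{q \in \mathcal{Q}} \sum_{(k,l) \in L^{S}_{q}} B^{S}_{q,k,l} \, \lambda_{i,j,q,k,l} \;\leq\; B^{P}_{i,j} \qquad \forall\, (i,j) \in L^{P} \]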
Constraint sets (4)-(6) guarantee proper placement of virtual resources. Constraint set (4) ensures that each element of an SFC is mapped to the infrastructure. In turn, set (5) ensures that the SFCs’ endpoints are mapped to certain devices of the infrastructure. Set (6) guarantees the availability of instances of VNFs in the N-PoPs in which the requests of the SFCs are mapped. That is, if a VNF requested by an SFC is mapped to a given N-PoP i, then (at least) one instance of the VNF is placed and running in i.
Constraints on the SFC chaining are described by the sets (7) and (8). Constraint set (7) ensures that there is a valid path in the physical infrastructure between all endpoints and SFC VNFs. In turn, set (8) ensures that the paths adopted to route the traffic respect the maximum delay limits between the endpoints. The first part of the equation refers to the delay associated to the physical links, while the second part refers to the delay incurred by packet processing in the VNFs themselves.
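The delay constraint (8), in particular, can be sketched as follows (a reconstruction from the definitions of Section 3.1): the first sum accumulates the delay of the physical links used by a path, and the second the processing delay of the VNFs traversed.

\[ \sum_{(i,j) \in L^{P}} \sum_{(k,l) \in L^{H}_{q,h}} D^{P}_{i,j} \, \lambda_{i,j,q,k,l} \;+\; \sum_{k \in N^{H}_{q,h}} f^{delay}_{f^{type}(k)} \;\leq\; D^{S}_{q} \qquad \forall\, q \in \mathcal{Q},\; H^{H}_{q,h} \in H^{S}_{q} \]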
Finally, constraint sets (9) - (11) determine the similarity of SFCs placement and chaining in relation to a given known previous mapping (denoted by set P). Sets (9), (10) and (11) define, respectively, the similarity of variables related to VNF placement (variables y), to the assignment of SFCs to the placed VNFs (variables δ), and related to the adopted chaining (variables λ). Observe that the purpose of such equations is to identify cases in which allocation variables invert the assumed values from 1 to 0. These cases particularly identify when the allocations are modified.
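Since such 1-to-0 inversions can be detected by comparing each current variable with its primed predecessor, constraint sets (9)-(11) can be sketched as (again a reconstruction consistent with the variable definitions):

\[ \overline{y}_{i,m,j} \;\geq\; y^{\prime}_{i,m,j} - y_{i,m,j}, \qquad \overline{\delta}_{i,q,j} \;\geq\; \delta^{\prime}_{i,q,j} - \delta_{i,q,j}, \qquad \overline{\lambda}_{i,j,q,k,l} \;\geq\; \lambda^{\prime}_{i,j,q,k,l} - \lambda_{i,j,q,k,l} \]

Each bar variable is thus forced to 1 exactly when a previously active allocation (primed value 1) is dropped (current value 0), and is otherwise driven to 0 by the minimization.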
4 Adaptive VNF placement and chaining with NFV-PEAR
After presenting the ILP model for adaptive placement and chaining of VNFs, in this section we introduce NFV-PEAR: an architecture for virtual network function deployment and orchestration^{3}. NFV-PEAR relies on the proposed ILP model to allow the dynamic reallocation of network functions in response to oscillations in the demands of processing flows. Our architecture was designed in line with the main building blocks recommended by the ETSI MANO (Management and Orchestration) interface standard [14].
4.1 Optimization layer
The Optimization Layer aggregates the modules responsible for optimizing and planning the instantiation and chaining of SFCs in the infrastructure. Note that both deployed and to-be-deployed SFCs are processed by these modules, when (re)planning VNF allocation in the infrastructure.
The Optimizer module is responsible for computing the best possible allocation of VNFs in the network, considering the deployed and to-be-deployed SFCs mentioned above, as well as information about the current state of the infrastructure (endpoints, N-PoPs/VNFs and their resources available, links, etc.). To this end, Optimizer implements the ILP model discussed in the previous section. The output of this module — the solution for the ILP model in the given scenario — is forwarded to Planner.
The Planner module is responsible for determining algorithmically the best way to carry out, in practice, the necessary changes on VNF placement in the network and their corresponding chaining. The goal of Planner is to keep the infrastructure in a state close to optimal operation, with a minimum number of changes performed. Several strategies can be adopted to ensure smooth transition between states that the infrastructure must undergo for avoiding service disruption [15, 16].
4.2 Deployment layer
The Deployment Layer brings together the modules responsible for provisioning the SFCs in the physical network. The Provisioner module is responsible for VNF placement and chaining, according to the mapping of SFCs received from the Optimization Layer. The Metric Collector module monitors the VNFs deployed in the network and consolidates their operation statistics. It also gauges VNF operation states to identify reallocations required to deal with fluctuations in network traffic. The performance metrics we considered, and the methodology we used to gauge their importance, are described in Section 3. The consolidated VNF performance measures are passed on to the Optimization Layer.
Both modules communicate with the Controller Interface to perform the orchestration/monitoring activities of VNFs in the physical infrastructure. This interface is made up of two sub-modules, (i) SDN northbound interface, responsible for translating chain installation requests and state queries (for example, from switches) to the protocol used by the SDN controller, and (ii) NFV northbound interface, responsible for adapting requests relevant to the VNFs to the protocol used by the NFV controller of the infrastructure. A detailed definition of the Controller Interface is left for future work.
4.3 NFV-PEAR application programming interface
A brief overview of the NFV-PEAR API
Class | Description |
---|---|
class SFC(sfc_id, edges_list, dict): def deploy_sfc(); def deploy_nf(nfunction, pop); def enable_nf(nfunction); def deploy_flow(); def enable_flow() | A class to materialize Service Function Chaining (SFC) documents. Each instance of this class must have an id, an array with flow steering specifications, and a dictionary that contains the mapping of network functions (NFs) onto Network Points-of-Presence (N-PoPs). The class contains methods to deploy the SFC as a whole, as well as to deploy and enable NFs individually, and to deploy and enable flow steering between NFs (and between NFs and endpoints). The deploy_sfc() method deploys an SFC; internally, it calls the deploy_nf(), deploy_flow(), enable_nf(), and enable_flow() methods. The deploy_nf() method creates and returns an NF instance, receiving as parameters an NF image and an N-PoP instance. Finally, deploy_flow() deploys all flows of an SFC. |
class NfData(nfunction, npop, enabled): def enable(); def disable() | A class to maintain NF operation data. Each NfData object points to an instance of NFunction and NPop. It also has a flag indicating whether the NF is in operation, and methods to enable/disable its operation. |
class NFunction(nf_id, type): | A class to represent NF instances. Each NFunction instance must have the identification number of the NF, and a string representing the NF type (ex: “Load Balancer”). The class constructor receives as input a network function id and type. |
class NPop(npop_id, location): def add_deploy(deploymentFunction); def is_deployed() | A class to represent N-PoP instances. The class constructor receives as input an N-PoP id and the location of the switch to which the N-PoP belongs. |
The code snippet begins with the definition of an N-PoP (“npop31”) and a VNF (a firewall). The SFC is then created, considering those N-PoP and VNF definitions and their chaining (as specified in the “nfs_npop” dictionary). The last call, deploy_sfc(), deploys the SFC into the infrastructure.
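Since the snippet itself is not reproduced here, the sketch below illustrates what such a usage might look like. The stub classes only mirror the constructor signatures from the API table and record state instead of touching an infrastructure; the "switch3" location and the edge list are assumptions for illustration.

```python
# Stand-in stubs mirroring the constructor signatures in the API table;
# the real classes deploy onto the infrastructure, these only record state.
class NPop:
    def __init__(self, npop_id, location):
        self.npop_id, self.location = npop_id, location

class NFunction:
    def __init__(self, nf_id, nf_type):
        self.nf_id, self.nf_type = nf_id, nf_type

class SFC:
    def __init__(self, sfc_id, edges_list, nfs_npop):
        self.sfc_id = sfc_id
        self.edges_list = edges_list   # flow steering specification
        self.nfs_npop = nfs_npop       # NF id -> N-PoP mapping
        self.deployed = False

    def deploy_sfc(self):
        # The real method internally calls deploy_nf(), deploy_flow(),
        # enable_nf(), and enable_flow() for every element of the chain.
        self.deployed = True

# Hypothetical usage, following the description in the text:
npop31 = NPop("npop31", location="switch3")       # location is assumed
firewall = NFunction(1, "Firewall")
nfs_npop = {firewall.nf_id: npop31}               # map the VNF onto the N-PoP
sfc = SFC("sfc1", [("ep1", 1), (1, "ep2")], nfs_npop)
sfc.deploy_sfc()
```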
5 Prototypical implementation
Next we discuss our NFV-PEAR prototype. Our discussion is guided by the architectural view shown in Fig. 2, and focuses on the optimization (Section 5.1) and deployment (Section 5.2) layers. Afterwards, we describe the infrastructure used for evaluation (Section 5.3).
5.1 Optimization layer
The Planner is implemented in Python 2.7. It parses the Optimizer's JSON output and deploys the requested changes in the network. As mentioned earlier, Planner must perform as few changes as possible to minimize network disruption. To this end, it uses the networkX library [17] to represent SFCs as graphs and compute the difference between the current network graph and the one suggested by Optimizer. Depending on the number of differences, the suggestion is either accepted (and the network changes deployed) or discarded. This behavior can be customized; in our experiments, at least two changes had to occur for a suggestion to be accepted. Planner uses the classes and methods from the Deployment Layer API discussed earlier to perform the changes.
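The decision rule described above can be sketched without networkX by representing each chaining as a plain set of directed links; the function names and example chains below are illustrative, not the prototype's.

```python
def count_changes(current_edges, suggested_edges):
    # Number of link-level differences between the deployed chaining and
    # the chaining suggested by Optimizer (symmetric set difference).
    return len(set(current_edges) ^ set(suggested_edges))

def should_deploy(current_edges, suggested_edges, min_changes=2):
    # Policy from the text: a suggestion is only deployed when it implies
    # at least `min_changes` changes; otherwise it is discarded.
    return count_changes(current_edges, suggested_edges) >= min_changes

# Hypothetical chains: the suggestion inserts a DPI function into the path.
current = [("ep1", "fw1"), ("fw1", "ep2")]
suggested = [("ep1", "fw1"), ("fw1", "dpi1"), ("dpi1", "ep2")]
```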
5.2 Deployment layer
The modules in the Deployment Layer are also implemented in Python. This layer exposes the API described in Section 4.3 and receives method calls from Planner to deploy changes; these calls are relayed to the Provisioner. To perform changes, SFC information must be represented following the Python class definitions shown in Table 2. This enables the Provisioner module to process the SFC and make the method calls required to deploy it; these calls are made to the modules in the Controller Interface (described next).
The Metric Collector module uses SSH (Secure Shell) to obtain VNF-related metrics from the N-PoPs on which they are deployed. To this end, the Paramiko library [18] is used. The collected information is consolidated into a local NoSQL database, TinyDB [19]. There are two relevant settings for metric collection: (i) the interval between collections (in seconds), and (ii) the VNF CPU threshold (a percentage). The latter serves as a trigger for Optimizer, which is invoked when the CPU usage of some VNF reaches the established threshold.
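The trigger logic can be sketched as follows; the setting values and names are assumptions for illustration, as the prototype's actual defaults are configuration-dependent.

```python
# Assumed values for the two settings described in the text.
COLLECTION_INTERVAL = 30    # seconds between metric collections
CPU_THRESHOLD = 80.0        # percent; reaching it triggers Optimizer

def vnfs_over_threshold(samples, threshold=CPU_THRESHOLD):
    # `samples` maps a VNF identifier to its last measured CPU usage (%).
    # Returns the VNFs for which Optimizer should be (re)invoked.
    return [vnf for vnf, cpu in samples.items() if cpu >= threshold]

samples = {"firewall-1": 42.0, "dpi-1": 91.5}
over = vnfs_over_threshold(samples)
```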
In the Controller Interface, the SDN northbound interface corresponds to a Web Server Gateway Interface (WSGI) application that is part of the Ryu controller, and enables external access to the methods defined in the SDN^{6} control application. The communication with the SDN controller is done via HTTP GET calls to a REST (Representational State Transfer) web service; information about network flows is also passed in these calls. The NFV northbound interface corresponds to an RPC server (implemented using Spyne [20]) that exposes the NFV platform methods through the Simple Object Access Protocol (SOAP). The communication with the NFV Platform is performed by means of a SOAP client, which accesses the methods related to NF instantiation and deployment.
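To illustrate the kind of message exchanged with the SDN controller, the sketch below builds a flow-installation body in the shape used by Ryu's stock ofctl_rest application; the prototype's own control application may expose a different format, so field names here are an assumption.

```python
import json

def flow_add_body(dpid, in_port, out_port, priority=100):
    # JSON body for installing one hop of a chain through the controller's
    # REST interface. Field names follow the shape of Ryu's ofctl_rest
    # application; the actual control application may differ.
    return {
        "dpid": dpid,
        "priority": priority,
        "match": {"in_port": in_port},
        "actions": [{"type": "OUTPUT", "port": out_port}],
    }

# Serialized payload as it would be attached to the HTTP request.
payload = json.dumps(flow_add_body(dpid=1, in_port=1, out_port=2))
```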
5.3 Infrastructure
Our testbed SDN-NFV infrastructure includes the SDN controller, NFV platform, and devices used to materialize the deployment of network functions and flows. It is important to mention that our conceptual solution and its implementation are agnostic of our software choices for materializing our testbed.
Our NFV platform is materialized using a Python application that connects to N-PoPs for deploying and enabling network functions. Access to virtual machines is done through SSH (using Paramiko). Note that, in our prototype, deploying means uploading the network function image to the N-PoP/VM, and enabling means executing it in the N-PoP/VM. Our platform has been designed with a focus on module isolation, thus enabling one to change the technology used to create VMs with few code changes.
Monitoring of VMs/N-PoPs is done using sar, available in the GNU/Linux sysstat package. sar produces textual output about CPU, memory, and I/O, among other metrics. This output is fed to a Python parser and then to the Metric Collector module. We chose this architectural deployment as it requires no changes to other modules in case one wishes to replace sar.
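A parser for sar output can be sketched as below. The sample line follows the typical layout of a `sar -u` row (time, CPU id, %user, %nice, %system, %iowait, %steal, %idle), but the exact columns vary with sar version and locale, so a real parser would need to inspect the header; the function name is illustrative.

```python
def parse_sar_cpu(line):
    """Extract overall CPU utilization from a `sar -u`-style row.

    Assumes the last column is %idle, as in typical sar output;
    utilization is then 100 - %idle.
    """
    fields = line.split()
    idle = float(fields[-1])
    return round(100.0 - idle, 2)

# Hypothetical sar row: time, CPU, %user, %nice, %system, %iowait, %steal, %idle
sample = "12:10:01 AM     all      3.21      0.00      1.05      0.12      0.00     95.62"
print(parse_sar_cpu(sample))  # 4.38
```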
We also use Open vSwitch for materializing the SDN network. In addition to being open source, Open vSwitch supports OpenFlow and provides stable releases with a set of tools that make it possible to create switches as well as links between them and end hosts. Finally, we use KVM for virtualization, i.e., for creating the VMs that run network functions.
6 Evaluation
We carried out a systematic evaluation process to assess the efficacy and effectiveness of NFV-PEAR. The experiments ran on a machine with four Intel i5 2.6 GHz processors and 8 GB of RAM, running the Ubuntu/Linux Server 11.10 x86_64 operating system.
6.1 Experiment workload and setup
We adopted a strategy similar to that employed in previous work [13] to carry out the experiments. The physical infrastructure was generated with Brite^{8} using the Barabasi-Albert (BA-2) model [22]. That model yields topologies with characteristics similar to those of typical ISP (Internet Service Provider) infrastructures. The physical infrastructures considered contain a total of 50 N-PoPs, each with a computing power capacity of 100%. On average, each network has 300 links with uniform bandwidth capacity of 10 Gbps and average delay of 10 ms. The N-PoPs are placed at various distinct locations in the network.
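For intuition on the topology generation, the Barabási-Albert preferential-attachment process can be sketched in pure Python. This is not BRITE's implementation (nor the BA-2 variant's local-event extensions): it is a simplified sketch in which each new node attaches m = 2 links to existing nodes with probability proportional to their degree, which is enough to reproduce the model's heavy-tailed degree distribution.

```python
import random

def barabasi_albert(n, m, seed=42):
    """Simplified BA preferential attachment: n nodes, m links per new node."""
    random.seed(seed)
    targets = list(range(m))  # initial fully-attached seed nodes
    repeated = []             # node list weighted by degree (attachment lottery)
    edges = []
    for new in range(m, n):
        edges.extend((new, t) for t in targets)
        repeated.extend(targets)
        repeated.extend([new] * m)
        # pick m distinct neighbors for the next node, proportionally to degree
        targets = []
        while len(targets) < m:
            t = random.choice(repeated)
            if t not in targets:
                targets.append(t)
    return edges

g = barabasi_albert(50, 2)   # 50 N-PoPs, as in our setup
print(len(g))                # 96 links: m * (n - m)
```

Note that a 50-node, m = 2 instance has 96 links; the ~300 links per network reported above stem from BRITE's BA-2 parameterization, not from this simplified sketch.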
Two types of VNF images were available for instantiation. For each type of VNF, the availability of small and large computational capacities (considering, respectively, 25% and 100% of the N-PoP computing power) was assumed. For our evaluation, 20 SFCs were submitted. The types of VNFs required by SFCs were randomly chosen. Each VNF required between 25% and 50% of the capacity of an image instantiated on an N-PoP (note that these percentages are different from the computational capacity of the NF images, mentioned earlier). The considered SFCs followed an in-line topology, with their endpoints in the physical infrastructure being randomly selected.
Our analysis focused mainly on the quality of the solutions generated by the Optimizer module. In order to assess the model's ability to re-design the infrastructure with minimal disruption, we artificially alternated some provisioned SFCs (by increasing flow rates) between normal consumption mode and overload (e.g., during peak hours). In the latter case, re-scheduling is necessary to maintain system performance and stability. The proposed model is compared with that of Luizelli et al. [13]; in that case, when re-planning is needed, all SFCs are resubmitted and provisioned in the infrastructure.
6.2 Number of modifications required in the infrastructure
It is observed that the number of changes needed (y-axis) to re-adjust the network to the new demand is proportional (1) to the percentage of SFCs with increased demand and (2) to the amount by which demands were exceeded (x-axis). In addition, the number of changes related to the repositioning of VNFs is substantially lower than that observed for the reassignment of flows and the re-engineering of SFCs. This indicates the feasibility of our model in real environments, since the time required to instantiate (or migrate) a VNF is substantially higher (on the order of milliseconds to seconds) than that of reprogramming a routing device (on the order of milliseconds), for example. Also, compared to the baseline, our model reduces by 25% the number of changes related to VNF placement and by up to a factor of two the number of changes related to the chaining and reassignment of SFCs to VNFs.
6.3 Impact of the over-commitment factor of VNFs on SFC replanning
We also evaluated the time needed to re-deploy SFCs. In order to estimate the time necessary to reconfigure the whole infrastructure, we took into account realistic estimates for VNF boot-up time, considering two values/cases: (i) 50 ms, for VNFs implemented as containers; and (ii) 1000 ms, for VNFs implemented as virtual machines [23, 24]. To account for placement and chaining changes, we considered that each such change corresponds to an SDN rule insertion in a forwarding device. For simplicity, we considered that the time required to insert (or modify) a single SDN rule is 10 ms [25].
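Under these figures, a rough reconfiguration-time estimate can be computed as below. The function name and the sequential-boot assumption are ours (the text does not state whether VNF boots overlap), so this should be read as a back-of-the-envelope sketch rather than the paper's exact accounting.

```python
BOOT_CONTAINER_MS = 50   # VNF boot-up as container [23, 24]
BOOT_VM_MS = 1000        # VNF boot-up as virtual machine [23, 24]
RULE_INSERT_MS = 10      # single SDN rule insertion [25]

def replanning_time_ms(vnf_changes, rule_changes, vm_based=False):
    """Estimate total SFC re-deployment time, assuming sequential VNF boots
    and one rule insertion per placement/chaining change."""
    boot = BOOT_VM_MS if vm_based else BOOT_CONTAINER_MS
    return vnf_changes * boot + rule_changes * RULE_INSERT_MS

# Hypothetical re-planning: 3 VNF (re)instantiations, 40 rule changes
print(replanning_time_ms(3, 40))                 # 550 ms, container-based
print(replanning_time_ms(3, 40, vm_based=True))  # 3400 ms, VM-based
```

The gap between the two cases illustrates why container-based VNFs make frequent re-planning far more palatable.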
6.4 Efficiency in replanning SFC chaining
Observe also that with more SFCs in normal operation, more time is required to find the optimal solution. Although the VNF placement and chaining problem is NP-hard, these results suggest that finding exact solutions is feasible in small- and medium-scale scenarios. For larger scale scenarios, additional research is needed to assess computing time bounds.
6.5 Case studies
Next we discuss example case studies designed to obtain deeper insight into the potential of NFV-PEAR. In these cases our goal is to analyze resource elasticity and traffic engineering capabilities in the context of network function management. We thus focus on one case in which expansion of network function instances is needed (scale-up) as a result of increased demand, and another in which network function migration is required.
6.5.1 Scaling up network functions
6.5.2 Migrating network functions
The infrastructure setting in the first 200 s is illustrated in Fig. 13a: the first 100 s pass without data transfer, and the next 100 s carry a flow from H1 to H3, passing through the FW (firewall) function deployed at N-PoP 1. Figure 13b shows the network after the first 200 s: the flow from H1 to H3 ceased, and another flow, between H2 and H3, began and continued until t = 300 s; for this reason, the FW instance was migrated to N-PoP 2.
Note that we opted for a simplified approach for network function migration in this case study. For this reason, performance impact related to function migration is not considered, as different approaches for function migration have diverse impacts on performance. For example, complete function migration (including function image, data, and context) may cause unexpected network congestion, in contrast to an approach in which function image and data is shared beforehand among all N-PoPs (at the cost of extra storage required), and only context info is migrated.
7 Related work
NFV research can be organized along several perspectives. In the context of SFC deployment planning (i.e., VNF placement and chaining), several investigations merit attention, in particular the ones from Bari et al. [3] and Luizelli et al. [7, 13]. Bari et al. [3] describe the orchestration problem of VNFs, which consists in determining the number of VNFs and their locations in the network so that operating costs are optimal. The authors formulate the problem as an Integer Linear Program (ILP), and use CPLEX and dynamic programming to optimize allocations in smaller scale NFV environments. More recently, Luizelli et al. [7] addressed the problem for large-scale environments, by proposing an optimization algorithm that combines mathematical programming and search meta-heuristics.
In the field of orchestration (post placement & chaining), one of the most notable efforts is OPNFV [4]. This open source platform aims to foster interoperability between NFV-enabling technologies (e.g., Open vSwitch, KVM, and Xen) and the other layers of the architecture proposed by ETSI [14] (for orchestration and monitoring). In parallel, several other platforms have been proposed to overcome specific gaps in the orchestration of VNFs, for example ClickOS [23], Slick [26], OpenNetVM [5], and VirtPhy [27]. ClickOS, based on the Xen hypervisor and using virtual functions written in Click [12], aims at reducing packet copy overhead between interfaces, enabling near line-rate throughput. OpenNetVM, in turn, introduces a virtual routing layer to integrate the lightweight Docker virtualization engine with the Intel DPDK packet acceleration library. Slick provides a framework for programming network functions as a single high-level control program. Finally, VirtPhy presents a programmable platform for small data centers in which both the functions and the network elements that interconnect them are virtualized.
There are also investigations addressing specific aspects of VNF orchestration, such as reliability and QoS performance during flow steering [28], adaptive path provisioning in dynamic service chaining in response to congestion events [29], and dynamic adaptation of VNF orchestration with high availability for 5G applications [30]. Another recent trend in NFV is offloading part of the components that form a VNF to run directly on forwarding devices [31], in an approach similar to OpenBox [16]. Although offloading may significantly save computing resources in this context, such benefit could come at the expense of more complex orchestration procedures.
In the area of VNF performance monitoring, one of the main initiatives is NFV-VITAL [32]. In that paper, the authors propose a framework to characterize the performance of VNFs running in cloud environments. From this characterization, it is possible (i) to estimate the best allocation of computational resources to execute VNFs, and (ii) to determine the impact of different virtualization and hardware configurations on the performance of VNFs. Another initiative is NFVPerf [33], a tool for bottleneck detection in NFV environments. By analyzing the data flows in transit between VNFs, it calculates average throughput and delay, making it possible to detect performance degradation.
Despite the observed advances, existing solutions do not address localized fluctuation and bottleneck scenarios that occur due to variations in the volume of flows in transit in the network. An ad hoc strategy to deal with these fluctuations is to re-execute VNF allocation algorithms and rearrange allocations according to the results obtained. Although effective, this strategy is computationally expensive (since the optimization algorithms are re-executed globally) and does not react efficiently to dynamic flow behavior. The work of Rankothge et al. [34, 35] is the closest approach to an effective solution to this problem. In that work, the authors use genetic algorithms to introduce network functions with scalable processing capabilities. However, those network functions are considered in isolation, without taking into account possible global optimizations, such as steering flows with similar requirements to higher-capacity VNFs.
Considering the limitations discussed above, NFV-PEAR re-adjusts the network in the face of demand variations by identifying bottlenecks in flow processing and by reorganizing the placement and chaining of network functions (locally or globally), while aiming to minimize disruption to the processing of flows in transit.
8 Final considerations
In this work we proposed NFV-PEAR, a framework for adaptive orchestration of network functions in NFV environments. The contributions of this paper are threefold: (i) a formal model to ensure the best provisioning of SFCs in the face of dynamic changes in demand and/or costs associated with network equipment, (ii) a reference architecture for (re)planning and deploying SFCs, agnostic to virtualization and infrastructure technologies, and (iii) a preliminary analysis of a subset of metrics that represent performance indicators for VNF operation.
We provided a formal model for adaptive (re)planning of virtual network functions, along with an analytical evaluation. The results showed that our model contributes significantly to a reduction in the number of changes to the physical infrastructure (up to 25% fewer VNF repositionings and up to a factor of two fewer flow re-routings). Through two distinct case studies, we also assessed the feasibility of NFV-PEAR in bringing resource elasticity and traffic engineering capabilities to the NFV realm.
As future work, we intend to extend our evaluation to identify correlations between the values of the model parameters (especially α, β, and γ) and the quality of the solutions obtained. Finally, we aim at developing and integrating methods of traffic demand prediction into our ILP model.
By virtual path we mean a path from a source to a destination endpoint in an SFC. To illustrate, suppose an SFC containing three endpoints A, B, and C, and one function for load balancing; endpoint A is linked to the load balancer, which in turn is linked to endpoints B and C. That SFC has two virtual paths: one from A to the load balancer then B, and another one from A to the load balancer then C.
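The virtual-path enumeration described in this footnote can be sketched as a traversal of the SFC forwarding graph. The representation (an adjacency dict) and the function name are ours, for illustration only.

```python
def virtual_paths(graph, src, endpoints):
    """Enumerate virtual paths from a source endpoint to each destination
    endpoint in an SFC forwarding graph (adjacency dict). Assumes the
    graph is acyclic, as in-line SFC topologies are."""
    paths, stack = [], [(src, [src])]
    while stack:
        node, path = stack.pop()
        if node in endpoints and node != src:
            paths.append(path)
        for nxt in graph.get(node, []):
            stack.append((nxt, path + [nxt]))
    return sorted(paths)

# The example from the text: endpoint A -> load balancer (LB) -> endpoints B, C
sfc = {"A": ["LB"], "LB": ["B", "C"]}
print(virtual_paths(sfc, "A", {"B", "C"}))
# [['A', 'LB', 'B'], ['A', 'LB', 'C']]
```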
Further details about the objects used in the click implementation can be found at https://github.com/kohler/click/wiki/Elements
Declarations
Acknowledgements
Not applicable.
Funding
This study was financed in part by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior – Brasil (CAPES) - Finance Code 001.
Availability of data and materials
Please contact authors for data requests.
Authors’ contributions
GM collaborated to analyze the impact of network load on VNF performance, participated in the design of the conceptual solution, participated on the proposal of the NFV-PEAR API, coded a proof-of-concept implementation, and ran evaluation experiments. ML collaborated to analyze the impact of network load on VNF performance, participated in the design of the conceptual solution, participated in the design of the improved version of the optimization model, participated on the proposal of the NFV-PEAR API, and ran evaluation experiments and analyzed results obtained from experimental evaluations. WC participated in the design of the conceptual solution, participated in the design of the improved version of the optimization model, participated on the proposal of the NFV-PEAR API, and analyzed results obtained from experimental evaluations. LG participated in the design of the conceptual solution, and analyzed results obtained from experimental evaluations. All authors wrote, read, and approved the final manuscript.
Ethics approval and consent to participate
Not applicable.
Consent for publication
Not applicable.
Competing interests
The authors declare that they have no competing interests.
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Authors’ Affiliations
References
- Han B, Gopalakrishnan V, Ji L, Lee S. Network function virtualization: Challenges and opportunities for innovations. IEEE Commun Mag. 2015; 53(2):90–97.
- Cohen R, Lewin-Eytan L, Naor JS, Raz D. Near optimal placement of virtual network functions. In: IEEE Conference on Computer Communications. INFOCOM ’15: 2015. p. 1346–54.
- Bari MF, Chowdhury SR, Ahmed R, Boutaba R. On Orchestrating Virtual Network Functions. In: 11th International Conference on Network and Service Management. CNSM ’15: 2015. p. 50–56.
- Open Networking Lab. Open Platform for NFV (OPNFV). 2018. Available at https://www.opnfv.org/. Accessed 29 May 2018.
- Zhang W, Liu G, Zhang W, Shah N, Lopreiato P, Todeschi G, et al. OpenNetVM: A Platform for High Performance Network Service Chains. In: ACM SIGCOMM Workshop on Hot Topics in Middleboxes and Network Function Virtualization. HotMiddlebox ’16: 2016.
- McKeown N, Anderson T, Balakrishnan H, Parulkar G, Peterson L, Rexford J, et al. OpenFlow: Enabling Innovation in Campus Networks. SIGCOMM Comput Commun Rev. 2008; 38(2):69–74.
- Luizelli MC, Cordeiro W, Buriol LS, Gaspary LP. A fix-and-optimize approach for efficient and large scale virtual network function placement and chaining. Comput Commun. 2017; 102:67–77.
- Kuo TW, Liou BH, Lin JC, Tsai MJ. Deploying Chains of Virtual Network Functions: On the Relation Between Link and Server Usage. In: IEEE International Conference on Computer Communications. INFOCOM ’16. San Francisco: 2016.
- Luizelli MC, Sa’ar Y, Raz D. Optimizing NFV Chain Deployment Through Minimizing the Cost of Virtual Switching. Piscataway: IEEE Press; 2018. p. 1–9.
- Luizelli MC, Raz D, Sa’ar Y, Yallouz J. The actual cost of software switching for NFV chaining. In: IFIP/IEEE Symposium on Integrated Network and Service Management. IM ’17: 2017.
- Miotto G, Luizelli MC, da Costa Cordeiro WL, Gaspary LP. NFV-PEAR: Posicionamento e Encadeamento Adaptativo de Funções Virtuais de Rede. In: Brazilian Symposium on Computer Networks and Distributed Systems. SBRC ’17: 2017. p. 1–14.
- Kohler E, Morris R, Chen B, Jannotti J, Kaashoek MF. The Click modular router. ACM Trans Comput Syst. 2000; 18(3):263–97.
- Luizelli MC, Bays LR, Buriol LS, Barcellos MP, Gaspary LP. Piecing together the NFV provisioning puzzle: Efficient placement and chaining of virtual network functions. In: IFIP/IEEE Int’l Symposium on Integrated Network Management. IM ’15: 2015.
- ETSI. Network Functions Virtualisation (NFV). 2018. Available at http://www.etsi.org/technologies-clusters/technologies/nfv. Accessed 29 May 2018.
- Rajagopalan S, Williams D, Jamjoom H, Warfield A. Split/Merge: System Support for Elastic Execution in Virtual Middleboxes. In: USENIX Symposium on Networked Systems Design and Implementation. NSDI ’13. New York: USENIX: 2013. p. 227–240.
- Bremler-Barr A, Harchol Y, Hay D. OpenBox: A Software-Defined Framework for Developing, Deploying, and Managing Network Functions. New York: ACM; 2016. p. 511–24.
- NetworkX. NetworkX - Software for complex networks. 2018. Available at https://networkx.github.io/. Accessed 2 Feb 2018.
- Paramiko. Welcome to Paramiko. 2018. Available at http://www.paramiko.org. Accessed 29 May 2018.
- TinyDB. Welcome to TinyDB. 2018. Available at http://tinydb.readthedocs.io. Accessed 3 Feb 2018.
- Arskom Ltd. spyne - RPC that doesn’t break your back. 2018. Available at http://spyne.io/. Accessed 3 Feb 2018.
- Ryu. Ryu SDN Framework. 2018. Available at http://osrg.github.io/ryu/. Accessed 4 Jan 2018.
- Albert R, Barabási AL. Topology of Evolving Networks: Local Events and Universality. Phys Rev Lett. 2000; 85:5234–7.
- Martins J, Ahmed M, Raiciu C, Olteanu V, Honda M, Bifulco R, et al. ClickOS and the Art of Network Function Virtualization. Seattle: USENIX Association; 2014. p. 459–73.
- Cziva R, Jouet S, White KJS, Pezaros DP. Container-based network function virtualization for software-defined networks. In: IEEE Symposium on Computers and Communication. ISCC ’15: 2015. p. 415–420.
- He K, Khalid J, Gember-Jacobson A, Das S, Prakash C, Akella A, et al. Measuring Control Plane Latency in SDN-enabled Switches. In: ACM SIGCOMM Symposium on Software Defined Networking Research. SOSR ’15. New York: ACM: 2015. p. 25:1–25:6.
- Anwer B, Benson T, Feamster N, Levin D. Programming Slick Network Functions. In: ACM SIGCOMM Symposium on Software Defined Networking Research. SOSR ’15. New York: ACM: 2015. p. 14:1–14:13.
- Dominicini CK, Vassoler GL, Ribeiro MR, Martinello M. VirtPhy: A Fully Programmable Infrastructure for Efficient NFV in Small Data Centers. In: IEEE Conference on Network Function Virtualization and Software Defined Network. NFV-SDN ’16: 2016.
- Gharbaoui M, Fichera S, Castoldi P, Martini B. Network orchestrator for QoS-enabled service function chaining in reliable NFV/SDN infrastructure. In: IEEE Conference on Network Softwarization. NetSoft ’17: 2017. p. 1–5.
- Mohammed AA, Gharbaoui M, Martini B, Paganelli F, Castoldi P. SDN controller for network-aware adaptive orchestration in dynamic service chaining. In: IEEE NetSoft Conference and Workshops. NetSoft ’16: 2016. p. 126–130.
- Martini B, Gharbaoui M, Fichera S, Castoldi P. Network orchestration in reliable 5G/NFV/SDN infrastructures. In: Int’l Conference on Transparent Optical Networks. ICTON ’17: 2017. p. 1–5.
- Cordeiro W, Marques JA, Gaspary LP. Data Plane Programmability Beyond OpenFlow: Opportunities and Challenges for Network and Service Operations and Management. J Netw Syst Manag. 2017; 1:47–53.
- Cao L, Sharma P, Fahmy S, Saxena V. NFV-VITAL: A framework for characterizing the performance of virtual network functions. In: IEEE Conference on Network Function Virtualization and Software Defined Network. NFV-SDN ’15: 2015. p. 93–99.
- Naik P, Shaw DK, Vutukuru M. NFVPerf: Online Performance Monitoring and Bottleneck Detection for NFV. In: IEEE Conference on Network Function Virtualization and Software Defined Network. NFV-SDN ’16: 2016.
- Rankothge W, Le F, Russo A, Lobo J. Experimental Results on the use of Genetic Algorithms for Scaling Virtualized Network Functions. In: Network Function Virtualization and Software Defined Network. NFV-SDN ’15: 2015.
- Rankothge W, Le F, Russo A, Lobo J. Optimizing Resource Allocation for Virtualized Network Functions in a Cloud Center Using Genetic Algorithms. IEEE Trans Netw Serv Manag. 2017; 14(2):343–56.