DiffPerf: Towards Performance Differentiation and Optimization with SDN Implementation

Continuing the current trend, Internet traffic is expected to grow significantly over the coming years, with video consuming the largest share. On the one hand, this growth poses challenges to access providers (APs), who have to upgrade their infrastructure to meet growing traffic demands and find new ways to monetize their network resources. On the other hand, despite numerous optimizations of the underlying transport protocol, a user's utilization of network bandwidth, and thus the user's perceived quality, is still largely affected by network latency and buffer size. To address both concerns, we propose DiffPerf, a class-based differentiation framework that, at a macroscopic level, dynamically allocates bandwidth to service classes pre-defined by the APs and, at a microscopic level, statistically differentiates and isolates user flows to help them achieve better performance. We implement DiffPerf on the OpenDaylight SDN controller and a programmable Barefoot Tofino switch, and evaluate it from an application perspective for MPEG-DASH video streaming. Our evaluations demonstrate the practicality and flexibility of DiffPerf: it provides APs with capabilities through which a spectrum of qualities is provisioned across multiple classes, while helping achieve better fairness and improved overall perceived quality within the same class.


I. INTRODUCTION
Today's Internet is dominated by content traffic, especially video streams. According to the Cisco Annual Internet Report [1], video will make up 82% of total downstream Internet traffic by 2022. In today's homes, Internet video drives our work and life, particularly during the COVID-19 pandemic [2], and video applications will continue to demand significant bandwidth in the future [1]. To accommodate this traffic, content providers (CPs) have been deploying wide-area infrastructures to bring content closer to users; e.g., Netflix uses third-party content delivery networks such as Akamai and Limelight and also builds its own [3]. However, as end-users rely on last-mile access providers (APs) for Internet access, APs' bandwidth capacity still limits user throughput due to network congestion [4]. For example, the average throughput of Netflix users behind Comcast [5], the largest U.S. broadband provider, degraded 25%, from over 2 Mbps in October 2013 to 1.5 Mbps in January 2014.
To sustain traffic growth, APs need to upgrade network infrastructures and expand capacities; however, their incentives depend on the business model and the corresponding mechanism used to monetize bottleneck bandwidth, which is crucial to the viability of the current Internet model. A common approach is to differentiate services and prices; e.g., APs provide premium peering [6] options for CPs and multiple data plans for end-users with different data usage. However, the former can only be implemented with large CPs via peering agreements, while the latter does not guarantee end-user performance in any sense. Bandwidth allocation is typically a function of the application endpoints and is traditionally embodied in the transport layer's congestion control mechanism. TCP CUBIC and BBR, the most popular such protocols, control the majority of Internet traffic. However, both strive for efficient utilization of bandwidth while being unaware of how a user's Quality of Experience (QoE) is negatively biased by round-trip times (RTT) and network buffer size. Consequently, there exists a fundamental mismatch between differentiated services and an underlying resource allocation that does not differentiate for predictable performance.
To resolve this mismatch, we consider a class-based differentiation approach, under which CPs and users can choose a service class (SC) to join. We propose DiffPerf, a dynamic performance differentiation framework deployed at the AP's vantage point to manage bottleneck bandwidth resources in a principled and practical manner. From a macroscopic perspective, DiffPerf dynamically allocates bandwidth to each SC according to the changing number of active flows in each SC by maximizing the weighted α-fair utilities, which enables APs to trade off fairness. Nevertheless, users in the same service class might not perceive a fair quality, due to the complex interaction between the transport protocol and inherent network conditions such as heterogeneous RTTs and buffer sizes, as shown in our experimental explorations and known by conventional wisdom [7]. Thus, at a microscopic level, DiffPerf uses a new performance-aware mechanism, called (β, γ)-fairness, to make a more fine-grained bandwidth allocation within each SC, so as to utilize the aggregate capacity more efficiently and achieve fairer performance across flows. Our main contributions are as follows: 1. We derive the closed-form bandwidth allocation solution and show that it achieves guaranteed performance differentiation in terms of controllable ratios of the average per-flow throughput across the different SCs. 2.
Within each SC, we present (β, γ)-fairness and a neat statistical method to differentiate and isolate flows automatically based on their achieved throughput, mitigating the bias introduced by TCP's interaction with network latency (i.e., RTT) and buffer size. 3. Leveraging SDN capabilities, we develop a native OpenDaylight (ODL) control-plane application that dynamically manages network resources, including tracking flows, inquiring flow statistics, and allocating bandwidth capacity. Furthermore, to measure the impact of network buffer sizes, we also implement DiffPerf on a programmable Barefoot Tofino switch, which allows flexible buffer sizing and enables fine-grained, flexible line-rate telemetry. 4. We carry out comprehensive evaluations of DiffPerf from an application perspective for DASH video streaming, a mainstream technology that accounts for the majority of Internet video traffic. We believe that DiffPerf demonstrates a new avenue for APs to differentiate and optimize the performance of video flows and the corresponding perceived user QoE, so as to better monetize their bottleneck network resources. This will further incentivize APs to deploy more bandwidth capacity to accommodate the growth of Internet content traffic.

II. THE DIFFPERF FRAMEWORK
In this section, we present the DiffPerf framework in a top-down manner. We first describe how DiffPerf allocates bandwidth capacity among the SCs based on an optimization approach. We derive a closed-form allocation solution and show its guaranteed performance differentiation. We then discuss the performance issues arising from the TCP congestion control mechanism's response to the heterogeneity of flows' RTTs and the network's buffer sizes. To solve this problem, we show how DiffPerf classifies flows and optimizes bandwidth allocation within each SC.

A. Inter-Class Bandwidth Allocation
We consider an access provider that offers a set S of service classes over a bottleneck link with capacity C. We denote the set of active flows in any service class s ∈ S by F_s and the cardinality of F_s, i.e., the number of flows in class s, by n_s. To differentiate the performance for flows in different service classes, the access provider needs to allocate an appropriate amount of bandwidth to each service class. To accomplish this in a principled manner, we formulate the bandwidth allocation as an optimization over the allocation X = (X_s : s ∈ S) that solves a general utility maximization problem:

max_X \sum_{s \in S} n_s U_s(X_s / n_s)    (1)
s.t. \sum_{s \in S} X_s \le C.    (2)

Under the link capacity constraint (2), the above mathematical program maximizes the aggregate utility over all service classes, where for each service class s, it counts the number of flows n_s multiplied by the per-flow utility U_s(X_s/n_s) over the average capacity X_s/n_s allocated to each flow. In particular, we adopt and generalize the well-known weighted α-fair family of utility functions [8]:

U_s(x) = w_s \frac{x^{1-\alpha}}{1-\alpha} for \alpha \ne 1, and U_s(x) = w_s \log x for \alpha = 1.

In this family of utility functions, each service class s is assigned a weight w_s that indicates the relative importance of the service class, resulting in differentiated per-flow bandwidth allocation across the service classes. By controlling the parameter α, the access provider can express different preferences over various notions of fairness. When α approaches 0, the utility tends to be measured purely by the allocated bandwidth; when α approaches +∞, the solution converges to the weighted max-min fair allocation among the flows. In particular, a weighted proportional fair solution can be obtained by solving the optimization problem with α set to 1. Thus, besides the differentiation factor w_s among service classes, the service operator can choose the value of α to trade off fairness.
Theorem 1. If an allocation X maximizes the aggregate utility over all service classes, it must satisfy

X_s = \frac{n_s w_s^{1/\alpha}}{\sum_{s' \in S} n_{s'} w_{s'}^{1/\alpha}} C, \quad \forall s \in S.    (3)

Theorem 1 provides the closed-form solution of the utility maximization problem. Based on the optimal allocation in Equation (3), we derive the ratio of the average per-flow capacities of any two service classes s, s' ∈ S as

\frac{X_s / n_s}{X_{s'} / n_{s'}} = \left( \frac{w_s}{w_{s'}} \right)^{1/\alpha}.    (4)

This result implies that performance differentiation is achieved by enforcing a fixed ratio for the per-flow bandwidth capacity across SCs, which is controlled by the weights w_s, w_{s'} and the fairness parameter α. Equation (4) explicitly shows that the optimal solution allocates a higher average per-flow capacity to the service class that has a larger weight, which is desirable and expected for the better service class.
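The closed-form rule of Theorem 1 can be sketched in a few lines of Python; this is an illustrative implementation of the allocation X_s ∝ n_s · w_s^{1/α} (function and variable names are ours, not part of the paper's artifact):

```python
from typing import Dict

def interclass_allocation(n: Dict[str, int], w: Dict[str, float],
                          capacity: float, alpha: float) -> Dict[str, float]:
    """Closed-form weighted alpha-fair allocation across service classes.

    Each class s receives X_s proportional to n_s * w_s**(1/alpha), so the
    per-flow ratio (X_s/n_s) / (X_t/n_t) equals (w_s/w_t)**(1/alpha).
    """
    scores = {s: n[s] * w[s] ** (1.0 / alpha) for s in n}
    total = sum(scores.values())
    return {s: capacity * scores[s] / total for s in n}

# Example: three classes of 13 flows each, weights 3:2:1, 50 Mbps link, alpha = 1
alloc = interclass_allocation({'G': 13, 'S': 13, 'B': 13},
                              {'G': 3.0, 'S': 2.0, 'B': 1.0}, 50.0, 1.0)
```

With α = 1 the per-class shares come out proportional to the weights (25, 16.7, and 8.3 Mbps), matching the weighted proportional fair case discussed above.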
In particular, we also see that when α is set to 1, the weighted proportional fair allocation leads to an average per-flow allocation that is proportional to the weights of the SCs.

B. Intra-Class Bandwidth Allocation
Motivation: Given X_s amount of bandwidth capacity allocated to the n_s flows in SC s, each flow f ∈ F_s is expected to achieve an average throughput of X_s/n_s. However, the actual throughput achieved, denoted by x_f, might be significantly less than the mean. This can adversely affect the QoE that the corresponding user perceives. At the last-mile bottleneck, parameters such as RTT, the TCP congestion control algorithm (e.g., CUBIC vs. BBR), and buffer size affect the performance of flows [9], [10], [11]. The heterogeneity of RTTs experienced by the flows, as well as the TCP-based congestion control mechanisms that respond to RTTs and network buffer size differently, lead multiple competing flows to achieve different throughput. We analyzed the performance of 100 competing DASH flows on a testbed, where all flows run TCP BBR and share a bottleneck link connecting to a DASH server. The bottleneck link capacity is set to 120 Mbps, and 30% of the flows experience relatively longer RTTs than the rest. We run the experiments varying one key parameter: the network buffer size. The results show that the average stalling time of DASH flows at a 10 MB network buffer size is 35% higher than at 1 MB. However, by "isolating" flows that perceived dissimilar QoE at the last-mile bottleneck link, we observed that the average stalling time of DASH flows is reduced by 50% and 25% at buffer sizes of 1 MB and 10 MB, respectively, thereby improving the overall QoE significantly.
Motivated by this observation, we propose a practically scalable solution that classifies similar flows into sub-groups and isolates them into separate sub-classes by allocating an appropriate amount of bandwidth to each within an SC. Next, we describe 1) a flexible statistical method that DiffPerf uses to classify flows within an SC, and 2) the intra-class bandwidth allocation used by DiffPerf for sub-group isolation.
1) Flow Classification and Isolation: Relying on QoE as a similarity metric to classify the flows would require explicit feedback from the receiver to the AP vantage point, which is difficult to afford in practice. We therefore leverage SDN functionalities to find other metrics usable at the vantage point. The first metric that comes to mind is RTT. However, real-time RTT samples cannot be taken solely as indicators of performance issues without other information such as the underlying congestion control mechanism, buffer size [9], and packet route. Even if we assume the availability of this information, measuring flow RTT at the AP is unreliable: measuring RTT at the SDN control plane yields inflated and highly variable estimates, based on our measurements in the ODL control plane, while measuring it in the SDN data plane [12] may not scale well due to memory space constraints. Instead, we argue that the throughput of TCP flows is an appropriate and robust metric that reflects the collective impact of the interacting network parameters on user-perceived performance. Next, we show how to use the throughput measure as a proxy for determining whether flows are similar to each other and to effectively identify which flows are affected.
Because the number of groups and the number of flows per group may change and are not known in real scenarios, we adopt general statistical metrics for classification. Given the achieved throughput x_f of the flows f ∈ F_s in any SC s ∈ S, the mean and standard deviation of the flows' throughput are defined as

\bar{x}_s = \frac{1}{n_s} \sum_{f \in F_s} x_f \quad \text{and} \quad \sigma_s = \sqrt{\frac{1}{n_s} \sum_{f \in F_s} (x_f - \bar{x}_s)^2}.
Because the achieved throughput x f of each flow depends on the number n s of competing flows and their characteristics, buffer size, and the allocated capacity X s that ultimately determines the network congestion imposed on the SC, instead of using absolute throughput thresholds to classify flows, we adopt the following statistical metric that orders and measures the relative throughput values among all flows in the same SC.
Definition 1. Given the mean \bar{x}_s and standard deviation σ_s, the standard score of a flow f's throughput is defined by

z_f = \frac{x_f - \bar{x}_s}{\sigma_s}.

When a flow's throughput is above (or below) the mean, its standard score or z-score is positive (or negative, respectively). The z-score captures the signed fractional number of standard deviations by which the throughput deviates from the mean.
Without loss of generality, we divide the set F_s of flows into two sub-classes, a lower sub-class F_s^L and an upper sub-class F_s^H, based on each flow's z-score compared with a pre-defined threshold β:

F_s^L(β) = \{f ∈ F_s : z_f ≤ β\} \quad \text{and} \quad F_s^H(β) = F_s \setminus F_s^L(β).

The set F_s^L contains the flows that achieved the lowest throughput values (i.e., the negatively affected flows). Our goal is to identify them so that we can isolate them and allocate an appropriate amount of bandwidth to them. We use a non-positive value of β to capture flows whose throughput is |β| or more standard deviations below the achieved average \bar{x}_s. Because the set F_s^L grows monotonically with the parameter β, i.e., F_s^L(β) ⊆ F_s^L(β') for β ≤ β', a smaller value of β makes a more conservative decision on the lowest-throughput flows, avoiding mis-classifications. We study how the value of β affects the performance of flows in a later section via experimental evaluations.
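The classification step above reduces to computing z-scores and thresholding at β. A minimal sketch (names are illustrative; the paper's implementation lives in the ODL control plane):

```python
import statistics
from typing import Dict, List, Tuple

def classify_flows(throughput: Dict[str, float],
                   beta: float) -> Tuple[List[str], List[str]]:
    """Split flows into (lower, upper) sub-classes by z-score threshold beta.

    A flow joins the lower sub-class when its z-score
    (x_f - mean) / stddev is at most beta; beta is typically <= 0.
    """
    xs = list(throughput.values())
    mean = statistics.fmean(xs)
    sigma = statistics.pstdev(xs)  # population std dev, matching 1/n_s in the text
    lower = [f for f, x in throughput.items()
             if sigma > 0 and (x - mean) / sigma <= beta]
    upper = [f for f in throughput if f not in lower]
    return lower, upper
```

A smaller (more negative) β shrinks the lower sub-class, mirroring the monotonicity property discussed above.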
2) Bandwidth Allocation Model: After classifying the flows in each SC into two sub-groups, we isolate them into two sub-classes and determine how much bandwidth, X_s^L and X_s^H, to allocate to each. To fully utilize the bandwidth capacity, our solution needs to satisfy X_s^L + X_s^H = X_s. The throughput of some flows might be naturally low, and those flows might not be able to achieve the targeted throughput X_s/n_s even when allocated that amount of capacity; enforcing a per-flow allocation of X_s/n_s would thus waste resources. The key question is how much per-flow capacity we should allocate to the flows in F_s^L, whose innate throughputs are less than what is needed to achieve the average throughput \bar{x}_s or to utilize the per-flow allocated capacity X_s/n_s in theory. Since these flows might not be able to achieve the average throughput, their per-flow allocation should be no higher than X_s/n_s. On the other hand, by isolating the negatively affected flows from the high-throughput flows (i.e., the flows F_s^H that cause the performance issues of the flows F_s^L), we expect them to achieve higher throughput than they currently do; therefore, we should allocate more capacity to the set F_s^L than their aggregate achieved throughput. To this end, we allocate the average per-flow bandwidth capacity for the set F_s^L as

\frac{X_s^L}{|F_s^L|} = γ \cdot \frac{1}{|F_s^-|} \sum_{f \in F_s^-} x_f + (1 - γ) \cdot \frac{X_s}{n_s},    (5)

where we define the set of flows whose throughput is below the mean \bar{x}_s by F_s^- ≜ \{f ∈ F_s : x_f < \bar{x}_s\} and introduce a parameter γ ∈ [0, 1] to control the allocated capacity flexibly. In particular, at one extreme, γ = 1 allocates the average throughput of the set F_s^- as the per-flow capacity for the lower sub-class F_s^L(β), which must be lower than the average throughput \bar{x}_s and the average capacity X_s/n_s of all flows. In this case, the per-flow capacity allocated to the lower sub-class F_s^L is lower than that allocated to the upper sub-class F_s^H, so resource wastage is reduced and the resource is utilized more efficiently. At the other extreme, γ = 0 simply isolates the two sub-classes and allocates the same average capacity X_s/n_s as the per-flow capacity for both, enforcing per-flow fairness regardless of how efficiently the resource is utilized. Thus, by choosing the value of γ between 0 and 1, we can trade off resource fairness and utilization. This trade-off, however, depends on the interaction of the TCP algorithm with the network buffer size. As opposed to a shallow buffer, a deep buffer allows low-throughput TCP flows (especially those negatively affected by the heterogeneity of RTTs) to stabilize their transfers. Thus, with a deep buffer, if the low-throughput flows were crowded out by others, they can perform better with γ = 0. This is not the case with a shallow buffer, which does not allow negatively affected flows to ramp up quickly; this may even lead to lower utilization. From Eq. (5), we also have the next theorem, showing 1) lower bounds on the per-flow capacities re-allocated to the lower and upper sub-classes F_s^L and F_s^H; and 2) the monotonicity, in the parameter β, of the average throughput of the flows within F_s^L and of the average per-flow capacity re-allocated to F_s^H.
Theorem 2. Given any fixed parameter γ, for any service class s ∈ S, 1) the average achieved throughput of the flows within the lower sub-class F_s^L is non-decreasing in β and always no higher than the average per-flow capacity re-allocated to F_s^L; 2) the average per-flow capacity re-allocated to the upper sub-class F_s^H is non-decreasing in β and always no lower than X_s/n_s.
Theorem 2 states that as the parameter β increases, the average throughput of the flows in the lower sub-class F_s^L also increases, because more high-throughput flows are classified into F_s^L. It also tells us that this achieved average throughput must be no higher than the per-flow capacity re-allocated to them, which guarantees our design objective of allocating more capacity to the flows in the lower sub-class than their aggregate achieved throughput. Theorem 2 also states that as β increases, flows within the upper sub-class F_s^H are re-allocated more per-flow bandwidth capacity, although fewer flows are classified into that sub-class. Thus, service operators can choose the value of β to control the scales of the sub-classes, and both β and γ to control the bandwidth capacity allocated to the flows of the two sub-classes, which we refer to as (β, γ)-fairness.
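The intra-class split of Eq. (5) can be sketched directly: the lower sub-class's per-flow share interpolates, via γ, between the mean throughput of the below-average flows and the fair share X_s/n_s. This is an illustrative implementation under the assumption that throughputs are heterogeneous (names are ours):

```python
import statistics
from typing import Dict, List, Tuple

def intraclass_allocation(throughput: Dict[str, float], lower: List[str],
                          X_s: float, gamma: float) -> Tuple[float, float]:
    """Split class capacity X_s between the lower and upper sub-classes.

    Per Eq. (5): per-flow share of the lower sub-class is
    gamma * mean(F_s^-) + (1 - gamma) * X_s / n_s, where F_s^- holds the
    flows whose throughput is below the class mean.
    """
    n_s = len(throughput)
    mean = statistics.fmean(throughput.values())
    below = [x for x in throughput.values() if x < mean] or [mean]  # F_s^-; guard for uniform case
    per_flow_low = gamma * statistics.fmean(below) + (1.0 - gamma) * X_s / n_s
    X_low = len(lower) * per_flow_low
    return X_low, X_s - X_low  # (X_s^L, X_s^H)
```

With γ = 0 both sub-classes get the same per-flow share (fairness); with γ = 1 the lower sub-class gets only the below-average mean, leaving more capacity to the upper sub-class (efficiency).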
Before we close this section, we would like to emphasize that although DiffPerf classifies the flows in each SC into two sub-groups for simplicity, its statistical method of classification and the corresponding bandwidth allocation can be applied in a top-down recursive manner to further split any sub-group for a more fine-grained optimization.

III. DIFFPERF IMPLEMENTATION
A. DiffPerf Prototype on OpenDaylight with OpenFlow
We implement DiffPerf as an application on the popular industry-grade open-source SDN platform, the OpenDaylight (ODL) controller. In particular, we develop a native MD-SAL (Model-Driven Service Adaptation Layer) application on ODL, which comprises different technologies such as OSGi, Karaf, YANG, the blueprint container, and messaging patterns such as RPC, publish-subscribe, and data store accesses [13]. We skip implementation details for the sake of brevity. Figure 1 shows the architecture of the DiffPerf application, whose modules we describe next.
1) Flow Processor: The Flow Processor module on ODL performs two primary functions. First, it assigns user-specified service classes to flows newly joining the network. We use the YANG modeling language [14] to define the service classes. Second, the module carries out regular flow maintenance; i.e., the flow processor inserts new flows into the data store, determines the pre-defined service class and assigns the corresponding weight to new flows, removes inactive or completed flows, etc.
2) Statistics Collector: DiffPerf performs in-network performance optimization, which requires estimating the throughput of each active flow. Let x_f(t) denote the average throughput of a flow f up to time t. By measuring the instantaneous throughput \hat{x}_f of flow f during the immediately preceding period ∆_t, the average throughput for the next period is updated as

x_f(t + ∆_t) = η \hat{x}_f + (1 − η) x_f(t),

where η ∈ [0, 1] is a weight. Having measured the average throughput of the active flows, we use these estimates to group flows into sub-classes so that flows with similar achieved throughput fall into the same sub-class.
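The update rule above is a standard exponentially weighted moving average over byte-count deltas. A minimal sketch, assuming byte counters sampled every ∆_t seconds (the function name and the default η = 0.5 are our illustrative choices):

```python
def update_throughput(avg_bps: float, bytes_delta: int, interval_s: float,
                      eta: float = 0.5) -> float:
    """EWMA update of a flow's average throughput, in bits per second.

    bytes_delta is the byte-counter increase over the last interval;
    eta in [0, 1] weighs the new sample against the running average.
    """
    inst_bps = 8.0 * bytes_delta / interval_s  # instantaneous throughput
    return eta * inst_bps + (1.0 - eta) * avg_bps
```

For example, a flow averaging 1 Mbps that transfers 250 KB over a 2-second interval keeps a 1 Mbps estimate, since the instantaneous sample equals the running average.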
To obtain real-time estimates of the throughput of each active flow as well as the link bandwidth, we implement a Stat. Collector module on ODL. This module registers per-flow rules to pull measurement information using event-based handlers from the operational data store in ODL. The data store in turn uses the OpenFlow plugin (as indicated in Figure 1) to request the switches to report flow measurements; the per-flow measures of interest are packet counts, byte counts, and duration.
3) Bandwidth Optimizer: The core part of DiffPerf is the Bandwidth Optimizer module, which is responsible for the inter- and intra-class bandwidth optimization described in Section II. The optimizer runs every ∆_t interval, taking input from the two modules described above, the Flow Processor and the Stat. Collector (see Figure 1). While the former provides the mapping of flows to user-specified service classes, the latter provides real-time measurements on the active flows in the switch. Given this input, the inter- and intra-class optimizers are executed; the outputs of the optimization are (i) the portion of bandwidth allocated to each service class (SC), and (ii) the portion of bandwidth for each sub-class within every service class.
4) Bandwidth Enforcer: To materialize the bandwidth allocation, each sub-class should use its designated bandwidth in an isolated manner. A naive approach is to leverage multiple queues at the switch egress port so that each sub-class maps to an isolated queue. However, there are two practical challenges. First, in commodity switches the number of queues at an egress port is usually limited to a small number [15], [16], meaning that the number of available queues could be less than the number of flow sub-classes. Second, current OpenFlow switches do not expose APIs to update the weight of the queues dynamically. Without this capability, the bandwidth allocated to queues cannot be changed as and when required.
To overcome both limitations, we leverage the metering feature available in OpenFlow switches. Instead of defining queues and updating their bandwidth at the egress port, we develop a Bandwidth Enforcer module that does enforcement at the ingress side of the switch. That is, meters corresponding to the sub-classes are defined; based on the output of the Bandwidth Optimizer, the flow rate of each sub-class is attached to a specific meter dynamically. The Bandwidth Enforcer uses the OpenFlow plugin to encapsulate the allocated bandwidth into OpenFlow messages and install them onto the switch(es).
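Conceptually, the enforcer's job is a translation step: turn the optimizer's per-sub-class rates into one meter entry per sub-class. The sketch below shows this mapping with plain data structures; the field names are illustrative and not an actual ODL or OpenFlow plugin API (OpenFlow meter bands do, however, express rates in kbps, which the conversion reflects):

```python
from typing import Dict, List

def build_meter_configs(subclass_rates_mbps: Dict[str, float]) -> List[dict]:
    """Translate optimizer output into per-sub-class rate-limit entries.

    Each sub-class maps to one meter with a single drop band; rates are
    converted to kbps and given a small (10%) burst allowance.
    """
    configs = []
    for meter_id, subclass in enumerate(sorted(subclass_rates_mbps), start=1):
        rate_kbps = int(subclass_rates_mbps[subclass] * 1000)
        configs.append({
            "meter_id": meter_id,
            "subclass": subclass,
            "band": {"type": "drop",
                     "rate_kbps": rate_kbps,
                     "burst_kbps": max(1, rate_kbps // 10)},
        })
    return configs
```

Re-invoking this after every optimizer run and pushing the resulting entries is what lets the allocation change every ∆_t without touching egress queues.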

B. DiffPerf Prototype with Programmable Data Plane
Next, we implement another prototype of DiffPerf on a lightweight C controller connected to a Barefoot Tofino programmable switch [17], which enables flexible buffer sizing and fine-grained line-rate telemetry. In particular, we implement the statistics building block in the data plane to track the number of bytes transmitted by the active flows. Additionally, we re-implement the Bandwidth Enforcer and Statistics Collector modules in the control plane. We leverage the APIs exposed by the Tofino switch to update the weights of the queues dynamically and to configure their sizes from the control plane. The remaining modules are kept the same, with minor modifications.

IV. EXPERIMENTAL EVALUATION
We evaluate DiffPerf by carrying out experiments on a realistic testbed.We describe the details below.

A. Testbed setup
OpenFlow Brocade switch experiments: We set up a testbed for video streaming between DASH clients (i.e., video players running dash.js) and a DASH server over an SDN network. Our testbed consists of 12 servers, 10 of which host DASH clients and one each hosting the DASH server and the ODL controller. The 10 servers running DASH clients are connected to the DASH server such that they compete (for video segments) at a downstream bottleneck link from an SDN-enabled Brocade ICX-6610 24-port physical switch. We evaluated DiffPerf in three different scenarios. In Scenario 1 (Section IV-C1) and Scenario 2 (Section IV-C2), each physical server hosts up to 4 DASH clients, each client runs in a VM, and all clients are connected to the DASH server over a 50 Mbps downstream bottleneck link. For Scenario 3 (Section IV-C3), we scale up the number of DASH clients: each physical server hosts 15 DASH clients, all running as Docker containers and connected to the DASH server over a 200 Mbps downstream bottleneck link.
Barefoot Tofino switch experiments: The experiments with the Tofino programmable switch concentrate on evaluating the impact of buffer size on the performance of bottlenecked flows and how DiffPerf enables the switch buffer to perform better (i.e., improve the overall flow performance). The evaluations are carried out with multiple switch buffer sizes: 100 KB, 1 MB, and 10 MB. Tofino exposes a set of APIs for Traffic Manager applications to manage buffer allocation from both the ingress and egress ends. We use the bf_tm_q_app_pool_usage_set API to set the buffer size for the queues of the egress port attached to the bottleneck link. Buffer size is specified in terms of cells, where each cell is 80 bytes. The buffer precedes a 120 Mbps bottleneck link that transfers video segments from the DASH server to 100 DASH clients. The results are presented in the last part of Scenario 2 (Section IV-C2).
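Since the Traffic Manager API takes buffer sizes in cells rather than bytes, configuring a target buffer involves a small unit conversion. A minimal sketch of that arithmetic, assuming decimal units for the buffer sizes quoted in the text (the helper name is ours):

```python
def buffer_size_to_cells(buffer_bytes: int, cell_bytes: int = 80) -> int:
    """Convert a target buffer size into Tofino cell units (80 bytes/cell)."""
    return buffer_bytes // cell_bytes

# The three buffer sizes evaluated in the text, expressed in cells:
cells = {size: buffer_size_to_cells(size)
         for size in (100_000, 1_000_000, 10_000_000)}
```

For instance, a 1 MB buffer corresponds to 12,500 cells under this convention.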
Except for Scenario 1 (Section IV-C1), assuming that the majority of flows in the Internet have short RTTs [18], we partition the clients into two sets in a 70:30 ratio based on the configured RTT_min values: the mean and standard deviation of the bigger set are 64 ms and 16 ms, respectively, and those of the other set are 224 ms and 32 ms, respectively. We use the network emulator netem [19] at the server machines running the DASH clients to set the latency. For streaming, we use the Big Buck Bunny video sample, which lasts 600 seconds and has been encoded into 3 bitrate levels (1.2 Mbps, 2.2 Mbps, and 4.1 Mbps) of equal segments (i.e., each segment is 2 seconds long). Thus, a DASH client can choose the bitrate levels and segments for streaming video based on the measured congestion level of the network. We compare DiffPerf against the two most popular TCP congestion control algorithms on the Internet [20]: TCP CUBIC [21] and TCP BBR [22].

B. Metrics for evaluation
To evaluate the performance of DiffPerf and the TCP variants, we use two metrics. One is the per-flow average throughput, which in our case corresponds to the average throughput of each DASH client. The other metric of importance is the user-perceived quality of experience (QoE). The QoE metric is adopted from the widely used model proposed by [23], and is expressed as

QoE = \sum_{n=1}^{N} q(R_n) - λ \sum_{n=1}^{N-1} |q(R_{n+1}) - q(R_n)| - µ T_{stall} - µ_s T_s.

This QoE definition combines several performance factors: the playback bitrate R_n over the total N segments of the video, the variability of consecutive segment bitrates represented by the second summation, the duration of rebuffering T_stall (i.e., the duration of time the player's playout buffer has no content to render), and the startup delay T_s (i.e., the lag between the user clicking and the time rendering begins). As in [24], [23], q maps a bitrate to a quality value; λ is usually set to one, and µ and µ_s are set to the maximum bitrate of the video sample. We measure QoE over the entire duration of the video.
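The QoE model of [23] is straightforward to compute from a per-segment bitrate trace. A sketch using the parameter settings stated above (λ = 1, µ = µ_s = 4.1, the maximum bitrate of the sample; the identity quality mapping q(r) = r is our simplifying assumption):

```python
from typing import Callable, List

def qoe(bitrates: List[float], t_stall: float, t_start: float,
        q: Callable[[float], float] = lambda r: r,
        lam: float = 1.0, mu: float = 4.1, mu_s: float = 4.1) -> float:
    """QoE = sum q(R_n) - lam * sum |q(R_{n+1}) - q(R_n)| - mu*T_stall - mu_s*T_s."""
    quality = sum(q(r) for r in bitrates)
    # Penalty for bitrate switches between consecutive segments
    switching = sum(abs(q(b) - q(a)) for a, b in zip(bitrates, bitrates[1:]))
    return quality - lam * switching - mu * t_stall - mu_s * t_start
```

For example, a three-segment trace at 2.2, 2.2, and 4.1 Mbps with no stalls and a 1-second startup delay yields 8.5 − 1.9 − 0 − 4.1 = 2.5.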

C. Evaluation Results
1) Scenario 1: Evaluation of inter-class performance:
In this scenario we evaluate DiffPerf's inter-class bandwidth allocation model.We assume the access provider offers three classes of services: Golden (G), Silver (S) and Bronze (B), with weights of 3, 2 and 1, respectively.We run a set of experiments to evaluate the bandwidth allocated to users of different classes under different values of α.We assign 13 DASH clients to each service class, thereby having a total of 39 DASH clients in this scenario.All flows experience homogeneous RTT in this scenario.
Figure 2(a) plots, for each SC, the average throughput achieved by all flows in that class, for different values of α. Evidently, the ratios of the estimated average throughput of flows across the service classes closely follow the ratios obtained from our model (cf. Eq. (4)). In addition, the average throughputs converge with increasing α.
Figure 2(b) plots the average QoE of all flows in each SC. Observe that the QoE of service class B is low when the average throughput achieved (shown in Figure 2(a)) is low. The QoE of the three service classes converges with increasing α. As resources are shared more fairly among the competing flows with increasing α, the higher-level QoE is expected to reflect this fair sharing, given that the flows have homogeneous RTTs.
2) Scenario 2: Evaluation of intra-class performance: In this part, we evaluate our proposed performance-aware fairness, (β, γ)-fairness, i.e., DiffPerf's capability to mitigate the bias brought against the affected flows by the interaction between TCP CUBIC, TCP BBR, flows with heterogeneous RTTs, and switch buffer size. We also present the flexibility of γ in enabling a feature of practical interest: the trade-off between network efficiency and user QoE fairness. Recall that DiffPerf uses statistical flow classification and bandwidth allocation to the classified sub-classes to appropriately allocate a higher capacity to the negatively affected flows.
DiffPerf based on CUBIC. Figure 3 shows that DiffPerf's isolation enables the DASH clients in the lower sub-class to achieve higher throughput than under both TCP variants, while also achieving comparable aggregate throughput as TCP.¹ The parameter β gives an access provider the flexibility to decide on the flow classification based on a simple and intuitive metric (the z-score). While a small number of flows are classified into the lower sub-class, we note that these were also the worst affected ones. Figure 4 presents the corresponding QoE. The key observation is that the flows in the lower sub-class perceive higher QoE under DiffPerf than under both TCP variants, and only at the cost of a small number of flows in the upper sub-class. Another observation is that DiffPerf with γ = 0 is the most fair. DiffPerf not only improves fairness: we calculated the overall QoE values for all flows, and DiffPerf, via flow isolation as well as performance-aware bandwidth allocation, significantly improves the overall QoE compared to TCP alone. It achieves 1.86, 1.62, and 1.58 times higher QoE than CUBIC at γ = 0, 0.5, and 1, respectively; similarly, it achieves 1.63, 1.42, and 1.38 times higher QoE than BBR.
¹ Notice that CUBIC, BBR, and DiffPerf all achieve aggregate throughput slightly higher than 50 Mbps; this is due to the burstiness tolerated by OpenFlow meters. Also, BBR achieves the highest aggregate, as it fundamentally does not passively react to packet loss or delay as signals of congestion.

Fairness-efficiency tradeoff:
We run experiments for different values of γ to analyze the trade-off between efficiency (i.e., bandwidth utilization) and fairness (user-perceived quality fairness); β is set to −0.25. Figure 5 depicts that the aggregate throughput increases as γ increases. Meanwhile, the average throughput of the lower sub-class decreases and that of the upper sub-class increases as γ is increased from 0 to 1 (refer to the second Y-axis). Evidently, the parameter γ affects the average flow throughput of both sub-classes. This behavior is due to the fact that, when γ approaches 0, the clients are allocated equal bandwidth regardless of the sub-class; hence the affected DASH clients tend to achieve higher throughput (subject to their characteristics), and thus fairness is also improved. Conversely, when γ approaches 1, DiffPerf helps the network achieve better bandwidth utilization: it allocates higher bandwidth to the upper sub-class, which likely has flows with a greater tendency to exploit the provisioned network bandwidth, and hence results in better network utilization.
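The effect of γ described above can be illustrated with a toy allocation rule: a convex combination of an equal per-flow share (γ = 0, most fair) and a share proportional to each flow's achieved throughput (γ = 1, favoring flows that can exploit the provisioned bandwidth). This is our own hedged sketch, not DiffPerf's exact optimizer:

```python
def blend_allocation(throughputs, capacity, gamma):
    """Per-flow bandwidth blending fairness and efficiency.

    gamma = 0: every flow gets capacity / n (equal share);
    gamma = 1: each flow's share is proportional to its achieved
    throughput, so aggressive flows receive more bandwidth.
    """
    n = len(throughputs)
    total = sum(throughputs.values())
    equal = capacity / n
    return {f: (1 - gamma) * equal + gamma * capacity * x / total
            for f, x in throughputs.items()}
```

Note how intermediate values of γ interpolate between the two extremes, which is the knob the fairness-efficiency experiments above sweep.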
DiffPerf based on BBR. As TCP BBR has recently gained widespread attention, we also evaluate DiffPerf over TCP BBR. Figure 6 shows that DiffPerf's isolation enables DASH clients in the lower sub-class (i.e., the affected flows) to achieve higher throughput than BBR, while also being able to achieve comparable aggregate throughput as BBR. The number of flows classified into the sub-classes is illustrated next to the DiffPerf bar; for example, at β = 0, 26 flows are classified into the lower sub-class and 14 flows into the upper sub-class.

Impact of buffer size: Based on the experiments carried out on the Tofino switch, this part presents the impact of buffer size on the performance of bottlenecked flows and the flow optimization achieved using DiffPerf. Note that the Tofino switch updates DiffPerf with flow statistics every 1s (i.e., the sampling rate), and DiffPerf re-optimizes at every interval of ∆t = 5s. We set β = −0.25 and γ = 0; DiffPerf achieves 1.58 times higher QoE than BBR with a shallow buffer and 2.6 times higher QoE than BBR with a deep buffer. We note that a shallow buffer leads to overall better user-perceived quality; however, quality worsens with a much smaller buffer size (e.g., 100KB). A deep buffer might help low-throughput flows, especially those affected by the interaction of TCP with flow RTT, achieve better QoE, but it increases packet queuing delay. A very shallow buffer (e.g., 100KB) reduces packet queuing delay but increases packet losses. Hence, both these extreme buffer sizes increase the DASH client's average stalling time (i.e., the duration of time the player's playout buffer has no content to render). With BBR, the average client stalling time is 92.2, 56.4, and 76.5 seconds, while under DiffPerf it is reduced to 38.8, 28.8, and 58.4 seconds, over buffer sizes 100KB, 1MB, and 10MB, respectively. DiffPerf thus proves effective in improving user-perceived quality across multiple buffer sizes. Lastly, it is worth noting from this set of experiments that the buffer size of 1MB makes the better trade-off between queuing delays and packet losses.
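The stalling metric used above (the time the playout buffer has no content to render) can be made concrete with a toy playout model. This is a simplification of ours (fixed segment duration, strictly sequential downloads, startup delay counted as a stall), not the player's actual accounting:

```python
def stalling_time(download_s_per_seg, seg_duration_s):
    """Total stall time of a simple DASH playout model.

    download_s_per_seg: download time of each segment, in order.
    A segment (seg_duration_s of content) can start playing only once
    it has fully arrived; stalls accumulate whenever the playout buffer
    runs dry before the next segment is ready. The initial startup
    delay is counted as a stall in this sketch.
    """
    t = 0.0          # wall-clock time
    play_end = 0.0   # time until which buffered content can play
    stall = 0.0
    for d in download_s_per_seg:
        t += d                      # segment finishes downloading at t
        if t > play_end:            # buffer ran dry: player stalled
            stall += t - play_end
            play_end = t
        play_end += seg_duration_s  # one more segment buffered
    return stall
```

The model makes the two failure modes above visible: deep buffers inflate per-segment download times via queuing delay, while very shallow buffers inflate them via loss recovery, and both show up as stalls.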
Overall, DiffPerf is fairer than both CUBIC and BBR in terms of client throughput and client QoE, and it provides the highest overall QoE.
Scalability: DiffPerf's operations on the Tofino switch are split across the controller and the dataplane. The controller collects aggregate real-time statistics of the active flows, performs the optimization, and regularly pushes the result to the dataplane. The dataplane tracks the number of bytes transferred by the active flows. DiffPerf does not impose a high sampling rate, which may lead to inaccurate statistics, especially when the DASH clients enter an OFF period. Hence, the communication between controller and dataplane is only at the scale of seconds, which works well for long-running video flows over the Internet; this is also demonstrated by our experimental results.
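A minimal sketch of how the controller can turn the dataplane's cumulative byte counters into per-flow throughput estimates at the sampling interval described above; the function and names are illustrative, not DiffPerf's actual code:

```python
def throughput_mbps(prev_bytes, curr_bytes, interval_s):
    """Estimate per-flow throughput from two successive counter samples.

    prev_bytes / curr_bytes: dicts mapping flow id -> cumulative bytes
    transferred, as reported by the switch every interval_s seconds.
    Returns flow id -> throughput in Mbit/s over the last interval.
    """
    rates = {}
    for f, curr in curr_bytes.items():
        delta = curr - prev_bytes.get(f, 0)   # bytes sent this interval
        rates[f] = delta * 8 / (interval_s * 1e6)  # bytes -> Mbit/s
    return rates
```

Differencing counters like this keeps the dataplane's job to simple byte accounting while all heavier logic stays on the controller, which is the split described above.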
3) Scenario 3: The dynamics of DiffPerf: Finally, to understand how DiffPerf performs in real-world cases, we evaluate it in a dynamic scenario where users from different service classes join and leave the network at different times. In this set of experiments, we conduct the evaluation on an OpenFlow network centralized by the ODL controller, where 150 DASH clients share a 200 Mbps bottleneck link; they have variable RTTs, with the ratio and distribution the same as in the previous scenario. The arrivals of the DASH client requests follow a Poisson process with rate λ = 1 client/s. A client exits after the entire video (which lasts 600 seconds) is streamed. The DASH clients subscribe to the G, S, and B service classes in the ratio 1:2:3. The weights of the service classes are kept the same as before, i.e., G:S:B = 3:2:1. We set the values of α, β, and γ to 1, −0.25, and 0.5, respectively. At every interval of ∆t = 15s, DiffPerf uses the last measured statistics, such as the number of active flows and each flow's instantaneous throughput (δ = 0), to send commands to the switch for regulating network flows in the next time interval. The OpenFlow switch updates DiffPerf with flow statistics every 3s (i.e., the default sampling rate in the Brocade ICX-6610 switch). Figure 9(a) shows the arrival and departure of DASH flows. The number of active flows in the two sub-classes, for each of the service classes, is depicted in Figures 9(b), (c), and (d). Although the video being streamed is 600 seconds long, observe that the G-class clients complete earlier than the S-class and B-class clients, for both the lower and upper sub-classes; similarly, S-class flows finish earlier than B-class flows. We observe a sudden decrease in the active flows a few times (the dips on the curves); this is not because the flow(s) actually leave the system, but rather due to the expiry of the flows' idle timeout. When a DASH client does not receive video segment packets, the timeout causes the flow to be deemed inactive; however, once the client resumes receiving data, DiffPerf promptly counts it as an active flow again. Figure 10 plots the dynamic bandwidth allocation recommended by DiffPerf for each service class and the sub-classes within. The allocated bandwidth accounts for the number of active flows in each service class and their achieved throughput, optimized via the (β, γ) performance-aware mechanism. The figure also shows that DiffPerf adapts quickly to the departure of flows (observe the time period after 600 seconds), allocating the spare capacity to the remaining active flows.
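Using the closed-form inter-class allocation of Theorem 1 (Appendix A), the bandwidth recommended per class at any instant can be sketched as follows; function and variable names are ours, and α = 1 matches the setting above:

```python
def class_allocation(n, w, capacity, alpha=1.0):
    """Closed-form inter-class allocation of Theorem 1.

    n: dict class -> number of active flows;
    w: dict class -> class weight.
    Each class s receives n_s * w_s**(1/alpha) / sum(...) * capacity.
    """
    score = {s: n[s] * w[s] ** (1.0 / alpha) for s in n}
    total = sum(score.values())
    return {s: score[s] / total * capacity for s in n}
```

Because the allocation depends only on the current flow counts, spare capacity freed by departing flows is automatically redistributed at the next ∆t interval, which is the adaptation visible after the 600-second mark in Figure 10.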

A. TCP Congestion Control
The increase in network bandwidth also saw the emergence of 'high-speed' TCP variants such as FAST [25], BIC [26], CUBIC [21], and BBR [22] for transporting Internet traffic. Yet TCP's inability to fairly share bandwidth among flows with heterogeneous RTTs, a problem known to the community for around two decades [27], [28], [29], still persists. As demonstrated by our experiments (and also by other works, e.g., [30]), CUBIC exhibits such behavior, and so does BBR [9], [31]. This unfairness in achieved throughput worsens when flows with different TCP congestion control mechanisms compete [11]. Another interesting observation from the literature is that the relative performance degradation in throughput can be due to more than the single factor of RTT. In this context, we highlight that DiffPerf is agnostic to the specific RTTs of flows and to other router specifications (e.g., buffer size) when performing optimization and enforcing the computed optimized bandwidth; indeed, DiffPerf classifies and isolates flows of dissimilar characteristics solely based on tracking their achieved throughput.

B. Service Differentiation
Service differentiation is at the core of network quality of service (QoS) provisioning for serving traffic from multiple classes over the network [32], [33], [34], [35]. While IntServ [34] did not find adoption in the Internet, DiffServ [32] inspired a body of work on providing differentiated services. However, many such solutions mandate sophisticated scheduling with manual configuration of QoS knobs on a per-service-class basis. Instead, we choose a well-known utility-function-based framework which enables a service operator to practically specify the number of service classes and strike a good balance between bandwidth share and performance.
In [36], the authors proposed an approach for rate-delay (RD) differentiation by maintaining two queues at the router's output link. While its aspirations resemble DiffPerf's, it is still best-effort and does not promise any rate or loss guarantees. [37] discussed a static service differentiation framework for ISPs. In short, class-based traffic control and service differentiation have been largely limited to theoretical analyses [36], [38], [39], [40], and have not been experimented with on hardware switches with real application traffic. DiffPerf's inter-class utility is general enough to make trade-offs among desirable performance metrics, and it operates dynamically based on active users and available bandwidth.

C. Fair Queuing
Fair queuing has been a topic of extensive research [41], [42], [43], [44], [45], [46]. FQ-CoDel [42], a recent AQM discipline, offers good performance gains in achieving fairness among flows by classifying them into different buckets and serving them in a round-robin manner. However, large memory is extremely expensive or unavailable in the data plane; hence it is practically infeasible to accommodate a very large number of buckets for hashing a large number of flows. DiffPerf's optimizer operates simply by comparing a flow's z-score with a pre-defined β threshold (i.e., the classifier requires no training). Additionally, unlike AQM, DiffPerf is not limited to a specific congestion control algorithm; it works on top of several interacting parameters such as buffer size, flow characteristics, and congestion control algorithm. DiffPerf is also portable: it can be packaged as a virtual network function (VNF) over a SmartNIC [47] to handle extremely heavy workloads.

D. User Quality of Experience (QoE)
In the context of video streaming, several studies proposed to improve user QoE [24], [48] or to achieve QoE fairness [49]. These approaches continuously attempt to improve the adaptive bitrate (ABR) algorithms in the DASH reference player at the application layer, based on several performance metrics observed in the application. Our work differs from them in that we propose a bottom-up optimization: DiffPerf reacts to the interplay between several inherently coupled network parameters by continuously improving the affected traffic flows. This in large part improves the performance metrics (e.g., QoE fairness) at the application.

VI. CONCLUSION
We propose DiffPerf, which leverages the rapid development of network softwarization and enables agile and dynamic network bandwidth allocation at the AP vantage point. At a macroscopic level, DiffPerf offers access providers new capabilities for performance guarantees by dynamically allocating bandwidth to service classes, through which the trade-off between fairness and differentiation can be made. At a microscopic level, DiffPerf isolates and optimizes the flows affected by the interplay between several inherently coupled network parameters such as flow characteristics, buffer size, and congestion control algorithm. We implemented two prototypes of DiffPerf: one in ODL with OpenFlow, and the other on the programmable Tofino switch. We evaluated DiffPerf from an application perspective for MPEG-DASH video streaming. Our experimental results confirm DiffPerf's capabilities of QoE provisioning, fairness, and optimization.

Proof of Theorem 2: By the definition of the set $F_s^L(\beta)$, for any two thresholds $\beta_1 < \beta_2$ we have $F_s^L(\beta_1) \subseteq F_s^L(\beta_2)$. For any two flows $f$ and $f'$ satisfying $f \in F_s^L(\beta_1)$ and $f' \in F_s^L(\beta_2) \setminus F_s^L(\beta_1)$, we have $x_f < x_{f'}$ because $z_f < \beta_1 \le z_{f'} < \beta_2$. Therefore, the per-flow capacity allocated to the upper sub-class satisfies $\gamma \frac{X_s}{n_s - |F_s^L|} + (1-\gamma)\frac{X_s}{n_s} \ge \frac{X_s}{n_s}$; thus it is non-decreasing in $\beta$ and no lower than $X_s/n_s$.

Figure 2: Achieved throughput and corresponding QoE of multiple service classes for different values of α

Figure 3: Aggregate throughput of the SC sub-classes flows
Figure 4: QoE achieved by each DASH client for β = −0.25 and different values of γ; (a) the upper sub-class of flows, (b) the lower sub-class of flows

Figures 7(a) and (b) plot the perceived QoE of DASH clients in the aforementioned sub-classes. The lower sub-class flows with DiffPerf perceive better QoE than with BBR.

Figure 6: Aggregate throughput of the SC sub-classes flows
Figure 7: Perceived QoE of DASH clients in the SC sub-classes

Figure 8: Impact of switch buffer size (β = −0.25 and γ = 0)
(c) Bandwidth allocation for B-class

Figure 10: Dynamic service classes bandwidth allocation

APPENDIX A
PROOFS OF THEOREMS
Proof of Theorem 1: From optimization theory, our bandwidth allocation problem is a convex optimization problem. By the Karush-Kuhn-Tucker (KKT) conditions, it has a unique solution which satisfies $w_s (X_s/n_s)^{-\alpha} - u + u_s = 0$ and $u_s X_s = 0$, $\forall s \in S$, and $u \left( \sum_{s \in S} X_s - C \right) = 0$, where $u$ and $(u_s : s \in S)$ are KKT multipliers and satisfy $u, u_s \ge 0$ for any $s \in S$. By solving the above equations, we can derive that $X_s = \frac{n_s \sqrt[\alpha]{w_s}}{\sum_{s' \in S} n_{s'} \sqrt[\alpha]{w_{s'}}}\, C$, $\forall s \in S$.
$\frac{1}{|F_s^L(\beta_1)|} \sum_{f \in F_s^L(\beta_1)} x_f \le \frac{1}{|F_s^L(\beta_2)|} \sum_{f \in F_s^L(\beta_2)} x_f$, i.e., the average achieved throughput of the flows within the lower sub-class $F_s^L$ is non-decreasing in $\beta$. By Eq. (5), when $\beta = 0$, the average achieved throughput of the flows within the lower sub-class $F_s^L$ equals the average per-flow capacity re-allocated to $F_s^L$. Because $|F_s^L|$ is non-decreasing in $\beta$, $\gamma n_s/(n_s - |F_s^L|)$ is non-decreasing in $\beta$. By Eq. (5), the capacity allocated per flow of the upper sub-class satisfies $\gamma \frac{X_s}{n_s - |F_s^L|} + (1-\gamma)\frac{X_s}{n_s} \ge \frac{X_s}{n_s}$.