Postprint version. Paper to be presented at ACM CoNEXT 2007 Conference, December 2007, 12 pages.
Improving Service Differentiation in IP Networks through
Dual Topology Routing
Kin-Wah Kwong, Roch Guérin
University of Pennsylvania
{kkw@seas, guerin@ee}.upenn.edu
Anees Shaikh, Shu Tao
IBM T.J. Watson Research Center
{aashaikh@watson, shutao@us}.ibm.com

The work of Kin-Wah Kwong and Roch Guérin was supported by NSF grant CNS-0627004.
ABSTRACT
The convergence on IP of a wide variety of traffic types has
strengthened the need for service differentiation. Service
differentiation relies on two equally important components:
(i) resource allocation, i.e., what resources does a given ser-
vice class have access to; and (ii) contention resolution, i.e.,
how is access to shared resources arbitrated between ser-
vice classes. The latter has been well studied with numer-
ous mechanisms, e.g., scheduling and buffer management,
supporting it in modern routers. In contrast, relatively few
studies exist on the former, and in particular on the impact
of routing that determines the resources a given service class
is assigned to. This is the focus of the paper, which seeks to
investigate how routing influences a network’s ability to effi-
ciently support different service classes. Of particular inter-
est is the extent to which the ability to route service classes
separately is beneficial. This question is explored for a base
configuration involving two classes with either similar or en-
tirely different service objectives (cost functions). The pa-
per’s contributions are in demonstrating and quantifying the
benefits that the added flexibility of different (dual) routing
affords, and in developing an efficient heuristic for comput-
ing jointly optimal routing solutions. The former can mo-
tivate the deployment of newly standardized multi-topology
routing (MTR) functionality. The latter is a key enabler for
the effective use of such capability.
1. INTRODUCTION
IP networks today carry traffic of different types,
characteristics, and performance requirements. For in-
stance, many ISPs are using IP networks to deliver
bundled services that include performance-sensitive ap-
plications, such as voice and video, as well as the more
“elastic” data services. Large enterprises also rely heav-
ily on IP networks to transfer a mix of both critical (e.g.,
data center backup) and non-critical data for their busi-
ness needs. To better support such mixed traffic, service
differentiation is often necessary or at least desirable.
Service differentiation is widely available in modern
routers, e.g., in the form of scheduling and buffer man-
agement mechanisms that arbitrate between service classes
in the presence of resource contention. Although “con-
tention resolution” is a critical component in support
of service differentiation, it does little to ensure that a
network delivers the best possible services to the differ-
ent service classes it supports, i.e., minimizes the onset
of contention whenever possible. In IP networks, this is
largely controlled by routing that determines how net-
work resources (link bandwidth) are assigned to carry
different traffic flows. In order to optimize routing for
the expected traffic demand, network operators often
employ traffic engineering (TE) algorithms to determine
the paths, which are then implemented using shortest-
path routing protocols, such as OSPF or IS-IS.
We investigate the performance benefits of different
routing schemes using this traditional TE framework,
but with two traffic classes (i.e., high and low priority).
We assume a simple contention resolution mechanism—
priority queueing (i.e., a separate queue is maintained
on each link for each service class, and higher priority
classes are always served first). When using common
TE approaches in a priority queueing environment, it
is straightforward to optimize performance of the high-
priority traffic. However, as we show, such optimized
performance often comes at the expense of low-priority
traffic, whose performance may be severely degraded.
Ideally, we seek a routing scheme that ensures the best
performance for high-priority traffic, while still main-
taining reasonable performance for low-priority traffic.
In this paper, we develop a practical routing scheme
that achieves the goal of improving support for service
differentiation in IP networks. Our approach is based
on multi-topology routing (MTR), in which different
routings are computed for different service classes, al-
lowing greater flexibility in exploiting available network
resources to meet their service goals. In traditional IP
networks, a single weight is assigned to each link, which
produces a single routing that is used to forward packets
of all traffic classes. We refer to this traditional routing
scheme as single-topology routing (STR). In contrast,
MTR allows multiple weights on each link, and hence
a corresponding number of different routings that can
each be assigned to separate service classes. MTR is be-
ing standardized [1] and is becoming available on a num-
ber of router platforms. In our investigation, we limit
ourselves to two topologies, i.e., two distinct weights
on each link, and denote the resulting routing as dual-
topology routing (DTR). In DTR, high-priority traffic
is routed using one set of link weights, and low-priority
traffic routed using the other.
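To make the DTR idea concrete, the sketch below (illustrative only; it assumes the networkx library, and the topology, weights, and traffic classes are invented for the example) computes one shortest-path routing per weight set over the same physical graph, which is conceptually what an MTR-capable router does for each configured topology.

    # Minimal sketch of dual-topology routing: one physical graph, two weight sets.
    # Assumes the networkx library; topology and weights are illustrative only.
    import networkx as nx

    edges = [("A", "B"), ("B", "C"), ("A", "C"), ("C", "D"), ("B", "D")]
    w_high = {("A", "B"): 1, ("B", "C"): 1, ("A", "C"): 1, ("C", "D"): 1, ("B", "D"): 3}
    w_low  = {("A", "B"): 1, ("B", "C"): 4, ("A", "C"): 5, ("C", "D"): 4, ("B", "D"): 1}

    def build_topology(weights):
        """Return a directed graph carrying one of the two weight sets."""
        g = nx.DiGraph()
        for (u, v) in edges:
            # Links are bidirectional; install the same weight in both directions.
            g.add_edge(u, v, weight=weights[(u, v)])
            g.add_edge(v, u, weight=weights[(u, v)])
        return g

    g_high = build_topology(w_high)  # topology used by high-priority traffic
    g_low = build_topology(w_low)    # topology used by low-priority traffic

    # The same source-destination pair follows a different path in each topology.
    print(nx.shortest_path(g_high, "A", "D", weight="weight"))  # ['A', 'C', 'D']
    print(nx.shortest_path(g_low, "A", "D", weight="weight"))   # ['A', 'B', 'D']

With the two weight sets above, the same source-destination pair is forwarded along different paths for the two classes, which is exactly the flexibility DTR exploits.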
While the increased routing flexibility of DTR is clearly
beneficial when compared to STR, those benefits come
at a cost. The first is the higher complexity of identify-
ing good sets of link weights. This is because the solu-
tion space to explore is exponentially larger than with
STR (each assignment of link weights for one traffic
class needs to be evaluated for all possible combinations
of link weights for the other traffic class). Furthermore,
the objective functions that need to be optimized by
selecting link weights are often different for each traf-
fic class. For example, two commonly used objective
functions include one that is based on link utilization
and seeks to minimize the average overall network de-
lay, and a second that involves Service Level Agree-
ments (SLAs) in the form of delay bounds that need
to be met for each “user” (pairs of source and destina-
tion nodes). Computing weight settings that optimize
various combinations of such objective functions can be
challenging. As a result, realizing any of the benefits of
DTR calls for developing computationally tractable ap-
proaches for finding good combinations of link weights
across these many different scenarios. Another cost of
DTR over STR is in the added configuration and com-
putational overhead it imposes on routers, because of
the need to configure and disseminate multiple weights
for each link and run multiple SPF algorithms in the
presence of network changes.
Given the costs associated with DTR, it is important both to provide techniques that mitigate them where possible and to quantify the magnitude of the improvements DTR can achieve. The
paper makes contributions toward both of these issues,
first by developing an efficient heuristic for computing
good weight settings for DTR in the presence of var-
ious objective functions, and second by performing a
comprehensive assessment of the performance benefits
that DTR can afford. Through extensive simulations,
we show that DTR is able to provide substantial perfor-
mance improvements, especially for low-priority traffic,
when compared with STR. These benefits are consis-
tent across a number of topologies, network sizes, traf-
fic patterns, and objective functions in the optimization
formulation.
The rest of this paper is structured as follows. Sec-
tion 2 provides some background on multi-topology rout-
ing and reviews a few related works. Section 3 formu-
lates the problem and introduces the two objective func-
tions considered in our problem setting. Section 4 de-
scribes the heuristic algorithm designed for link weight
computation in DTR. Section 5 applies this algorithm
to a wide set of network topologies and traffic patterns,
and provides an in-depth discussion of when and why
DTR can offer substantial performance improvements
above and beyond STR-based solutions. Section 6 sum-
marizes our findings and concludes the paper.
2. RELATED WORK
The problem of finding optimal link weight settings in
IP networks that rely on destination-based, SPF routing
was first studied in [2], where the problem was identified
as NP-hard and a local search heuristic was proposed.
Several other heuristics, e.g., [3, 4], were later developed
as extensions to this work. Nucci et al. [5] studied a
similar problem, and their goal was to compute a link
weight setting that both met SLA requirements and was
robust to link failures. All these works consider routing
of only a single traffic class, i.e., an STR setting.
The use of MTR for traffic engineering purposes was proposed in [6], based on dividing the traffic matrix into smaller "slices", each routed on a separate topology; the greater the number of slices, the better the performance, as it increases the ability to approximate optimal routing. Recently, the use of MTR has been proposed
to improve the resiliency of IP routing [7, 8, 9], with
different topologies offering backup routes for different
failure scenarios. Our work differs from these earlier
contributions in that we focus on assessing MTR’s ben-
efits in a network that offers service differentiation.
Also related to our work are studies on QoS routing for optimizing multi-class traffic performance, e.g., [10, 11, 12, 13, 14, 15]. However, even though they consider the problem of optimizing network performance across multiple traffic classes and performance objectives, these works focus on a flow-oriented environment where the goal is to minimize the blocking probability of high-priority requests [14, 15]. In contrast, in
this paper we assume the standard destination/SPF
based routing/forwarding paradigm of traditional IP
networks. Similarly, works on multi-class TE in MPLS
[16, 17] have also tackled the problem of network op-
timization in the presence of different traffic classes.
However, they again differ from the work of this paper
because of the flow-oriented nature of MPLS networks.
3. PROBLEM FORMULATION
We model the network as a directed graph $G = (V, E)$ with node set $V$ and edge set $E$, where $C_{ij}$ denotes the capacity of edge $(i, j) \in E$. The traffic from the high- and low-priority classes is specified through two traffic matrices, $T_H = [r_H(s, t)]_{|V| \times |V|}$ and $T_L = [r_L(s, t)]_{|V| \times |V|}$, respectively, where $r_H(s, t)$ and $r_L(s, t)$ represent the volume of high- and low-priority traffic originating at node $s$ and destined for node $t$, with $r_H(s, s) = r_L(s, s) = 0, \forall s \in V$.
As mentioned earlier, the contention resolution mechanism used to differentiate between service classes is a simple two-priority queueing scheme. The high-priority queue is always served first; hence the low-priority traffic only sees the residual capacity left by the high-priority traffic. For example, if link $l$ with capacity $C_l$ is carrying $H_l$ units of high-priority traffic, the residual capacity seen by the low-priority traffic is $\widetilde{C}_l = \max(C_l - H_l, 0)$.
When computing “good” weight settings, we focus on
two cost/objective functions that are commonly used in
practice in IP networks.
3.1 Load-based cost function
We first consider a load-based cost function that seeks
to minimize the overall queueing delay each traffic class
experiences in a network. Our model is a generalized
version of that of [2]. Specifically, we assume that the
performance of both traffic classes on a link can be mod-
eled by an M/M/1 model. Using the piecewise linear
approximation suggested in [2], we define the cost in-
curred by the high-priority traffic on link l as
$$
\Phi_{H,l} =
\begin{cases}
H_l, & H_l/C_l \le 1/3 \quad \text{(1a)}\\
3H_l - \tfrac{2}{3}C_l, & 1/3 \le H_l/C_l \le 2/3 \quad \text{(1b)}\\
10H_l - \tfrac{16}{3}C_l, & 2/3 \le H_l/C_l \le 9/10 \quad \text{(1c)}\\
70H_l - \tfrac{178}{3}C_l, & 9/10 \le H_l/C_l \le 1 \quad \text{(1d)}\\
500H_l - \tfrac{1468}{3}C_l, & 1 \le H_l/C_l \le 11/10 \quad \text{(1e)}\\
5000H_l - \tfrac{16318}{3}C_l, & 11/10 \le H_l/C_l \quad \text{(1f)}
\end{cases}
$$
where $H_l$ denotes the amount of high-priority traffic on link $l$. Similarly, the cost incurred by the low-priority traffic on link $l$, $\Phi_{L,l}$, can be defined by replacing $H_l$ with the amount of low-priority traffic on link $l$, $L_l$, and $C_l$ with the residual capacity on link $l$ seen by the low-priority traffic, $\widetilde{C}_l$.
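To make the load-based cost concrete, here is a minimal sketch (not the authors' code) of the piecewise-linear link cost of Eq. (1), together with its low-priority variant, which substitutes the low-priority load and the residual capacity:

    # Piecewise-linear link cost of Eq. (1) (the approximation of [2]); illustrative sketch.
    def link_cost(load, capacity):
        """Return the cost incurred by `load` units of traffic on a link of `capacity`."""
        if capacity <= 0:
            # No residual capacity: fall back to the steepest segment of Eq. (1).
            return 5000 * load
        u = load / capacity  # link utilization
        if u <= 1.0 / 3:
            return load
        elif u <= 2.0 / 3:
            return 3 * load - (2.0 / 3) * capacity
        elif u <= 9.0 / 10:
            return 10 * load - (16.0 / 3) * capacity
        elif u <= 1.0:
            return 70 * load - (178.0 / 3) * capacity
        elif u <= 11.0 / 10:
            return 500 * load - (1468.0 / 3) * capacity
        else:
            return 5000 * load - (16318.0 / 3) * capacity

    def low_priority_link_cost(low_load, high_load, capacity):
        """Low-priority cost: same cost function, evaluated against the residual capacity."""
        residual = max(capacity - high_load, 0.0)
        return link_cost(low_load, residual)

For instance, a link carrying 0.9 units of high-priority traffic on a unit-capacity link falls into segment (1d) and costs 70(0.9) - 178/3, roughly 3.67.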
Based on the above definitions, we define the overall cost for the two traffic classes, denoted by $\Phi_H$ and $\Phi_L$, as the sum of all link costs, i.e., $\Phi_H := \sum_{l \in E} \Phi_{H,l}$ and $\Phi_L := \sum_{l \in E} \Phi_{L,l}$. Our goal is to minimize $\Phi_H$ and $\Phi_L$, while preserving the priority precedence between the traffic classes. We, therefore, define a lexicographical optimization function such that the high-priority traffic is optimized first. Specifically, our goal is to

$$\text{minimize } A = \langle \Phi_H, \Phi_L \rangle \qquad (2)$$

Note that we use the notation $\langle \cdot, \cdot \rangle$ to denote a lexicographically ordered tuple. Namely, $\langle x_1, y_1 \rangle > \langle x_2, y_2 \rangle$ if and only if $x_1 > x_2$, or $x_1 = x_2$ and $y_1 > y_2$. In other words, we optimize first for the high-priority traffic and then for the low-priority traffic.
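Since tuples in most languages compare lexicographically, the ordering of Eq. (2) maps directly onto code; the fragment below is a small illustrative sketch (the cost values are arbitrary):

    # Lexicographic comparison of <Phi_H, Phi_L>: Python tuples already order this way.
    candidate = (12.5, 230.0)  # (high-priority cost, low-priority cost) of a new weight setting
    best = (12.5, 310.0)

    if candidate < best:
        # Accepted: same high-priority cost, strictly better low-priority cost.
        best = candidate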
3.2 SLA-based cost function
Another commonly used cost function is based on
Service Level Agreements (SLA’s) [5]. SLA’s are typ-
ically defined in the form of end-to-end delay targets
for each source-destination (SD) pair. SLA-based cost
functions are particularly meaningful for high-priority
traffic, as customers of this service class pay a premium
for their service. Typically, whenever an SLA is vio-
lated, a provider pays a certain penalty back to its cus-
tomers. There is, therefore, a strong incentive on the
provider’s part to select a routing that minimizes the
chance of such occurrences.
The second cost function we consider reflects such an
objective. Specifically, its goal is to ensure that delay
performance targets are met for all the high-priority
traffic SD pairs. Delay consists of both queueing and
propagation delays contributed by all links on the path
used by a given SD pair. Specifically, the average de-
lay D
l
(seen by high-priority traffic) on link l can be
computed as
D
l
=
s
C
l
µ
H
l
C
l
H
l
+ 1
+ p
l
s
C
l
µ
Φ
H,l
C
l
+ 1
+ p
l
(3)
where s is the average packet size, p
l
is the propagation
delay of link l. Note that we use Φ
H,l
/C
l
to approximate
H
l
/(C
l
H
l
) [18].
Let $\xi_{(s,t)}$ be the average end-to-end delay for SD pair $(s, t)$. Suppose the SLA delay bound of any SD pair is $\theta > 0$. We define the cost function for the high-priority traffic between SD pair $(s, t)$ as

$$
\Lambda_{(s,t)} =
\begin{cases}
0, & \text{if } \xi_{(s,t)} \le \theta \quad \text{(4a)}\\
a + b(\xi_{(s,t)} - \theta), & \text{otherwise} \quad \text{(4b)}
\end{cases}
$$

Here, $a$ and $b$ are two positive parameters that determine the SLA penalty; $a$ is a constant penalty incurred by any SLA violation, while $b$ represents the penalty ratio associated with delay in excess of the SLA bound. Without loss of generality, we choose $a = 100$ and $b = 1$. The overall cost function for the high-priority traffic is then defined as $\Lambda := \sum_{(s,t) \in V \times V} \Lambda_{(s,t)}$. In other words, $\Lambda$ can be considered as the total amount of penalty that the provider pays back to customers because of SLA violations.
For the low-priority traffic, we continue to use the load-based cost function $\Phi_L$ defined in Section 3.1. In keeping with the lexicographical ordering that gives precedence to the high-priority traffic cost function, our goal is now to

$$\text{minimize } S = \langle \Lambda, \Phi_L \rangle \qquad (5)$$
3.3 Discussions
3.3.1 On the feasibility of a joint cost function
A natural question when considering an optimization problem consisting of multiple cost functions, i.e., for different service classes, is to explore whether a unique, joint cost function can be defined to facilitate the selection of a routing solution (see Footnote 1) that accurately captures the goals of the original individual cost functions. One such possibility is a joint cost function of the form $J := \alpha\Phi_H + \Phi_L$ or $J := \alpha\Lambda + \Phi_L$, where by varying $\alpha > 0$ a different trade-off can be realized between the two service classes. When setting $\alpha \to \infty$, $J$ defaults back to the earlier lexicographical cost function. A similar joint cost function was proposed in [19], for a configuration where the two objectives being combined were performance and reliability instead of the performance of different service classes as is the case here.

Footnote 1: Note that the use of a joint cost function only makes sense in a setting where STR is used. If DTR is available, because each class can be routed independently, there is no benefit in coupling their routing through a joint cost function.
In spite of the attractiveness of such an approach, especially in terms of computational simplicity (see Footnote 2), identifying how to select $\alpha$ in order to realize a given performance trade-off for high- and low-priority traffic is challenging. Setting $\alpha$ to too low a value can result in a "priority inversion", in the sense that some high-priority traffic can see relatively poor performance compared to what low-priority traffic sees overall. This violates the concept of "high priority" and the associated ordering among traffic classes. Conversely, choosing too large a value for $\alpha$ yields little benefit compared to a lexicographical ordering solution, and in general identifying a value of $\alpha$ that works well across all configurations is a challenging task. This is especially so when dealing with different cost functions across service classes, as identifying how to "add" quantities that are expressed in different units is at best unintuitive. We briefly illustrate these difficulties next through a simple example.

Footnote 2: A purely lexicographical optimization calls for identifying "all" routings that minimize the first cost function, and then selecting the one that yields the best value for the second. With a joint cost function, a single optimization is instead performed.
Figure 1: A simple 3-node network.

Consider a 3-node triangular network (A, B, and C), as shown in Fig. 1. The capacity of each link is 1 unit, and 1/3 and 2/3 units of high- and low-priority traffic, respectively, originate at node A destined for node C. Consider $J := \alpha\Phi_H + \Phi_L$, where $\alpha = 35$. Under STR, both traffic classes then use link A-C to minimize $J$, which results in $\Phi_H = 1/3$ and $\Phi_L = 64/9$.
This solution is the same as lexicographically minimiz-
ing A = hΦ
H
, Φ
L
i and maintains the priority prece-
dence among the two traffic classes. However, the low-
priority traffic performance is very poor, so that one
may want to improve it by lowering α to, say, α = 30.
The solution that minimizes J then calls for evenly
splitting both high- and low-priority traffic over links
AB and AC, which yields Φ
H
= 1/2 and Φ
L
= 4/3.
This improves Φ
L
by 81%, but also degrades Φ
H
by
50%, which results in a “priority inversion” that vio-
lates the implicit precedence given to high priority traf-
fic. This illustrates how difficult it is to select a value
of α such that both traffic classes achieve good perfor-
mance, and matters are even worse when considering
more complex topologies and different combinations of
traffic loads and patterns.
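These numbers are easy to verify; the short worked sketch below plugs the two candidate routings into the piecewise segments of Eq. (1) and compares $J$ for $\alpha = 35$ and $\alpha = 30$ (the code is purely illustrative, with the segment formulas written out by hand).

    # Worked check of the 3-node example: J = alpha*Phi_H + Phi_L under the two routings.
    # Costs follow the piecewise-linear segments of Eq. (1); values are exact fractions.
    phi_h_direct = 1/3                            # A-C at utilization 1/3: segment (1a)
    phi_l_direct = 70*(2/3) - (178/3)*(2/3)       # low load 2/3 on residual 2/3: segment (1d) -> 64/9

    phi_h_split = 3 * (1/6)                       # three links at utilization 1/6: segment (1a) -> 1/2
    phi_l_split = 3 * (3*(1/3) - (2/3)*(5/6))     # low load 1/3 on residual 5/6: segment (1b) -> 4/3

    for alpha in (35, 30):
        j_direct = alpha * phi_h_direct + phi_l_direct
        j_split = alpha * phi_h_split + phi_l_split
        # alpha = 35 favors the direct routing (J ~ 18.78 vs 18.83);
        # alpha = 30 flips the choice (J ~ 17.11 vs 16.33), degrading Phi_H by 50%.
        print(alpha, round(j_direct, 2), round(j_split, 2))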
3.3.2 STR with trade-off between traffic classes
Our performance goal is formulated as a lexicograph-
ical optimization, which gives strict precedence to the
performance of high-priority traffic. With DTR the im-
pact of this strict precedence is “softened” by giving
low-priority traffic control on its own routing once the
high-priority routing has been set. A natural question is
then whether similar benefits are achievable with STR
by slightly relaxing the precedence rule.
For load-based cost functions, this can be realized as follows: Denote the minimum cost for high-priority traffic as $\Phi_H^*$; one may then allow STR solutions that satisfy $\Phi_H \le (1 + \epsilon)\Phi_H^*$, $\epsilon > 0$. In other words, we allow all routing solutions that do not degrade the performance of high-priority traffic by more than $\epsilon$. This increases the number of routing solutions that can be considered, hence improving the odds of finding one that yields better performance for low-priority traffic. The parameter $\epsilon$ allows us to tune how much high-priority performance we are willing to sacrifice to improve low-priority performance. For an SLA-based cost function, a similar trade-off can be realized simply by increasing the high-priority delay bound to $(1 + \epsilon)\theta$. As $\epsilon$ grows, more STR routing solutions satisfy the SLA, hence again increasing the odds of finding one that yields better performance for low-priority traffic.
As we shall see, while such relaxation can help im-
prove the overall performance of STR, it does not come
anywhere near what is achievable through DTR. We
expand on this in Section 5.3.
4. HEURISTIC SEARCH FOR DTR LINK
WEIGHT ASSIGNMENT
For most cost functions, including the two we se-
lected, finding an optimal link weight setting in STR
is an NP-hard problem [2]. As alluded to earlier, the
problem is even more complex with DTR since, for each
possible weight configuration for the high-priority traf-
fic, there are exponentially many possible weight set-
tings for the low-priority traffic. To address the prob-
lem of computing good sets of link weights for DTR, we
develop the following heuristic algorithm.
Let $W = \{W_H, W_L\}$ be the set of link weights for DTR, where $W_H = \{W_l^H\}_{l \in E}$ and $W_L = \{W_l^L\}_{l \in E}$ denote the sets of link weights for the high- and low-priority traffic, respectively. Our goal is to find $W$ that minimizes our lexicographical objective function $L$, where $L = A$ (Eq. (2)) or $L = S$ (Eq. (5)). The heuristic is an iterative algorithm. As shown in Algorithm 1,
tic is an iterative algorithm. As shown in Algorithm 1,
it consists of three routines: The first minimizes the
high-priority traffic cost; the second minimizes the low-
priority traffic cost, and the third refines the solution
obtained from the first two. Details on each routine are
provided next.
In the first routine (from lines 3 to 12), the high-priority traffic performance is optimized by searching for a good weight setting $W_H$ (see subroutine FindH), given a set of initial link weights for the low-priority traffic, i.e., $W_L$ remains unchanged as $W_H$ is optimized. In the second routine (from lines 15 to 24), $W_H$ is set to the best setting $W_H^*$ obtained in the first routine, and the low-priority link weights are optimized. After completion of the first two routines, a reasonably good link weight setting $W^* = \{W_H^*, W_L^*\}$ has been identified. However, additional improvements are possible, and the third routine (from lines 27 to 38) is used as a refinement step that searches for better overall solutions in a small neighborhood around $W^* = \{W_H^*, W_L^*\}$.
In Algorithm 1, diversification is also used to let the search "escape" from a local optimum if there is no cost improvement in $M$ iterations. A small fraction ($g_1$ and $g_2$) of weights in $W_H$ and $W_L$ are randomly perturbed in the first and second routines, respectively (in lines 9 and 21). In the third routine, the diversification procedure (from lines 33 to 36) randomly perturbs a smaller fraction ($g_3$) of link weights in both $W_H$ and $W_L$, so that the search re-starts from a solution slightly different from $W^*$, while preventing it from jumping to a very sub-optimal one.
The FindH subroutine is shown in Algorithm 2. The
subroutine first defines a “neighborhood” for the cur-
rent solution W , and then evaluates solutions in this
neighborhood to find the best one to replace W . A
neighborhood is defined as follows. With the current
solution $W$, each link $l$ is associated with a lexicographical cost $L_l$, where $L_l := \langle \Phi_{H,l}, \Phi_{L,l} \rangle$ (resp. $L_l := \langle D_l, \Phi_{L,l} \rangle$) if $L = A$ (resp. $L = S$). Links are then
sorted in decreasing order of cost. Intuitively, we want
to increase the weight of links with high cost to route
traffic away from them, and decrease the weight of links
with low cost to attract more traffic onto them [5]. In
our heuristic, we select m high-cost links and m low-
cost links to form two sets, A and B. A neighbor of the
current solution W is identified by randomly selecting a
link from A (resp. B) and increasing (resp. decreasing)
its weight, both without replacement. As a result, m
such neighbors can be identified to form the neighbor-
hood of W . Note that it is not always good to change
the weights of the links with the highest or lowest cost,
because this may lead to situations where only a few
links are perturbed, so that only a small part of the
solution space is explored. To avoid this problem, we
introduce one more randomization for link selection in
FindH (as shown in lines 2 to 4 in Algorithm 2). We
define two integers $k_1$ and $k_2$ randomly drawn from a heavy-tailed probability distribution $P(k) \propto k^{-\tau}$, where $1 \le k \le n - m + 1$ and $n$ is the total number of links in the network [20]. Based on this distribution, if $\tau \to 0$, links in $A$ and $B$ are selected independent of their costs, while if $\tau \to \infty$, only those links with the highest or lowest costs are included in $A$ and $B$. In our study, we use $\tau = 1.5$ so that all links have a chance of being chosen, while keeping some preference for selecting links with very high or low costs.
The routine FindL is similar to FindH except that
we use $\Phi_{L,l}$ for sorting links, because $W_L$ has no effect on the high-priority traffic. For brevity, we omit the
pseudo-code for FindL.
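The rank-biased link selection used by FindH (and, symmetrically, FindL) can be sketched as follows; this is an illustrative fragment in which the actual weight update and cost evaluation are left out, and $P(k) \propto k^{-\tau}$ is sampled directly over the truncated range.

    # Sketch of FindH's neighborhood construction: pick m high-cost and m low-cost links,
    # with starting ranks k1, k2 drawn from the heavy-tailed distribution P(k) ~ k^(-tau).
    import random

    def draw_rank(n, m, tau=1.5):
        """Draw k in [1, n - m + 1] with probability proportional to k^(-tau)."""
        ks = range(1, n - m + 2)
        weights = [k ** (-tau) for k in ks]
        return random.choices(list(ks), weights=weights, k=1)[0]

    def build_neighborhood(sorted_links, m, tau=1.5):
        """sorted_links: a list of links sorted by decreasing lexicographical cost (Algorithm 2)."""
        n = len(sorted_links)
        k1 = draw_rank(n, m, tau)
        k2 = draw_rank(n, m, tau)
        set_a = sorted_links[k1 - 1 : k1 - 1 + m]          # m consecutive high-cost links
        set_b = sorted_links[n - k2 - m + 1 : n - k2 + 1]  # m consecutive low-cost links
        # Each neighbor raises the weight of one link from A and lowers one from B,
        # both chosen without replacement (the weight update itself is omitted here).
        random.shuffle(set_a)
        random.shuffle(set_b)
        return list(zip(set_a, set_b))  # m (raise-weight, lower-weight) pairs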
5. EVALUATION AND FINDINGS
In this section, we evaluate the performance bene-
fits of DTR over STR solutions. Our evaluation spans
a broad range of network settings, including different
topologies, traffic patterns and intensity levels. Our
goal is to investigate when DTR is worth deploying
(given its added complexity), and in particular whether
the use of different performance objectives for the two
traffic classes plays a major role.
5.1 Evaluation settings
5.1.1 Network topology
We use three different types of topologies:
Random topology: generated by randomly adding links between nodes in a network; all nodes have similar link degrees.
Power-law topology: generated using the preferential attachment model [21] to emulate the power-law degree distribution observed in the Internet topology [21, 22].

ISP topology: emulating a North American backbone network consisting of 16 nodes and 70 links.

Algorithm 1: DTR Weight Search
Input: Network topology, traffic matrix, initial link weight setting W_0
Result: Best link weight setting W*
 1:  W ← W_0, W* ← W_0
 2:  L* ← L(W_0); Iteration ← 0
 3:  while Iteration < N do
 4:      W ← FindH()
 5:      if L(W) < L* then
 6:          L* ← L(W), W* ← W
 7:      end
 8:      if no improvement in L* after M iterations then
 9:          Randomly perturb g_1 (%) of link weights in W_H
10:      end
11:      Iteration++
12:  end
13:  Iteration ← 0
14:  W_H ← W*_H
15:  while Iteration < N do
16:      W ← FindL()
17:      if Φ_L(W) < Φ*_L then
18:          Φ*_L ← Φ_L(W), W* ← W
19:      end
20:      if no improvement in Φ*_L after M iterations then
21:          Randomly perturb g_2 (%) of link weights in W_L
22:      end
23:      Iteration++
24:  end
25:  W ← W*
26:  Iteration ← 0
27:  while Iteration < K do
28:      W ← FindH()
29:      W ← FindL()
30:      if L(W) < L* then
31:          L* ← L(W), W* ← W
32:      end
33:      if no improvement in L* after M iterations then
34:          W ← W*
35:          Randomly perturb g_3 (%) of link weights in both W_H and W_L
36:      end
37:      Iteration++
38:  end
In our study, all link capacities are set equal to 500
Mbps. For the evaluation of the SLA-based cost func-
tion, we also need to assign propagation delays to links.
For synthesized (i.e., random and power-law) topolo-
gies, the propagation delay of each link is randomly
selected in the range of 1.2ms and 15ms. This ensures
that the simulated links range from very short links to
long-haul (coast-to-coast) links. For the ISP topology,
we assign a propagation delay between 8ms and 15ms
to each link, based on the geographical locations of the
corresponding nodes. Unless specified otherwise, the
SLA delay bound is set to θ = 25ms.
Algorithm 2: FindH()
Input: Current link weight setting W
 1:  Sort the links in decreasing order of their lexicographical link costs, i.e., L_Π(1) ≥ L_Π(2) ≥ ... ≥ L_Π(n), where Π(·) denotes a permutation.
 2:  Draw two random integers, k_1 and k_2, from the distribution P(k), where 1 ≤ k ≤ n − m + 1.
 3:  Select m links with ranks Π(k_1), Π(k_1 + 1), ..., Π(k_1 + m − 1) to form a set A.
 4:  Select m links with ranks Π(n + 1 − k_2), Π(n − k_2), ..., Π(n − k_2 − m + 2) to form a set B.
 5:  A neighbor is defined by randomly selecting a link l from A and a link l′ from B, both without replacement; we then increase W^H_l and decrease W^H_{l′}. Construct m neighbors in this manner.
 6:  Find the best neighbor, denoted by W̄, in the neighborhood.
 7:  if L(W̄) < L(W) then
 8:      return W̄
 9:  else
10:      return W
11:  end
5.1.2 Traffic matrix
The traffic matrices $T_H = [r_H(s, t)]_{|V| \times |V|}$ and $T_L = [r_L(s, t)]_{|V| \times |V|}$, which correspond to the high- and low-priority traffic classes, are specified as follows.
The low-priority traffic matrix $T_L$ is generated using a gravity model [23, 24, 5]. Specifically, the traffic volume from node $s$ to node $t$ is defined as

$$r_L(s, t) = d_s \frac{\widetilde{V}_t}{\sum_{i \in V \setminus \{s\}} \widetilde{V}_i} \qquad (6)$$

where $d_s$ is the total traffic originating at node $s$, given by

$$
d_s =
\begin{cases}
\text{Uniform}(10, 50), & \text{with prob. } 0.6 \quad \text{(7a)}\\
\text{Uniform}(80, 130), & \text{with prob. } 0.35 \quad \text{(7b)}\\
\text{Uniform}(150, 200), & \text{with prob. } 0.05 \quad \text{(7c)}
\end{cases}
$$

Here, Uniform($a$, $b$) denotes a random variable uniformly distributed in $[a, b]$, and $\widetilde{V}_t$ is a random variable uniformly distributed in $[1, 1.5]$, which can be considered the "mass" of node $t$. The larger a node's mass, the more traffic it attracts. Using $d_s$, we generate a heterogeneous traffic demand model with three different levels: (1) low traffic volume originating from 60% of the nodes, (2) medium traffic volume originating from 35% of the nodes, and (3) high traffic volume originating from 5% of the nodes to emulate "hot spots" in the network.
We consider two different models for high-priority
traffic. The first is a random model, in which we ran-
domly select a fraction (k) of SD pairs to generate high-
priority traffic. In other words, k represents the density
of high-priority SD pairs. The second is a sink model
that emulates "popular" servers, e.g., data centers that are destinations for a large number of client nodes. In our simulation, a small number of sinks are selected among the nodes with the highest degree. Then, a larger number of client nodes are selected, and bi-directional traffic is assumed between sinks and client nodes.
The volume of high-priority traffic is specified in proportion to the total network traffic volume. Specifically, $0 < f < 1$ denotes the fraction of the total traffic volume that corresponds to high-priority traffic. In other words, if the high- and low-priority traffic volumes are $\eta_H$ and $\eta_L$ respectively, $f = \eta_H/(\eta_H + \eta_L)$. The high-priority traffic volume between nodes $s$ and $t$ is then generated as follows. We first assign a uniformly distributed random variable, $m_{(s,t)} \in [1, 4]$, to SD pair $(s, t)$. Given that the total volume of low-priority traffic is $\eta_L = \sum_{s,t \in V} r_L(s, t)$, the volume of high-priority traffic between SD pair $(s, t)$ is set as

$$r_H(s, t) = \eta_L \cdot \frac{f}{1-f} \cdot \frac{m_{(s,t)}}{\sum_{i,j \in V} m_{(i,j)}}.$$
This not only ensures that the total high-priority traffic
is a fraction f of the total network traffic, but also gen-
erates a heterogeneous traffic volume among different
high-priority SD pairs.
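For reference, the traffic model described above can be reproduced with a short script; the sketch below follows Eqs. (6) and (7) and the high-priority scaling rule, with function names, node identifiers, and the random seed chosen arbitrarily for illustration.

    # Sketch of the traffic model of Section 5.1.2: gravity-model T_L plus scaled T_H.
    import random

    def generate_matrices(nodes, f=0.3, k=0.1, seed=1):
        rng = random.Random(seed)
        mass = {v: rng.uniform(1.0, 1.5) for v in nodes}   # node "mass" V~_t
        def total_out():                                    # d_s, Eq. (7)
            p = rng.random()
            if p < 0.6:
                return rng.uniform(10, 50)
            elif p < 0.95:
                return rng.uniform(80, 130)
            return rng.uniform(150, 200)

        t_low = {}
        for s in nodes:
            d_s = total_out()
            denom = sum(mass[i] for i in nodes if i != s)
            for t in nodes:
                if t != s:
                    t_low[(s, t)] = d_s * mass[t] / denom   # Eq. (6)

        # High-priority demand: a fraction k of SD pairs, scaled so that it is
        # a fraction f of the total network traffic.
        hp_pairs = rng.sample(sorted(t_low), max(1, int(k * len(t_low))))
        m = {p: rng.uniform(1, 4) for p in hp_pairs}
        eta_low = sum(t_low.values())
        t_high = {p: eta_low * (f / (1 - f)) * m[p] / sum(m.values()) for p in hp_pairs}
        return t_low, t_high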
5.1.3 Heuristic algorithm settings
In our heuristic algorithm, we define link weights to
be between 1 and 30. The relatively small maximum
link weight is selected as a trade-off between the effec-
tiveness of the resulting routing solutions and compu-
tational complexity.
Algorithm 1 uses two variables N and K to control
the number of iterations of each routine. In this study,
we select N = 300000 and K = 800000. In each itera-
tion, the heuristic evaluates m = 5 neighbors. We also
choose $g_1 = g_2 = 5\%$ and $g_3 = 3\%$ for the diversification phases of the three routines of Algorithm 1; $g_3$ is smaller because this routine is computationally more
complex. The diversification interval is set to M = 300.
Extensive investigations [25] have shown that these are
sufficient to generate good link weight settings for the
topologies considered in this paper.
To compare the performance of DTR with that of
STR, we adopt the “single weight change” heuristic pro-
posed in [2] to generate STR link weight settings to
minimize the objective functions of Eqs. (2) and (5).
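As a rough sketch of such an STR baseline (not the exact heuristic of [2]), a single-weight-change local search can be written as follows, with the routing and cost computation delegated to a hypothetical evaluate() function that returns the lexicographical cost tuple.

    # Illustrative single-weight-change local search for STR.
    # evaluate(weights) is assumed to route both classes on `weights` and return
    # the lexicographical cost tuple <Phi_H, Phi_L> (or <Lambda, Phi_L>).
    def single_weight_change_search(weights, evaluate, iterations=1000, w_max=30):
        best_w, best_cost = dict(weights), evaluate(weights)
        for _ in range(iterations):
            improved = False
            for link in best_w:
                for candidate in range(1, w_max + 1):
                    if candidate == best_w[link]:
                        continue
                    trial = dict(best_w)
                    trial[link] = candidate          # change a single link weight
                    cost = evaluate(trial)
                    if cost < best_cost:             # lexicographic comparison on tuples
                        best_w, best_cost, improved = trial, cost, True
            if not improved:
                break                                 # local optimum reached
        return best_w, best_cost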
5.2 Performance benefit of DTR
In this section, we evaluate the performance benefits
of DTR over STR. The focus is not only on quantifying
the benefits of DTR, but also on identifying the scenar-
ios where the difference is more significant. Because of
space limitations, we only include a representative set of
results. Results for many other configurations are avail-
able in [25], and are consistent with the subset reported
here.
A general and expected observation is that across
topologies, the performance of high-priority traffic un-
der DTR and STR is comparable, while the perfor-
mance of low-priority traffic is significantly better un-
der DTR. This holds for both cost functions of Sec-
tion 3. This can be explained as follows: For either load-
based or SLA-based cost functions, the combination of
lexicographical optimization and priority queueing en-
sures that under both STR and DTR the performance
of high-priority traffic is optimized first. Furthermore,
it also ensures that the high-priority traffic is imper-
vious to the behavior of low-priority traffic, whether
routed identically or differently. In contrast, because
STR routes the low-priority traffic over the same set
of links as the high-priority traffic, it tends to experi-
ence higher link loads, hence higher cost. DTR gives it
the flexibility to be routed away from overloaded links,
hence lowering its cost. To illustrate this effect, we plot
in Fig. 3 the distribution of link utilization (with both
high- and low-priority traffic) in a random topology for
the load- and SLA-based cost functions respectively
3
.
As shown in the figure, DTR yields significantly fewer
overloaded links than STR does.
To further demonstrate the benefits of DTR, Fig. 2 shows, for three different topologies, the cost ratios $R_H$ and $R_L$ (computed as the cost under STR over the cost under DTR) for high- and low-priority traffic, using load- and SLA-based cost functions, respectively. For all three topologies, the total traffic demand (represented by the average link utilization; see Footnote 4) is varied by scaling the traffic matrix. As can be seen from the figures, $R_H$ is approximately equal to 1, while $R_L$ can be relatively large.
In our simulations, we also find that the benefits of
DTR can be affected by several factors, including net-
work load and traffic patterns. Some of our major find-
ings are summarized as follows (see Footnote 5):
DTR outperforms STR the most when the network
is moderately loaded. The difference decreases
when network load is either light or heavy;
The performance benefits of DTR are more signif-
icant when the network carries more high-priority
traffic;
When the density of high-priority SD pairs increases, we observe opposite behaviors for the two cost functions under consideration: the benefits of DTR decrease for the load-based cost function, but increase under the SLA-based cost function;
High-priority traffic patterns also affect the perfor-
mance of DTR. When high-priority traffic is con-
centrated “locally” on a small set of nodes, the
performance benefits of DTR are less pronounced.
Footnote 3: Fig. 3 is a "stacked" bar chart so that the DTR bars are drawn on top of the STR bars and do not overlap.

Footnote 4: Because the average link utilization is roughly equal under DTR and STR, we use it as a reference of network load.

Footnote 5: Since we always have $R_H \approx 1$, we focus only on $R_L$.
Figure 2: Cost ratios between STR and DTR when the load-based cost function (a-c) or the SLA-based cost function (d-f) is used: $R_L$ (denoted as "L-cost ratio") and $R_H$ (denoted as "H-cost ratio"). 30% of the traffic is high-priority (f = 30%); 10% high-priority SD-pair density (k = 10%). A 30-node, 150-link random topology is studied in (a) and (d), a 30-node, 162-link power-law topology is used in (b) and (e), and the ISP topology is used in (c) and (f).
We expand on these observations in the following sec-
tions.
5.2.1 Impact of network load
We first study how $R_L$ changes with network load. From Fig. 2, we observe that for both load- and SLA-based cost functions, $R_L$ follows an "increasing-and-decreasing" pattern when network load (represented by the average link utilization) increases. This observation can be explained as follows.
When the overall network load is low, the volume of high-priority traffic is proportionally low (note that f = 30% and k = 10% throughout the simulations in Fig. 2). Therefore, although STR routes high- and low-priority traffic on the same set of paths, the residual capacity seen by the low-priority traffic on these paths is relatively large. Hence, the cost for low-priority traffic ($\Phi_L$), which is determined by the residual capacity (see Eq. (1)), is low. In such cases, the difference between STR and DTR is marginal. When network load increases, the volume of high-priority traffic proportionally increases. With STR, the residual capacity seen by the low-priority traffic decreases, hence $\Phi_L$ increases. But with DTR, the low-priority traffic can be routed away from the links with low residual capacity, hence allowing $\Phi_L$ not to increase as much. As the network load keeps increasing, most links eventually become heavily loaded, so no matter how low-priority traffic is routed, it experiences a high cost, which limits the potential for improvement of DTR and eventually results in a decrease of $R_L$.
Next, we investigate the effect on $R_L$ of varying the fraction of network load contributed by the high-priority traffic. Specifically, we conduct simulations similar to those discussed above, but change f from 20% to 40%, while keeping k = 10%. Fig. 4 shows the results for a 30-node, 150-link random topology, when the load-based cost function is used. As the figure shows, the cost ratio $R_L$ becomes higher when f increases from 20% to 40%. This can be understood as follows. As f increases, the links on the shortest paths (for both STR and DTR) become more loaded with high-priority traffic. Since under STR the low-priority traffic is routed over the same links, its performance degrades. In contrast, DTR allows low-priority traffic to be routed away from those highly loaded links, hence avoiding most of the performance degradations associated with the increase of f. As a result, the benefits of DTR are more pronounced ($R_L$ becomes larger) for f = 40% than for f = 20%. For the same reasons, the above observation also holds for the SLA-based cost function.
5.2.2 Impact of high-priority SD-pair density
Figure 3: Link utilization comparison between STR and DTR in a 30-node, 150-link random topology. 30% of the traffic is high-priority (f = 30%). (a) 10% high-priority SD pairs (k = 10%), load-based cost function. (b) 10% high-priority SD pairs (k = 10%), SLA-based cost function. (c) 30% high-priority SD pairs (k = 30%), SLA-based cost function.

Figure 4: The impact of high-priority traffic volume on cost ratio $R_L$ in a 30-node, 150-link random topology for the load-based cost function, k = 10%.

We investigate next how $R_L$ varies when the density
of high-priority SD pairs (i.e., k) changes. Specifically, we conduct a series of simulations in which k is increased from 10% to 30%, with the fraction of high-priority traffic fixed at f = 30%. We observe opposite behaviors under the load- and SLA-based cost functions.
We consider first the load-based cost function. As shown in Fig. 5(a), when k changes from 10% to 30%, $R_L$ decreases. This is because for a fixed fraction f of the high-priority traffic volume, as the number of SD pairs sending high-priority traffic increases, the high-priority traffic is spread over more links. For instance, Fig. 6 shows the high-priority link utilization (sorted in descending order) in a random topology, when STR is used. As shown in the figure, when k is increased from 10% to 30%, the curve "flattens", indicating more evenly distributed high-priority load across the links, and hence lowering the utilization on those once-highly-loaded links. Consequently, the residual capacity on those links is increased, which helps make the performance of low-priority traffic under STR closer to that of DTR, hence reducing $R_L$.
Figure 5: The impact of high-priority SD-pair density (k) on $R_L$ in a 30-node, 150-link random topology. (a) Load-based cost function. (b) SLA-based cost function.

In contrast, when the SLA-based cost function is used, $R_L$ increases when k changes from 10% to 30%, e.g., see
Fig. 5(b). This can be explained as follows: Given that
the high-priority traffic is only a small portion of the
total traffic (f = 30%), its queueing delay is nearly in-
significant compared to the propagation delay; hence it
plays little or no role in the optimization of the objective
function Λ in Eq. (4). As the density of high-priority SD
pairs initially increases, more high-priority flows con-
centrate on links with small propagation delays. In
STR, because nodes originating high-priority traffic also
generate low-priority traffic to the same destination(s),
when k increases to 30%, at least 30% of low-priority
SD pairs are "forced" onto a small set of links with low propagation delay. Therefore, the low-priority traffic performance is poorer than when k = 10%. To better illustrate this effect, we plot the link utilization distributions for k = 10% and k = 30% in Figs. 3(b) and 3(c), respectively, in which a random topology is used. We see that when k = 30%, the tail of the distribution is spread to the right under STR, indicating that more low-priority traffic has been "dragged" onto congested links. In Fig. 7, we also plot the load of a link as a function of its propagation delay, which shows how links with lower propagation delay tend to have a higher load. This further confirms the role of link propagation delays.

Figure 6: Link load contributed by the high-priority traffic under STR using the load-based cost function. The results for two different high-priority SD-pair densities, k = 10% and k = 30%, are plotted.

Figure 7: Link load as a function of propagation delay when the SLA-based cost function is used.
5.2.3 Impact of high-priority traffic patterns
The above discussions focus on the random model
for the high-priority traffic. We consider next the sink
model of Section 5.1. We find that in most cases DTR
still outperforms STR, and essentially the same obser-
vations hold as with random traffic patterns (details
can be found in [25]). Nevertheless, there exists one
noticeable difference, namely, that the distribution of
high-priority SD pairs together with topology, may sig-
nificantly affect the effectiveness of DTR.
To illustrate this effect, we consider a power-law topol-
ogy and two different distributions of high-priority node.
Specifically, we first select 3 nodes with the highest de-
grees as sink nodes and then select high-priority client
Figure 8: Performance benefits of DTR for the sink communication pattern (Uniform vs. Local) in a 30-node, 162-link power-law topology. f = 20% and k = 10%. (a) Load-based cost function. (b) SLA-based cost function.

To illustrate this effect, we consider a power-law topology and two different distributions of high-priority nodes. Specifically, we first select 3 nodes with the highest degrees as sink nodes and then select high-priority client
nodes either uniformly distributed in the network or
among nodes located close to the sinks. Fig. 8 shows
the performance of DTR and STR under the load-based
and the SLA-based cost functions for the two scenarios
(“Uniform” and “Local” denote the first and second
scenarios, respectively). We see that DTR and STR
achieve similar performance ($R_L \approx 1$) in the "Local"
scenario, while DTR performs much better than STR
in the “Uniform” scenario. There are several reasons
for this behavior. When client nodes are close to the
sink nodes, high-priority paths remain “local” to the
sink nodes, and hence affect fewer low-priority SD pairs
overall. In addition, the proximity between client and
sink nodes also means that their SLA is easier to meet,
which allows more routing solutions, some of which can
yield better low-priority performance. Neither of these
factors is present when clients are randomly distributed
in the network, so that the benefits of DTR remain more
pronounced in this case.
5.3 Comparing DTR and STR with tradeoff
between traffic classes
Following the discussion of Section 3.3, we study whether
and how much low-priority performance can be improved
under STR if we are willing to sacrifice some high-
priority performance.
Table 1: Low-priority traffic performance in STR with a relaxation. f = 30%, k = 10%. (A_D is the average link utilization; R_{L,5%} and R_{L,30%} are the cost ratios obtained when allowing ε = 5% and ε = 30% degradation of the high-priority cost, respectively.)

30-node, 150-link random topology
  A_D        0.43   0.5    0.58   0.65   0.73   0.8    0.88
  R_L        1.03   1.29   1.38   4.04   31.16  58.17  7.51
  R_{L,5%}   1.02   1.07   1.14   2.34   19.89  21.84  5.74
  R_{L,30%}  1.02   1.04   1.14   2.07   6.91   19.56  5.52

30-node, 162-link power-law topology
  A_D        0.41   0.48   0.55   0.63   0.71   0.78   0.85
  R_L        1.03   1.15   5.20   34.32  21.06  8.3    4.07
  R_{L,5%}   1.03   1.09   1.3    5.86   15.62  4.69   3.28
  R_{L,30%}  1.03   1.09   1.25   3.24   15.45  4.58   3.06

ISP topology
  A_D        0.33   0.42   0.5    0.59   0.67   0.78   0.86
  R_L        1      1.07   1.11   3.91   9.28   2.3    2.1
  R_{L,5%}   1      1.04   1.09   2.06   7.72   1.58   1.36
  R_{L,30%}  1      1.04   1.06   1.42   5.94   1.45   1.35
5.3.1 The case of load-based cost function
We first consider the load-based cost function. Relaxing the constraint of strict lexicographical optimization can be easily incorporated into the heuristic used to search for STR solutions. Specifically, denote the link weight setting at iteration $n$ as $W(n)$, and the costs of high- and low-priority traffic associated with it as $\Phi_H(n)$ and $\Phi_L(n)$, respectively. $\Phi_H^*(n)$ and $\Phi_L^*(n)$ are the best costs for high- and low-priority traffic up to iteration $n$. Assume next that we allow a degradation of $\epsilon$ of the high-priority traffic performance. Then, for every iteration $n$, if $W(n)$ results in $\Phi_H(n) \le (1 + \epsilon)\Phi_H^*(n)$ and $\Phi_L(n) < \Phi_L^*(n)$, we record $W(n)$ as a "relaxed" best solution (see Footnote 6). We study the impact on the low-priority traffic performance of this relaxation for $\epsilon = 5\%$ and $30\%$. The results are shown in Table 1 for three different topologies, where $A_D$ is the average link utilization. We observe that the performance of low-priority traffic improves as $\epsilon$ increases in STR, but even for $\epsilon = 30\%$, a large gap remains between the performance of STR and DTR. Furthermore, this improvement causes significant performance degradation to the high-priority traffic. In contrast, DTR provides both greater improvement for low-priority traffic and no degradation in the performance of high-priority traffic.

Footnote 6: If there are multiple such $W(n)$'s, we pick the one achieving the lowest $\Phi_L(n)$.
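In code, the relaxed acceptance rule amounts to a one-line change in the STR search bookkeeping; the fragment below is a sketch with illustrative names.

    # Relaxed STR acceptance test: tolerate up to (1 + eps) of the best high-priority
    # cost seen so far, and keep the setting with the lowest low-priority cost among those.
    def accept_relaxed(phi_h, phi_l, best_phi_h, best_relaxed_phi_l, eps=0.05):
        """Return True if the current weight setting becomes the new 'relaxed' best."""
        return phi_h <= (1.0 + eps) * best_phi_h and phi_l < best_relaxed_phi_l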
5.3.2 The case of SLA-based cost function
In the case of an SLA-based cost function, relaxation
takes the form of a looser SLA delay bound. To investi-
gate its impact, we study the performance of STR and
DTR when varying the SLA requirement from 25ms to
35ms. In Fig. 9, we report a set of results collected from
a random topology, with a random traffic pattern.
Fig. 9(a) shows that independent of the SLA delay
bound, the number of high-priority SD pairs that vi-
olate the SLA requirement is the same for STR and
DTR. We also observe that when the SLA delay bound is above 30ms, STR and DTR result in similar performance for the low-priority traffic and similar maximum link utilizations, as shown in Figs. 9(b) and 9(c), respec-
tively. In other words, a loosening of the delay bound
by 20% allows STR to provide performance similar to that of DTR for the low-priority traffic. The main reason for
this improvement is that the looser SLA enables STR
to explore more routing solutions, including some that
offer good performance to the low-priority traffic while
still meeting the (looser) high-priority SLA. Neverthe-
less, DTR offers these benefits without penalizing high-
priority traffic. Furthermore, as with any relaxation
technique, identifying the “right” level of relaxation re-
quired to allow STR to provide reasonable performance
to low-priority traffic is in itself a challenging task. This
issue is altogether absent when using DTR.

Figure 9: The impact of changing the SLA requirement on STR and DTR in a 30-node, 150-link random topology. f = 30% and k = 30%. The average link utilization is roughly 0.5 in each SLA requirement setting. (a) Number of SLA violations. (b) Low-priority traffic cost. (c) Maximum link utilization.
6. CONCLUSION
This paper investigates how routing can help service
differentiation in IP networks. We study the benefits
of DTR, which allows separate routing of high- and
low-priority traffic classes. Compared to the traditional
STR scheme, DTR provides greater flexibility in con-
trolling resource allocation across traffic classes, hence
achieving more efficient service differentiation.
To realize the benefits of DTR, we developed a heuris-
tic that overcomes the computational complexity of iden-
tifying good link weight settings for each traffic class.
The heuristic was tested under a lexicographical op-
timization model that gives strict precedence to high-priority performance but, as shown in the paper, can
be easily adapted to other configurations. The heuris-
tic was used to investigate the effectiveness of DTR
with two common performance objectives, embodied by
load-based and SLA-based cost functions, respectively.
The results showed that irrespective of which cost func-
tion was used, DTR can substantially improve the performance of the low-priority service class without sacrificing that of the high-priority class. This was demonstrated
through extensive simulations for a broad range of net-
work topologies and traffic patterns.
7. REFERENCES
[1] P. Psenak, S. Mirtorabi, A. Roy, L. Nguyen, and
P. Pillay-Esnault, “Multi-topology (MT) routing
in OSPF,” IETF RFC 4915, June 2007.
[2] B. Fortz and M. Thorup, “Internet traffic
engineering by optimizing OSPF weights,” in
Proc. IEEE INFOCOM, 2000.
[3] M. Ericsson, M. Resende, and P. Pardalos, “A
genetic algorithm for the weight setting problem
in OSPF routing,” J. of Combinatorial
Optimization, vol. 6, pp. 299–333, 2002.
[4] L. Buriol, M. Resende, C. Ribeiro, and M. Thorup, "A memetic algorithm for OSPF routing," in Proc. 6th INFORMS Telecom, 2002.
[5] A. Nucci, S. Bhattacharyya, N. Taft, and C. Diot,
“IGP link weight assignment for operational
Tier-1 backbones,” IEEE/ACM Transactions on
Networking, vol. 15, no. 4, pp. 789–802, August
2007.
[6] S. Balon and G. Leduc, “Dividing the traffic
matrix to approach optimal traffic engineering,”
in Proc. IEEE ICON, 2006.
[7] G. Apostolopoulos, “Using multiple topologies for
IP-only protection against network failures: A
routing performance perspective,” ICS-FORTH,
Greece, Tech. Rep., 2006.
[8] S. Gjessing and O. Norway, “Implementation of
two resilience mechanisms using multi topology
routing and stub routers,” in Proc. AICT-ICIW,
2006.
[9] A. Kvalbein, A. F. Hansen, T. Cicic, S. Gjessing,
and O. Lysne, “Fast IP network recovery using
multiple routing configurations,” in Proc. IEEE
INFOCOM, 2006.
[10] W. C. Lee, M. G. Hluchyj, and P. A. Humblet,
“Routing subject to quality of service constraints
in integrated communication networks,” IEEE
Network, vol. 9, no. 4, pp. 46–55, Jul/Aug 1995.
[11] E. Crawley, B. R. R. Nair, and H. Sandick, “A
framework for QoS-based routing in integrated
service networks," IETF RFC 2386, 1998.
[12] J. Wang and K. Nahrstedt, “Hop-by-hop routing
algorithms for premium traffic,” ACM SIGCOMM
Computer Communication Review, vol. 32, no. 5,
pp. 73–88, November 2002.
[13] R. Guérin and A. Orda, "QoS-based routing in
networks with inaccurate information: Theory
and algorithms,” in Proc. IEEE INFOCOM, 1997.
[14] Q. Ma and P. Steenkiste, “Supporting dynamic
inter-class resource sharing: A multi-class QoS
routing algorithm,” in Proc. IEEE INFOCOM,
1999.
[15] S. Chen and K. Nahrstedt, “An overview of
quality of service routing for next-generation
high-speed networks: Problems and solutions,”
IEEE Network, vol. 12, no. 6, pp. 64–79, Nov/Dec
1998.
[16] D. Mitra and K. G. Ramakrishnan, “Techniques
for traffic engineering of multiservice,
multi-priority networks,” Bell Labs Technical
Journal, vol. 6, no. 1, pp. 139–151, 2001.
[17] S. L. Spitler and D. C. Lee, “Integrating
effective-bandwidth-based QoS routing and best
effort routing,” in Proc. IEEE INFOCOM, 2003.
[18] S. Balon, F. Skivée, and G. Leduc, "How well do
traffic engineering objective functions meet TE
requirements,” in Proc. IFIP Networking, 2006.
[19] B. Fortz and M. Thorup, “Optimizing
OSPF/IS-IS weights in a changing world,” IEEE
Journal on Selected Areas in Communications,
vol. 20, no. 4, pp. 756–767, May 2002.
[20] S. Boettcher and A. G. Percus, “Nature’s way of
optimizing,” Artificial Intelligence, vol. 119, pp.
275–286, May 2000.
[21] A.-L. Barabási and R. Albert, "Emergence of
scaling in random networks,” Science, vol. 286,
no. 5439, pp. 509–512, October 1999.
[22] M. Faloutsos, P. Faloutsos, and C. Faloutsos, “On
power-law relationships of the Internet topology,”
in Proc. ACM SIGCOMM, 1999.
[23] A. Medina, N. Taft, K. Salamatian,
S. Bhattacharyya, and C. Diot, “Traffic matrix
estimation: Existing techniques compared and
new directions," in Proc. ACM SIGCOMM, 2002.
[24] S. Bhattacharyya, N. Taft, J. Jetcheva, and
C. Diot, “POP-level and access-link-level traffic
dynamics in a Tier-1 POP,” in Proc. ACM IMW,
2001.
[25] K.-W. Kwong, R. Guérin, A. Shaikh, and S. Tao,
“Improving service differentiation in IP networks
through dual topology routing,” University of
Pennsylvania, Tech. Rep., June 2007. [Online].
Available: http://einstein.seas.upenn.edu/mnlab