CS 6963 Summaries

From ResearchWiki

Revision as of 16:01, 26 August 2008 by Piyush (Talk | contribs)


Aug 26, 2008

Problem: routing in a network.

We're given a directed graph on N nodes. Each edge is a link between communicating nodes. A link can carry one packet at a time. We assume that nodes have queues for each outgoing edge. All nodes have IDs, so we can refer to node 1, node 10, and so on.

Consider permutation routing: here, each node i wants to send a packet to exactly one other node d(i), and each node receives exactly one packet. A routing algorithm has to decide, for each start-end pair (i, j=d(i)), how to route the packet that goes from i to j. Since each edge can carry only one packet per time step, the running time of the routing algorithm is lower bounded by the maximum congestion on any edge (the number of routes that cross it).

Let's consider an "easy" class of routing algorithms: namely, algorithms that are oblivious: the route used depends solely on the pair (i,j). More precisely, the path taken by a packet from i depends only on j, and on nothing else. Oblivious routing needs little state: each (src,dest) pair uniquely defines a route, and so routing tables are easy to maintain.

Theorem: Let the graph have N nodes and out-degree d. Then for any deterministic oblivious routing scheme, there exists a permutation that forces \Omega(\sqrt{N/d}) congestion.

Consider a special network: the Boolean hypercube on n-bit strings (two nodes are adjacent if they differ in exactly one bit). The degree is d = n, and the size of the network is N = 2^n. The above result says that the number of steps needed is at least \sqrt{2^n/n}.
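As a quick sanity check (a sketch not in the original notes; the integer labeling is an assumption), we can label nodes 0 … 2^n − 1 and obtain neighbors by flipping single bits:

```python
def neighbors(v, n):
    """Neighbors of node v in the n-dimensional hypercube: flip each of the n bits."""
    return [v ^ (1 << k) for k in range(n)]

n = 4
N = 2 ** n                                                # size of the network
assert all(len(neighbors(v, n)) == n for v in range(N))   # degree is n everywhere
```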

We will prove the following result:

Theorem: There exists a randomized routing scheme for the hypercube that requires only O(n) steps with high probability.

The deterministic lower bound works by a counting argument: show that each "destination graph" linking a fixed destination to its sources must have a large number of congested edges, and use averaging to show that some large set of routes must all hit the same bad edge. Since routes are oblivious, this defines a permutation that forces congestion.

But let's get some intuition on the hypercube using a simple algorithm called 'bit fixing': find the leftmost bit in which the current location differs from the target, and traverse the edge that flips that bit; repeat until the packet arrives.
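A minimal sketch of bit fixing (not from the notes; treating the most significant bit as "leftmost" is an assumption):

```python
def bit_fixing_route(src, dst, n):
    """Node sequence from src to dst: scan bits left to right (MSB first)
    and traverse the edge that fixes each differing bit."""
    route, cur = [src], src
    for k in range(n - 1, -1, -1):      # leftmost (most significant) bit first
        if (cur ^ dst) >> k & 1:        # current location differs from target here
            cur ^= 1 << k               # flip that bit: one hop in the hypercube
            route.append(cur)
    return route

# e.g. 000 -> 101 goes 000, 100, 101
```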

Lemma: there is a permutation on which bit fixing incurs \Omega(\sqrt{N}/n) congestion. Proof: consider the transpose permutation: the node i = ab (with a the first n/2 bits and b the last n/2 bits) is routed to the node j = ba. Idea: bit fixing first rewrites the first half from a to b, so every route must pass through a node of the form xx, and there are only \sqrt{N} of these. For a fixed b, all \sqrt{N} sources of the form ab route through the node bb, so some node takes in N/\sqrt{N} = \sqrt{N} packets; since it has degree n, some edge must carry \Omega(\sqrt{N}/n) of them.
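We can verify the bottleneck numerically (a sketch, not from the notes; the cube size is arbitrary): under the transpose permutation, every bit-fixing route funnels through one of the \sqrt{N} "doubled" nodes xx.

```python
from collections import Counter

def bit_fixing_route(src, dst, n):
    route, cur = [src], src
    for k in range(n - 1, -1, -1):
        if (cur ^ dst) >> k & 1:
            cur ^= 1 << k
            route.append(cur)
    return route

def transpose(v, n):
    """Swap the first and last n/2 bits of v (the adversarial permutation)."""
    half = n // 2
    return ((v & ((1 << half) - 1)) << half) | (v >> half)

n = 8                                   # N = 256, so sqrt(N) = 16 doubled nodes
N = 1 << n
hits = Counter()                        # how many routes pass through each node
for src in range(N):
    for node in bit_fixing_route(src, transpose(src, n), n):
        hits[node] += 1

doubled = [(x << (n // 2)) | x for x in range(1 << (n // 2))]  # nodes of form xx
# every doubled node is crossed by exactly sqrt(N) = 16 routes
```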

Here's a clue though: the adversary beat the algorithm by taking advantage of the deterministic bit-fixing protocol. Suppose we could confuse it by scattering the packets so as not to create a bottleneck?

Randomized algorithm

  1. Route every packet from i to a random intermediate location σ(i), chosen uniformly and independently, using bit fixing
  2. Route every packet from σ(i) to its real destination d(i), again using bit fixing

Intuition: the randomization spreads the packets out, so the routes don't get bottlenecked.
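A rough numerical illustration of this intuition (a sketch, not from the notes; sizes and seed are arbitrary): compare the worst edge load when routing the transpose permutation directly against the load of Phase 1 alone with random intermediate destinations.

```python
import random
from collections import Counter

def bit_fixing_route(src, dst, n):
    route, cur = [src], src
    for k in range(n - 1, -1, -1):
        if (cur ^ dst) >> k & 1:
            cur ^= 1 << k
            route.append(cur)
    return route

def max_edge_load(pairs, n):
    """Largest number of routes crossing any single directed edge."""
    load = Counter()
    for s, d in pairs:
        r = bit_fixing_route(s, d, n)
        load.update(zip(r, r[1:]))      # count each directed edge on the route
    return max(load.values(), default=0)

def transpose(v, n):
    half = n // 2
    return ((v & ((1 << half) - 1)) << half) | (v >> half)

n = 10
N = 1 << n

# routing the bad permutation directly: heavy congestion
direct = max_edge_load([(i, transpose(i, n)) for i in range(N)], n)

# Phase 1 of the randomized scheme: random intermediate locations
random.seed(0)
sigma = [random.randrange(N) for _ in range(N)]
phase1 = max_edge_load([(i, sigma[i]) for i in range(N)], n)

print(direct, phase1)   # the direct transpose load is much larger
```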

Proof. Suppose we show that Phase 1 has congestion c with high probability. Then, by reversing time, Phase 2 also has congestion c with high probability. Does this mean that the total congestion is at most 2c? Yes, if we make sure that no packets from Phase 1 interfere with packets from Phase 2. So we make all packets wait until c steps have elapsed before starting Phase 2, which guarantees there is no interference.

What remains is to prove that Phase 1 causes only congestion c. Fix a packet v_i originating at i, and look at how long it takes to arrive. Its route has at most n edges, so the time is at most n + delay. How large is the delay?

Part I: identify the structural cause of delay (independent of the randomization)
Part II: analyze that structure using the randomization

What is the cause of delay?

Fix the route of a particular packet v. Let S be the set of all other packets that traverse at least one edge of this route. Claim: the delay of v is at most |S|. Intuition: any packet that shares an edge with v's route can delay it at most once.

Observation: once two routes separate, they cannot merge again. Proof: in the bit-fixing model, when two routes diverge at a node, the two packets fix the same bit position to different values (their destinations must differ at that bit). From then on, that bit lies in the already-fixed prefix, and each packet's prefix agrees with its final destination, so once this happens the routes can never meet at a node again.
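The observation can be checked exhaustively on a tiny cube (a sketch, not from the notes): whenever two bit-fixing routes share a directed edge and then diverge, their remaining nodes are disjoint.

```python
from itertools import product

def bit_fixing_route(src, dst, n):
    route, cur = [src], src
    for k in range(n - 1, -1, -1):
        if (cur ^ dst) >> k & 1:
            cur ^= 1 << k
            route.append(cur)
    return route

n = 3
routes = [bit_fixing_route(s, d, n) for s, d in product(range(1 << n), repeat=2)]
merges = 0
for r1, r2 in product(routes, repeat=2):
    e1, e2 = list(zip(r1, r1[1:])), list(zip(r2, r2[1:]))
    shared = set(e1) & set(e2)
    if not shared:
        continue
    # start at the first shared edge on r1 and walk while the routes agree
    i = min(e1.index(e) for e in shared)
    j = e2.index(e1[i])
    while i < len(e1) and j < len(e2) and e1[i] == e2[j]:
        i, j = i + 1, j + 1
    # after the routes diverge, they should never meet at a node again
    if set(r1[i + 1:]) & set(r2[j + 1:]):
        merges += 1

# merges == 0: once separated, routes never remerge
```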

[Back to the claim] We will charge every unit of delay for v to some packet in S. Define the lag of v: if at time t, v is ready to cross edge e_j of its route, its lag is t - j. Suppose v's lag increases from l to l+1 while v waits to cross e_j: this means some other packet crosses e_j at that time step, and that packet therefore also has lag l.

Fix the latest time at which any packet in S has lag l. The packet that crosses an edge of v's route at that time must then leave the route (otherwise it would continue on with lag l, contradicting maximality). We charge v's increase in lag from l to l+1 to this packet, and since it has left the route and can never return, we will never charge it again.


Analyze this structure.

We now need to understand how many packets overlap v's route. Let H_{ij} = 1 if the routes for i and j share an edge, and 0 otherwise. Then the delay of v_i is at most \displaystyle{\sum_j} H_{ij}. The destinations are chosen independently at random, so for a fixed route of v_i, the H_{ij} are independent Poisson trials. So we need to estimate E[\sum_j H_{ij}].

Fix an edge e, and let T(e) = the number of routes through it. Then, for e_1 \ldots e_k the route of i, \sum_j H_{ij} \le \sum_{l=1}^{k} T(e_l).

By symmetry, every edge looks the same in Phase 1, so E[T(e)] is the same for all edges. Since the expected route length is n/2 and there are N routes, the expected total number of route-edges is Nn/2; the hypercube has Nn directed edges, so E[T(e)] = 1/2. Therefore, E[\sum_j H_{ij}] \le \sum_{l=1}^{k} E[T(e_l)] = k/2 \le n/2.
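A quick empirical check of the E[T(e)] = 1/2 calculation (a sketch; size and seed are arbitrary): with random destinations, the total number of route-edges divided by the Nn directed edges should be close to 1/2.

```python
import random

def bit_fixing_route(src, dst, n):
    route, cur = [src], src
    for k in range(n - 1, -1, -1):
        if (cur ^ dst) >> k & 1:
            cur ^= 1 << k
            route.append(cur)
    return route

n = 10
N = 1 << n
random.seed(1)
# total route-edges over one random choice of destinations
total = sum(len(bit_fixing_route(s, random.randrange(N), n)) - 1 for s in range(N))
mean_load = total / (N * n)     # average T(e) over the N*n directed edges
# mean_load should be close to 1/2
```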

Applying a standard Chernoff bound, the probability that the delay of a fixed packet exceeds 6n is at most 2^{-6n}. Moreover, we have to take a union bound over all 2^n packets, so we multiply this by 2^n, giving failure probability at most 2^{-5n}.
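To spell out the Chernoff step (a standard form of the bound, not stated explicitly in the notes):

```latex
% For a sum X of independent Poisson trials with \mu = E[X], one standard
% Chernoff form gives \Pr[X \ge R] \le 2^{-R} whenever R \ge 6\mu.
% Here \mu = E[\sum_j H_{ij}] \le n/2, so R = 6n \ge 6\mu, and
\Pr\Big[\textstyle\sum_j H_{ij} \ge 6n\Big] \le 2^{-6n}.
% Union bound over all 2^n packets:
\Pr[\text{some packet is delayed more than } 6n] \le 2^n \cdot 2^{-6n} = 2^{-5n}.
```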

NOTE: the T(e_l) are not independent, but the H_{ij} are (for a fixed route of i). So we must apply the Chernoff bound to the H_{ij} rather than to the T(e_l).
