The *all-pairs shortest path* problem can be considered the mother of all routing problems. It aims to compute the shortest path from each vertex *v* to every other vertex *u*. Using standard single-source algorithms, you can expect a naive implementation to run in *O(n^3)* if you use Dijkstra, for example -- i.e. running an *O(n^2)* process *n* times. Likewise, if you use the Bellman-Ford-Moore algorithm on a dense graph, it'll take about *O(n^4)*, but handle negative arc lengths too.

Storing all the paths explicitly can be very memory expensive indeed, as you need one spanning tree for each vertex. This is often impractical in terms of memory consumption, so these are usually treated as all-pairs shortest *distance* problems, which aim to find just the distance from each node to every other.

The result of this operation is an **n * n** matrix, which stores the estimated distance between each pair of nodes. This becomes a problem when the graph gets big, as the matrix alone grows quadratically with the number of nodes.

So, the naive approach would give us something like this:

```cpp
/**
 * A third-party implementation of the shortest path algorithm, that
 * uses a common distance matrix to store distances.
 *
 * This could be Dijkstra, BFM or any of their variants.
 */
void SingleSourceSP( Node* root );

// run the algorithm for each node
for (Node* n = nodes.begin(); n != nodes.end(); n++)
{
    SingleSourceSP( n );
}
```

Not very nice, admittedly, but it can run in O(n^3) for non-negative arc lengths. As you will see, it's quite tough to beat that.

There are a few simple optimisations that can be made to such an algorithm, which improve the performance in practice, though they do not help the theoretical bounds of the algorithm.

If the graph is undirected, then its symmetry properties can be exploited: only half the matrix is required. This is fairly simple to implement in practice, as you can wrap your matrix inside an object-oriented class, and treat any operation on the cell (i,j) as (j,i) whenever j > i.
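Such a wrapper might be sketched as follows. The class name `SymmetricMatrix` and its layout are illustrative assumptions, not a standard API; the idea is simply to fold the upper triangle onto the lower one so only n*(n+1)/2 cells are stored.

```cpp
#include <algorithm>
#include <vector>

// Hypothetical wrapper: stores only the lower triangle of a symmetric
// distance matrix, mapping (i,j) to (j,i) whenever j > i.
class SymmetricMatrix
{
public:
    explicit SymmetricMatrix(int n)
        : n_(n), data_(n * (n + 1) / 2, 0.0f) {}

    float& at(int i, int j)
    {
        if (j > i) std::swap(i, j);       // fold the upper triangle down
        return data_[i * (i + 1) / 2 + j]; // row-major lower triangle
    }

private:
    int n_;
    std::vector<float> data_;             // n*(n+1)/2 cells instead of n*n
};
```

Any algorithm that reads or writes `D[i][j]` can then go through `at(i,j)` instead, and both orderings of the indices hit the same cell.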

During the graph traversal, the naive implementation cannot assume much from the distances that have already been computed. It can use them as a heuristic, but little more.

The Floyd-Warshall algorithm takes the dynamic programming approach. This essentially means that independent sub-problems are solved and the results are stored for later use. The algorithm allows negative edges, but no negative cycles, as per usual with such shortest path problems.

The basic concept is simple: for a path *(u,v)* with length estimate D[u][v], if you can take a detour via *w* and shorten the path, then the detour should be taken. This translates into the following equation:

D[u][v] = min( D[u][v], D[u][w] + D[w][v] )

The easiest way to calculate this is bottom up. You start with the basic estimates that can be read directly from the graph's connectivity, then correct those estimates for every vertex pair once per intermediate vertex; after *n* passes the results have converged.

```cpp
/**
 * Initialise the original estimates from nodes and edges,
 * with all other distances set to infinity.
 */
for (int i=0; i<nodes.size(); i++)
{
    for (int j=0; j<nodes.size(); j++)
    {
        D[i][j] = (i == j) ? 0.0f : INFINITY;
    }
}

for (Edge* e = edges->begin(); e != edges->end(); e++)
{
    D[e->start][e->end] = e->length;
}

/**
 * And run the main loop of the algorithm.
 */
for (int k=0; k<nodes.size(); k++)
{
    for (int i=0; i<nodes.size(); i++)
    {
        for (int j=0; j<nodes.size(); j++)
        {
            D[i][j] = min( D[i][j], D[i][k] + D[k][j] );
        }
    }
}
```
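If you do need the paths themselves and not just the distances, the standard trick is to maintain a "next hop" matrix alongside D and update it whenever a relaxation succeeds. The following is a minimal self-contained sketch, assuming nodes are plain integer indices; the `APSP` struct and its member names are illustrative, not from the original code.

```cpp
#include <vector>

const float INF = 1e30f;

// Sketch: Floyd-Warshall augmented with a next-hop matrix so the actual
// paths can be reconstructed after the distances have converged.
struct APSP
{
    int n;
    std::vector<std::vector<float>> D;
    std::vector<std::vector<int>> next;   // next[i][j]: first hop on i -> j

    explicit APSP(int n_)
        : n(n_),
          D(n_, std::vector<float>(n_, INF)),
          next(n_, std::vector<int>(n_, -1))
    {
        for (int i = 0; i < n; i++) D[i][i] = 0.0f;
    }

    void addEdge(int u, int v, float w) { D[u][v] = w; next[u][v] = v; }

    void run()
    {
        for (int k = 0; k < n; k++)
            for (int i = 0; i < n; i++)
                for (int j = 0; j < n; j++)
                    if (D[i][k] + D[k][j] < D[i][j])
                    {
                        D[i][j] = D[i][k] + D[k][j];
                        next[i][j] = next[i][k];  // detour: head towards k first
                    }
    }

    std::vector<int> path(int u, int v)
    {
        std::vector<int> p;
        if (next[u][v] == -1) return p;   // unreachable
        for (p.push_back(u); u != v; p.push_back(u))
            u = next[u][v];
        return p;
    }
};
```

The memory cost doubles, but each path comes back in time proportional to its length, without storing a full spanning tree per vertex.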

Johnson defined an algorithm that takes advantage of the sparse nature of most directed graphs. Provided the graph contains no negative cycles, he gets better results than the naive algorithm.

The first pass consists of adding a new vertex with zero-length arcs to every other node, and running the Bellman-Ford-Moore algorithm thereon. This can detect negative cycles, as well as find *d(u)*, the final estimated distance from the new node to vertex *u*. Then, the edges are "reweighted" based on these values, and Dijkstra's algorithm is run from each of the nodes. The final weight is adjusted to take into account the reweighting process. The time complexity is officially O(n^2 log n + mn), though improvements to both Dijkstra and BFM can be used to improve that by a fair amount.
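The reweighting step itself is tiny; the cleverness is in why it works. If *h(u)* denotes the BFM distance *d(u)* from the added vertex, then every reweighted edge is non-negative because *h(v) <= h(u) + w(u,v)* holds at convergence. A minimal sketch, with hypothetical helper names of my own choosing:

```cpp
// Sketch of Johnson's reweighting, where hu = h(u) and hv = h(v) are the
// Bellman-Ford-Moore distances from the added source vertex.
//
// Reweighted length, guaranteed non-negative at BFM convergence:
//   w'(u,v) = w(u,v) + h(u) - h(v)
float reweight(float w, float hu, float hv)
{
    return w + hu - hv;
}

// After Dijkstra has run on the reweighted graph, undo the shift to
// recover the true distance:
//   d(u,v) = d'(u,v) - h(u) + h(v)
float restore(float dPrime, float hu, float hv)
{
    return dPrime - hu + hv;
}
```

Along any path the intermediate h-terms telescope away, so every path from *u* to *v* is shifted by the same constant *h(u) - h(v)*: the reweighted graph has exactly the same shortest paths, just with non-negative lengths that Dijkstra can handle.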

The original paper ("Efficient algorithms for shortest paths in sparse networks", 1977) is quite hard to find, so check the book "Introduction to Algorithms" (Cormen et al., MIT Press) for more details.

Without the assumptions made by Johnson, it's difficult to break the O(n^3) boundary. The problem has been shown to be at least as hard as Boolean matrix multiplication, and reductions from one problem to the other can be performed at a small cost.

That said, implementations that are combinatorial in nature, rather than matrix based, have advantages in some cases (as Johnson's assumptions show). This has become a challenge for many researchers, and some nice parallel algorithms have arisen.

See the paper called __Sub-cubic cost algorithms for the all-pairs shortest path problem__ by *Tadao Takaoka* for more details.

Often, approximations are cheaper to compute than exact values, and some recent work has been done on such approximations. These allow breaking of the cubic barrier, but at a cost, naturally: only approximations of the path lengths are returned.

*Stretch-t* paths are defined as having a length of at most *t* times the distance between their endpoints. For paths of stretch (4 + e) you can get a running time of *O(n^(5/2) polylog n)* -- which doesn't mean much in theory, does it?

Dor, Halperin and Zwick (in "All pairs almost shortest paths", 97) propose an efficient algorithm for stretch-3 paths. I won't go into it here for fear of confusing you, and myself in the process, but bear in mind that it exists.
