One of the assumptions Dijkstra's algorithm relies on -- that a node's distance is final once it has been processed -- becomes invalid when arc lengths can be negative. Neighbouring edges must be reprocessed when a node's distance changes. In the code, this means that either the **Insert()** operation is needed, or a data structure that does not discard nodes in the first place must be used.
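To make this concrete, here is a small hypothetical example (not from this tutorial's code, and with a simplified `Edge` struct using node indices): a Dijkstra that settles each node exactly once, with no reinsertion, returns a wrong distance as soon as a negative arc is involved.

```cpp
#include <cassert>
#include <functional>
#include <limits>
#include <queue>
#include <utility>
#include <vector>

// Simplified edge: destination index and length (the tutorial's Edge
// uses Node pointers instead; this struct is just for the sketch).
struct Edge { int end; float length; };

// Dijkstra that never revisits a settled node -- no Insert()/reinsertion.
std::vector<float> DijkstraNoReinsert(const std::vector<std::vector<Edge>>& adj, int root)
{
    const float INF = std::numeric_limits<float>::infinity();
    std::vector<float> dist(adj.size(), INF);
    std::vector<bool> settled(adj.size(), false);
    typedef std::pair<float, int> Item;                 // (distance, node)
    std::priority_queue<Item, std::vector<Item>, std::greater<Item> > pq;

    dist[root] = 0.0f;
    pq.push(Item(0.0f, root));
    while (!pq.empty())
    {
        int n = pq.top().second; pq.pop();
        if (settled[n]) continue;                       // settle each node once
        settled[n] = true;
        for (size_t i = 0; i < adj[n].size(); ++i)
        {
            const Edge& e = adj[n][i];
            // settled nodes are assumed final, so they are never corrected
            if (!settled[e.end] && dist[n] + e.length < dist[e.end])
            {
                dist[e.end] = dist[n] + e.length;
                pq.push(Item(dist[e.end], e.end));
            }
        }
    }
    return dist;
}
```

With edges 0→1 (length 1), 0→2 (length 2) and 2→1 (length -2), node 1 is settled at distance 1 before the negative arc from node 2 is scanned, so the true distance of 0 is never found.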

Negative arc lengths have some benefit in real applications. In networking, for example, someone may pay you if your data uses their connection (as an incentive, perhaps). In traffic management, a negative path length may correspond to a downhill slope -- you don't need to pedal to build up speed! In games, bots may actively prefer a path; not only is it no trouble to move there, they may even enjoy it (the path "cost" decreases).

However, you need to be careful with negative cycles: cycles in a graph where the sum of the edge lengths is negative. These cause many problems for path-finding algorithms, since looping around such a cycle forever appears to bring infinite benefit -- there is no shortest path at all.

Bellman first observed that the condition for achieving a shortest-path tree can be expressed as a set of equations *(see Bellman's Equation for more details)*. This states that the distance of any node from the root node is equal to the minimum, over all neighbouring nodes, of that neighbour's distance plus the length of the connecting edge. Bellman also suggested solving this by successive approximations, which gives the following naïve algorithm:

```cpp
/**
 * Initialise all distances as on the previous page.
 */

// repeat approximations for n loops
for (int i = 0; i < nodes.size(); i++)
{
    // scan each edge
    for (Edge* e = edges.begin(); e != edges.end(); e++)
    {
        // get distance of end node via this path
        const float d = e->start->distance + e->length;
        // if it's closer, correct the estimate
        if (e->end->distance > d)
            e->end->distance = d;
    }
}
```

Note that the distance estimate of each vertex decreases monotonically until the result converges. However, this is not an optimal algorithm, since it takes **O(mn)** time -- as can be seen from the double loop. If the graph is dense, with **O(n^2)** edges, it takes **O(n^3)**, which is much more computationally expensive than running Dijkstra's algorithm -- negative cycles or no. On the plus side, convergence can be detected early: when no distance updates are made within the inner loop, the shortest-path tree is final.

The algorithm above does a lot of redundant scanning. The nodes are updated in no particular order, so any form of coherence is lost. Also, nodes with optimal distances -- those that have converged -- are not discarded. To remedy this, Bellman, Ford and Moore independently suggested maintaining a list of nodes to scan, traversing the graph in the order defined by adjacency. This type of algorithm is called *label correcting*, and it performs a fair bit better in practice.

```cpp
/**
 * Initialise all distances as usual.
 */

Node* current = root;
vector<Node *> temporary;

// repeat while we can still correct an estimate
while (current)
{
    for (Edge* e = current->edges.begin(); e != current->edges.end(); e++)
    {
        // get distance of end node via this path
        const float d = e->start->distance + e->length;
        // if it's closer, correct the estimate
        if (e->end->distance > d)
        {
            e->end->distance = d;
            // add to the list of nodes to process
            EnQueue( temporary, e->end );
        }
    }
    // fetch next node
    current = DeQueue( temporary );
}
```

The *EnQueue* procedure simply checks if the node is already in the array and, if not, inserts it. *DeQueue* just removes one node from the array. These were kept separate from the main loop since they are conceptually very important; many have improved the performance of the algorithm by changing the specification of these procedures...
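One possible reading of that specification, sketched with plain int node ids in a `std::vector` rather than the tutorial's Node pointers (the -1 sentinel standing in for the null pointer that stops the main loop is an assumption of this sketch):

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Insert only if the node is not already queued.
void EnQueue(std::vector<int>& q, int node)
{
    if (std::find(q.begin(), q.end(), node) == q.end())
        q.push_back(node);
}

// Remove one node from the front (FIFO); -1 means the queue is empty
// and the main loop should stop.
int DeQueue(std::vector<int>& q)
{
    if (q.empty()) return -1;
    int node = q.front();
    q.erase(q.begin());     // O(n) here; a std::deque would avoid this
    return node;
}
```

The linear membership scan in *EnQueue* is the simplest thing that satisfies the description; a per-node "queued" flag is the usual constant-time alternative.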

Pape suggested that nodes which do not yet have a distance estimate should be processed last, since their neighbours may change in the meantime. He proposed inserting new nodes at the back of the list, while putting previously labelled ones at the front. By using a data structure that behaves both like a stack and a queue, this can be done in *O(n2^n)* worst case, though in practice it is much more efficient. Gallo and Pallottino (86) made a small modification to the data structure, and the algorithm lost its worst-case complexity disadvantage compared to the standard BFM.
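Pape's insertion rule can be sketched with a `std::deque`, which gives the stack-plus-queue behaviour directly. The `seen` flags standing in for "already has a distance estimate" are an assumption of this sketch, not part of the tutorial's code:

```cpp
#include <cassert>
#include <deque>
#include <vector>

// Pape's rule: a node never labelled before goes to the back of the list;
// a node that already had an estimate goes to the front, so corrections
// propagate before brand-new territory is explored.
void PapeEnQueue(std::deque<int>& q, std::vector<bool>& seen, int node)
{
    if (!seen[node])
    {
        seen[node] = true;
        q.push_back(node);    // brand-new node: process last
    }
    else
    {
        q.push_front(node);   // previously labelled node: process first
    }
}
```

A full implementation would also avoid inserting a node that is already in the list, as the plain *EnQueue* does; that check is omitted here to keep the rule itself visible.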

Goldberg (93) proposed using a heuristic to guide the process. This involves maintaining labels for each node and sorting the neighbours by adjacency before inserting them into the data structure. It provides a simple way of matching the *O(mn)* bound without overly complex data structures.

Remember, you can visit the *Message Store* to discuss this tutorial. Comments are always welcome!