
> And many common algorithms are actually just the application of dynamic programming to specific problems, including omnipresent path-finding algorithms such as Dijkstra’s algorithm.

Dijkstra's algorithm is an application of dynamic programming? I disagree. In dynamic programming, you tabulate the subproblems to solve, with static interdependencies between them leading to straightforward orders in which to solve the subproblems. In Dijkstra's algorithm, you need to compute the shortest path from s to each vertex, but the order in which you have to visit the vertices is only discovered along the way using a priority queue, so the subproblem interdependencies are not known ahead of time until you have actually solved the problem.
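To make the point concrete, here is a minimal sketch of Dijkstra's algorithm on a toy graph (vertex names are made up for illustration). Note that the order in which vertices are finalized is only discovered at run time by the priority queue; it is not a fixed tabulation order known up front:

```python
import heapq

def dijkstra(graph, s):
    # graph: {vertex: [(neighbor, weight), ...]}
    dist = {s: 0}
    pq = [(0, s)]      # priority queue of (tentative distance, vertex)
    visited = set()
    order = []         # order in which vertices are finalized -- discovered, not predetermined
    while pq:
        d, u = heapq.heappop(pq)
        if u in visited:
            continue
        visited.add(u)
        order.append(u)
        for v, w in graph.get(u, []):
            if v not in visited and d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(pq, (dist[v], v))
    return dist, order

# Toy graph, purely illustrative.
g = {"s": [("a", 4), ("b", 1)], "b": [("a", 2)], "a": [("t", 1)], "t": []}
dist, order = dijkstra(g, "s")
# order comes out as ["s", "b", "a", "t"] -- determined only as the queue is processed
```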




Dijkstra is definitely dynamic programming, no doubt about it: you are still computing a new state from neighbouring state nodes while ignoring a substantial number of non-neighbouring states.

You just have to accept the more abstract definition of "state" where the state encodes subsets of the graph. The states in Dijkstra are represented by subgraphs and distances that incrementally include more nodes leaving us with a path of states to follow in some order.

It's not much different from the traveling salesman problem or the Viterbi algorithm in that sense. You come up with some topological order of states in abstract state-space and then follow that topological order to compute the new state based on the adjacent previously computed states, and you never look back to update already finalised states.

With this more abstract point of view, it's clear Dijkstra is dynamic programming.

There is a whole field of graph dynamic programming problems. And the closely related Markov chain problems.


> You come up with some topological order of states in abstract state-space

If the state space is "subsets of V", then it's exponentially larger than the set of states actually visited in Dijkstra's algorithm. Dijkstra's algorithm has an invariant that the vertices visited have a smaller distance than the vertices not visited. For vertex sets that adhere to this invariant, there's clearly a topological order, but for arbitrary subsets of V I don't see how this topological order would be defined.

I guess my gripe is that in my view, the framework of dynamic programming is not a useful way to analyze algorithms that explore a small set of states in an exponentially larger state space.


I think what you're missing is that DP is not an algorithm in itself, but rather a theorem. I also didn't know about this for a long time. Quoting from [1, §4]:

  > It should be stressed, here and elsewhere, that the DP functional equation
  > does not constitute an algorithm. It merely stipulates a certain property
  > that function f defined in (3) must satisfy. Indeed, in the context of Theorem
  > 2 the functional equation constitutes a necessary optimality condition. This
  > point is sometime not appreciated by students in their first encounter with DP

So what is actually happening in Dijkstra's algorithm is that the DP optimality condition is reached through a sequence of clever approximations. Another common way to reach the DP optimality condition is by recursively exploring everything with a cache; this is how compsci people usually learn about DP, and of course this can get exponential and does not apply well to big problems. Still, both are dynamic programming at their core.
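The "recursively explore everything with a cache" approach can be sketched like this, assuming an acyclic toy graph (names illustrative) where the Bellman-style functional equation can be evaluated directly:

```python
from functools import lru_cache

# Hypothetical DAG: edges[u] lists (successor, weight) pairs.
edges = {"s": [("a", 4), ("b", 1)], "b": [("a", 2)], "a": [("t", 1)], "t": []}

@lru_cache(maxsize=None)
def shortest(u):
    # DP functional equation: f(t) = 0, f(u) = min over edges (u -> v, w) of w + f(v).
    # The cache makes each subproblem get solved once; without acyclicity this
    # recursion would not terminate, which is exactly why Dijkstra needs more machinery.
    if u == "t":
        return 0
    return min((w + shortest(v) for v, w in edges[u]), default=float("inf"))

# shortest("s") == 4  (via s -> b -> a -> t)
```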

  > in my view, the framework of dynamic programming is not a useful way to analyze algorithms that explore a small set of states in an exponentially larger state space

Well, since DP is a theorem, I kinda agree: it is not how one should think about it when solving day-to-day problems. But formally, DP is the reason these algorithms work in the first place.

[1]: http://matwbn.icm.edu.pl/ksiazki/cc/cc35/cc3536.pdf


The state space here is subsets of V, but not all possible subsets, and not exponentially many. The entire state is represented by the priority queue at each step, which is bounded by the number of nodes and edges. Each state represented (encoded) by the priority queue transitions to exactly one new state (by popping the top node from the queue and adding its neighbours). And the number of steps is also bounded by the number of edges and nodes in V.

Most importantly, at every step of the algorithm, we only need to preserve the previous node in the state space (the previous state of the priority queue) to compute the new state (pop the top and add the dirty neighbours) and we can forget all previous states that existed before. We don't even need to keep track of the visited nodes, although if we did, that wouldn't change anything.


Not quite; actually, Dijkstra is a special case of what are called label correcting methods/algorithms (LCA) [1], which come directly from applying dynamic programming to the shortest path problem. LCA apply to all finite graphs with non-negative cycles, and to be specific, Dijkstra's algorithm is an LCA where in step one the node to be removed is chosen according to min_{i \in OPEN} d_i; see [1] for more.

[1]: https://web.mit.edu/dimitrib/www/DP_Slides.pdf (see lecture 3)


I totally agree in that I use the same mental model. But, if you look at it as “the shortest path must be the shortest path through one of its neighbors”, it can actually be classified as a dynamic programming algorithm!

https://en.wikipedia.org/wiki/Dijkstra%27s_algorithm#Dynamic...


In dynamic programming, the problem can be solved by solving subproblems, and those subproblems are solved by solving subsubproblems, and there is overlap between these subproblems. This allows us to solve DP problems in two ways, either by recursion with memoization or by iterative table filling.

Although the shortest path problem has some kind of "optimal substructure", the recursive memoized approach doesn't work because there's no set order in which the subproblems can be solved. Instead, you need to compute the shortest paths in order of shortest path length, and the shortest path lengths aren't given ahead of time - those are exactly what Dijkstra's algorithm computes!

It's not enough to call it dynamic programming that "the shortest path must be the shortest path through one of its neighbors", because this fact doesn't immediately lead to an acyclic subproblem dependency graph.

Shortest path on an acyclic graph, and longest path on an acyclic graph, are two problems that can be solved with dynamic programming - but Dijkstra's algorithm solves shortest paths on a different class of graphs that doesn't lend itself to DP.
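For contrast, here is a sketch of the acyclic case just mentioned, where classic table-filling DP does work: on a DAG (toy graph, illustrative names), the subproblem order is simply a topological order, known before any distances are computed:

```python
from graphlib import TopologicalSorter

# Hypothetical DAG: edges[u] lists (successor, weight) pairs.
edges = {"s": [("a", 4), ("b", 1)], "b": [("a", 2)], "a": [("t", 1)], "t": []}

# graphlib's TopologicalSorter takes a node -> predecessors mapping, so invert the edges.
preds = {u: set() for u in edges}
for u, outs in edges.items():
    for v, _ in outs:
        preds[v].add(u)

# Fill the distance table in topological order: every predecessor of u is
# finalized before u itself, so no priority queue is needed.
dist = {u: float("inf") for u in edges}
dist["s"] = 0
for u in TopologicalSorter(preds).static_order():
    for v, w in edges[u]:
        dist[v] = min(dist[v], dist[u] + w)
# dist["t"] == 4
```

The same loop with max and -infinity solves longest path on the DAG, which is exactly the pair of problems the comment names; on graphs with cycles no such precomputed order exists.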


I'm a bit lost in this terminology. But coming from the Operations Research perspective that gave Dynamic Programming its name, Dijkstra's Algorithm is very clearly Dynamic Programming. It's Forward Dynamic Programming as opposed to the much more common Backward Dynamic Programming, if that helps any.



