108: bring out your hate

Jan 14, 2009 16:01



jes5199 January 15 2009, 00:25:55 UTC
does dynamic programming mean anything other than "store things in memory"? all the explanations seem to be written for math-heads-who-code-a-little, not code-heads-who-need-a-little-math


pmb January 15 2009, 00:35:53 UTC
Not really. But most programming is "store things in memory and then do stuff".

Dynamic programming (and especially the implementation technique of dynamic programming called memoization) is basically adding a cache to your side-effect-free recursive functions and only actually doing any work if the answer is not already cached.

Consider a stupid example that math people harp on: the Fibonacci numbers. We can calculate them in the stupidest way possible like so:

def f(n):
    if n <= 1: return n
    else: return f(n-1) + f(n-2)

This is SLOW. Very, very slow. f(50) will not return in the next year.

If we add a cache then it becomes fast!

cache = {}
def f(n):
    if n not in cache:
        if n <= 1: cache[n] = n
        else: cache[n] = f(n-1) + f(n-2)
    return cache[n]

Now it is fast.
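(As an aside: a Python new enough to ship functools.lru_cache can do that cache bookkeeping for you; a minimal sketch, assuming that decorator is available:)

from functools import lru_cache

@lru_cache(maxsize=None)  # unbounded cache, keyed on the argument
def f(n):
    if n <= 1: return n
    else: return f(n-1) + f(n-2)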

Proving that you are in a situation where this technique can be correctly used is distinctly non-trivial in languages that thrive on side-effects (C, I'm looking at you), but could probably be done pretty easily in most strict functional languages.


pmb January 15 2009, 00:38:54 UTC
Oops. f(50) will take a few minutes. f(100) will not return in the next millennium, however.


jes5199 January 15 2009, 00:42:29 UTC
"don't repeat work" seems so trivially useful that I'm surprised that it needs a name.
and memoizing is well-known in the ruby-world, at least. (it doesn't actually matter if the program is particularly strictly functional, as long as the particular function you memoize doesn't have side effects)

anyway, I stand by my assertion that "dynamic programming" means "think less like a math major"



pmb January 15 2009, 00:29:27 UTC
The only intuition I have is to write your program in the most recursive-ish side-effect-free style you can, and then to check whether you ever call your function twice with the same arguments. If you ever do, then dynamic programming is the solution, and memoization is the easy way to get there.
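A sketch of that recipe as a generic wrapper -- illustrative code, not from any of the posted solutions, and assuming the wrapped function is side-effect-free with hashable arguments:

def memoize(fn):
    cache = {}
    def wrapped(*args):
        # compute once per distinct argument tuple, then serve from cache
        if args not in cache:
            cache[args] = fn(*args)
        return cache[args]
    return wrapped

@memoize
def f(n):
    return n if n <= 1 else f(n-1) + f(n-2)

Because the decorator rebinds the name f, the recursive calls inside f also go through the cache, which is what collapses the exponential call tree.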


conform January 15 2009, 18:32:36 UTC
So my solution is significantly faster than yours (O(n^3) vs. O(n^4)), but it was throwing me off that your outer recursion is no improvement over enumeration. In fact, it's slower than straightforward enumeration, since you have to build up a bunch of call-stack frames, etc.

I think I just realized that the important recursion that makes your approach "dynamic" is the one in your total() function (essentially the same as the one I produced in grid_sum()). This makes me feel much better about the world and my understanding of it.
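For reference, the usual O(n^3) shape of this problem fixes a top and bottom row, collapses each column between them to a single sum via prefix sums, and runs a Kadane-style maximum-subarray scan across those sums. A sketch with illustrative names (not the actual code from either solution, which isn't shown here):

def max_rect_sum(grid):
    # grid is assumed square (D x D), e.g. the matrix_60 input
    n = len(grid)
    # prefix[i][j] = sum of column j over rows 0..i-1
    prefix = [[0] * n for _ in range(n + 1)]
    for i in range(n):
        for j in range(n):
            prefix[i + 1][j] = prefix[i][j] + grid[i][j]
    best = grid[0][0]
    for top in range(n):
        for bottom in range(top, n):
            run = 0  # Kadane's scan over the collapsed column sums
            for j in range(n):
                col = prefix[bottom + 1][j] - prefix[top][j]
                run = max(col, run + col)
                best = max(best, run)
    return best

The two row loops contribute O(n^2) and the inner scan O(n), hence O(n^3) overall.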


pmb January 16 2009, 15:25:30 UTC
Yours is definitely O(n^4) at least.

max_rect_sum calls max_column_sum n*n times, and max_column_sum is O(n^2) with its two nested loops each of extent n.

Depending on how you build up your solution, both memoizations may be important, or neither. The way I built mine, both are important. The way coldtortuga built his, neither are.


conform January 16 2009, 17:44:41 UTC
max_rect_column_sum is O(n), having only one loop. And my approach is definitely faster -- try running my code with D=60 or D=100 and compare to yours.

I'm also not sure about your assertion that the outer memoization is significant. If I replace your outer loop with an enumerative one, I get results like the following (in order: my code, your original, your enumerative):

sea-smoke:~/problems conform$ cat matrix_60 | time python 108_max_rect.py
2781
1.42 real 1.36 user 0.04 sys
sea-smoke:~/problems conform$ cat matrix_60 | time python peter.py
2781
59.62 real 58.88 user 0.61 sys
sea-smoke:~/problems conform$ cat matrix_60 | time python peter_enumerative.py
2781
27.58 real 27.13 user 0.32 sys


