The straightforward recursive solution:
def collatz(n):
    if n == 1:
        return 1
    if n % 2 == 0:
        return 1 + collatz(n/2)
    else:
        return 1 + collatz(3*n + 1)

very rapidly converges to... an error:
RuntimeError: maximum recursion depth exceeded

So I've recoded it a bit, using a cache dictionary to store all the intermediate values:
cache = {1: 1}
def collatz(n, res):
    nn = n
    while True:
        if n in cache:
            if nn != n:
                cache[nn] = cache[n]
            return cache[nn]
        if n % 2 == 0:
            cache[n] = 1 + collatz(n/2, cache)
        else:
            cache[n] = 1 + collatz(3*n + 1, cache)

Now, looping over all the numbers below 1 million, we find the solution to the problem: the number with the longest path is 837799, with a path length of 524. The code runs in about 4 seconds.
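The driver loop isn't shown in the post; a minimal sketch of what it could look like (hypothetical, just matching the collatz signature above):

# Hypothetical driver loop (not shown in the post): scan all starting
# numbers below one million and keep the one with the longest path.
best_n, best_len = 1, 1
for i in xrange(1, 1000000):
    path_len = collatz(i, cache)
    if path_len > best_len:
        best_n, best_len = i, path_len
print best_n, best_len  # expected: 837799 524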
Since I had that cache dict around to play with, I graphed the frequencies of each path length with Matplotlib (strange, ha ;) ):
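Roughly like this (a sketch, not my exact plotting code, assuming the cache populated by the run above):

# Sketch: count how often each path length occurs among the starting
# numbers we looped over, then plot length vs. frequency.
import matplotlib.pyplot as plt
from collections import defaultdict

freq = defaultdict(int)
for num, plen in cache.iteritems():
    if num < 1000000:  # the cache also holds intermediate values above 1M
        freq[plen] += 1

lengths = sorted(freq)
plt.plot(lengths, [freq[l] for l in lengths])
plt.xlabel('path length')
plt.ylabel('frequency')
plt.show()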
There are few short paths, the long paths are rare, and most paths have a length between 100 and 200 (more or less).
UPDATE: I've removed the length variable, which wasn't needed (left over from a previous version of the code), and fixed the path length for 837799, which was wrongly reported.
12 comments:
FWIW, the straightforward solution fails miserably only because Python has never implemented the very well-known tail call optimization (http://en.wikipedia.org/wiki/Tail_recursion). With any decently optimizing compiler (or even runtime), a tail-recursive function is equivalent to a regular loop in terms of stack consumption.
So, while the following function in Python will fail with a runtime exception:
def f():
    print "Hello, world!"
    f()
    return  # never executed
in many other languages the above will simply behave as a "while true" loop, as the programmer most likely intended it.
Cheers.
@zack: thanks for your feedback. Of course, tail recursion would have helped a bit, but the point is: recursion here is the wrong approach.
Using a caching structure lets you avoid function calls, which are much heavier than a dict lookup.
Also, the cache is almost empty at the beginning of the loop (small nums, short paths), but as the numbers increase, the cache becomes hot and you gain a lot (in rhyme too :) ).
I didn't analyze the cache hit/miss rate, maybe someone interested can do that :)
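(A rough way to check the "function call vs dict lookup" claim with timeit; exact numbers will vary by machine:)

# Rough micro-benchmark: a dict lookup vs. a trivial function call.
import timeit
print timeit.timeit('d[42]', setup='d = {42: 1}')         # dict lookup
print timeit.timeit('f(42)', setup='def f(n): return 1')  # function call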
Err, not really. In fact, what you use a cache for is what was originally called "memoization", which is typical of functional programming. Memoization works pretty well with function calls and supports your optimization.
Also, in functional programming languages function calls are really quick; in fact, a recursive function usually has the same overhead, call after call, as a loop. You can basically imagine that the recursion is not there at execution time: it gets "optimized away" by the compiler/interpreter.
Anyhow, sorry for drifting away from your subject; I just jump up when I read comments against recursive functions when they should really be comments against poor implementations of recursion (like the one currently present in the Python interpreter) :-)
Cheers.
@zack: thanks again for getting back :)
Well, not really: I didn't mean to be against tail recursion (the function was written exactly with it in mind), just that it doesn't solve this problem in Python (ok, because of Python's limitations :) ).
Indeed Haskell, which has proper tail recursion, doesn't have this problem.
BTW, there is something fishy with your program: collatz 837799 gives me 524, not 259 (collatz 63,728,127 gives me 949 and collatz 670,617,279 gives me 986, as on http://en.wikipedia.org/wiki/Collatz_conjecture).
@norsetto: thanks for pointing it out! I was printing the path length for the last number (999,999) instead of the one for 837,799, sigh :(
What's the point of the nn = n assignment? You're never modifying 'n'. Also, the 'res' argument is never used.
Refactoring your code I get this: http://gist.github.com/323897
Just to check: there is a difference of +1 between Wikipedia's stopping time and this program's output of the sequence length (as stated in Problem 14), right?
Hurray for tail recursion trolls.
Also, did you try just adding a @memoize decorator to the original function? I suspect it would hit the cache before it reaches the stack limit.
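There's no built-in @memoize in this Python, but a minimal hand-rolled sketch would be:

# Hypothetical memoize decorator (not from the stdlib): caches results
# keyed by the argument; recursive calls go through the wrapper too.
def memoize(f):
    cache = {}
    def wrapper(n):
        if n not in cache:
            cache[n] = f(n)
        return cache[n]
    return wrapper

@memoize
def collatz(n):
    if n == 1:
        return 1
    if n % 2 == 0:
        return 1 + collatz(n/2)
    return 1 + collatz(3*n + 1)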
If you do want to do the caching yourself, here is what my caching-local functions look like:
def func(arg, cache={}):
    try:
        return cache[arg]
    except KeyError:
        result = cache[arg] = do_normal_calculation(arg)  # placeholder for the real computation
    return result
[why doesn't blogger support the PRE tag?]
The "cache" dictionary is local to the function so it doesn't pollute the module namespace (it is also a slightly faster name lookup). Assuming the cache is hit a lot (which it should be, that's the purpose) the try/except is cheap because it's rarely raised.
@Marius: yep, 'nn' is a leftover from a previous version of the code, like 'res'. Thanks for reorganizing the code; I'm attaching it here for reference (as I won't update the code in the post):
cache = {1: 1}

def collatz(n):
    if n not in cache:
        if n % 2 == 0:
            cache[n] = 1 + collatz(n/2)
        else:
            cache[n] = 1 + collatz(3*n + 1)
    return cache[n]
@ulrik: yes: Wikipedia counts the edges, while here (and in Project Euler) we count the nodes (which are N+1 when the edges are N). For example, the chain starting at 13 (13 → 40 → 20 → 10 → 5 → 16 → 8 → 4 → 2 → 1) has 10 nodes but 9 edges.
@Jack: thanks for your notes, they are very interesting! I didn't know about the @memoize decorator; in fact, I don't even know how to use it :) examples are welcome :)
In my version here, I don't use recursion at all.
def hailstone(n):
    seq = [n]
    while n > 1:
        n = 3*n + 1 if n & 1 else n//2
        seq.append(n)
    return seq
- Paddy.
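(A quick usage sketch, assuming Paddy's hailstone above: the path length is just the length of the returned list.)

# Usage sketch: the path length is the number of terms in the sequence.
print len(hailstone(27))  # 112 terms; 27 is famous for its long path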
Thank you. Your code is very nice :)