Thursday, August 11, 2011



The problem, as I recall it being described to me, is to calculate the number of ways to get from one vertex of a square lattice (of size N points per side) to the diagonally opposite one, where each move is a single step along one of the axes, always in the same direction along that axis, say +. It is, or used to be, a popular interview question to test the interviewee’s coding skills.

My zeroth thought … I’ll come back to that later.

My first thought was, of course, an iterative approach – figure out a process to get from the (k-1)th diagonal to the kth: at each step of the iteration use the distribution of ways to get to the pth point on the (k-1)th diagonal to calculate the number of ways to get to the qth point on the kth, and store this new distribution. Start from k=2 and repeat until k=N. OK, that gets us to the diagonal. Thought 1.1 was divide and conquer: get to the transvecting diagonal between the vertices, square the distribution, and then add.
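As a sketch, the iterative scheme plus the divide-and-conquer step might look like the following (count_paths_iterative is just an illustrative name, not anything canonical):

```python
def count_paths_iterative(n):
    # ways[q] = number of monotone paths reaching the q-th point
    # on the current diagonal; start at the corner (first diagonal)
    ways = [1]
    for k in range(2, n + 1):  # march out to the k-th diagonal
        ways = [(ways[i - 1] if i > 0 else 0) +
                (ways[i] if i < k - 1 else 0)
                for i in range(k)]
    # divide and conquer: square the distribution on the main
    # diagonal and sum, to count the paths on to the opposite corner
    return sum(w * w for w in ways)
```

Squaring works because each corner-to-corner path is a path to a diagonal point paired, by symmetry, with a path from that point to the far corner.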

My second thought was to take a recursive approach for the first part (the distribution on the diagonal):
def diag(k):
    # distribution of ways to reach each point on the k-th diagonal
    if k == 1:
        return [1]
    oldlist = [0] + diag(k - 1) + [0]  # pad the ends so boundary points get one way each
    newlist = [oldlist[i] + oldlist[i + 1] for i in range(k)]
    return newlist
I had forgotten how much more beautiful recursion is than iteration!
All you do is call diag(N), square the elements and sum!

I think the above goes as n**3.

Time to implement and try it:
$ python --n natural number

OK, so for n =1,2,3,4,5 we get 1,2,6,20,70.

So how do we know it is right? Let’s say you’ve debugged the code and validated it for n = 1, 2, 3. It agrees with what the interviewer has. The answer agrees with the consensus; still, how do we know that it is right?

Put another way, can you predict the answer for n = 6 or any n, before you run your code, or at least, in less than n**3 time?
I’ll claim that my code runs in n**1 time. How?

Well, I didn’t code the recursion. Which brings me to my 0th thought: I can do it analytically, so why code it?

One can readily see that the distribution on the diagonal is binomial, since at every node you are choosing which of the two axes to step along, and the number of steps along one axis gives your position on the diagonal. If we just wanted to get to the diagonal, the number of ways is 2**(N-1). So the answer we want is the sum of the squares of the binomial coefficients, which luckily has a closed form: writing m = N-1 for the number of steps per side, it is (2m)!/(m!)**2. If ! goes as N-time, so does my code.
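A minimal check of the closed form against the numbers above, counting steps per side rather than points (closed_form is my name for it):

```python
import math

def closed_form(n):
    # sum of squared binomial coefficients C(n-1, k)**2 collapses
    # to the central binomial coefficient C(2(n-1), n-1)
    m = n - 1  # steps per side on an n-points-per-side lattice
    return math.factorial(2 * m) // math.factorial(m) ** 2
```

This reproduces 1, 2, 6, 20, 70 for n = 1 through 5.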

Comment 0: I can also ask the question: “to calculate the number of ways to get from one vertex of a square lattice (of size N points per side)”, where N is:
Real (for the computer this should be pseudo-real or float I think)
Matrix (real or complex valued)
Anything which forms a group under multiplication, or more generally under any operation that satisfies the properties of multiplication with the commutativity requirement relaxed.

Since n! = Gamma(n+1), any of the above can be calculated (convergence for matrices will be harder to prove). 
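For the real-valued case, a sketch using the standard-library gamma function (paths_real is an illustrative name; the matrix-valued case I make no attempt at here):

```python
import math

def paths_real(m):
    # continue (2m)! / (m!)**2 to real m via n! = Gamma(n+1)
    return math.gamma(2 * m + 1) / math.gamma(m + 1) ** 2
```

At integer m it reproduces the central binomial coefficients (e.g. paths_real(4) gives 70.0), and between integers it interpolates smoothly.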

I wouldn’t have the slightest idea how to code any of the above!

Comment 1: What about lattices of different dimensions? Now here you’ve got me; I can’t avoid coding. If we are only interested in the number of ways of getting to the nearest transvecting diagonal plane, the answer is simply D**(n-1). But it is no longer sufficient to square the multinomial coefficients and add, since we have to get to the other transvecting planes before getting from the last one to the opposite vertex. There are two such planes in 3D, with normal vector (1,1,1), at distances 1/sqrt(3) and 2/sqrt(3) from the origin. Similarly in higher dimensions.
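Since coding seems unavoidable here, a sketch of the D-dimensional count by recursion with memoization (count_paths is my name; n counts points per side, as before):

```python
from functools import lru_cache

def count_paths(n, D):
    # monotone paths from (0,...,0) to (n-1,...,n-1) on an
    # n-points-per-side lattice in D dimensions
    @lru_cache(maxsize=None)
    def ways(point):
        if all(c == 0 for c in point):
            return 1
        # sum over the D possible last steps into this point
        return sum(ways(point[:i] + (point[i] - 1,) + point[i + 1:])
                   for i in range(D) if point[i] > 0)
    return ways((n - 1,) * D)
```

count_paths(5, 2) gives 70 again, and count_paths(3, 3) gives 90 = 6!/(2!)**3.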

Now if you construct the following D-dimensional solids, namely two D-simplices joined on their D-hedron faces, then we are reduced to summing the squares of the multinomial coefficients. But otherwise, the “coding” value of this problem lies in its extension to higher dimensions.

Comment 1.1: I am going to take a reasonable guess and claim that the answer in D dimensions is (Dn)!/(n!)**D. Again, by using the gamma function (and extending it to the complexes), we can allow both n and D to take on complex values. I have no idea what a complex dimension looks like (though I suspect that John Conway and others do), but a fractional dimension, or at least a real-valued fractal dimension, that I can visualize. The famous Sierpinski gasket has a fractal dimension (box counting = Hausdorff) of 1.585. Similarly, if I construct the square equivalent (taking away 4 of the nine subsquares of a tictactoization of the square at every recursion), I get a lattice of fractal dimension ln(5)/ln(3) ~ 1.465 (5 copies, 1/3 linear dimension each). I can readily imagine that the constraint of having to pass through certain lattice points reduces the number of paths relative to the number in a square lattice.
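The guess can be checked against brute-force enumeration for small integer cases (a sketch; guess and brute are my names, and n here counts steps per axis rather than points):

```python
from itertools import permutations
from math import factorial

def guess(n, D):
    # the conjectured multinomial closed form
    return factorial(D * n) // factorial(n) ** D

def brute(n, D):
    # every corner-to-corner path is an ordering of n steps
    # along each of the D axes; count the distinct orderings
    steps = tuple(axis for axis in range(D) for _ in range(n))
    return len(set(permutations(steps)))
```

For instance guess(2, 3) == brute(2, 3) == 90, so the guess holds, at least at the integers: it is just the multinomial coefficient.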

Comment 2: What a beautiful introduction to Path Integrals! The function of paths we are “integrating” above is just 1. But you can readily imagine summing the length L of the path over all paths, or say the phase change over the path, exp(i L), or indeed any other function of paths. If you use L, you are doing geometric optics, if you use some particle action S, why, “welcome to quantum mechanics!”
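A sketch of this “path integral” reading in 2D, with the functional left pluggable (path_sum is my name; n counts steps per axis):

```python
from itertools import permutations

def path_sum(n, f):
    # sum an arbitrary functional f over all monotone paths,
    # each path encoded as an ordering of its x and y steps
    paths = set(permutations(('x',) * n + ('y',) * n))
    return sum(f(p) for p in paths)

# f = 1 on every path just recovers the path count:
# path_sum(2, lambda p: 1) == 6
```

Swapping in f(p) = exp(i*S(p)) for some action S over the path is then exactly the sum over histories alluded to above.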

I’d like to thank _ for suggesting this problem.

