User:Devon McCormick/DLA0Runs
Performance Metrics
First we load this code, along with some utilities, and then run it.
   load 'viewmat mystats'
   load '~Code\cdla6Revised0.ijs'
We'll use the verb usus (from the mystats script just loaded) to summarize the usual statistics in which we may be interested: minimum, maximum, mean, and standard deviation. This will help us study how many random steps we need to add a point in the following runs. The "~Code/" sub-directory is an alias for a folder with my J code, assigned (from a standard J session) via "Edit/Configure/Folders"; you may use your own sub-directory as long as it has the code.
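The actual definition of usus lives in the mystats script, so it isn't repeated here; as a rough, hypothetical stand-in (the name ususSketch and its definition are mine, not from mystats), a verb returning those four numbers could look like this:

NB.* ususSketch: hypothetical stand-in for usus -- min, max, mean, sample std dev.
ususSketch=: 3 : '(<./y), (>./y), mn, %: (+/ *: y - mn) % <: #y [ mn=. (+/y) % #y'

Applied to the walk lengths nn gathered below, a verb like this yields the four middle numbers of each stats row.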
We also define these two utilities to help us focus on the small part of our point-space which has non-zero values.
ixor=: 13 : '({.,{:)I.+./y'      NB.* ixor: axis indexes of non-zero range.
fpil=: 13 : '({.y)+i.>:|-/y'"1   NB.* fpil: first point, index length.
These let us build the smallest box from which to extract non-zero points, as we'll see shortly.
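For example (a small made-up matrix, not one of the DLA runs), ixor and fpil give the column and row ranges of the non-zero region, and boxed indexing then pulls out just that box:

   ]m=. 5 6 $ 0 0 0 0 0 0  0 0 1 0 0 0  0 0 0 0 1 0  0 0 0 0 0 0  0 0 0 0 0 0
0 0 0 0 0 0
0 0 1 0 0 0
0 0 0 0 1 0
0 0 0 0 0 0
0 0 0 0 0 0
   ixor m                                  NB. first and last columns holding a 1
2 4
   fpil ixor m                             NB. filled out to the full column range
2 3 4
   fpil ixor |: m                          NB. the same for the rows
1 2
   m {~ < (fpil ixor |: m); fpil ixor m    NB. smallest box containing all the 1s
1 0 0
0 0 1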
First we run 1000 point releases with a maximum of 1000 steps each; we time this and assign the walk lengths to nn.
   ]_1{stats=. ,:tm,(+/,SPC_dla_),(usus nn),nn+/ . = >./nn [ tm=. 6!:2 'nn=: grow_dla_"0 ] 1e3$1e3'
24.186796 784 5 1000 498.302 352.15767 217
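For reference, here is my annotation of how the columns of each stats row line up with that expression:

NB. Columns of each stats row, left to right:
NB.   tm                time in seconds for the batch (from 6!:2)
NB.   +/,SPC_dla_       total number of points now in the cluster
NB.   usus nn           min, max, mean, std dev of the walk lengths nn
NB.   nn +/ . = >./nn   how many walks hit the maximum length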
We ran for about 24 seconds and accumulated 784 points, as well as some other stats on how long each walk went before ending. The lengths of the walks varied from 5 to 1000, with a mean of about 498. There were 217 walks of the maximum length: it's a good bet that a walk of the maximum length did not add a point and, in fact, since there were only 1000 releases but 784 points + 217 maximum-length walks = 1001, exactly one walk of the maximum length must have added a point.
If we run for another thousand points, it takes longer and we add fewer points to the cluster.
   _1{stats=. stats,tm,(+/,SPC_dla_),(usus nn),nn+/ . = >./nn [ tm=. 6!:2 'nn=: grow_dla_"0 ] 1e3$1e3'
42.558429 1028 7 1000 880.503 252.11638 756
In a little over 42 seconds, we added 244 (=1028-784) points; our average walk increased in length from about 498 to 881.
Let's view the result. First we extract the middle portion of SPC_dla_ that has non-zero points and display it. We'll scale the size of the picture up by a factor of four to make it easier to see.
   ixs=. (13 : '(fpil ixor |:y);fpil ixor y') SPC_dla_   NB. row range; column range of the non-zero region
   ]sz4=. |.;(4*#)&.>ixs                                 NB. 4x the box dimensions, reversed for the viewer
   viewmat ixs (13 : 'y{~<x') SPC_dla_                   NB. display only the non-zero bounding box
   setsize_jviewmat_ sz4                                 NB. resize the viewmat window
Here's what we see initially:
If we continue to run with the same arguments, we can track the time, which seems to level off at about 47 seconds per run, while the number of points added each run continues to decline.
   2-~/\0,1{"1 stats          NB. Number of points added each time.
784 244 134 109 77 58 64 51 43 45 43 72 59 61 52 36
   0.1 roundNums 0{"1 stats   NB. Number of seconds per run.
24.2 42.6 45.2 45.6 46.7 47.1 47 47.5 47.5 47.4 47.3 46.6 46.5 46.8 47.1 47.5
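The first expression turns the running point totals in column 1 of stats into per-run additions: prepend a zero, then subtract the earlier item from the later one in each overlapping pair. A small check on the first three cumulative totals (784, 1028, 1162):

   2 -~/\ 0 784 1028 1162
784 244 134

The roundNums verb (presumably among the utilities loaded at the start) rounds its right argument to the nearest multiple of its left argument, here 0.1.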
As we watch the cluster grow, here's what it looks like at the second iteration:
At the sixth, we see that it just keeps getting longer in one direction:
Lastly, by the sixteenth round, we see that this trend looks like it will continue indefinitely:
Conclusion
So, this code isn't ideal, but it does seem to work the same as its predecessor. For an idea of how to deal with the bias toward "stretched-out" clusters, take a look here.
Also, we can get some idea of a baseline for efficiency. The number of points/second changed like this:
   0.1 roundNums (2-~/\0,1{"1 stats)%0{"1 stats   NB. pts/sec
32.4 5.7 3 2.4 1.6 1.2 1.4 1.1 0.9 0.9 0.9 1.5 1.3 1.3 1.1 0.8
This is about one point per second, so it would seem that there's room for improvement.
However, on the principle that it's easier to optimize debugged code than to debug optimized code, let's first take a look at another implementation, one that may better play to the strengths of J.