NYCJUG/2022-02-08
Beginner's regatta
The J Playground
We take a look at a web-based tool for experimenting with J.
Show-and-tell
Scoring Hole-card Combinations in Omaha
The latest updates to the code for simulating the poker game Omaha incorporate low hands. Once we were able to evaluate winners of low hands, we revamped the scoring table to account for split pots, both in the case of a high hand splitting with a low hand and for multiple splits among high and low hands.
So, in the case of a single (high) winner, the set of hole-cards that led to it gets a score of one. A split between a single high and a single low gives 1/2 point to each of those starting sets of hole-cards (or the whole point to a hand that wins both). Similarly, other multiple-winner scenarios divide the single point appropriately among the hole-cards leading to those winners, both high and low. The result of all this is a 10-by-270,725 table of numbers called wtdWinsRatio.
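As a minimal sketch of that scoring logic (hypothetical names, not the production code), suppose x flags the winning high hand(s) among the players in one deal and y flags the winning low hand(s), with y all zeros when no low qualifies:

scoreSplit=: 4 : 0
   hi=. x%+/x                NB. Share of the high half (or of the whole point if no low qualifies)
   lo=. y%1>.+/y             NB. Share of the low half, if any low qualifies
   (hi*0.5+0.5*0=+/y)+0.5*lo
)
   1 0 0 scoreSplit 0 0 0    NB. Single high winner, no low: the whole point
1 0 0
   1 0 0 scoreSplit 0 1 0    NB. One high, a different low: half a point each
0.5 0.5 0
   1 1 0 scoreSplit 1 0 0    NB. Two highs, one of which also wins the low
0.75 0.25 0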
Weighted Wins Ratio Table
The variable wtdWinsRatio has ten rows representing the winning results of game simulations for two to eleven players. The ratio is the number of times a given four-card hand won (weighted by portion won) divided by the number of times that hand appeared. Since these are simulations, both the numerator and denominator of this ratio vary somewhat by chance, but we hope to iron out the incidental variation by running enough simulations.
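In other words, with hypothetical names for the two tallies kept by the simulation:

   wtdWinsRatio=: wtdWins % numDealt   NB. Weighted wins per hand % times that hand was dealt: 10 x 270725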
The 270,725 columns of wtdWinsRatio correspond to the four-card possibilities (4!52 of them), with the table AHC of all hole-cards ordered ascending within each row and then by rows:
   $AHC             NB. All hole cards
270725 4
   #~.AHC           NB. Each row unique
270725
   (]-:/:"1~) AHC   NB. In standard order, which is this...
1
   (]-:/:~) AHC     NB. ...and this.
1
The first 20 columns look like this:
   0.001 roundNums 20{."1 wtdWinsRatio
0.368 0.376 0.368 0.379 0.368 0.374 0.412 0.406 0.425 0.443 0.383 0.404 0.408 0.382 0.386 0.379 0.375 0.375 0.385 0.396
0.267 0.262 0.246 0.241 0.243 0.241 0.256 0.264 0.276 0.294 0.271 0.273 0.275 0.271 0.274 0.256 0.25 0.248 0.251 0.258
0.218 0.206 0.203 0.189 0.184 0.186 0.189 0.195 0.211 0.216 0.21 0.212 0.216 0.222 0.22 0.2 0.192 0.188 0.189 0.199
0.182 0.176 0.156 0.155 0.148 0.153 0.157 0.157 0.166 0.179 0.171 0.173 0.174 0.188 0.179 0.165 0.154 0.153 0.153 0.156
0.161 0.151 0.135 0.13 0.132 0.129 0.128 0.132 0.146 0.147 0.143 0.146 0.149 0.163 0.156 0.138 0.133 0.132 0.128 0.13
0.14 0.136 0.118 0.112 0.11 0.114 0.114 0.118 0.127 0.132 0.121 0.12 0.126 0.143 0.136 0.119 0.115 0.108 0.11 0.112
0.127 0.121 0.107 0.099 0.097 0.099 0.099 0.105 0.113 0.113 0.103 0.104 0.108 0.13 0.123 0.108 0.102 0.098 0.096 0.099
0.117 0.106 0.095 0.089 0.087 0.084 0.089 0.096 0.104 0.108 0.09 0.091 0.093 0.117 0.108 0.096 0.088 0.086 0.085 0.086
0.108 0.097 0.086 0.082 0.078 0.081 0.081 0.087 0.094 0.101 0.078 0.08 0.082 0.106 0.1 0.088 0.081 0.075 0.075 0.076
0.095 0.092 0.081 0.072 0.071 0.073 0.078 0.083 0.085 0.09 0.068 0.07 0.071 0.099 0.089 0.078 0.073 0.068 0.068 0.07
Graphically, it looks like this:
viewmat wtdWinsRatio
For a viewmat with the standard palette, dark blue represents low values, cyan is higher, green higher again, yellow higher still, finishing up with red and magenta as the highest colors.
We see that the highest value represented is a little greater than 0.7.
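We can confirm this directly from the table (the value matches the two-player maximum in the statistics below):

   0.0001 roundNums >./,wtdWinsRatio
0.7236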
Statistical Measures
The numbers in each row represent the score - the number of times a given four-card hand won (weighted by portion won) divided by the number of times that hand appeared - for each starting set of four cards, in the standard order defined above. They are not normalized in any sense, which is evident if we take some basic statistical measures of the table.
   +/"1 wtdWinsRatio
141118 97540.7 75889.8 62935.9 54315.7 48185.7 43622 40124.6 37392 35219.7
   0.0001 roundNums usus |:wtdWinsRatio
0.0902 0.7236 0.5213 0.0631
0.0188 0.5646 0.3603 0.0551
0.0048 0.4656 0.2803 0.0472
0.0018 0.4009 0.2325 0.0422
0.0015 0.3552 0.2006 0.0392
0.0012 0.3214  0.178 0.0374
0.0001 0.3037 0.1611 0.0363
0.0002 0.2873 0.1482 0.0358
     0 0.2782 0.1381 0.0357
     0 0.2733 0.1301 0.0359
The sums are higher for fewer players since, for instance, in a two-player game each player will win about half the time, whereas in an eleven-player game each player will win only about one-eleventh of the time. The statistics in the table are the minimum, the maximum, the mean, and the standard deviation of each row.
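The usus verb comes from the stats scripts loaded with this code; a rough stand-in (not its actual definition), assuming the usual sample standard deviation, would be:

   sd=: [: %: (+/ % <:@#)@:*:@:(- +/ % #)      NB. Sample standard deviation
   ususLike=: (<./ , >./ , (+/ % #) , sd)"1    NB. Min, max, mean, SD of each row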
The image above of the entire table introduces a kind of distortion caused by viewing very different ranges of values at the same time. In fact, the best sets of hole-cards are much the same for two players as for eleven, as can be seen by comparing viewmats of the 2-player row and the 11-player row. The absolute values differ greatly between the two, as comparing the first two columns of the statistics table above shows: the values range from 0.0902 to 0.7236 for two players but only from 0 to 0.2733 for eleven players.
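The per-row viewmats compared here can be regenerated from the table itself (row 0 holds the 2-player results and row 9 the 11-player results):

   viewmat 0{wtdWinsRatio    NB. 2 players
   viewmat 9{wtdWinsRatio    NB. 11 players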
All Hole Cards
We encode the 0-51 values of AHC as characters.
   ]rndix=. 4?#AHC              NB. Random selections
142781 254032 107303 22250
   rndix{AHC                    NB. As characters spanning 0{a. to 51{a. (mostly unprintable)
   a. i. rndix{AHC              NB. As numbers 0-51
 8 20 42 49
25 29 31 39
 5 39 45 50
 1  3  9 34
   showCards a. i. rndix{AHC    NB. As cards
+--+---+--+--+
|Q♠|10♣|9♦|5♠|
+--+---+--+--+
|A♦|7♥ |5♥|2♠|
+--+---+--+--+
|K♠|8♠ |7♣|2♠|
+--+---+--+--+
|J♣|10♥|5♣|3♣|
+--+---+--+--+
The suitRank verb converts a vector of card numbers into a two-row table of suit and rank:
   suitRank"1 a. i. rndix{AHC
 0 1 3  3
 8 7 3 10

 1 2 2  3
12 3 5  0

 0 3 3  3
 5 0 6 11

 0 0 0  2
 1 3 9  8
We can isolate just the suits like this:
   0{&><"2 suitRank"1 a. i. rndix{AHC
0 1 3 3
1 2 2 3
0 3 3 3
0 0 0 2
This is key to what we do in this next section: we want to replace the suits with all suit combinations having the same pattern as the existing ones.
Equivalent Card Combinations Differing Only by Suit
A key insight to improving the weighted wins ratio was to realize that, because this is a simulation, equivalent sets of hole-cards will have slightly different scores. For instance, these three sets of hole-cards are equivalent because their ranks are the same and the suit pattern is the same but the suits differ within the pattern:
|K♦|9♠|4♥|2♣|
|K♥|9♦|4♠|2♣|
|K♠|9♦|4♣|2♥|
There are 24 instances of this "K-9-4-2 rainbow". Other suit patterns have this many or fewer variants. For instance, for K-9-4-2 of the same suit, there are only four variants: one for each suit.
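The 24 is just the number of ways four distinct suits can be assigned to the four distinct ranks, while a one-suited hand can only pick which of the four suits it is in:

   !4    NB. Orderings of the four suits over a rainbow hand's ranks
24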
The point is, my simulation calculated different scores for these 24 variants; looking at statistics for the 24 scores of this one, we see:
min   0.1097
max   0.1339
mean  0.1161
SD    0.0054
Since we know these are really the same, we assign all 24 of these the mean score of 0.1161. Once we do this for all sets of variations, we have a better estimate for each. Using these adjusted scores, we can rank all sets of hole-cards and look at the best and worst.
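A minimal sketch of that averaging step (hypothetical name, not the production code): given the positions of one set of equivalent hands (rows of AHC, hence columns of wtdWinsRatio), replace their scores in every row with that row's mean over those columns.

smoothEquivalents=: 3 : 0
   means=. (+/%#)"1 y{"1 wtdWinsRatio                    NB. Mean score per player count over the equivalents
   wtdWinsRatio=: ((#y)#"0 means) (<a:;y)} wtdWinsRatio  NB. Write the means back into those columns
)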
Representative Best 60 Hole-cards
These are shown with the best at the top, proceeding downward, in sets of four from left to right. So A-K-A-K, both suited, is the best and A♣ A♦ 8♣ 8♦ is the worst shown. Because there are so many duplicates for each variant, we list only one from each set here; the A-K-A-K suited example happens to be in clubs and diamonds (as are all of these, since those are the first two suits in alphabetical order) but it represents the other 11 equivalent hands as well. One take-away from this is that pocket aces are much better if they are also suited with the other two cards.
Notice too how there are very few low cards in these sets but, when there are, they are mostly 4s and 5s, probably because of the low and straight possibilities with the aces. There is still some random variation here as well which may be why A♣ A♦ J♦ 10♣ (third one down on the left) is ranked higher than A♣ A♦ J♣ J♦ (fourth one down). So, while these may not be strictly correct, they are largely correct and we can still draw some useful inferences from them.
Representative Worst 60 Hole-cards
These are ordered from worst to not-as-bad, top to bottom and then left to right, and each one represents all of its equivalent variants. We can see from this that four-of-a-kind is the very worst set of hole-cards, except for four aces. Three-of-a-kind also sucks, especially with lower cards. Again, there is random noise here, as it seems unlikely that 2-2-2-9 rainbow (top of the 3rd column) is really less bad than 3-3-3-J rainbow (second from the bottom of the 2nd column).
Besides the roughness of this ordering because of randomness, what you see above represents only a few hundred possibilities out of 270,725 possible sets of four hole-cards, or 16,432 equivalent sets, so take this all with a grain of salt.
Generating the Equivalent Sets
These equivalent sets were generated by this J code:
suitShuffles=: 3 : 0
   assert. 2=#$sr=. suitRank y
   if. -.nameExists 'SCRes' do. SCRes=: genSCRes '' end.
   'suits ranks'=. sr=. orderByCount sr      NB. Suits, Ranks: order by ascending suit.
   ncsb=. /:~#/.~suits                       NB. # cards/suit breakdown (as in SCC).
   sc=. >SCRes{~SCC i. <,ncsb                NB. Corresponding suit combinations
   hv=. /:~"1 suitRank&>(<"1 sc) 0}&.><sr    NB. Hand Variants
)
It relies on SCRes, a five-element vector of tables of equivalent suit combinations, which looks like this:
   SCRes
+-------+-------+-------+-------+-------+
|0 0 0 0|1 0 0 0|0 0 1 1|0 1 2 2|0 1 2 3|
|1 1 1 1|2 0 0 0|0 0 2 2|0 1 3 3|0 1 3 2|
|2 2 2 2|3 0 0 0|0 0 3 3|0 2 1 1|0 2 1 3|
|3 3 3 3|0 1 1 1|1 1 0 0|0 2 3 3|0 2 3 1|
|       |2 1 1 1|1 1 2 2|0 3 1 1|0 3 1 2|
|       |3 1 1 1|1 1 3 3|0 3 2 2|0 3 2 1|
|       |0 2 2 2|2 2 0 0|1 0 2 2|1 0 2 3|
|       |1 2 2 2|2 2 1 1|1 0 3 3|1 0 3 2|
|       |3 2 2 2|2 2 3 3|1 2 0 0|1 2 0 3|
|       |0 3 3 3|3 3 0 0|1 2 3 3|1 2 3 0|
|       |1 3 3 3|3 3 1 1|1 3 0 0|1 3 0 2|
|       |2 3 3 3|3 3 2 2|1 3 2 2|1 3 2 0|
|       |       |       |2 0 1 1|2 0 1 3|
|       |       |       |2 0 3 3|2 0 3 1|
|       |       |       |2 1 0 0|2 1 0 3|
|       |       |       |2 1 3 3|2 1 3 0|
|       |       |       |2 3 0 0|2 3 0 1|
|       |       |       |2 3 1 1|2 3 1 0|
|       |       |       |3 0 1 1|3 0 1 2|
|       |       |       |3 0 2 2|3 0 2 1|
|       |       |       |3 1 0 0|3 1 0 2|
|       |       |       |3 1 2 2|3 1 2 0|
|       |       |       |3 2 0 0|3 2 0 1|
|       |       |       |3 2 1 1|3 2 1 0|
+-------+-------+-------+-------+-------+
Each digit from 0-3 represents a suit. Each of these is applied to a given element of the set of all hole cards to generate the equivalent sets. Each set is then looked up in the whole set and the corresponding weighted wins ratio is extracted for each. From this table of values, the mean is calculated and inserted back into the weighted wins ratio as the new value for each equivalent.
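A sketch of that look-up for the K-9-4-2 rainbow class (assuming, consistent with the examples above, that a card number is 13*suit + rank with rank 0 the deuce, and that the fifth box of SCRes holds the 24 rainbow suit patterns):

   ranks=. 11 7 2 0                       NB. K, 9, 4, 2
   hv=. /:~"1 ranks +"1 (13 * >4{SCRes)   NB. The 24 equivalent hands, each in standard order
   ix=. AHC i. a.{~hv                     NB. Their positions in the character-encoded AHC
   (+/%#)"1 ix{"1 wtdWinsRatio            NB. Mean score for each number of players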
There is probably a better way to generate these combinations above as they seem to be expansions of integer partitions as explained here and elucidated in great depth here in the work by the late Howard Peelle. However, I had already done this before I realized the general category, so my implementation is this crude thing:
NB.* genSCRes: generate result of all suit combinations for 4 cards in "SCC" order.
genSCRes=: 3 : 0
   sccres=. 4#&>i.4
   sccres=. sccres;,/>((<0 0 0){&.>i.4),"(1 0)/&.>(<i.4)-.&.>(<0 0 0){&.>i.4
   x0=. (<0 0){&.>i.4
   sccres=. sccres,<,/>x0,"1/&.>2#&>&.>(<i.4)-.&.>x0
   sccres=. sccres,<,/>&>,&.>(<2;<i.4) genRest&.>(i.4),.&.>(<i.4)-.&.>i.4
   sccres=. sccres,<(i.!4) A. &> <i.4
   sccres=. ([: ; </.~ { ~ [: /: #/.~)"1 &.> sccres
)

NB.* genRest: generate rest of combinations.
genRest=: 4 : 0"1
   'st allNums'=. x
   (<y),&.>st$&.>allNums-.y
)
The number in each set, given by this global,
   ]SCC=: /:~&.>(,4);1 3;2 2;1 1 2;1 1 1 1   NB. All possible Suit Count Combos ordered ascending.
+-+---+---+-----+-------+
|4|1 3|2 2|1 1 2|1 1 1 1|
+-+---+---+-----+-------+
is in fact all the integer partitions of 4.
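Each one accounts for all four cards:

   +/&>SCC
4 4 4 4 4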
Advanced topics
Invisible Modifiers
Last September the tools to write tacit modifiers were brought back to J after 16 years of only having an explicit option. There are three big questions that we'll take a look at today:
- What are modifiers?
- Why would I write them?
- How would I do this using tacit techniques?
- Starting with the first question: modifiers are the conjunctions and adverbs of J. Unlike verbs, which can only take nouns as arguments and can only produce nouns as a result, modifiers can take any other part of speech as an operand and can make a new part of speech from the components. Adverbs take one operand, to their left, and conjunctions are placed between the two operands that they take. J has lots of adverbs and conjunctions already defined as primitives, and a good place to review them is Nuvoc.
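For example, two primitive modifiers in everyday use:

   +/ 1 2 3 4    NB. The adverb / takes a single operand on its left, here the verb +
10
   ^&3 i.5       NB. The conjunction & sits between its two operands, here the verb ^ and the noun 3
0 1 8 27 64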
- Which brings us to our second question. To put it simply, although J provides a lot of useful primitive modifiers that are all ready to go, from time to time there are combinations of verbs, nouns and modifiers that are not available as primitives. The solution is to write your own. Now no one is expecting you to write a modifier for every pattern that you use in your code, but if you are consistently seeing a pattern that occurs over and over again, only differing in the verbs that might be used, then this is a candidate for a modifier. Let's look at the Do While pattern in J. This can be done with explicit code using the while. control word.
doWhile0=: 4 : 0"0
   r=. x
   i=. y
   while. r>i do.
      i=. +: i
   end.
   i
)
You can see that this is an awful lot of ceremony just to double a few numbers up to a limit, and it is not array-style programming. What J provides, through primitive modifiers such as the Power conjunction ^:, are tools that can be put together to create an effective Do While loop.
doWhile1=: (+:^:(100>])^:_)"0
   doWhile1 2 3 4
128 192 128
This is the sort of thing we expect from J, but it does require us to know the u ^: v ^: _ pattern, where u is the verb to the left and v is the verb to the right. It also requires us to rewrite the whole verb in order to change the operation from doubling to squaring. When we are working with patterns we don't want to rewrite the modifier every time we change a verb. Looking at that pattern, you can see how it might be made into a modifier explicitly, in this case a conjunction.
doWhile2=: {{(u@] ^: v ^:_)"0}}
   100 +: doWhile2 > 2 3 4
128 192 128
   100 *: doWhile2 > 2 3 4
256 6561 256
Even though there is no doWhile primitive, we have now made one. Because doWhile is kind of a long name, let's shorten it to dW before we go tacit.
- The third question, of how to write this tacitly, involves knowing a few primitives that were reintroduced. They are the conjunctions Lev [. and Dex ]. and the adverb Ident ]:. Lev takes two operands and selects the leftmost one; Dex does the same but returns the rightmost one. Ident is used for making adverbs, when there is only one operand, to the left of the adverb. If you used Lev instead of Ident, the modifier would be an adverb, but it would be looking for a right operand; Ident keeps this from happening. Let's write the tacit version:
dW=: ([.@] ^: ]. ^:_)"0
   100 +: dW > 2 3 4
128 192 128
   100 *: dW > 2 3 4
256 6561 256
- Pascal Jasmin has done a lot of work on modifiers and their use in isolating side effects from the rest of a process. For the details, [1] is worth checking out. Based on some of his observations, this simple conjunction can provide the opportunity to initiate a side effect without affecting the final calculation.
sFX=: [.[ ].   NB. A fork that evaluates the right side and then discards it in favour of the left side
   (; +:) sFX (".@:('t=.',":@])) 64   NB. Will assign 64 to the variable t and then process the hook (; +:)64
┌──┬───┐
│64│128│
└──┴───┘
   t
64
   4 (; +:) sFX (echo bind 'left boxed with double the right') 6   NB. Uses the right operand to echo a message, then processes the hook 4(; +:)6
left boxed with double the right
┌─┬──┐
│4│12│
└─┴──┘
This is just the beginning of understanding modifiers in general and tacit modifiers in particular. For further reference on modifiers I recommend chapter 30 of Henry Rich's 'J for C Programmers' and for tacit modifiers the best page in Nuvoc at this time is this one, which not only has the table of allowable tacit combinations, but also a discussion on precedence and using parentheses to separate the right to left grouping of verbs from the left to right grouping of modifiers.
- And one last thing that Remy Clarke discovered when exploring the crossover from explicit to tacit modifiers. Using a combination of [. ]. u and v, it is possible to write modifiers that will take up to 4 operands. An example of this is splitFork, which combines the left tine of the fork working monadically on the left noun with the right tine working on the right noun.
splitFork=: {{([.@[) u (].@])}}   NB. Direct definition must be used to accommodate the explicit u
   4 *: ; splitFork %: 25         NB. u refers to the left verb closest to the modifier; [. refers to the very leftmost verb
┌──┬─┐
│16│5│
└──┴─┘
Learning and Teaching J
Here we look at some current criticisms of software and programming languages, most of which leave some of us feeling unsatisfied.
Teaching How to Code is Broken
This article criticizes the usual method of teaching programming because this method often focusses too much on abstractions with too little concrete detail. Interestingly enough, the author suggests something like teaching how to program a card game as a better way to give students more concrete knowledge that is more applicable than the usual abstractions.
The Software Industry is Broken
Speaking of broken things, as this essay does, why do software "advances" so frequently slow things down and make them more complicated? The author's complaint may be summed up in the statement "We had 4KB of RAM when we landed on the moon, but today it takes 100,000 times more to send your colleague a message."
I think this looks at the continuing bloat of software but only scratches the surface, as the complexity of the underlying languages seems to keep growing without adding much benefit. From the standpoint of someone who worked with APL in the 1970s, I've watched mainstream languages inch toward the simplicity and power of that language for many years, only ever approaching it asymptotically or, worse, diverging into needless complexity required by those languages' unexamined underpinnings.
I was reminded of this when editing an earlier NYCJUG wiki page for a section called "Drowning but Unaware of It". TL;DR: a language like Java is drowning in unnecessary complexity but its practitioners don't seem to realize it. Compare this Java implementation of extended precision to J's, where a calculation with an extended-precision number can be combined with regular numbers almost "transparently". First, the Java:
Decimal total = new Decimal("106838.81");
Decimal a = new Decimal("263970.96");
BigDecimal b = new BigDecimal("879.35");
Decimal c = new Decimal("366790.80");
total.add(a);
total.subtract(b);
total.subtract(c);
System.out.println(total.toString());

which, by the way, returns the result "3139.62".
This corresponds to the far simpler J expression
   ]total=. (106838.81+263970.96-x: 879.35)-366790.80
3139.62
The Case for a Modern Language
This article deploring the state of modern programming languages offers a plethora of examples showing how awkward the simple task of safely converting a character string to a number can be. We could do this in J like this:
cvtChar2NumSafely=: 3 : 0
   assert. 0~:#y=. y-.' '              NB. No empty strings
   assert. y *./ . e. '0123456789_.'   NB. Assuming J-style negative, only numerals and decimal point.
   ".y
)
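A quick check (the second input fails the assertion rather than quietly returning a number, unlike the C strtol example further down):

   cvtChar2NumSafely '42'
42
   cvtChar2NumSafely '42b'   NB. Signals an assertion failure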
However, the author finds fault with examples in Rust:
// pretend that this was passed in on the command line
let my_number_string = String::from("42");

// If we just want to bubble up errors
let my_number: u8 = my_number_string.parse()?;
assert_eq!(my_number, 42);

// If we might like to panic!
let my_number: u8 = my_number_string.parse().unwrap();
assert_eq!(my_number, 42);

// If we're a good Rustacean and check for errors before trying to use the data
if let Ok(my_number: u8) = my_number_string.parse() {
    assert_eq!(my_number, 42);
}
As well as in a language called Zig:
const std = @import("std");
const myNumberString = "42";

// Bubble up the errors just like in Rust, third arg is the radix (10)
const myNumber = try std.fmt.parseInt(u8, myNumberString, 10);

// But we can do better
const myOtherNumber = std.fmt.parseInt(u8, myNumberString, 10) catch 42;

// Or even better
if (std.fmt.parseInt(u8, myNumberString, 10)) |val| {
    // do something with val
    std.debug.print("{d} is THE answer\n", .{val});
} else |err| {
    std.debug.print("Error getting THE answer: {s}\n", .{err});
}
A comparison with C is given to illuminate the danger of doing this too simply:
#include <stdlib.h> // atoi

char *forty_two = "42";
int i = atoi(forty_two);
At least the C example is succinct, if unsafe because it opens the door to mischief caused by dangerous input.
There is an attempt to do a better job in C:
#include <stdlib.h> // strtol
#include <errno.h>  // errno, perror

char *forty_two = "42";
errno = 0; // Initialize errno in case we tripped an error previously
long i = strtol(forty_two, NULL, 10);
if (errno != 0) perror("Error parsing integer from string: ");
However, this allows a value like "42b" to pass without an error.
Cognitive Load Theory
We will finish up with a more encouraging selection, this article about Cognitive Load Theory, which seems to speak to one of J's strengths: its well-engineered terseness combined with its consistency.
The central observation is this:
The central concept in cognitive load theory is that we have limited mental bandwidth for dealing with new information, but no such limitations when dealing with previously mastered material.
This is a consequence of the limitations of short-term memory, something that underpins this theory and which we have brought up many times in the J community as an argument in favor of J's conciseness, the power of its base operations, and the consistency with which they are combined.
One of the findings is that, in preference to solving practice problems, "...studying worked examples (problems, along with detailed solutions) is often more efficient." This motivates a lot of our Beginner's Regattas on this forum, where we lay out solutions to simple problems, showing how the solution was reached. However, this preference tends to reverse as one becomes more proficient in the domain: having a relevant foundation gives us a better basis from which to solve new problems.
A finding we can apply to the study and teaching of J is avoiding the "split-attention effect" - https://pubmed.ncbi.nlm.nih.gov/30973250/ - by emphasizing parallel constructions and groupings.
Part of the summary tells us that "[c]ognitive load theory favors direct instruction, quick feedback and plenty of practice." One of the continuing strengths of interpreted languages is the immediate feedback. This has actually been well-known for decades, if widely ignored in the development of new languages.
Discussion
Bob brought up the "problem-based" learning practiced by medical schools, which is similar to "project-based" learning as a kind of guided example. Also, I know that if I learn something in the course of a project, it is easier to find a good example when I need to do something similar later because I have a larger context in which to place it. This aids recall in the first place and provides debugged examples of use.
Amilcar contributed something from his study of psychology: a well-regarded method for teaching autistic children is something called "errorless learning" - https://www.sciencedirect.com/topics/psychology/errorless-learning. Working on a problem through trial and error is a time-honored technique but it adds to our cognitive load by having to consider various dead-ends, our trials that failed. There is criticism of errorless learning - https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3381647/ - but it makes sense that focusing on what we might call "the happy path" to solving a problem reduces our cognitive load by clearly presenting good techniques.
Looking back to Bob's presentation on modifiers, we see a good example of "chunking" - wrapping together a number of complexities into a high-level reference - as Bob emphasized how these modifiers allow us to generalize complex patterns of code into a general case. Chunking is another well-known way of reducing our cognitive load when solving a problem.