NYCJUG/2014-10-14
array-thinking, code clarity, language as a tool of thought, reading J code, statistics versus computer science, first programming language, learning coding
Meeting Agenda for NYCJUG 20141014
1. Beginner's regatta: another attempt to introduce J: see "Introduction to J for Hacker School". Language as a tool of thought: how do we read J code? See "Code Clarity".
2. Show-and-tell: comparison of some programming languages: see "Excerpts from Rosetta Code Programming Languages Study". Examples of how we think differently with J: see "Array-Thinking by Roger" and "Project Euler: Counting All Rectangles".
3. Advanced topics: Statistics versus Computer Science: see "Statistics needs to fight back against CS".
4. Learning, teaching and promoting J, et al.: what makes a language good as a vehicle for introducing programming? See "A Brief History of Choosing First Programming Languages". Programming becoming more popular? See "[Harvard’s] CS50 Logs Record-Breaking Enrollment Numbers" and "Learning to Write Code". Open areas to exploit with J? See "What’s A Good App-Development Tools or Environment for Children?" and "Examples of R Regression".
Beginner's regatta
We looked at the slides for a talk to introduce J to students at "Hacker School" in Manhattan. Also, we talked about a discussion of what constitutes "code clarity".
Code Clarity
from: Dan Bron <j@bron.us> to: programming@jsoftware.com date: Mon, Jan 13, 2014 at 1:45 PM subject: [Jprogramming] Code clarity
We often say the APL family of languages allow us to use language as a tool of thought. How does this play out in practice? Do we approach reading J programs differently from those written in other languages? If so, how?
These questions occurred to me today while I was knocking together an implementation of a RosettaCode task on reading configuration files. The task is to parse a file formatted like the following:
# This is the fullname parameter
FULLNAME Foo Barber

# This is a favourite fruit
FAVOURITEFRUIT banana

# This is a boolean that should be set
NEEDSPEELING

# This boolean is commented out
; SEEDSREMOVED
Fuller example at [1]. After reading the intro, I copy/pasted the example into a J noun and proceeded to write this:
deb L:0@:(({.~ ; [: < [: ;^:(1=#) ',' cut (}.~>:)) i.&1@:e.&' =')&>@(#~a:&~: > ';#'e.~{.&>)@:(dlb&.>)@:(LF&cut)
This is a verb which takes the configuration text as input and produces a table of name-value pairs as output. My first thought was "wow, I was able to knock that together in literally less than a minute, through simple incremental iterations in the REPL: J is AWESOME".
But then, thinking about posting it, I realized "this is awful, no one's going to be able to read it like this, and it's going to take more work to make it readable than it took to make it actually work".
So that got me thinking about what exactly we mean by J as a notation. And I wondered: how could we use the language to express our thoughts more clearly, and how does that differ from how we write J when we just want to get something done? And is this a different or more difficult problem for J than other languages?
So, how would you write a configuration file parser in J, if clarity were an important concern? I'm interested in not only the actual program, but the reasoning behind the decisions you make.
-Dan

[1] RosettaCode task to read a configuration file: http://rosettacode.org/wiki/Read_a_configuration_file
---
from: William Tanksley, Jr <wtanksleyjr@gmail.com> date: Mon, Jan 13, 2014 at 4:08 PM
Dan Bron < j@bron.us > wrote:
> We often say the APL family of languages allow us to use language as a tool of thought. How does this play out in
> practice? Do we approach reading J programs differently from those written in other languages? If so, how?
I think this is a fantastic question.
I completely agree that it's very easy to write unreadable programs in J. Some people have pushed back that it's easy to write unreadable programs in any language; but I would actually counter that it's easier in APL derivatives.
But I would contend that this is actually a consequence of APL derivatives being developed as a tool of thought rather than a tool of communication (or a tool of command).
Thinking is hard. Communicating is also challenging, but it's not the same as thinking.
I would like to point to the general teaching that APL people give for reading APL code -- what they say (I don't have a link) is that you can best understand APL code by rewriting it, and then comparing what you write with what was originally written. In other words, you learn to THINK about what the author was thinking about, and then you try to understand the WAY the author was thinking.
This reminds me of Chuck Moore's approach to program design, which he called "Thoughtful Programming". He advocated using many prototypes to sort out the decent designs from the unacceptable ones.
My brain is refusing to give me the name of the guy who's developing a thinking system using an APL derivative.
. > -Dan
-Wm
---
from: [[User:Raul Miller|Raul Miller]] <rauldmiller@gmail.com> date: Mon, Jan 13, 2014 at 5:06 PM
I'll counter your suggestion that it's easier to write unreadable code in APL derivatives with an observation that looks to me like a social issue rather than anything intrinsic in the language.
I say this because with minimal training (one class in APL at a community college, and occasional use in other classes, like biology), I was able to debug and improve other people's APL code in a large codebase.
From my point of view the thing that has been holding back APL is that it is too valuable and too productive. Consider, for example, the impact of Arthur Whitney on Wall Street. Things like that tend to drive up the price of the implementations which makes business people nervous. And when a business tries to switch away, and fails? That makes them even more nervous.
Meanwhile, most schools do not teach APL. If there were a large supply of programmers, the above issues would not be such a problem. (Instead, we'd have the problem of lots of code much of which would not address most people's needs, sometimes colloquially called "bad code" - popular languages suffer this issue and people mistakenly attribute that kind of problem to the language, also.)
The problem with APL is mostly a lack of source material, for people who might be interested in using the language. This leads to people being intimidated and also leads to a lack of implementations.
So that probably means that admirable books and examples would help.
Thanks,
---
from: William Tanksley, Jr <wtanksleyjr@gmail.com> date: Mon, Jan 13, 2014 at 6:00 PM
Raul Miller < rauldmiller@gmail.com > wrote:
. > I'll counter your suggestion that it's easier to write unreadable code in APL derivatives with an observation that looks to me like a social issue rather than anything intrinsic in the language.
Fascinating and very plausible. But I wasn't intending to talk about ease of writing unreadable code; I was intending to talk about the difference between a language as a tool of thought and as a tool of communication. Most programming languages are not really apt as tools of thought; they get unreadable when they're used as tools of _command_ rather than communication. Courses in programming tend to attempt to teach people to use them for communication, and code quality seems to increase.
. > I say this because with minimal training (one class in APL at a community college, and occasional use in other classes, like biology), I was able to debug and improve other people's APL code in a large codebase.
Cool. I wish I'd done that. I loved APL since I picked up a text on it at the school library -- but I failed to press through the character set difficulties, so I wasn't able to use that fascination.
. > Meanwhile, most schools do not teach APL. If there were a large supply of programmers, the above issues would not be such a problem. (Instead, we'd have the problem of lots of code much of which would not address most people's needs, sometimes colloquially called "bad code" - popular languages suffer this issue and people mistakenly attribute that kind of problem to the language, also.)
Hm. I'm not sure I agree with that characterization of what makes code bad. Note that peer review does create a large increase in various positive project metrics... it's hard not to attribute that to an increase in code quality, even if we don't know concretely what that means. I suppose, to borrow your words, objectively bad code is "code that doesn't address the original programmer's needs". Grin.
. > The problem with APL is mostly a lack of source material, for people who might be interested in using the language. This leads to people being intimidated and also leads to a lack of implementations. So that probably means that admirable books and examples would help.
That sounds delightful. No doubt different approaches would help. And this mailing list, of course, helps.
. > Raul
---
from: Joe Bogner <joebogner@gmail.com> date: Mon, Jan 13, 2014 at 8:32 PM
Great question!
. > So, how would you write a configuration file parser in J, if clarity were an important concern?
I find it helpful to identify the audience when writing - code or non-code. I then try to write for the audience. If there won't be an audience other than a computer and I will never read it again, then I'm not so worried about clarity. Otherwise, it's a real concern. My audience typically includes a lot of imperative/procedural programmers. I don't know if that's completely due to experience or how brains are actually wired for some people. I just know that's what I learned growing up. It's stuck with me for 20 years. I think of myself as a logical, "step by step" thinker.
As such, I would probably write it in a style that scans the lines and acts upon the lines. I think the PicoLisp example is close to how I would write it. As an aside, I have a few years of experience with PicoLisp.
(de rdConf (File)
   (pipe (in File (while (echo "#" ";") (till "^J")))
      (while (read)
         (skip)
         (set @ (or (line T) T)) ) ) )
Or I would get clever and translate the config file into something that can be evaled in the native language.
If I compare that to your J implementation
. > deb L:0@:(({.~ ; [: < [: ;^:(1=#) ',' cut (}.~>:)) i.&1@:e.&' =')&>@(#~
. > a:&~: > ';#'e.~{.&>)@:(dlb&.>)@:(LF&cut)
This J implementation feels more like code golf or a compressed string. How many tokens/operations are included in it? I won't count, but I am fairly sure it's more than the (pipe, in, while, echo, till, read, skip, set, or, line) 9 in the PicoLisp example.
When reading a long J string or an entry in the obfuscated C code contest, I try to recognize patterns or operations. Having used J for about 6 months, I can recognize probably about half the operations in that string without having to look them up. That's progress. It still feels like a "run on sentence" which is harder to read than short sentences.
. > and it's going to take more work to make it readable than it took to make it actually work".
That's normally true for any type of writing. The difference between a first draft and final version is a fair amount of work. Taking a stream of consciousness and turning it into something other people understand takes some effort.
I think there's a fine balance between tacit expressions and clarity. It may be my level of inexperience with the language. However, I wonder if I've put in as much time on it as any intro-level APL programmer. Are there any conventions in the language for # of tokens, trains, etc. for a readable sentence? There might be some relation to phone numbers and the Magical Number Seven, Plus or Minus Two [1] of the number of objects a brain can hold in working memory. That J expression exceeds it for me as my brain tries to parse it.

[1] http://en.wikipedia.org/wiki/The_Magical_Number_Seven,_Plus_or_Minus_Two
---
from: Pascal Jasmin <godspiral2000@yahoo.ca> date: Mon, Jan 13, 2014 at 9:32 PM
. > deb L:0@:(({.~ ; [: < [: ;^:(1=#) ',' cut (}.~>:)) i.&1@:e.&' =')&>@(#~
. > a:&~: > ';#'e.~{.&>)@:(dlb&.>)@:(LF&cut)
completely untested if the following is equivalent, but:
3 : ' deb leaf (({.~ ; [: < [: ;^:(1=#) ',' cut (}.~>:)) i.&1@:e.&' =')&>@(#~> a:&~: > ';#' e.~ {.&>) dlb each LF cut y'
is a little better? (took out "unnecessary" @:, and spaced out verb trains)
further clarity could be achieved with verb names:
. docomment
. doassign
. doboolean
which I am guessing would take care of replacing the longer trains (or parts thereof) in the middle.
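By way of illustration (an added sketch, not Pascal's actual code: the verb names, and the simplified rule of splitting each data line at its first blank, are assumptions, and comma-separated values are not handled), a factored version along these lines might read:

trimLines=: dlb each @ (LF&cut)          NB. box the lines; drop leading blanks in each
dropEmpty=: #~ a: ~: ]                   NB. discard blank lines
dropNotes=: #~ ';#' -.@e.~ {.&>          NB. discard lines starting with ; or #
parseLine=: ({.~ ; dlb@(}.~)) i.&' '     NB. name ; value, split at the first blank
readCfg=:   [: > parseLine each @ dropNotes @ dropEmpty @ trimLines

With these in place, readCfg txt, where txt holds the configuration text, should yield the same kind of two-column name-value table.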
---
from: Devon McCormick <devonmcc@gmail.com> date: Mon, Jan 13, 2014 at 9:42 PM
Dan's question is relevant and, for me, very timely as I am preparing a talk for a week from now and am wrestling with how to convey the notion of J as a tool of thought. It's hard because what Dan wrote, for instance, looks like gibberish unless you know enough J to make sense of it but who would want to learn such gibberish?
For my own practice w/tacit reading, I parsed much of Dan's code in my head and found it fairly readable, up to a point.
Here's what I did - first I confirmed that it does what it claims to do:
   eg=. 0 : 0
# This is the fullname parameter
FULLNAME Foo Barber
# This is a favourite fruit
FAVOURITEFRUIT banana
# This is a boolean that should be set
NEEDSPEELING
# This boolean is commented out
; SEEDSREMOVED
)
   nameValPair=: deb L:0@:(({.~ ; [: < [: ;^:(1=#) ',' cut (}.~>:)) i.&1@:e.&' =')&>@(#~a:&~: > ';#'e.~{.&>)@:(dlb&.>)@:(LF&cut)
   nameValPair eg
+--------------+----------+
|FULLNAME      |Foo Barber|
+--------------+----------+
|FAVOURITEFRUIT|banana    |
+--------------+----------+
|NEEDSPEELING  |          |
+--------------+----------+
OK - that's good. Now how does it achieve this? Starting my evaluation on the right and working leftward, here's what I was able to figure out.
nameValPair=: deb L:0@:(({.~ ; [: < [: ;^:(1=#) ',' cut (}.~>:)) i.&1@:e.&' =')&>@(#~a:&~: > ';#'e.~{.&>)@:(dlb&.>)@:(LF&cut)

...@:(LF&cut)
   ^ Cutting into lines.

...(dlb&.>) ...
   ^ Deblanking the lines

(#~a:&~: > ';#' e.~ {.&>) @: ...
 ^ Removing    ^^ lines starting with either of these characters
Actually, not removing the comment lines: more strictly, keeping the non-comment lines.
Continuing leftward,
... (}.~>:)) i.&1@:e.&' =')&>@ ...
             ^ Increment where we find the first space or equals sign.
This tells me we're missing a test case, so I'll check if this is doing what I think it is:
   eg=. eg,0 : 0
# Try NAME=value
NAME=value
)
   nameValPair eg
+--------------+----------+
|FULLNAME      |Foo Barber|
…
+--------------+----------+
|NAME          |value     |
+--------------+----------+
. OK - looks good so far. Continuing...
... ',' cut ...
    ^ Cutting comma-delimited items
Looks like we need another test case:
   eg=. eg,0 : 0
# Test comma-delimited value
COMMADELIMITED Here,are,several,values
)
   nameValPair eg
+--------------+-------------------------+
|FULLNAME      |Foo Barber               |
…
+--------------+-------------------------+
|COMMADELIMITED|+----+---+-------+------+|
|              ||Here|are|several|values||
|              |+----+---+-------+------+|
+--------------+-------------------------+
OK - continuing...
...;^:(1=#) ...
   ^ Simplify if only 1 value

... (({.~ ; [: < [:...
      ^ First item w/other(s) following...
Finally,
deb L:0@: ...
^ Remove blanks at lowest level?
I'm not quite sure about this w/o testing, but I think that's a fairly accurate explanation of the code, perhaps having sloughed the exact meaning of a few of the "@:"s.
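A quick check of that reading (added here, not part of the original message): deb L:0 applies the standard library's delete-extra-blanks verb inside each box, at the deepest level of boxing:

   deb L:0 ('  FULLNAME   ' ; '  Foo    Barber  ')
+--------+----------+
|FULLNAME|Foo Barber|
+--------+----------+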
This brings up the dichotomy between writing and reading code: one of the important reasons J is a tool of thought is because it gives us this powerful, logical, consistent vocabulary for talking about computational concepts.
As an example of how this affects our thinking, I'll mention a time when a colleague asked me how I'd written a file differencer in APL.
At the time he asked, I didn't have the code available and had been working mostly in ksh (Korn shell scripting) at that time. I outlined the method I thought I'd used but only later realized I had completely mis-informed him: I told him a method I would have used working with Unix shell tools, not how I'd actually done it in APL which was something like this (in J):
   'fl0 fl1'=. <;._2&.>CR-.~&.>fread (<'\directory\'),&.>'fl0.txt';'fl1.txt'
   'fl0 fl1'=. deb&.>&.>fl0;<fl1
   fl0 -. fl1
   fl0 -.~ fl1
My approach to the algorithm was changed by the toolset I had foremost in my mind at that time.
---
from: [[User:Raul Miller|Raul Miller]] <rauldmiller@gmail.com> date: Mon, Jan 13, 2014 at 10:28 PM
On Mon, Jan 13, 2014 at 8:32 PM, Joe Bogner < joebogner@gmail.com > wrote:
. > PicoLisp
. >
. > (de rdConf (File)
. >    (pipe (in File (while (echo "#" ";") (till "^J")))
. >       (while (read)
. >          (skip)
. >          (set @ (or (line T) T)) ) ) )
Is that complete?
I learned Lisp back in high school, and I've used drracket and emacs and other such lisp environments, but never learned picolisp. I tried to install picolisp but it would not build for me and I do not feel like debugging the source for picolisp just for this message.
My impression, though, is that a J implementation like what you have written would look something like this:
conf=: a:-.~(#~ 1 -.&e. '#;'&e.S:0)<;._2 fread file
In other words, read the file as lines, removing blank lines and comment lines.
If all you are doing is saving the unparsed lines then we should expect simpler code. But maybe I have missed a subtlety of picolisp?
I get that @ is a wild card, but I do not understand the mechanism well enough to say whether your implementation is correct, nor do I know whether (while (read) .. is stashing the read result somewhere or what. Nor do I know if your skip is assuming a rigid file structure or is allowing free-form comments in the config file.
. > If I compare that to your J implementation
. >
. > > deb L:0@:(({.~ ; [: < [: ;^:(1=#) ',' cut (}.~>:)) i.&1@:e.&' =')&>@(#~
. > > a:&~: > ';#'e.~{.&>)@:(dlb&.>)@:(LF&cut)
. >
. > This J implementation feels more like code golf or a compressed string. How many tokens/operations are included in it? I won't count, but I am fairly sure it's more than the (pipe, in, while, echo, till, read, skip, set, or, line) 9 in the PicoLisp example.
I count 64 tokens in the J implementation and 54 tokens in your PicoLisp example. I'm not sure why you have implied that parentheses are not tokens but I do not think they qualify as whitespace? We could get further into parsing and punctuation issues, but I'm not sure whether that would be relevant.
. > When reading a long J string or an entry in the obfuscated C code contest, I try to recognize patterns or operations. Having used J for about 6 months, I can recognize probably about half the operations in that string without having to look them up. That's progress. It still feels like a "run on sentence" which is harder to read than short sentences.
I also usually prefer shorter sentences. Not always, but I'd probably try to split Dan's code into two or three lines. Posting a fair bit of code to email has been an influence there.
I imagine I would also favor a shorter implementation than what Dan has done here. For example, in his code I see the phrase e.&' =' but I see no equals signs in the config file nor in the specification on the companion task (whose J entry should perhaps be simplified?).
. > I think there's a fine balance between tacit expressions and clarity. It may be my level of inexperience with the language. However, I wonder if I've put as much time on it as any intro-level APL programmer. Are there any conventions in the language for # of tokens, trains, etc for a readable sentence?
Personal taste?
. Thanks,
---
from: Joe Bogner <joebogner@gmail.com> date: Mon, Jan 13, 2014 at 11:18 PM
Yes, it is complete. I didn't write it or test it as it was already posted to rosettacode. I will explain how it works assuming there is interest.
It uses some uncommon tricks. It leverages the read function which is the same function used in the repl to read input characters. So the goal is to take the file and skip any comments and then pass it on to set the variable with the key and value.
(pipe (in File (while (echo "#" ";") (till "^J")))
Reads the file and echoes until it encounters a comment character, and then reads till EOL.
. > (while (read)
. >    (skip)
. >    (set @ (or (line T) T)) ) ) )
Then read those echoed characters. read gets the first symbol, the key. Skip moves the input stream ahead past the space, or nothing. Set assigns the variable @, which is the result from the last operation (read, i.e. the key), with the value from the rest of the line, or T if it is blank (for the boolean example in the config).
My brain has been trained to think of parens as whitespace. It didn't start that way. I can see why you may consider them tokens. I was also counting unique function/operation tokens, not characters. The idea being if I only have 4 English words with 3 characters each on a line, that is easier for my brain to parse than 5 operations using 2 ASCII symbols whose meaning I don't recognize.
However, as my J vocabulary improves it becomes less of an issue. I can parse i. or e. as fast as a function called idx or el.
Line length is still important I think. Also a functional style with splitting up the train may help reusability, comprehension, and may help identify small areas to refactor. Those small topics like "filter out lines starting with a comment character" can get lost to me in a long line of compound operations. Again, some balance and personal preference and familiarity.
I became interested in picolisp for its speed, conciseness and expressiveness. Many of the same attributes as J. It is almost always in the shortest solutions on rosettacode too. Happy to help resolve your build issue off the list if you are interested.
---
from: [[User:Raul Miller|Raul Miller]] <rauldmiller@gmail.com> date: Tue, Jan 14, 2014 at 12:25 AM
Ok, I think I understand.
The basic issue, here, seems to be that PicoLisp is stream oriented and this is a stream oriented task. No one in the J community has cared enough to build a stream oriented library for J. J has enough to do stream oriented design for academic purposes, but ... Consider xml/sax as an example of how one might approach streams in J – call out to some standardized implementation and instead focus on what is unique to the application.
Meanwhile, for a J programmer, words like [: & @ [ and ] occupy a role not too different from parentheses for a Lisp programmer. Parentheses might seem simple, but in fact there are a fair number of contextual rules that one must learn before really understanding their significance. Do the parentheses delimit a lambda definition? An argument list? Do they denote a function call? Some other special form? That, I think, is the issue you were focusing on when counting tokens - how many special rules does a person have to understand to parse the code. J has 9 parsing rules, each roughly at the same complexity level as a lisp-like lambda. Explicit contexts add a few more, though that's mostly syntactic sugar.
Meanwhile, J is and is not fast. It can be fast, but only if you design your code using big, homogeneous data structures to represent large data sets.
I'm not sure if I am making sense, so I suppose I should write some code instead.
Here's an implementation of a config file reader which should perform reasonably well:
ChrCls=: '#;';(' ',TAB);LF;a.-.'#; ',TAB,LF  NB. comment, space, line, other
tokens=: (0;(0 10#:10*".;._2]0 :0);<ChrCls)&;:
1.0 0.0 0.0 2.1  NB. 0: skip whitespace (start here)
1.0 1.0 0.0 1.0  NB. 1: comment
3.3 4.3 5.2 2.0  NB. 2: word
3.0 3.0 5.1 3.0  NB. 3: comment after word
3.0 4.0 5.1 2.1  NB. 4: space after word
1.3 0.3 0.3 2.1  NB. 5: line end after word
)
readConf=: ({.,<@(;:inv)@}.);._2@tokens@fread
This uses a state machine to deal with all the low level character munging and then forms the result into a two column table where the first column is the name and the remainder is whatever followed that (with redundant whitespace removed).
Is it readable? Not if you are not familiar with the language. In fact, this took me about half an hour to write. And I would not bother doing something like this, normally, unless performance really mattered (which implies large file size). But, if performance does matter, this approach should behave reasonably well and (at least for J) should have much better performance than an implementation which loops or otherwise uses separate primitives for separate states.
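A hypothetical usage sketch (added; the scratch-file name is made up): write the sample configuration to a file and run the parser on it:

   cfg=: 0 : 0
# This is the fullname parameter
FULLNAME Foo Barber
FAVOURITEFRUIT banana
NEEDSPEELING
; SEEDSREMOVED
)
   cfg fwrite '/tmp/rosetta.conf'   NB. fwrite returns the number of bytes written
   readConf '/tmp/rosetta.conf'     NB. should produce the two-column name/value table described above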
That said, I should also note that the idea of using decimal fractions for state machine instructions was Ken Iverson's. I'll not go into why sequential machine was not implemented that way, because I feel guilty about it.
Thanks,
---
from: Don Kelly <dhky@shaw.ca> date: Tue, Jan 14, 2014 at 1:39 AM
I have had considerable experience with APL.
What I have found is that it is important, more than with many other languages, to document the code. With J it is even more important. This is a consequence of the power involved in a statement combined with what the hell did I mean when I wrote it…
Other languages are more concerned with the "computer science structure" than the problem that is being approached (hence APL and J are overall considered as "lesser").
When I think of a problem, I look at what is wanted and what is known and the tools to get to the former from the latter. I don't give a damn about defining something as integer vs floating point, etc. If a number is close to integer, treat it as such; APL and J do this, and the array processing capabilities help with dealing with the problem of interest. An example is +/ (somelist_of_numbers).
The downside is that one can become overly focused on tight tacit programs and it might be better to have a documented explicit program filed somewhere, even in the same script file, that spells out the reasoning behind the tacit version.
Reading J is different from other languages because it removes much of the overhead from the tiddly details that can be handled more efficiently in the background by the idiot box.
. Don
---
from: Roger Hui <rogerhui.canada@gmail.com> date: Tue, Jan 14, 2014 at 2:29 AM
I find that it helps to describe the J or APL code as if you are writing a paper about it for the expert. For example, see the essays in the J wiki. If carried to the extreme, it becomes the Literate Programming (https://en.wikipedia.org/wiki/Literate_programming) of Knuth.
. In such writing it is unnecessary to describe the working of primitives because they can be looked up in the dictionary or reference manual.
---
from: Ian Clark <earthspotty@gmail.com> date: Tue, Jan 14, 2014 at 9:40 AM
The problem of human vs computer readability resurfaced for me recently when planning a J paper for MagPi, the how-to journal for the Raspberry Pi.
It took no imagination to predict the response of the average (computer-literate) reader on seeing J code for the first time. I hoped to forestall it with a ref to:
. "The alleged unreadability of J - and what to do about it" . http://www.jsoftware.com/jwiki/Essays/unreadability
which essentially covers the ground of this topic for the "educated layman".
Problem not solved, however. I belatedly realise that the article's reading age needs to be (techie) 14-18 for MagPi, whereas the aforementioned essay has a reading age of 50+ (and maybe 70+ :-\ )
---
from: Don Guinn <donguinn@gmail.com> date: Tue, Jan 14, 2014 at 9:47 AM
It's always been a mystery to me why it is OK to spend several hours (or sometimes days) analyzing several pages of FORTRAN or C but when reading a few lines of APL or J which do the same thing I must grasp it in a few minutes or I start feeling overwhelmed. But I have written similar "run-ons". Why? Because I can set up test data and add a little at a time to a line or a few lines, executing it and looking at the results as I go. I have to force myself to break that monster up into more readable chunks. I can't do that in other languages as I have to compile or whatever, so I tend to write all the code then start debugging.
Then comes documenting. I put a brief description of what it's for and expected arguments. Then add references and why the code does what it does. I try not to repeat describing what the code does. But then I end up with comments many times larger than the code. That just seems weird!
---
from: [[User:Raul Miller|Raul Miller]] <rauldmiller@gmail.com> date: Tue, Jan 14, 2014 at 9:48 AM
It might be interesting to try it on a large file.
Here's another state machine implementation that might perform better:
StateMachine=: 2 :0
(m;(0 10#:10*".;._2]0 :0);<n)&;:
)

CleanChrs=: '#;';(' ',TAB);LF;a.-.'#; ',TAB,LF  NB. comment, space, line, other
clean=: 1 StateMachine CleanChrs
1.0 0.0 0.0 2.1  NB. 0: skip whitespace (start here)
1.0 1.0 0.0 1.0  NB. 1: comment
3.3 4.0 6.0 2.0  NB. 2: word
3.0 3.0 6.1 3.0  NB. 3: comment after word
3.3 5.3 6.0 2.0  NB. 4: first space after word
3.0 5.0 6.1 2.1  NB. 5: extra space after word
1.3 0.3 0.3 2.0  NB. 6: line end after word
)
NB. .0 continue, .1 start, .3 end

SplitChrs=: (' ',TAB);a.-.' ',TAB  NB. space, other
split=: 0 StateMachine SplitChrs
0.6 1.1  NB. start here
2.3 1.0  NB. other (first word)
0.6 3.1  NB. first space
3.0 3.0  NB. rest
)
NB. .6 error

readConf=: split;._2@clean@fread
I think the performance problem you observed is because the first version started boxing too early. Here, I save boxing till the end, and create fewer boxes, both of which should reduce overhead.
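One way to test that claim (an added sketch; the names readConf1 and readConf2 for the two versions, and the large test file, are assumptions):

   timespacex 'readConf1 ''/tmp/big.conf'''   NB. time and space for the first version
   timespacex 'readConf2 ''/tmp/big.conf'''   NB. ... and for the revised one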
Thanks,
---
from: jph.butler@mailoo.org date: Tue, Jan 14, 2014 at 10:33 AM
The difficult part in my experience is describing the structure and contents of my inputs and outputs. Usually, I provide sample datasets, and show what verbs to run on them. But I don't have too many readers of my code so I am not sure how practical that really is.
I was wondering whether naming the main data structures encountered would be useful?
Show-and-tell
We looked at a study of programming languages that reached the following conclusions:
. • Functional and scripting languages enable writing more concise code than procedural and object-oriented languages.
. • Languages that compile into bytecode produce smaller executables than those that compile into native machine code.
. • C is hard to beat when it comes to raw speed on large inputs. Go is the runner-up, and makes a particularly frugal usage of memory.
. • In contrast, performance differences between languages shrink over inputs of moderate size, where languages with a lightweight runtime may have an edge even if they are interpreted.
. • Compiled strongly-typed languages, where more defects can be caught at compile time, are less prone to runtime failures than interpreted or weakly-typed languages.
Array-thinking by Roger
from: Johann Hibschman <jhibschman@gmail.com> to: Programming forum <programming@jsoftware.com> date: Thu, Sep 25, 2014 at 9:06 AM subject: [Jprogramming] Repeated rolling dice
Hi all, For fun, I've been running some statistics for a game with an unusual rule for rolling dice: if a 6 is rolled, roll again and add the result, repeating on any subsequent 6s. I wanted to implement this in J, collecting all the individual rolls (rather than just the sum.) It seems like there should be a more clever and elegant way to do this, but this is what I have:
NB. Simple roll.
roll0 =: >:@?

NB. This seems to work, but it's not very clever.
roll =: 3 : 0
r =. >:?y
if. r=y do. r=. r,(roll y) end.
r
)

NB. Attempt at iterating via power. Fails because repeats signal termination.
roll0^:(6&=)^:(<_) 6

NB. Attempt at iterating via agenda. Not even close yet.
NB. ]`(]+$:) @. (=&6)   NB. where to stick in the roll?

This gives what I expect:

   roll"0 ] 10#6
6 1 0
3 0 0
3 0 0
2 0 0
5 0 0
2 0 0
6 6 2
2 0 0
1 0 0
6 3 0
But is there a better way to do this? Also, are there any known issues with the RNG? I've not gathered enough statistics to prove it, but the results look clumpier (more identical values in a row) than I expect. Now, I know that's a common cognitive bias, so it may just be me, but is there a discussion of the quality of the RNG somewhere?
Thanks, Johann
---
from: David Lambert <b49p23tivg@stny.rr.com> date: Thu, Sep 25, 2014 at 11:39 AM
Help! For you email skimmers please jump to the agenda version mystery.
Control the RNG
Under system global parameters, you can choose the random number generator and state: http://www.jsoftware.com/docs/help701/dictionary/dx009.htm starting from 9!:42.

Power version

Here's a working tacit version using power. This roll always uses a six-sided die.
While =: conjunction def 'u^:v^:_'
roll =: >:@:?@:6:
game =: [: }. (, roll)While(6={:)
game&> 36#6  NB. play the game 36 times
Agenda version
Where does roll go? Again, always with a six-sided die, rolling the die needs to be an argument to the self-reference. Thus $:@roll, or as the fork ([: $: roll).
roll =: >:@:?@:6:
g=:[`(, ([: $: roll))@.(6 = {:)
0 1 }. g">36#6  NB. play 36 games.
Mystery: if I substitute behead for same in 0 { the_gerund, I get curtailing rather than beheading.
g_behead=: }.`(, ([: $: roll))@.(6 = {:)
   g_behead">5#6
6 0
6 6
6 0
6 0
6 6
---
from: 'Pascal Jasmin' via Programming <programming@jsoftware.com> date: Thu, Sep 25, 2014 at 11:50 AM
this works
   (, >:@?@6:)^:((0=#) +. 6={:)^:_ i.0
   ([: +/ (, >:@?@6:)^:((0=#) +. 6={:)^:_) i.0
11
---
from: Aai <agroeneveld400@gmail.com> date: Thu, Sep 25, 2014 at 11:51 AM
e.g.
]`(,$:) @. (=&6)@roll0&> 10$6
---
from: Roger Hui <rogerhui.canada@gmail.com> date: Thu, Sep 25, 2014 at 12:32 PM
Compared to the other solutions posted so far, I believe it is more straightforward to do a long sequence of dice rolls and then split it up according to your specified criteria. The split points are where the _previous_ roll is _not_ a 6. Thus:
   x=: 1+1e4 ?@$ 6   NB. long sequence of dice rolls
   x
5 4 6 1 5 4 5 6 1 3 5 1 2 3 3 6 5 2 4 5 6 5 5 3 6 2 4 4 1 3 6 4 2 1 5 6 1 6 6 6 6 6 5 4 6 1 2 3 2 5 1 2 1 3 2 ...
   (}:1,6~:x) <;.1 x
┌─┬─┬───┬─┬─┬─┬───┬─┬─┬─┬─┬─┬─┬───┬─┬─┬─┬───┬─┬─┬───┬─┬─┬─┬─┬───┬─┬─┬─┬─...
│5│4│6 1│5│4│5│6 1│3│5│1│2│3│3│6 5│2│4│5│6 5│5│3│6 2│4│4│1│3│6 4│2│1│5│6...
└─┴─┴───┴─┴─┴─┴───┴─┴─┴─┴─┴─┴─┴───┴─┴─┴─┴───┴─┴─┴───┴─┴─┴─┴─┴───┴─┴─┴─┴─...
...──┬───────────┬─┬───┬─┬─┬─┬─┬─┬─┬─┬─┬─┬...
... 1│6 6 6 6 6 5│4│6 1│2│3│2│5│1│2│1│3│2│...
...──┴───────────┴─┴───┴─┴─┴─┴─┴─┴─┴─┴─┴─┴...
---
from: 'Pascal Jasmin' via Programming <programming@jsoftware.com> date: Thu, Sep 25, 2014 at 1:54 PM
there's a small issue with it, (if last roll is a 6)
   (}:1,6~:x) <;.1 x=. 6 1 6 6 2 3 6
┌───┬─────┬─┬─┐
│6 1│6 6 2│3│6│
└───┴─────┴─┴─┘
---
from: Roger Hui <rogerhui.canada@gmail.com> date: Thu, Sep 25, 2014 at 2:49 PM
Just make sure that n is sufficiently large so that, when you run (}:1,6~:x) <;.1 x=: 1+n ?@$ 6, there are at least m complete sets (where a set cannot end on a 6).
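One way to apply that advice (a sketch added here, not Roger's code): oversample, discard a possibly incomplete trailing set, and keep the first m games:

   m=: 1000
   x=: 1 + 1e4 ?@$ 6                NB. far more rolls than m games require
   games=: (}:1,6~:x) <;.1 x        NB. split after every non-6
   m {. games #~ 6 ~: {:@> games    NB. drop any set ending in 6, then take the first m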
---
from: [[User:Raul Miller|Raul Miller]] <rauldmiller@gmail.com> date: Thu, Sep 25, 2014 at 3:10 PM
That is very close to what I came up with, for the case where we want only a single value from our result:
d6=:1 + ? bind 6
repd6=: [:+/(,d6)^:(6={:)@d6
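A usage note added here: repd6 ignores its argument (d6 is bound to a constant 6), so a rank-0 application yields independent totals:

   repd6"0 i.10   NB. ten independent totals; output is random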
Here's a variation on Roger Hui's approach, for the case where we want N values from our result:
d6s=: 1 + [: ? #&6
bulk=:{.#&0(],~(+/;.1~1:}:@,0~:6&|)@(],d6s@[))^:(0=6&|@{:@{.)^:_~]
Example use:
   bulk 20
5 5 5 4 3 3 2 3 3 9 1 4 16 3 3 1 3 17 3 4
This would probably be much clearer if implemented explicitly rather than tacitly, and probably would be more efficient also. So:
bulkd6s=:3 :0
  r=. i. 0
  while. y >: #r do.
    r=. r, d6s y
    mask=. }: 1, 0~:6|r
    r=. mask +/;.1 r
  end.
  y{.r
)
But statistically speaking, this is still not as efficient as it could be. I think we'd do better with:
bulkd6=:3 :0
  r=. i. 0
  while. y >: #r do.
    r=. r, d6s 2*y
    mask=. }: 1, 0~:6|r
    r=. mask +/;.1 r
  end.
  y{.r
)
Do you see why this tends to be more efficient?
---
from: Roger Hui <rogerhui.canada@gmail.com> date: Thu, Sep 25, 2014 at 3:12 PM
"This" is also an example where the problem statement, including the model and the display of its result, heavily influenced the solutions. To avoid this kind of bias in this case, perhaps you might say:
Roll a die. If you don't get a 6, stop; if you do get a 6, roll the die again. Do m trials of this experiment and report all the dice rolls. But I suspect this statement is influenced by what I know about the solution.
On Thu, Sep 25, 2014 at 10:40 AM, Devon McCormick < devonmcc@gmail.com > wrote:
. > This looks like a good addition to my growing set of "array-thinking" examples.
---
The following could also serve as one of these examples.
Project Euler: Counting All Rectangles
from: Jon Hough <jghough@outlook.com> to: "programming@jsoftware.com" <programming@jsoftware.com> date: Tue, Oct 7, 2014 at 12:37 AM subject: [Jprogramming] Project Euler 85, Python and J
This problem is not really conceptually hard, but I am struggling with a J solution. I have solved it in Python:
def pe85(larg, rarg):
    count = 0
    llist = range(1, larg+1)
    rlist = range(1, rarg+1)
    for l in llist:
        for r in rlist:
            count += l*r
    return count

if __name__ == "__main__":
    # test for 2x3 grid, as in question.
    k = pe85(2,3)
    print str(k)
    l1 = range(1,200)
    l2 = range(1,200)
    bestfit = 10000
    area = 0
    for i in l1:
        for j in l2:
            diff = abs(2000000 - pe85(i,j))
            if diff < bestfit:
                area = i*j
                bestfit = diff
    print "AREA is "+str(area)
The above script will give the final area of the closest fit to 2 million. (The python code may not be the best). Also I tested all possibilities up to 200x200, which was chosen arbitrarily(~ish).
Next, my J. I got the inner calculation OK (i.e. see the function pe85 above). In J I have:
pe85 =: +/@:+/@:((>:@:i.@:[) *"(0 _) (>:@:i.@:])) NB. I know, too brackety. Any tips for improvement appreciated.
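As a quick check (added, not in Jon's message), the J verb agrees with the Python pe85(2,3) for the 2x3 example grid:

   2 pe85 3
18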
But from here things get tricky. If I do the calculation over 200x200 possibilities I end up with a big matrix, of which I have to find the closest value to 2 million, of which then I have to somehow get the (x,y) values of and then find the area by x*y.
The main issue is getting the (x,y) from the best fit value of the array.
i.e. If I do pe85"(0)/~ 200, I get a big array, and I know I can get the closest absolute value to 2 million but then I need to get the original values to multiply together to give the best fit area. Actually I have bumped into this issue many times. It is easy enough in a 1-d array, just do:
(I. somefunc ) { ])
or similar to get the index. But for two indices the problem is beyond me at the moment. Any help appreciated.
Regards,
Jon
---
from: Tikkanz <tikkanz@gmail.com> date: Tue, Oct 7, 2014 at 7:41 AM
Note that 200 x 200 is a bit of an overkill given 3x2 = 2x3. The following chooses the lower triangular of a matrix of the different sized rectangles to investigate.
getSizes=: ,@(>:/~) # [: ,/ ,"0/~
Given the sides of a rectangle you can count the number of rectangles as follows:
countRects=: 4 %~ */@(, >:)
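For example (a check added here, not part of the original message), the 3x2 grid from the problem statement gives the expected count:

   countRects 2 3
18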
Now get the index of the rectangle size with a count closest to 2 million
idxClosest=: (i. <./)@(2e6 |@:- ])
Putting it together
*/@({~ idxClosest@:(countRects"1)) getSizes >: i.200
---
from: Devon McCormick <devonmcc@gmail.com> date: Tue, Oct 7, 2014 at 11:30 AM
Hi –
"countRects" seems like a bit of a leap. I think I understand "4%~" because you're overcounting by 4 rotations, but I don't comprehend the magic behind "*/@(,>:)". I see that "(,>:)" concatenates the shape to its increment, e.g. 2 3 3 4 for the input 2 3, but what's the rationale behind this?
Thanks,
Devon
---
from: Devon McCormick <devonmcc@gmail.com> date: Tue, Oct 7, 2014 at 11:50 AM
To answer Jon's last question, if "nr" is my matrix of results from "countRects", then this gives me the index of the lowest (closest to 2e6) in the raveled matrix:
   (3 : '(] i. <./) ,y') 2e6(-|)nr
499
If we think of the indexes of a table as being a base ($table) number, we can decode the vector index into the table co-ordinates this way:
   ($nr) #: (3 : '(] i. <./) ,y') 2e6(-|)nr
19 24
Assembling this using my "13 : " crutch to give a tacit answer:
   13 : '($y) #: ([: (] i. <./) ,) y'
$ #: [: (] i. <./) ,
Finally, testing it:
   ($ #: [: (] i. <./) ,) 2e6(-|)nr
19 24
---
from: Tikkanz <tikkanz@gmail.com> date: Tue, Oct 7, 2014 at 4:07 PM
Sorry, yes that is a leap.
(x * (x + 1)) * 0.5 is the number of ways to choose two horizontal lines to make 2 sides of the rectangle.
(y * (y + 1)) * 0.5 is the number of ways to choose two vertical lines to make the other 2 sides of the rectangle
((x * (x + 1)) * 0.5) * ((y * (y + 1)) * 0.5) is the number of ways to choose the lines to make a rectangle.
Refactoring:
. 4 %~ x * (x+1) * y * (y+1)
. 4 %~ */ x,(x+1),y,(y+1)
. 4 %~ */ x,y,(x+1),(y+1)
. 4 %~ */ (, >:) x,y
HTH
---
from: Tikkanz <tikkanz@gmail.com> date: Tue, Oct 7, 2014 at 8:19 PM
Here is another version of countRects
countRects=: */@(2 ! >:)
---
from: Devon McCormick <devonmcc@gmail.com> date: Wed, Oct 8, 2014 at 9:40 AM
This (2!>:) version seems more straightforward, especially if accompanied by a comment pointing out that you're looking for the number of combinations (*/) of all pairs of lines (2!) and the number of lines is one more than each dimension (>:) because they delineate the boundaries of the cells. It seems like this also extends to higher dimensions, so
   countRects 2 2 2
27
gives the number of rectangular solids that could be formed within a 2x2x2 cube.
To make the initial version of "countRects" extend this way, you'd have to modify it by replacing the hard-coded "4" with (2^#y), i.e.
countRects=: (2 ^ #) %~ [: */ ] , >:
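A quick consistency check (added): this generalized version reproduces the 2x2x2 count shown above:

   ((2 ^ #) %~ [: */ ] , >:) 2 2 2
27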
---
from: Mike Day <mike_liz.day@tiscali.co.uk> date: Tue, Oct 14, 2014 at 5:39 AM
OK - I've re-engineered a solution method which deals with required numbers several orders of magnitude higher than 2e6. I expect my original approach was the whole array approach as recently discussed, but I can't find it anywhere in my files. Apologies for any silly line-throws.
The maths shows that the number of embedded rectangles in a grid of size (m,n) is tri(m)*tri(n) where tri is a triangular number, ie one in the series 0 1 3 6 10 15 ....
tri(n) is n(n+1)%2
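In J terms (an added note), tri is the (-: * >:) phrase that appears inside nrec below:

   tri=: -: * >:        NB. n(n+1)%2 as a fork: halve times increment
   tri 0 1 2 3 4 5
0 1 3 6 10 15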
It's easy to find which (m,m) most closely approximates the required number, req, by solving the quadratic
m(m+1) = 2 sqrt(req)
Let mmax = ceil(m)
Also, for a given m, we can solve the quadratic
n(n+1) = 4 req % m(m+1)
In general, we get two integers bounding the non-integer solution, one of which will generally give a number of rectangles closer to the required value.
Find the best pair (m,n) over m in [1,mmax]
NB. Number of rectangles in mxn is tri(m)*tri(n)
nrec =: *&(-:*>:)

NB. best (m,n) over given vector m=y for target x
bestmnv =: 3 : 0
:
req =. x [ m =. y
NB. solve n(n+1)m(m+1) = 4*req
NB. ie n^2 + n + 1 - 4*req%m(m+1) = 0
NB. Get upper & lower integer bounds to each (usually) real solution
n =. (<.,>.) _0.5 + %:_0.75 + 4*req% (*>:) m
d =. (m=.m,m) (req |@- nrec) n   NB. absolute errors
i =. ((I.@(=(<./))@,)) d         NB. index of least error
m (,&(i&{) )n
)

NB. Find best m,n for required number of rectangles y
bestmn =: 3 : 0
req =. y
NB. get maximum m, when n=m
mmax=. >. _0.5 + %:_0.75 + 2*%:req   NB. round up solution when m=n
req bestmnv >:i.mmax
)
Here are a couple of targets suggested in the Project Euler discussion on this topic. The whole array approach discussed in the J forum would find them challenging.
   timer'bestmn x:<.9.87654321e19'
+-----+------------+
|5.628|20214 983262|
+-----+------------+
   timer'bestmn 123456789123456789x'
+--------+----------+
|0.830002|7198 97621|
+--------+----------+
Thanks,
---
Advanced topics
We looked at the in-roads Computer Science has made at the expense of more traditional fields like Statistics.
Statistics: Losing Ground to CS, Losing Image Among Students
August 26, 2014 / by Norman Matloff
The American Statistical Association (ASA) leadership, and many in Statistics academia, have been undergoing a period of angst the last few years. They worry that the field of Statistics is headed for a future of reduced national influence and importance, with the feeling that:
· The field is to a large extent being usurped by other disciplines, notably Computer Science (CS).
· Efforts to make the field attractive to students have largely been unsuccessful.
I had been aware of these issues for quite a while, and thus was pleasantly surprised last year to see then-ASA president Marie Davidson write a plaintive editorial titled, “Aren’t ''We'' Data Science?”
Good, the ASA is taking action, I thought. But even then I was startled to learn during JSM 2014 (a conference tellingly titled “Statistics: Global Impact, Past, Present and Future”) that the ASA leadership is so concerned about these problems that it has now retained a PR firm.
This is probably a wise move–most large institutions engage in extensive PR in one way or another–but it is a sad statement about how complacent the profession has become. Indeed, it can be argued that the action is long overdue; as a friend of mine put it, “They [the statistical profession] lost the PR war because they never fought it.”
In this post, I’ll tell you the rest of the story, as I see it, viewing events as a statistician, computer scientist and R activist.
CS vs. Statistics
Let’s consider the CS issue first. Recently a number of new terms have arisen, such as data science, Big Data, and analytics, and the popularity of the term machine learning has grown rapidly. To many of us, though, this is just “old wine in new bottles,” with the “wine” being Statistics. But the new “bottles” are disciplines outside of Statistics–especially CS.
I have a foot in both the Statistics and CS camps. I’ve spent most of my career in the Computer Science Department at the University of California, Davis, but I began my career in Statistics at that institution. My mathematics doctoral thesis at UCLA was in probability theory, and my first years on the faculty at Davis focused on statistical methodology. I was one of the seven charter members of the Department of Statistics. Though my departmental affiliation later changed to CS, I never left Statistics as a field, and most of my research in Computer Science has been statistical in nature. With such “dual loyalties,” I’ll refer to people in both professions via third-person pronouns, not first, and I will be critical of both groups. However, in keeping with the theme of the ASA’s recent actions, my essay will be Stat-centric: What is poor Statistics to do?
Well then, how did CS come to annex the Stat field? The primary cause, I believe, came from the CS subfield of Artificial Intelligence (AI). Though there always had been some probabilistic analysis in AI, in recent years the interest has been almost exclusively in predictive analysis–a core area of Statistics.
That switch in AI was due largely to the emergence of Big Data. No one really knows what the term means, but people “know it when they see it,” and they see it quite often these days. Typical data sets range from large to huge to astronomical (sometimes literally the latter, as cosmology is one of the application fields), necessitating that one pay key attention to the computational aspects. Hence the term data science, combining quantitative methods with speedy computation, and hence another reason for CS to become involved.
Involvement is one thing, but usurpation is another. Though not a deliberate action by any means, CS is eclipsing Stat in many of Stat’s central areas. This is dramatically demonstrated by statements that are made like, “With machine learning methods, you don’t need statistics”–a punch in the gut for statisticians who realize that machine learning really IS statistics. ML goes into great detail in certain aspects, e.g. text mining, but in essence it consists of parametric and nonparametric curve estimation methods from Statistics, such as logistic regression, LASSO, nearest-neighbor classification, random forests, the EM algorithm and so on.
Though the Stat leaders seem to regard all this as something of an existential threat to the well-being of their profession, I view it as much worse than that. The problem is not that CS people are doing Statistics, but rather that they are doing it poorly: Generally the quality of CS work in Stat is weak. It is not a problem of quality of the researchers themselves; indeed, many of them are very highly talented. Instead, there are a number of systemic reasons for this, structural problems with the CS research “business model”:
· CS, having grown out of a research on fast-changing software and hardware systems, became accustomed to the “24-hour news cycle”–very rapid publication rates, with the venue of choice being (refereed) frequent conferences rather than slow journals. This leads to research work being less thoroughly conducted, and less thoroughly reviewed, resulting in poorer quality work. The fact that some prestigious conferences have acceptance rates in the teens or even lower doesn’t negate these realities.
· Because CS Depts. at research universities tend to be housed in Colleges of Engineering, there is heavy pressure to bring in lots of research funding, and produce lots of PhD students. Large amounts of time are spent on trips to schmooze funding agencies and industrial sponsors, writing grants, meeting conference deadlines and managing a small army of doctoral students–instead of time spent in careful, deep, long-term contemplation about the problems at hand. This is made even worse by the rapid change in the fashionable research topic du jour, making it difficult to go into a topic in any real depth. Offloading the actual research onto a large team of grad students can result in faculty not fully applying the talents they were hired for; I’ve seen too many cases in which the thesis adviser is not sufficiently aware of what his/her students are doing.
· There is rampant “reinventing the wheel.” The above-mentioned lack of “adult supervision” and lack of long-term commitment to research topics results in weak knowledge of the literature. This is especially true for knowledge of the Stat literature, which even the “adults” tend to have very little awareness of. For instance, consider a paper on the use of unlabeled training data in classification. (I’ll omit names.) One of the two authors is one of the most prominent names in the machine learning field, and the paper has been cited over 3,000 times, yet the paper cites nothing in the extensive Stat literature on this topic, consisting of a long stream of papers from 1981 to the present.
· Again for historical reasons, CS research is largely empirical/experimental in nature. This causes what in my view is one of the most serious problems plaguing CS research in Stat – lack of rigor. Mind you, I am not saying that every paper should consist of theorems and proofs or be overly abstract; data- and/or simulation-based studies are fine. But there is no substitute for precise thinking, and in my experience, many (nominally) successful CS researchers in Stat do not have a solid understanding of the fundamentals underlying the problems they work on. For example, a recent paper in a top CS conference incorrectly stated that the logistic classification model cannot handle non-monotonic relations between the predictors and response variable; actually, one can add quadratic terms, and so on, to models like this.
· This “engineering-style” research model causes a cavalier attitude towards underlying models and assumptions. Most empirical work in CS doesn’t have any models to worry about. That’s entirely appropriate, but in my observation it creates a mentality that inappropriately carries over when CS researchers do Stat work. A few years ago, for instance, I attended a talk by a machine learning specialist who had just earned her PhD at one of the very top CS Departments in the world. She had taken a Bayesian approach to the problem she worked on, and I asked her why she had chosen that specific prior distribution. She couldn’t answer – she had just blindly used what her thesis adviser had given her–and moreover, she was baffled as to why anyone would want to know why that prior was chosen.
· Again due to the history of the field, CS people tend to have grand, starry-eyed ambitions–laudable, but a double-edged sword. On the one hand, this is a huge plus, leading to highly impressive feats such as recognizing faces in a crowd. But this mentality leads to an oversimplified view of things, with everything being viewed as a paradigm shift. Neural networks epitomize this problem. Enticing phrasing such as “Neural networks work like the human brain” blinds many researchers to the fact that neural nets are not fundamentally different from other parametric and nonparametric methods for regression and classification. (Recently I was pleased to discover–“learn,” if you must–that the famous book by Hastie, Tibshirani and Friedman complains about what they call “hype” over neural networks; sadly, theirs is a rare voice on this matter.) Among CS folks, there is a failure to understand that the celebrated accomplishments of “machine learning” have been mainly the result of applying a lot of money, a lot of people time, a lot of computational power and prodigious amounts of tweaking to the given problem – not because fundamentally new technology has been invented.
All this matters – a LOT. In my opinion, the above factors result in highly lamentable opportunity costs. Clearly, I’m not saying that people in CS should stay out of Stat research. But the sad truth is that the usurpation process is causing precious resources–research funding, faculty slots, the best potential grad students, attention from government policymakers, even attention from the press–to go quite disproportionately to CS, even though Statistics is arguably better equipped to make use of them. This is not a CS vs. Stat issue; Statistics is important to the nation and to the world, and if scarce resources aren’t being used well, it’s everyone’s loss.
Making Statistics Attractive to Students
This of course is an age-old problem in Stat. Let’s face it–the very word statistics sounds hopelessly dull. But I would argue that a more modern development is making the problem a lot worse – the Advanced Placement (AP) Statistics courses in high schools.
Learning and Teaching J
We looked at this short essay on a historical view of choosing first programming languages. It seems somewhat dated even though it dates from 2008, but at least the more distant history remains relevant.
A History of Choosing a First Programming Language
Two of the major conclusions are
. 1) Until recently (mid 1990s), Pascal used to be the most widely adopted programming language [11] “for introductory computer science courses” [1]. According to [12], one of the principal advantages of Pascal is that it is “a simple, small and concise language” specifically designed for teaching structured programming.
. 2) According to [13] “(t)he most striking trend in the field of programming languages” in the 1980s had “been the rise of paradigms, of which the object-oriented paradigm is the best-known.” As “support for the creation of objects as instances of a class,” [1] function overloading, inheritance and polymorphism became more common, Pascal’s popularity gradually began declining - an increasing number of institutions were choosing to introduce undergraduates to programming by teaching object oriented languages, such as C/C++ and Java.
Interestingly enough, even though the essay makes the point that "[a]lthough Pascal, BASIC, FORTRAN and COBOL were all abstractions of assembly language ... their primary abstraction still required one to think in terms of the structure of the computer, rather than the structure of the problem one was trying to solve. The effort required to perform this mapping, and the fact that it was extrinsic to the programming language, produced programs that were difficult to write and expensive to maintain." This is interesting because it's unclear that the embrace of object-orientation has done much to remedy this deficiency, yet little attention seems to be paid to this fundamental flaw of most languages.
Computer Science Popularity Currently Very High
According to this article from the Harvard Crimson, "Nearly 12 percent of Harvard College is enrolled in a single course ... Computer Science 50: “Introduction to Computer Science I,” attracted a record-breaking 818 undergraduates this semester, marking the largest number in the course’s 30-year history and the largest class offered at the College in the last five years."
This popularity is also reflected in the growing number of sites offering to help people learn to code. This, from a posting on SlashDot, tells of a site that's popular even though it "is a bit lacking in the usability department." Perhaps one day we'll see a surge in interest in writing good code but don't hold your breath. Right now, according to the SlashDot posting, the emphasis is on writing well enough to get a job. From a J point of view, we could stand to be represented, even on sites like these.
File:Web-Era Trade Schools - Feeding a Need for Code.pdf also reports on the popularity of web-based and physical coding schools.
A Good App-Development Tool for Children?
Both of these previous topics combine in the excerpts we looked at from an exchange on the NY Tech Meetup (File:WhatAppDevelopmentToolsForChildren.pdf) on the topic of "What's a Good App-Development Tools or Environment for Children" [sic]. The tools recommended by this group, which has a web- and app-development orientation with an emphasis on prototyping tools, were the following.
. allows user to develop mobile games by "drawing" them
. app-development tool from MIT
. web and mobile tool for prototyping, collaboration & workflow
. powerful, yet simple way to build and play your own worlds, stories and games
. for building IoT/Hardware projects; also recently launched a cloudbit as well.
Materials
-- Devon McCormick <<DateTime(2015-01-04T23:08:37-0200)>>