Wiki/Report of Meeting 2023-08-03
Report of Meeting 2023-08-03
Present: Art Anger, Chris Burke, Ed Gottsman, Raul Miller and Bob Therriault
Full transcripts of this meeting are now available on its wiki page: https://code.jsoftware.com/wiki/Wiki/Report_of_Meeting_2023-08-03
1) We began by answering Chris's question of when we would be done with the prototype wiki, https://code2.jsoftware.com/wiki/Category:Home with Bob saying that we were pretty much done now. Chris said he would bring up a new machine that would incorporate a newer version of MediaWiki, and he would let us know when that change would be made in the next month. The prototype will be left up for a few weeks afterwards and then taken down. If information from the prototype wiki is required after that, the machines will just be left offline and the information can still be retrieved. The target date for the new machine is September 1st.
2) We started off with a discussion about Live Search, and there was some confusion between Live Search and the J wiki browser. Chris said the J wiki browser was working well for him on Unix, although Chris did not think he was a target for the application since he knows the wiki pretty well. Ed presented the challenges with Live Search, which is an application that is based on his machine and requires some local storage. Ed then did a demo of Live Search, which allows extensive search of the forums and wiki for J glyphs. https://www.youtube.com/watch?v=IId4hgTKp08 Chris wondered how the page sources were being obtained, and Ed said he had written a crawler. Chris said that he already has the pages in a database in text form at about 50 megabytes. Ed clarified that his index was 1.25 gigabytes because it included forum posts as well. Ed thought that it would be useful to have access to Chris's files as it would solve the page crawling challenges. Raul felt that the plaintext file is separate from the URLs, and Chris confirmed that the titles, which are the URLs, are kept separate from the plaintext.
3) Discussion moved on to forum information. The forums have been stored on Google Groups for the last 12 years, with Mailman as the sole member that distributes the information to the forums. Ed felt that if he had access to the forum information he could work with that. Chris felt that if Ed had an account on the group then the information could be downloaded as well. Chris said he would provide Ed with links to the files and scripts to facilitate this. Posts earlier than 2012 may need to be accessed from other files, and Ed already has access to pre-2012 posts. Bob wondered if Live Search required JQt, and Ed pointed out that the search mechanism is not tied to JQt but could feed HTML results. The J wiki browser is definitely tied to JQt.
4) Bob is moving his focus to updating the Primer and getting the wiki ready for the switchover.
5) Ed would like to extend the user base a little bit, with an emphasis on Unix users. Ed showed the 'rabbit hole' button that takes you back into forum posts within the context of their threads. Bob felt that curation was aided immensely by being able to search the whole wiki for concepts to see where information is stored across the wiki. Ed is wondering whether non-curators would be as interested and whether it will get traction. Raul felt that his research tends to come in bursts and the J wiki browser allows him to dive deep quickly. Raul also mentioned that time information could be added to the forum threads so that they are a little easier to track. Ed showed the issue that he had overcome with threads crossing months, which was a bug that Chris had known about in Mailman. Ed would like to add five more people as testers, which would indicate how much interest there might be. Bob felt that we could target individuals who were more likely to give feedback. Ed and Bob will put together a list of people to approach.
For access to previous meeting reports: https://code.jsoftware.com/wiki/Wiki_Development
If you would like to participate in the development of the J wiki, please contact us on the general forum and we will get you an invitation to the next J wiki meeting, held on Thursdays at 23:00 (UTC). The next meeting is August 10th, 2023.
Transcript
And we're off.
Okay, so thank you, Chris, for joining us, because I'm sure there'll be lots of stuff to chat about.
And I guess one of the first questions you had, or one of the things you came up with -- oh, there's Art too.
Good, good.
Let him hook up to us.
One of the first things you were asking about is when we'll be free of the prototype wiki.
And, you know, actually, I think we're free of it now, and I don't think you have anything to transfer over.
Because for the last six months, we've been doing our development in the real wiki, in the main wiki.
Oh, I didn't realize that.
In fact, I'm actually planning to set up a new wiki anyway, because we're on an old version of the MediaWiki software, like two or three years old, I guess.
And so usually, you know, with the older machines, I'd rather not upgrade the older machine, but just get a new machine.
So I'm actually setting that up right now.
And with the latest LTS MediaWiki, which I think is 1.39.4.
So I will do that and copy over the current wiki to that machine.
And then before we go live with it, I'll let you know, so that if you want to make changes, you can do that.
But if you're saying that you've already made the changes in the main wiki, that's great.
It means that there's almost no, there's almost nothing for me to do other than copy the old wiki to the new one.
- Yeah, I think we're continuing to make changes to the current wiki.
But what we're doing is it's almost like our approach to it is building the plane while we're flying.
We've got like a main page that's a parallel to the main page of the wiki.
And nothing links to it right now.
So all we need to do is have that main page link to our main page and then it's up and going.
- Okay, so that's great.
So I'm working on that this month.
So in two or three weeks, I'll have a new wiki up and copy the old one over.
And at that point we can freeze the old wiki and the old code2, we can just freeze them.
I'll leave them up for a few weeks in case you need to refurbish them.
But then after that I'll just take them down.
- Yeah, and in the minutes and stuff, I've quite often mentioned to people that we're coming to the end of it.
The only person I know who's really done very much, well, other than Art. Art did a fair amount of stuff in the old wiki that I'm not sure he's transferred across yet, but from the number of pages he's done, I'm pretty sure you can pretty much do it with copy and paste.
- Yeah.
- And just do it that way, 'cause I don't, well, Art can maybe fill in if you think there's more than maybe half a dozen pages that you need to transfer across, right.
- It's about that.
- Yeah, okay.
- In any case, I should just mention that, you know, when I close the old machines down, I'm not going to delete them.
I just take them off, stop them running.
- Yeah.
- And then I just leave them for a few years so that, you know, if any time you really, really need to get something back, then you can always get it back.
- Yeah, it was hugely useful.
We were using it literally to go in and test stuff that we didn't wanna test with the new wiki or the current wiki.
And so that was really, really useful.
But I think the last, as I said, four to five months anyway, I've been telling people just work on the current wiki unless you're doing something really crazy.
And I don't think anybody's been doing anything crazy, so.
- Okay, that's much easier than I thought it would be.
So that's great.
- There you go.
(both laughing) - I always like to have less work rather than more.
I, whenever that can happen, I'm a fan of it.
Yeah.
- Yeah, that's good.
- So that's one of the things, the transfer across, and you're saying, like, if we told people September 1st, that's probably accurate.
- I think so.
I think I should have it ready by then, yeah.
- Okay.
- It's not a major effort.
It's more just getting the time to do it.
- Yeah.
So that having been cleared up, there were conversations, I think, Ed, that you were having today about Live Search.
I don't know whether you've had a chance to go in and play with any of this yet, Chris.
- I've looked at Live Search and it looks quite nice.
I have to admit this is, it's not something that will be aimed at me because I sort of know my way around.
It's really aimed at the beginning, at the beginner or the occasional user.
So I'm kind of the wrong person to comment on it.
But I mean, I was able to install it.
I run it on Linux and it seems to work the same way as it works in your video.
So it's good that way.
Yeah.
And I mean, it's going to be just a regular add-on and I think that's nice.
Yeah.
Right.
So Chris, forgive me.
Hi, by the way.
When you say live search, do you mean the add-on? Because the live search feature is not enabled.
I was thinking of the add-on.
I was thinking of the add-on.
Okay.
Is the add-on the live search, or is that something else?
No, we've been a little sloppy about our terminology in our discussions, and I apologize for that.
The add-on doesn't really have a name.
I think we've been calling it the viewer or the jwiki viewer or something along those lines.
Live search is a feature that's experimental at this point.
Most of the viewer seems to work pretty well.
I'm overjoyed to hear that it worked for you on Linux.
We're very light on Linux testing, and that's really good to hear.
Live search is a progressive search feature that only works on my machine at this point because it does need a very large local SQLite full text index database in order to work properly.
Most of the discussion over the last couple of days, when we say, how do we make live search work, how do we deliver live search, has been about that progressive search feature that supports looking for J tokens, J glyphs, I guess I should say.
And I spent some time on Amazon Web Services this afternoon.
And I have to say that AWS is a lot more approachable than Google Cloud Platform is for the developer.
I've used GCP in the past.
And it's also quite cheap for the kinds of volumes of storage and transmission that we're talking about.
So my feeling is that I should be able to get live search, the live search feature of the add-on that is up and running pretty quickly on AWS.
They have a one-year free tier for new users.
I'll be a new user, and I certainly don't expect to exceed any of the limits that they've got in place for free.
So I think we should be able to deliver the live search feature of the add-on, one, fairly quickly, two, for free for the first 12 months, and thereafter at quite modest cost, modest in the sense of a few dollars a month, barring outrageous take-up by an audience that I think perhaps doesn't exist, just in terms of numbers.
I'm going to stop talking because I'm never sure how much sense I'm making.
Can I just ask about the live search?
Is the live search of the wiki, is that just reading the pages on the wiki?
No, no, no, no.
It's more than that, is it?
Yeah.
I've actually made some changes.
Chris, if you'll indulge us, one of the things that we do each week is I get to give a demo, which is, I live for that stuff.
I just do.
All right, so you should be able to see what we call the add-on now.
So that's the table of contents on the left.
And if you've played with it, I won't bother you with that part.
But the live search feature is in contrast to a conventional search.
Conventional search just front ends the wiki search mechanism that exists on jsoftware.com and the forum search mechanism that already exists on jsoftware.com.
So you can search for, let's say, slash colon.
And it takes a little bit, and it comes back with 213 results in J programming, which you can load up by clicking three results in J beta, J chat, J source, and a bunch in J general, which you can load up.
The way progressive search or live search handles that same query, J slash co, and you get, I'm gonna go ahead and shrink the browser so we can have a little more room to look at the results.
You get color coded two column results.
And the color coding is that rust, so for example, this hit right here, examples of the grade operator, rust is forum hits, so grade behavior is a forum hit, examples of grade behavior is a forum hit, and you can click to load those up on the right.
Teal is wiki hits.
And the way this works is I've got a local SQLite full text index database of all, I think it's about 5,000 wiki pages and all 120 odd thousand forum posts.
And as you type, so I've just typed, I've looked for slash co.
I'm sorry, on the left are snippets, keyword in context.
If I look for slash co slash co, I get many fewer results, because those two tokens occur together much less frequently than one occurs alone.
I've got a slider for the years.
So right now I'm looking at back to 2021, but I can take that back to 2019, 2018.
I can go all the way to all documents, which goes all the way back to 1998.
And now I've got paginated results, page three, page four, page five.
I can also say I want only the wiki pages and get those.
I can search for English as well as code.
So if I add Hui, if I add Roger's last name, I can get posts from him and about him.
And of course, the one page from J4C that discusses the slash co slash co trick, which I think is described as having an economy that verges on sorcery, which I can only agree with.
So that's the story.
The problem is the local database that I'm going against here, the local SQLite full-text index database, is one and a quarter gigabytes, which from some perspectives is enormous.
On the other hand, I think I mentioned in one of the emails, you can't even buy an SSD with less than 256 gigabytes anymore.
If you want one, you have to make it yourself.
So in that sense, from a storage perspective, this database is a rounding error.
If we can deliver updates to it, that is, content for new posts and content for new and changed wiki pages, incrementally, and basically maintain the index locally on each client machine, which I think we can.
I think I know how to do that with AWS, again, very cheaply.
Then we can deliver what we're calling live search, the live search feature of the J wiki viewer.
I think in a pretty straightforward and economical fashion.
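The incremental-update idea Ed describes can be sketched as below. Every name here is an assumption: the real delivery channel would be update files fetched from AWS, not the in-memory list used in this sketch, but the shape is the same, namely each client keeps a local full-text index plus a high-water mark and pulls only documents changed since the last sync, rather than re-downloading the whole 1.25 GB index.

```python
import sqlite3

def fetch_changed_since(server_docs, since):
    # Stand-in for downloading incremental update files from the server.
    return [d for d in server_docs if d["modified"] > since]

def sync(con, server_docs):
    (since,) = con.execute("SELECT mark FROM watermark").fetchone()
    changed = fetch_changed_since(server_docs, since)
    for d in changed:
        # Replace any stale copy of the document, then advance the mark.
        con.execute("DELETE FROM docs WHERE title = ?", (d["title"],))
        con.execute("INSERT INTO docs(title, body) VALUES (?, ?)",
                    (d["title"], d["body"]))
        con.execute("UPDATE watermark SET mark = max(mark, ?)", (d["modified"],))
    return len(changed)

con = sqlite3.connect(":memory:")
con.execute("CREATE VIRTUAL TABLE docs USING fts5(title, body)")
con.execute("CREATE TABLE watermark(mark REAL)")
con.execute("INSERT INTO watermark VALUES (0)")

server = [{"title": "p1", "body": "an old forum post", "modified": 100.0}]
print(sync(con, server))   # 1: the initial pull transfers the one document
server.append({"title": "p2", "body": "a new post about /:", "modified": 200.0})
print(sync(con, server))   # 1: only the new post is transferred
```

Each sync moves only the delta, so the ongoing transfer cost stays tiny even though the full index is large.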
- Okay, and how are you getting the page, how are you getting the data for the pages?
- In the sense of loading it now when I click, or in the sense of indexing?
- No, no, when you're building the database, where do you get the page source?
- Oh, I had to write a crawler in order to build this table of contents mechanism.
That same crawler is used to load up, to suck down, I guess I should say, those pages from the forum and from the wiki.
- Okay, because I mean, I can help you here.
I know you put a lot of work, you probably put a lot of work into that, but basically, I do this already.
I already have the pages in a browsable format on the server.
I'll explain how that works.
- Yes, please do.
- Yeah, so what I do is, for my own wiki search, I have a function which simply reads the database on the server.
The database I think is MySQL or MariaDB, I forget.
But it reads the database and reads all the pages and puts them in a text file, a flat text file that's delimited by an unusual element.
I think it's the first, it's one from AV, byte one.
And that's a delimiter.
And that's then stored as a plain text file in both upper and lower case.
So when someone does a search and say slash colon, I can just do an E dot on that file.
I suspect that if you had that, it would reduce the size to about 50 meg.
And moreover, I could read it directly from the server.
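Chris's dump format, as described above, can be sketched like this. The byte-1 delimiter is from his description, but the page titles and contents here are invented, and a Python substring search stands in for J's E. (or grep) on the flat file.

```python
# Minimal sketch of the flat-file scheme: all pages concatenated into one
# plain-text blob, delimited by byte 1, with a parallel list of titles.
DELIM = "\x01"

pages = {
    "Vocabulary/slashco": "use /: to grade up",
    "Essays/Sorting":     "sorting idioms: /: /: gives ranking",
    "Community/Meetings": "meeting reports",
}
titles = list(pages)                 # the parallel list of titles (the URLs)
blob = DELIM.join(pages.values())    # the flat plain-text dump

def search(term):
    # Analogue of grep, or J's E. on the flat file: find which delimited
    # blocks contain the term, then map block index back to title, since
    # the n-th block corresponds to the n-th title.
    return [titles[i] for i, block in enumerate(blob.split(DELIM))
            if term in block]

print(search("/:"))   # ['Vocabulary/slashco', 'Essays/Sorting']
```

Because the titles file is parallel to the text blocks, a hit in the third block maps straight to the third title, which is how results get their URLs.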
- I take your point.
The reason it's one and a quarter gig is that it's not just the wiki, it's also the forum.
Okay, the forum also, I'll take a look at the forum size.
But anyway, I mean, whether it's in SQLite or plain text, it kind of wouldn't affect your program.
It's just that if you wanted to, if I gave you the locations of these files, then you could download them, or your program could download them, and you would get the latest version directly from the server.
- Yes, that would be wonderful. I didn't realize that, all of this, yeah.
- Yeah, I hadn't realized what you were doing, otherwise I would have jumped in and said, oh, I've got this data already for you. And the thing is, if it's a plain text file, you can do a grep on it, or from J you can do an E dot or run a regular expression.
- Well, the thing is, what I'm doing, well, what I wanted to do, was to be able to use a real full text index engine and leverage that technology.
And the problem with those is that the tokenizers generally discard punctuation, which is kind of a problem if you're searching for J code.
And the approach I took was, I take all the pages that I want to index and I run them through semi dot, excuse me, semi co, the word formation primitive first.
And then I translate all of the J glyphs, so @co and @semico and so on, into strings, the English language equivalents of those glyphs.
And I put a J on the front of each token.
And that's what's given to the indexer.
And so when you type in slash co slash co, for example, that's translated into j slash co j slash co.
And that's what the SQLite full text indexer works on.
And what we get from that is, well, for one thing, it's not treating slash and colon as two distinct tokens, which is good.
You really want them to be treated as one.
The other thing you get is relevance ranking.
So you'll notice that if I just search for slash co, the top hits have multiple occurrences of slash co in them, which I'm pretty sure is the relevance ranking function, the SQLite relevance ranking function, at work.
And that is something I can't get from grep.
The other thing I can't get from grep is the speed.
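Ed's scheme, as he describes it above (run the text through word formation, translate each glyph into a word-like name prefixed with "j", and hand that to a full-text indexer), can be sketched roughly as follows. This is a minimal sketch, not Ed's actual code: the glyph table and sample pages are invented, splitting on spaces stands in for real J word formation (;:), and Python's sqlite3 with FTS5 stands in for whatever binding he uses.

```python
import sqlite3

# Tiny assumed glyph-to-name table; the real mapping covers every J primitive.
GLYPHS = {"/:": "jslashco", "@:": "jatco", ";:": "jsemico"}

def translate(text):
    # Replace J glyphs with word-like tokens so the FTS tokenizer no longer
    # discards them as punctuation.
    return " ".join(GLYPHS.get(tok, tok) for tok in text.split())

con = sqlite3.connect(":memory:")
con.execute("CREATE VIRTUAL TABLE docs USING fts5(title, body)")

pages = [
    ("grade trick", "the /: /: idiom computes ranking with economy"),
    ("grade intro", "use /: to sort"),
    ("forums page", "list of forums"),
    ("meetings", "weekly meeting reports"),
    ("primer", "getting started with J"),
]
for title, body in pages:
    con.execute("INSERT INTO docs VALUES (?, ?)", (title, translate(body)))

# The query goes through the same translation, and bm25() supplies the
# relevance ranking grep cannot: the page using /: twice ranks first.
q = translate("/: /:")
hits = [t for (t,) in con.execute(
    "SELECT title FROM docs WHERE docs MATCH ? ORDER BY bm25(docs)", (q,))]
print(hits)
```

The indexer also gives the speed: the query hits a prebuilt index instead of scanning 220,000 documents.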
220,000 forum documents with grep is going to take a while.
- Yeah, yeah, I understand. And in fact, it sounds to me as if a mixture of what each of us is suggesting would work. And that is, you don't have to crawl the pages. Page crawling can take time, and also, yes, you might have a problem in that the server itself will limit page crawling.
It'll detect a page crawler and say, well, you can't do it.
- I have run into that.
- Okay. But in theory anyway, what I can give you is the ability to ask the server itself every few minutes simply to check the status.
- That would be wonderful.
Yeah, in fact, what actually happens now is when you do a search on the wiki, the system checks to see if the wiki has been changed since the last search.
If there's no change, it simply uses the existing files.
If it has been changed, it simply does a dump from the SQL database into a plain text file.
I think you could work from that plain text file and just download that and that would give you - Oh, yes, absolutely.
- all the information, yeah.
Absolutely, and that would be dramatically simpler and more robust.
I think you're right.
And then you can take it and do your own, you know, do your parsing and whatever.
I think it's a plain text file and a corresponding list of URLs.
I think that URLs are separate from the plain text file.
Is that correct?
It is.
I maintain two things.
One is the plain text and the other is the titles.
So if you find something in, say, the third block of the text, then it'll be the third title.
Yeah.
The results include URLs.
So they're in there somewhere.
Yeah.
Yeah.
The titles are the URLs, essentially.
Yeah.
Yeah.
Yeah.
>> Okay, yeah.
And then may I ask, what about the forums?
That's really where the time is invested.
>> The forums are done very similarly.
It's just that we're going to have a problem when, you probably saw the message I sent out today about- >> I did, yeah.
>> Wanting to move to Google Groups.
But basically, currently on the forums, whenever Mailman, we use the Mailman software, which unfortunately is out of date now.
It was up to date when we started using it, but now we're using what is effectively an obsolete version of it.
But when a forum mail comes in, it automatically updates the text file with the forum messages.
And I'm just looking at those now, and for each forum, the length of the messages is 170 megabytes.
That's unzipped.
So 170 megabytes, if it was zipped, of course it'd be quite a bit smaller.
- Is that all?
- That's all.
- How interesting.
- Yeah.
- No, I'm quite serious.
I thought it was quite a bit larger.
I wonder whether I'm doing something wrong.
- You might expect so, but it's quite amazing.
But one of the things we've done in the forums is that we limited any message to 100 kilobytes, which means that when people send a huge message, it doesn't get through.
So the messages are quite small.
Now, we would have a problem if we moved to Google Groups.
It's not an insurmountable problem, but right now the forum message handling is done automatically.
So you send a message to the forum, as soon as it gets to the forum, it's in the forum search.
With Google Groups, we'd have to figure out a way to get a Google Group message into the forums.
So that's...
(laughs) I always thought I'd have to solve that problem, but it sounds to me-- - I haven't.
- Unless you're kind of willing to stick just to the Google Groups.
- But that's, um... if you can point me at the data, I will arrange to get it one way or another.
I'm not fussed about that.
Um, I assume, I mean, Google Groups must have a search mechanism of its own that we could front end, in theory, if we had to.
That they do.
Yes, they do.
And of course, the Google Groups search mechanism doesn't handle J characters very well.
Ampersand and so on, and dots, are treated as just plain English.
- So there must be a way to export the data, at worst by crawling it, which I'm more than happy to do.
That's my business these days.
- Yeah, you wouldn't have to crawl.
What you want to do is to set up an account, and it could be your own account, which gets all the forum mails,
and you'd simply have to have a mechanism to download them, say, to HTML, and then parse them as we do right now, and put that in the forum search.
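The ingestion path Chris sketches, an account that receives the forum mails plus a script that downloads and parses them, might look roughly like this. The mbox layout, filenames, and field names here are assumptions for illustration, not the actual files or scripts Chris will share.

```python
import mailbox
import os
import tempfile

# A single made-up message in mbox format, standing in for downloaded mail.
raw = (
    "From chris@example.com Thu Aug  3 23:00:00 2023\n"
    "From: chris@example.com\n"
    "Subject: test post about /:\n"
    "Date: Thu, 3 Aug 2023 23:00:00 +0000\n"
    "\n"
    "the /: /: trick\n"
)
path = os.path.join(tempfile.mkdtemp(), "jprogramming.mbox")
with open(path, "w") as f:
    f.write(raw)

# Reduce each message to the fields a forum search index would need.
msgs = [(m["Subject"], m.get_payload().strip()) for m in mailbox.mbox(path)]
print(msgs)   # [('test post about /:', 'the /: /: trick')]
```

From here the (subject, body) records would feed the same indexing pipeline as the wiki pages.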
- That strikes me as something that would be a fairly minor investment, and I would be happy to do that.
So yeah, don't worry about me as you make the transition.
I'll adjust.
- So what I'll do then is I'll give you details.
I'll give you links to the files.
Thank you.
I'll give you links to the files so you can download them yourself and see how it works and I'll give you links to the scripts as well.
Oh, thank you.
It was quite a bit of work to set up, but the fundamental idea is quite simple.
As I say, you have the plain text files and you just do an E dot on them or a regular expression.
I think what you're doing is a lot more sophisticated which is good.
- Thank you.
- And if we move to Google Groups with the forums, does that partition the older forum posts from Google Groups?
Like, will it be split into several groups?
- Well, it is split into several groups, yes.
So it'll be the same.
We'll maintain the same groups as we have right now.
Things like programming in general, beta, et cetera, will continue.
What you may not be aware of is that the programming forums, the JForums are already Google groups.
They have been Google groups for the last 10 or 12 years.
And all that happens is that there are Google groups with essentially one member.
And that one member happens to be our mailman.
So you send a message to JProgramming, it goes to the Google group, JProgramming, which then forwards it onto the mailman server.
So that's why I can say I've got 12 years of archives in Google Groups, so that when we move over to that, all those 12 years of archives will be publicly available.
- And then prior to-- - Before that.
- Yeah, before that.
- Before that, we only have the Mailman archives, which go back quite a long way, to about the beginning of the 2000s or-- - 1998, actually.
- 1998, okay.
(laughing) Yeah, but.
And so would those still be available the same way?
- Yes, I mean, there's no reason why I can't maintain them.
I mean, right now, if we hadn't had this discussion, what I was thinking of doing was simply freezing the old Mailman archives.
So you'd have exactly the same front end.
It's just, it would be limited to when we move over.
Because there's no reason why we can't preserve those.
But I think what Ed would like is to add new Google Group mails into the Mailman archives, which is-- I think it's a little bit of work.
It's not a major effort, but it's--
- Don't do anything for me. I will be perfectly--
- That's what I like to hear.
- As long as I know where the data is, I will adjust accordingly.
As long as I know where the data is, I will adjust accordingly.
No, please don't put any effort in for this.
I can handle any data anywhere.
OK, that's great.
But it sounds like there's a continuity between the Google Groups being accessed by Mailman and what will be the future, which will be Google Groups accessed directly.
So whatever you get from 2012 on will just be continuous, right?
I'm sorry.
I missed the point on that one.
Well, I'm just saying that there isn't going to be a break when you switch to Google Groups.
The information he's drawing from, from 2012 on, is actually Google Groups.
He's got access to that information.
He will have access to it.
But of course, it will be the same information as in the Mailman archive as well.
Yeah.
It's only the new mails.
Like if we switch over to Google groups, then any new mails will go to Google groups.
And at least right now, there's no mechanism that we built for new mails to be written to the Mailman archive.
Okay, I gotcha.
Yeah, but we can continue doing that because, as I said, our Google Groups are set up in such a way that, you know, the sole member of the Google Groups is Mailman.
And that's the reason why we can keep that.
It's just that I have to stop Mailman from sending out mail, or we'll have some better mechanism for that.
Yeah.
Unsubscribe people from mailman.
Yeah.
But there will be a discontinuity, though, back in 2012, because Google Groups doesn't extend back any further than that.
No, no.
But that's if you're doing the Google Groups search.
If you're doing a search on Google Groups, it will only go back to 2012.
And it has the problem that all these big search engines have: they tend to barf on J code because they treat J code as punctuation.
You know, the primitives are not treated as code, but are treated as indications for the search.
That's to say, the search doesn't work very well.
It's just that a regular text search doesn't work very well on J code.
But it sounds to me as if what Ed's already got now, where he's building up that forum library, he's overcome that part of it because he's already got those stored in.
That's right.
So what Ed is doing, it seems to me like the perfect solution for that.
Yeah.
Now, the only limitation is that what Ed's doing is limited right now to JQt.
Well, I guess you could decouple the add-on from the search mechanism.
So there's nothing in principle to stop us from setting up.
If we wanted a search mechanism independent of the add-on that used the tokenization mechanism, where you've got relevance ranking and so on, you could set up a SQLite-based web service that was accessible from anywhere you can access web services.
It wouldn't require JQT at that point.
That's a whole other effort.
We'd have to decide whether there was any reason to do that.
But the search mechanism is not married, is not in principle married to the add-on.
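The decoupling described here could look something like this minimal sketch: the index (stubbed as a dict with invented entries) sits behind a small HTTP endpoint, so any client that can make web requests can query it, with no JQt required. The endpoint path, parameter name, and response format are all assumptions.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import parse_qs, urlparse

# Stub index: query string -> list of matching page titles.
INDEX = {"/:": ["Essays/GradeUp", "Vocabulary/slashco"]}

class Search(BaseHTTPRequestHandler):
    def do_GET(self):
        # Look up the q= parameter in the stub index, answer as JSON.
        q = parse_qs(urlparse(self.path).query).get("q", [""])[0]
        body = json.dumps(INDEX.get(q, [])).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass

server = HTTPServer(("127.0.0.1", 0), Search)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Any HTTP client (browser, curl, JQt, anything) can now query the index.
url = f"http://127.0.0.1:{server.server_port}/search?q=/:"
hits = json.loads(urllib.request.urlopen(url).read())
print(hits)
server.shutdown()
```

In the real thing, the dict would be replaced by the SQLite full-text index, but the client-facing shape would be the same.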
Well, that sounds really good.
I think there are a number of ways of working together.
It'll actually reduce the amount of work, and when it's actually switched over, it may be more transparent to people that are already on it.
We shall see.
Yeah, it's just that what you've got here doesn't have to be in Qt.
I think keeping it in Qt, most people will use Qt, I think, but it doesn't have to be in it.
It seems to me that your functionality could work.
When you say the functionality, do you mean in particular the live search feature, or more generally the whole table of contents?
I think the live search feature, yeah.
Right.
Agreed.
Yeah, no, we could containerize this and make it available as a web service on Amazon Web Services, and deliver it in any number of channels; it'd be up to us.
But yeah, that wouldn't be a huge effort, I think.
All right, thank you all very much for the discussion.
I really appreciate it.
I'm going to stop sharing.
And Bob, was there anything else you wanted to cover?
I think we covered the main things, because one of the things I wanted to touch on was Google Groups.
But we've gone through that and I think got to sufficient depth until we start exchanging information.
And then things may-- we'll have more to work on at that point.
Yeah, I'm still working on the JPrimer.
I'm now going to focus more on bringing these new pages, for lack of a better term, live.
They'll exist on the current wiki anyway, but I'll get them closer to being live so we can do that switch over sooner than later.
And that's kind of my projects right now.
- If nobody has anything else, I'd like to open it up to the question of figuring out whether there's a market for this thing and for the add-on.
I feel like at this point, there are three people now, four with Chris, thank you very much, who've actually made it work successfully.
And Bob, I know has been a regular user, Raul, I know has been a regular user, and they've both provided a lot of very helpful feedback, particularly on platform-specific issues.
And also Dave has used it successfully, although not as much.
What I'd like to-- I think the next step, and I'm certainly open to discussing this being corrected on it, is to try to expand the user base a little bit, just for two reasons.
One, to increase the amount of testing, the amount of exercise that the application gets, 'cause I still feel that it's very light in that regard.
But secondly, and more importantly, to find out whether there's actually any, whether it can get any traction, whether there's any enthusiasm for adopting it.
And Chris, I very much take your point about how experts don't need it.
Although I wonder about that a little bit.
Let me actually share my screen again 'cause there's something I've noticed that is kind of interesting in my own behavior.
If you load up a forum post, you do a search, you load up a forum post, there's a button called show post in thread.
I don't know if you looked at it, but if you click it, it teleports you to the forum browser with the current post selected and you can see the entire thread.
So you can see the original post and work your way through the discussion.
And my private name for this button is the rabbit hole button.
This is the start of a lot of time.
You could either say invested or wasted, it's entirely up to you.
There is so much content going back so far, so much interesting discussion, particularly in the forums, that I wonder whether this improved access to that content might not be of interest even to very expert users.
I don't know.
But I would like to try to expand the user base and see whether, get more opinions on whether there's any enthusiasm for this thing or not.
I feel like we've done as much as we can sort of sitting around in a circle, and I'd like to expand the circle.
Once again, I'm never sure how much sense I'm making.
If somebody could comment, I'd appreciate it.
Only thing I'll add to that is-- and not so much the forum posts, although I take your point.
I've spent a little time diving down those rabbit holes.
And as much for historical information as anything else, it's really valuable.
And it's a very quick way to get to access to that information.
But where I found it's even more useful for what I've been doing is in curation of the wiki, because I can find not just one area on a search that might mention something, but I can find all the different things where a certain technique or a certain part of speech is mentioned.
And from that, I can go in and check which ones are current and which ones are out of date.
And that's a huge advantage over having to go through the whole wiki and try to figure out where something is, because this search mechanism is so quick.
Once you've done the search on a term-- well, in the earlier versions you would literally just be hovering.
Now you have to click, but that's really not very much overhead.
And I find it very useful that way.
So that's interesting, Bob.
I have had the feeling occasionally since, I guess, we started this in April, that I'm mainly writing this for you as a curator, that you're my main user.
And what I'm unclear on is whether non-curators will have any use for it.
And that's what I'm hoping to establish as soon as I can.
Because it's been a lot of fun to build.
I'm very happy to maintain it going forward and keep everything humming, all the infrastructure.
But I would like to know whether it will get any traction or not.
My sense there is that a lot of the things that I do are kind of bursty.
If there's something I'm doing, I'll research it and I'll focus in an area.
Then I'll be going off and doing something else and I'll stop because I found my answer or because I want to ponder something else.
So that's one part of it; I expect there'll be a certain number of people that are similar to me, and they'll be doing it in fits and starts.
Another thing I was noticing, back when you were doing the thread browsing: it's a little bit tricky to see the time. Normally I think of mail messages as having a date and time, and that's sort of there at the top-- that'll give you a hint of the year and the month if you need to look for it.
And it's also sort of there on the right, because on the message you're looking at you can actually see the date stamp, and maybe when it's quoting. But it's not there in the list. I don't know if that's something that would fit, or if it makes sense, but it's something I just noticed. From my way of thinking, these things have date and timestamps, and I don't really see one when I'm picking a mail.
It's not there until after I pick it.
- Interesting.
Yeah, that's a really good point.
Excuse me.
And that's specifically when you're looking at search results-- you've done a search and you're at that point.
When I'm browsing a thread, there's a sequence to it, and it happened over a certain range of time.
And, you know, sometimes messages will be back and forth.
There'll be a quick discussion.
Sometimes it'll be, you know, later that month, or a month later.
But as part of my mental model of a forum thread, it has that time information as part of it.
>> That's a really good point, because one of the limitations of browsing the forums through the HTML Mailman archive is that-- I believe I've got this right-- if you look at December posts, so we're looking at December 2022 right here, "nub with extra column," you'll only get the posts in that thread from December.
>> Yes, that is a bug.
It's a bug in Mailman.
>> Yeah, right.
>> It's extremely annoying, actually, that they didn't fix it.
>> This is all of the posts.
Because they may in fact span multiple months, I think it would be especially good to put actual dates in this display.
I think you're absolutely right about that.
Let me do that.
That's good.
Thank you.
I wasn't sure what I was going to do on this vacation.
We all have our little neuroses.
If you can fix that, that would be great.
It's been a long time.
It's fixed.
It's just not apparent.
So this is-- I don't know about this.
Yeah, this goes-- this was actually posted initially--
The initial post was in November of 2022.
And if you go down to the bottom and click on that.
Yeah.
There you go.
Went into December.
So I did fix it.
It's just not apparent that I fixed it from the display, from the initial display.
Cool.
All right.
I will add that.
Yeah, I take your point, Raul, about the burstiness of it.
I got to the point, I don't know, a month or so ago, where I've got a shortcut set up, Command Shift H, and I find myself now using it by default.
I came to a point where I reach for it before I reach for Google, you know, when I'm looking for something.
And I've got muscle memory for some things-- ancillary pages, the foreign conjunction-- so I can expand the browser and see the whole table.
So as a reference mechanism, I find that it's actually pretty usable now.
But I take your point about burstiness.
And that's fine, burstiness is great.
But again, I'm happy to keep this thing humming, but I'd like to see some indication of likelihood of usage in the community.
So I'd like to expand the group.
- Yeah, I think so far you'd kind of asked us not to advertise it widely, 'cause you didn't want--
- Oh, sure.
- So I think now it's time, we're changing that, right?
- Yes, now we're changing that.
And I'm gonna be kind of difficult about it because what I don't wanna do is announce it to the group at large and say, "Come on, I'm not prepared to deal "with what I hope would be an enormous volume of complaints.
"I'd like to get those complaints "from a small group of people, "a smaller group of people initially, "but larger than just us.
" I hope I'm making sense here.
I'm just not sure how to reach out, how or to whom to reach out.
- Oh, we can use the Discord group, for example.
There's a relatively small community of J users there.
- I don't know this group.
- You know, they're on the APL Farm Discord.
I'm wondering whether that's-- like, as a number, you're probably looking at 20 to 25 people.
No, I'd be happy to add another five at this point.
OK.
So in that case, I think if we went to something general like the APL Farm Discord, word would spread, and you'd find that it went somewhat viral just for people testing it out.
Viral is OK, ultimately.
But I'd like-- I'd like an interim period where I'm fielding bug reports from five people, not 50 people.
My experience there is that many people are quite shy about filing bug reports.
Really.
Yeah.
There's some people that are comfortable doing it.
Other people are more along the lines of, "Well, I'm not sure if I'm qualified to report on this, or if it's my mistake," you know, that kind of thing.
All right.
When you sent out the email this morning, my time, Chris, to the different people you know about Google Groups-- I'm thinking the people on that would be natural to include as testers, and just say, can you take a look at this?
It may not be of absolute use to you right now, but it'll basically be a bit more focused beta testing.
What do you think of that?
I mean, you could do that-- many in that group are really more interested in the J engine, development of the J engine.
But you could use those people, or other people, you know, from the forum.
Yeah.
But I tend to agree with the point that most people are shy about bug reports or improvements.
It's hard to get feedback from people.
That's interesting.
That's a shame.
Well, there's a few people I've seen on forums that are quite happy to give feedback.
And I think those would be your targets.
Maybe that's okay.
I know.
I myself, I'm not a great feedback person.
If I look at some software and I think it could be improved, I very rarely would write to the developer and say, you should really do this.
Well, there's a difference between a feature request on the one hand, and on the other hand an "it ate my machine" report.
It's really the latter that I'm more interested in at the moment.
All right, well, Bob, how about you and I work to come up with a list? And you've got more cred in the community than I do.
What I think I'd like to do is come up with an email between the two of us that you could send out to five people we'd identify.
And I take Chris's point about the difference between folks who are more interested in building and maintaining the engine on the one hand and people who are J programmers, quad J programmers on the other hand.
And so we should probably be sensitive to that as we come up with our list.
But let's take that offline and see if we can do that.
And I'll scan the archive.
By God, I've got to-- I was going to say, start looking at beta, because you'll see people reply on beta, and their responses give you an indication.
Really, really good point.
Yeah.
Yeah.
Yeah.
All right.
All right, good.
Thank you.
All right, I will be in touch.
Yes.
After your bike exploration.
Probably not after.
We're going for a few weeks, and I'm sure there'll be down time.
Yeah.
Oh, wow, that sounds like a-- that sounds wild.
Good for you.
It's going to be a lot of restaurant food.
I have mixed feelings about this.
We'll see how it goes.
I really enjoyed the restaurant food when I traveled off the West Coast of Ireland, but I like fish and chips.
Oh, the food is terrific.
But yeah, absolutely, absolutely.
But a steady diet of it for a long period can get to be a little oppressive.
So I don't know.
But you're riding a bike, right.
Yes.
Yeah.
Actually, that should help.
You're right about that.
That's the truth.
I've done long-distance riding, and you become an engine that gets fed three times a day.
Right.
Yeah, you start to live a lot lower on the Maslow's hierarchy.
Yeah, you're just shoveling it in, and you can't believe you're not putting on weight.
You're just pushing it right through.
It's, uh, yeah.
Anyway, with that, I don't really have anything else to share.
I know it's coming up on ten to one, Ed's time, but anybody else want to add anything at this point?
I'm good.
Okay, well, thanks a bunch.
Ed, you and I'll be in touch to develop a list of maybe five or six people that might give good quality feedback.
And an email to send to them, yes.
Although your original email-- which I think I forwarded to Eric and sent to Chris today-- is a good start, really good.
I think it's a good start.
Yeah, I think it might work out pretty well.