The killer app – how to make millions with ground-breaking software
Ian A Clark. Invited conference paper, plenary session: APL 2000 Berlin
(reproduced here because there isn't a free-of-charge copy I can locate on the web.)
Introduction
I’ve been a programmer for over 30 years, often at the leading edge, never in a classic IT shop. I’ve worked with several vendors’ mainframes, midis and micros, for big firms, small firms, central government, educational establishments, and for myself. In colleges, universities, laboratories and classrooms. In England, in Europe, in the USA. I’ve been involved in some total flops, but in one or two real successes too. I could never tell from the outset which it was going to be. The more I see of the software industry, the less I feel I know anything about it. Giving a talk like this is sticking my neck out.
But plus ça change... plus c’est la même chose. I think I’ve seen enough by now, and maybe my experience will interest somebody. We’ve all dreamed of writing the APL program which will make our fortunes. I’ve had to launder my examples, to protect the guilty along with the innocent. But where I name names, what I have to say is already in the public domain. I’ve omitted detailed references – these will have to await the book. Most of the principles I’ve discerned I’m going to present as the experience of two fictitious companies, Company A and Company B.
Company A is not just based on one particular company I’ve known. Nor is Company B. I bought a honeycomb once. It said on the back ‘Produce of more than one country’. I thought, what a marvellous thing – the dear little bees had actually cooperated in an International Honeycomb! A lesson to us all. But my fictitious companies really are a blend. Not an International Beehive. (Not even IBM was like that.)
The first of the killer apps
What makes a killer application, or ‘killer app’ – a ground-breaking software product earning millions in sales revenue? Is it feasible to write one single-handed, in one of the rapid development languages, even in APL, instead of a ‘he-man’ language like C/C++? The answer is yes, as young go-ahead companies continue to show. It happens to be the way many killer apps started life.
The first commercially sold spreadsheet, VisiCalc, clearly began life as a prototype written in Basic. Early Basics permitted only variable names consisting of a letter A-Z, optionally followed by a single digit, so no wonder cells were labelled A1, A2, A3, ... B1, B2, B3, ... etc. A virtue was made of necessity, solving a nasty problem: users would otherwise have had to supply their own name for every cell they wanted to use in a formula.
VisiCalc certainly wasn’t the first spreadsheet ever invented. In the early 1970s, visitors to IBM’s Advanced Systems Development Division (ASDD) laboratory at Mohansic, NY were shown, in conditions of great secrecy, a hardware lash-up with a touch-panel taped over the front of a graphics screen. This contraption ran a prototype called the Intelligent Grid. You touched a cell in the grid and typed in a value or a formula. As soon as any cell was changed, the grid was recalculated. It was called ‘intelligent’ because it knew in which order to recalculate the cells, no matter which one you altered. It was going to change the world, they said. It was to be the flagship of IBM’s fantastic new Future Series (FS) computers, but at the time, looking at this crazy prototype, it was hard to believe. I don’t know what language it was written in, but it was almost certainly APL, because that was what Mohansic were using to prototype the FS software.
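What that ‘intelligence’ amounts to is dependency-ordered recalculation. Here is a minimal sketch – in Python, purely for illustration; the cell model is my own invention, not the Mohansic design – in which each cell records which cells its formula reads, and a topological sort fixes the recalculation order, whichever cell you alter:

    from graphlib import TopologicalSorter   # Python 3.9+

    # Hypothetical cell model: name -> (formula, cells-the-formula-reads).
    # A constant is just a formula that ignores the other cell values.
    cells = {
        'A1': (lambda v: 10, []),
        'A2': (lambda v: 32, []),
        'B1': (lambda v: v['A1'] + v['A2'], ['A1', 'A2']),   # =A1+A2
        'C1': (lambda v: 2 * v['B1'], ['B1']),               # =2*B1
    }

    def recalculate(cells):
        # Each cell 'depends on' the cells it reads, so a topological
        # sort yields an order in which every cell is computed only
        # after its inputs -- the 'intelligent' recalculation order.
        graph = {name: deps for name, (_, deps) in cells.items()}
        values = {}
        for name in TopologicalSorter(graph).static_order():
            formula, _ = cells[name]
            values[name] = formula(values)
        return values

    print(recalculate(cells))   # {'A1': 10, 'A2': 32, 'B1': 42, 'C1': 84} (order may vary)

Change any one cell and the same call safely recomputes everything downstream of it.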
The ‘fantastic’ FS project stayed that way. It was cancelled in 1974, when the time it took for the prototype to IPL (‘initial program load’) exceeded the psychological 24-hour threshold. At that time, mainframes were IPL-ed daily. Now I lean on my elbow and watch Windows starting up, and I remember how DOS started up immediately, and I imagine this is going to be the death of Windows too. IBM wrote off the R&D investment in FS, which at one time embraced half its total R&D budget, and its stock tumbled for the first time in the history of the company. But as a positive side-effect, a lot of mean software came to market in the following years with most of its initial R&D written off, which lowered the price considerably. VM, for example. In a historical context the FS project was perhaps one of the two biggest nurseries of software innovation ever, the other of course being Xerox PARC, without which there would have been no Apple Macintosh, nor MS Windows.
The Intelligent Grid sank into obscurity, along with some other interesting prototypes. All of which, incidentally, have popped to the surface again, in some new guise. IBMers used to joke about ‘Project SCAN – Same Crap Another Name’. One thing seems to be common to all killer apps, though. Like helminthic parasites, they have long, intricate larval stages.
Six or seven years after FS and the Intelligent Grid, there appeared VisiCalc, running on the Apple II computer, one of a number of ‘micro’ computers hitting the market at that time. It was estimated that half of all Apple IIs sold were bought primarily to run VisiCalc. The Commodore PET (absurdly named for the French market) followed hot on the heels of the Apple II. Its version of VisiCalc came with a superb innovation – you could type strings in both upper and lower case! The PET itself came with a hard-wired (mixed-case) Basic interpreter written by a small company called Microsoft.
Around that time IBM discovered that it was one of the world’s major purchasers of non-IBM desktop computers, in spite of having itself designed a clutch of desktop prototypes and short-run specialised workstations (e.g. the 5100 series). A case could be made simply for servicing its own internal market. So, in a major departure for IBM, it set to and built a desktop computer to look just like everyone else’s! It even used ASCII instead of EBCDIC. At Boca Raton, home of the PC as it came to be known, they boasted that all its components would be externally sourced – only the badge would be made by IBM. Old IBMers shook their heads and said it would kill the company (it did, nearly). Bill Gates, in his book The Road Ahead, is very rude about the PC – ‘everyone could see that this machine had problems!’ But really it couldn’t fail. IBM gave away PCs to its own staff and soon had half the company writing applications for it in their spare time. Apple was to do something similar with its big educational giveaway of the early Macintosh in 1985 or thereabouts.
But to succeed in the world outside, IBM knew the PC needed to be released with three essential items of software:
- a disk operating system, to shift files around on the new-style 5¼-inch floppy disks
- a Basic interpreter
- VisiCalc.
In VisiCalc the concept of a cross-platform third-party application had burst upon the world. An application that actually created the market for the machines (plural) it ran on. An app that wasn’t just the stooge of the vendor, that hadn’t simply grown out of the vendor’s operating system, like a language processor or a sort utility. Suddenly the tail was wagging the dog. The Killer App was born.
The acceptance of third-party software as a saleable product
When talking about third party software products, we ignore operating systems, or what were once seen as just parts of the operating system, like language processors and sort utilities. There is a long history of third parties trying to sell their own improved versions of these, bypassing the hardware vendor (mainly IBM). This goes back even before 1969, a watershed in the history of the software industry. In 1967 Larry Welke founded the International Computer Programs Directory, listing programs for sale. Hitherto IBM simply collected customers’ programs and gave them away free of charge to other customers who might find them useful, without any guarantee of support. Much in the way a magazine might publish readers’ letters. Besides compilers for fancy new languages, there was a lot of interest in sort programs which ran faster than the sort utility supplied by IBM as part of its operating systems, crucial to the reel-to-reel batch applications standard in those days. So maybe the ancestor of the killer app was actually one of these go-faster sorts, for which people were prepared to pay good money.
But by 1980 the concept of a portable, platform-independent operating system for the growing stream of new microcomputers was well established. Firms like Altair and Zilog led the way, but this was the day of the roll-your-own computer, and the lone builder didn’t want to go writing his (or her) own operating system, or ‘monitor’ program as it was less grandly called in those days. IBM’s first choice of disk operating system for its new PC had been CP/M. It was what everyone else was using, after all. But after being stood up by the designer of CP/M, who apparently had an aversion to people who wore suits, IBM turned to a little-known company which had written a Basic interpreter for the Commodore PET, serving it as a kind of hard-wired operating system, just as Basic (or APL too on some models, selectable by a switch) had done for the IBM 5100 benchtop computer. That firm was Microsoft.
IBM commissioned its own operating system with features resembling both CP/M and Unix, but with deliberate differences (backslash instead of slash in pathnames), and crippled the command macro language to prevent it being used, VM EXEC-style, as a serious development language in its own right (it lacked arithmetic). The command language was never properly christened, becoming known as DOS Batch Language. Both firms retained equal rights over the product, and each published it separately: IBM’s version was called PC DOS, Microsoft’s MS DOS. The two products parted company and were developed and released independently.
Microsoft rose from nowhere on the back of its DOS commission from IBM. But its core business was language interpreters, especially Basic. Fortune Magazine, Nov 22 1999, said ‘before Gates and Allen started Microsoft, pure software companies did not exist.’ This is simply untrue. What about Cullinane Corp (1968)? What about Informatics (1962), whose Mark IV for the 360 computer – written by John Postley in 1967 – was the first million-dollar software product? And a host of others.
In one sense though, Fortune is right. Microsoft demonstrated to the world how to make money – big money – out of software products in general: not just out of one lucky product, but by generating them sustainably, just the way IBM made money out of hardware products. IBM did that by an enormous investment in R&D, as big as the gross domestic product of a smallish nation, and aimed to bury its own products within 5 years, by which time knock-off copies would have taken away their profitability.
As an aside, let’s notice the difference in lifespan between hardware and software: 4-5 years at most for a hardware product; 30-40 years and upwards for a software product, with a juvenile stage of at least 5-10 years. Basic (1963). APL (1964). MS Word (1987). MS Excel likewise, a smash-together of Chart and Multiplan (1983), the latter an early VisiCalc competitor.
Word and Excel were themselves killer apps in more ways than one. Few people seem to know that both were initially developed on the Macintosh, and that Windows came about because Microsoft wanted to port them to the PC, which was cheaper and more readily available than the Macintosh (plagued with production problems in its early years). Bill Gates, never a fan of the PC as we’ve seen, was reported as saying that putting a Mac-like GUI on the PC was like ‘putting lipstick on a chicken’. But in the end Bill saw which side his bread was buttered on.
Can an APL programmer ever write a killer app?
For a recent example of a killer app written in APL, see Adaytum Planning. Adaytum Software moved in 6 years from a small start-up company employing two developers in 1994 to a $10 million turnover company on the back of an APL product invented by George Kunzle [all this information is gleaned from their website: www.adaytum.com]. For those of us who might not know what APL is, it can be described as a rapid development language with the provenance of Visual Basic and the productivity of a rabbit warren, ideally suited to the lone programmer.
A Japanese study in the 1970s concluded that ‘APL enables [lone] programmers to undertake projects far beyond their competence’. That’s a topic for meditation. If you do it successfully, are you competent, or aren’t you? Perhaps the people who subsequently have to maintain your code should have a say in that. A killer app can be a two-edged sword. We’ll come to the First and Second Laws of Software Development shortly.
Now I’ve been a programmer since 1968. When I started, the programming profession was already 25 years old, if we leave out Lady Lovelace and start with Alan Turing at Bletchley Park. I started my programming career with Fortran, beefed up with 1130 Assembler, and I remember when it was said ‘real men don’t eat quiche’. Real men didn’t program in Algol, Pascal, or other fairy languages either. They used Fortran. But I went on and I learnt Algol, Pascal, PL/I, APL, and a score of other languages, many of which remained the private concern of a very small group of consenting adults. Who has heard of IS/1 nowadays, or MP/3 (not the audio standard)? Who uses SNOBOL? Who builds weird and wonderful implementation languages using C preprocessor macros? (That’s how Bjarne Stroustrup prototyped C++, and how Roger Hui prototyped J, I think I’m right in saying, but the skill, once widespread, seems to be sinking into obscurity.)
But one of the things I noticed is that totally original systems were written by totally original people, who tended not to know Fortran and Assembler but got their prototype going with some macro lash-up, or some fancy 4GL that they discovered they could do things with. APL fell into that category for a whole generation of IBM salesmen, planners and others outside the charmed circle of data-processing professionals. (George Kunzle, the creator of FREGI and KPS, the forerunners of Adaytum Planning, was an IBM planner.) So I’ve come to the conclusion that if ‘real men’ don’t program in anything but C/C++, then ‘real men’ don’t write original software. I wouldn’t have been able to say such a thing 10 years ago, but attitudes have liberalised.
Case study: two mythical companies, A and B
Let’s discuss the ingredients for success by tracing the careers of two fictitious companies. Neither is based on any real company living or dead, but each is a sort of Frankenstein, stitched together from things I’ve seen happen at least twice. I’ll call these companies A and B. A is the good guy, who does everything right according to the book, but along comes B and kicks sand in his face. Company A aims for a technically sound product. By way of contrast company B aims at the most lucrative market to hand. So company A ships a reliable product with minimal development and maintenance costs, but doesn’t sell many copies. Company B sells hundreds of (very expensive) copies and makes millions, but has a rough ride developing and maintaining its product, eventually spending millions. Whether you can call B a success or not depends on the difference of two very large numbers. Have you got the nerve to play this game?
Where do prototypes come from?
This sounds like a child’s facts-of-life question. But it isn’t. It’s more like asking: where did my goldfish come from? Now a goldfish is the sole survivor of many hundreds of small fry which have been eating each other as they grow up. People who haven’t been in the development business think they can break into it with just one idea. But it’s necessary to have a stream of ideas. Out of a hundred, maybe just one will survive. Usually by finding a highly specialised niche and growing there quietly for years. Then all of a sudden it breaks out.
Now this sounds dreadfully discouraging if you’re an entrepreneur seeking next year’s killer app to put money into. Are you gambling on odds of a hundred-to-one against? Not necessarily. It’s possible to improve the odds in your favour. All you’ve got to do is to recognise when an idea has a 1% chance of success and when it has 0%. Then you go looking for prototypes which have been maturing quietly in their niche for several years and are all set to break out.
Company A doesn’t do that. What it tries to do is formulate in the abstract the product for which it perceives the market to be ripe, then go ahead and design it. Now this might work for toothpaste, for washing powder, even for motorcars, where there really is little variation possible on the basic product. However, software has such scope for variation that I wonder if this textbook prescription for product design really has a place in the IT world.
Company B is likewise sales-driven, which (drawing on the experience of IBM and, perversely, the satellite phone company Iridium) is the thing to be. But it doesn’t attempt to dream up a totally novel product. It got into business by making small incremental improvements to a straightforward product for which others had already defined a market. But all the time it keeps its eye, not on an abstract notion of the market, but on an intimate knowledge of its clientele. In Company B’s case this was a certain sort of professional in the finance industry. I’ll explain later why this is a clever choice of clientele; for the moment think of it simply as a homogeneous group. The managing director is himself the chief salesman, and as a result gets to talk to members of this group on a daily basis until he knows their needs intimately. Then suddenly one day he comes across a novel niche product, being sold without much success by its inventor. Instantly he knows just how he can sell this product to his pet customers. So he makes a deal with the inventor over a liquid lunch. And a future killer app is baptised.
This doesn’t answer the original question: where do prototypes come from? Who or what lays the spawn from which all the small fry hatch? In the 1980s, IBM did an informal study of how its then-strategic products had originated. It concluded that none of them came from its official product nomination process. That process was incapable of throwing up a wholly new product, because each nomination needed to show exactly how its proposal improved on what was out there already. So, by definition, the official process could only improve on existing products, not come up with wholly new ones. By contrast, all the software products that were currently earning it big money were either field-developed to the requirements of a major customer (one of its biggest earners, CICS, was originally commissioned by Motorola), or written by some lone programmer, typically working overtime on a slush fund sequestered by his manager.
So to find and launch a killer app, you either need to know these lone programmers, or be one of them yourself. But if you’re the latter, you’ve got to be very strong-minded and recognise when one of your colleagues has a far better idea than your own. In short, you’ve got to be daring, to go with something out of the ordinary, but also to be callous, ready to slit the throat of a loser, even if it’s your very own idea. Few people combine these irreconcilables in themselves, so it’s no accident that killer apps are the brain children of productive pairs of individuals or small teams.
The structure of the team
What can we say about the structure of the successful killer app team? Pursuing the marriage analogy, the brain-child isn’t typically by salesman out of programmer. It’s usual for the programmer to come into the partnership with the child already, but of course needing a good wash.
The most productive team seems to consist of:
- a programmer: a guru, able to keep his head when all around are losing theirs, etc.
- a salesman: a front man, able to mix with the high-and-mighty on (seemingly) equal terms
- an accountant: a bean-counter, as good at scraping the bottom of the barrel and juggling creditors as at cooking the big fish when it lands.
In the 18th century it was said that the most successful partnership between journeymen was a printer and a tailor. The printer earned enough to feed them both, the tailor kept them both looking presentable. But today’s financial demands are so complex as to need a full-time individual minding the cash flow, debtors, creditors and bureaucrats. Especially the bureaucrats – the tax officials and regulatory agencies. Nobody who draws a regular salary in a 9-to-5 job all their life has the slightest sympathy for the needs of the folk who actually create the wealth to pay them. Governments don’t assist much in the making of killer apps.
Finding the money
Nothing happens unless somebody puts down money. Our two companies each had to obtain start-up capital. Now you might think that there is no such thing as a bad source of money if it pays up. That’s quite wrong. Governments pay up money, lots of money, for ideas they want to promote. But government money can be so tied up with provisions that it leads you astray from your true goals. Government initiatives are designed by people who have a clear, if unrealistic, idea of what they want to fund. If it’s too clear and too unrealistic then you’re doomed. Even if it’s realistic, then you’re working for them, not yourself, and you might as well be one of their employees, with a 9-to-5 job and a pension at the end of it all.
The easiest start-up money I ever got was a grant from a private bequest. I simply had to be resident in a certain village, have a child entering higher education and write a single letter to the trustee, and the cheque arrived by return of post. The less your source of start-up capital resembles that one in simplicity, the more grief it will give you.
Company A applies for government money earmarked for re-developing a run-down region. The administrators have clear ideas about what they want to support and this has a significant impact on the freedom to run the business. So if in the course of developing the product, Company A sees a better opportunity with a slightly adapted product, or that it is easier to recruit the right staff if it moves elsewhere, then just too bad. It has to go with the original proposal, or lose the grant.
Company B looks for so-called ‘guardian angels’. These are entrepreneurs with no preconceptions about what they want to support, except that they don’t want to support losers. (To be fair, no preconceptions beyond general ones – ethics, business sector, fellow-countrymen or co-religionists or both – which you either satisfy or you don’t.) Their representatives are invited onto the board (they expect that anyway). There they give the most useful, if general, advice about how to approach banks, not just as sources of funds but as customers for the product. But they leave the development of the product itself strictly alone. Recognising they’ve found ‘the goose which lays the golden egg’, they are wise enough not to pretend to be either vets or poultry farmers.
There is start-up capital (a few thousand dollars, the resources of an ordinary individual, and barely enough to support a one-man company) and there is venture capital, running to millions. This is the big time. A venture capitalist, whether a rich individual or a firm, is interested in backing start-up ventures for one purpose only – the possibility of their becoming a public company, with shares quoted on the stock market. This isn’t directly bound up with the success of the product itself, but rather with the viability of the company. But when a company goes public, its existing shareholders may see their shares increase in value tenfold, a hundredfold, a thousandfold.
If you want venture capital, you’ve got to be prepared to go down this route. Richard Branson went down it with Virgin, then discovered that he had relinquished too much control and actually bought back his own publicly quoted shares to make the company private again.
Your chosen market
Selling to the right market makes all the difference to the success or failure of your product. To say that this is nothing to do with the product itself is only partly true. Once upon a time, it was possible to add a few function libraries to Pascal and sell it either to teachers as a specialised courseware programming language, or to financiers as a specialised financial programming language. These days have gone. The market has become segmented. However, if you are equally competent to develop a product either for teachers or for financiers, and it’s money you’re after, then remember the old saying: ‘work in the kitchen and you’ll always get fed’.
Both Company A and Company B think they understand their respective markets. Company A designs a product for use by teachers. Company B designs one for use by financiers. Now teachers are some of the hardest people to sell to. Moreover, UK government legislation is in the process of changing the nature of the job entirely. Teachers, trying to keep up with the flow of legislation, totally lose interest in learning about computers in their spare time. In the course of developing the product (18 months), its market disappears in its original form.
By contrast, the customers of Company B don’t change much during the time it takes to get the product to market. It’s doubtful if they’ve changed much from the time of quill pens!
Moral: don’t aim at a changing market unless you’re able to bring out a product very fast indeed. Then almost certainly you will already be selling in a big way to your target customers, and have the tools to customise something saleable to them within a matter of days.
Containing your development and maintenance costs
Company A believed, like Schumacher, that Small is Beautiful. Its lone programmer set out to adhere to an elegant architecture, with heavy use of macros, functions, operators, and all the other engineering aids to code re-use, so that he wouldn’t have to go writing two independent bits of code which did nearly the same thing. Or when he wanted to alter a piece of code, he wouldn’t have to go searching by hand through all the code for things which needed to be altered in step.
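To make that single-point-of-change principle concrete, here is a toy sketch of my own (in Python; it has nothing to do with Company A’s actual code): two routines share one helper, so when the shared rule changes, it changes in exactly one place.

    # Illustrative only: one shared helper instead of two near-duplicates.
    # If the money-formatting rule ever changes, only format_amount needs
    # editing -- there is nothing to go searching for 'by hand'.

    def format_amount(pence):
        """The single point of change for the formatting rule."""
        return '£{:,.2f}'.format(pence / 100)

    def invoice_line(desc, pence):
        return '{:<20} {}'.format(desc, format_amount(pence))

    def statement_total(pence):
        return 'TOTAL DUE: ' + format_amount(pence)

    print(invoice_line('Widgets', 123456))   # Widgets              £1,234.56
    print(statement_total(999999))           # TOTAL DUE: £9,999.99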
Did he succeed? Well no, not entirely. He didn’t anticipate all the updates he’d be needing to make. And the updating effort it saved him was counterbalanced by the time he had to spend on making sure everything was consistent in the first place. His efforts probably paid off mostly in being able to make last-minute changes reliably. He generally met his target dates, even if problems cropped up at the last minute. He burned a lot of midnight oil, but the company never needed to employ an additional programmer. It’s a critical phase, going from one programmer to two. But had he needed to take on additional assistance, it would have been an easy matter to explain his code to the newcomer. He could talk in terms of rules and principles, and the new programmer would be able to rely on them holding good throughout the code.
The disadvantages were subtle. A’s techniques were ideal for efficiently maintaining a product that had found its market, or its niche. But Company A badly needed to position its product, whose scope was potentially too wide. A lot more time should have been spent on showing demos to sample customers – maybe even screen simulations rather than anything which actually worked, let alone worked to a high standard. The painstaking coding process cost Company A a lot in lost agility. It contributed to its failure to keep a finger on the pulse of the target audience, and so detect in good time that its original market was slowly changing out of all recognition.
Company B developed their prototype (in APL) crudely and quickly. Changes to the code were governed by:
- What made the customer happy?
- What made for a consistent architecture?
...strictly in that order. First things first, and second things last, or not at all. And so the product was launched, working just well enough to support the salesman’s demo. Orders came flowing in, and with them technical problems. Their technical support team was overwhelmed.
You see, they had shipped, not a product, but a prototype, which threatened to collapse under the weight of the extensive maintenance they were forced to undertake in a hurry. There would have been little problem had their code been of Company A’s quality. But would they then have come up with such a well-targeted product in the first place?
Even so, they got away with it. For two reasons:
- The shelfware market
- The junkware market.
Ask anyone over sixty how Mr Colman (the founder of ‘Colman’s Mustard’) made his fortune. They will tell you: not from the mustard people ate, but from what they left on the side of the plate. There is a large market for shelfware – software that gets bought but never used, or only rarely, so it remains on the shelf. Such software only needs to work well enough to avoid being sent back within the guarantee period.
An IT product is mostly bought by people who are never going to use it themselves, particularly the lucrative bulk orders from corporate customers. They buy it against a checklist of features, calculating cost-benefit ratios and adding them up. Even if their choice of product drives the poor end-users hairless, they may never actually get to hear of it. If they do, they may even suppress the complaints, for fear of being seen to have foisted a dud product on the company. Or an individual quietly leaves, and the customer stops using the product. But never a word of complaint filters back to the vendor. This is the junkware market.
However there is a limit to the length of time you can get away with servicing just the shelfware and junkware markets. A year or two at most. No, it doesn’t hurt sales revenues all that much for the first release, but one or two dissatisfied customers can blacken your name and a single journalist can cast aspersions in a trade journal over your product’s fitness for purpose. So you work your programmers to a frazzle trying to shore up the code and keep the awkward customers happy, or at least keep them one step away from serving a writ.
Then someone comes along with a product resembling yours, but which actually works. Or it’s actually user-friendly (the two overlap). All your marketing expenditure has simply gone into creating a market for your competitor. If you’re smart you’ll buy up the competitor and sell his product to replace yours. If the competitor is much larger than you, you’ll sell out to him. At the very least he’ll buy your customers.
Company B did the honourable thing and called in ten programmers for each one they had before, to rewrite the product to be maintainable. Why ten, when five might do? Well, whilst a programmer is maintaining code, he or she can’t be developing the follow-on product. And once you’re into rewriting your code ‘properly’, you lose the flexibility to tailor your product to your sample customers – unless by now you’re big enough to employ several teams of programmers.
There’s inevitably a delay whilst all the new people get trained. Six months is not uncommon before a new programmer learns enough about the product to begin to contribute usefully. Let us say Company B started with two programmers, and employed an additional 20. That’s 10 man-years to learn the ropes, plus another 10-to-20 man-years to complete the task. At $50k to $100k per programmer (including accommodation and support) it’s not hard to see why you need to borrow one or two million bucks to rewrite a program originally created by just two programmers. You must be making your millions already before you contemplate it.
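Spelling that arithmetic out, using the figures just assumed:
- training: 20 newcomers × 6 months each = 10 man-years
- rewriting: a further 10 to 20 man-years
- cost: 20 to 30 man-years at $50k–$100k per man-year – a million dollars and upwards.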
Compare this to Company A, where the programmer reckoned he could explain the architecture to a newcomer in a day. The latter could be effective immediately. From this viewpoint, the 6 months or so which 20 programmers needed to learn the ropes translates into $500,000 simply wasted by an inconsistent code architecture. In spite of its apparent success in the marketplace, Company B might just end up snatching defeat from the jaws of victory.
Agility or maintainability?
Here is a dilemma. How do you save millions on your development costs by applying good software engineering practice, like Company A, yet enter the market with an agile product which rapidly earns millions, like Company B?
To be frank, I don’t know. When I do, I’ll be in a position to offer you all a job.
But when I look at some of the great killer apps of the past, I see products which relied heavily on some novel platform, advanced for its time, the cost of development borne by some giant vendor. The app developers needed to add very little in the way of their own code. VisiCalc exploited the idiosyncrasies of Basic, and was really quite a simple bit of code by today’s standards. Myst, an all-time classic game for the Apple Macintosh, in which the player had to explore a mysterious deserted island, was a relatively simple application of HyperCard, the latter developed at great expense by Apple itself. Myst relied on a good storyline and excellent graphics. As with VisiCalc, most of the expenditure seemed to go into the well-produced manuals, which spoon-fed the new user and had clearly been debugged in live trials with representative subjects.
We seem to be talking about sparrows on eagles’ backs. Within a year or two, bigger birds had come along and eaten them. The sort of maintenance problems experienced by Company B were either sidestepped, or the original vendors sold out and pocketed the loot inside 18 months. See below, when we come to talk about the Internet.
Moral: find your eagle, newly-hatched if possible. A fresh new platform, standing in dire need of your killer app. Linux? WAP phones? Or there’s a global network of communication satellites up in the sky called Iridium, badly in need of something useful to do.
Usability versus utility
By utility or usefulness I mean: do people find the product useful? By usability or user-friendliness I mean: do users find the product easy to use, convenient and safe? Good usability gives a positive feel to a product, but it’s hard to define a given product’s usability per se except in the negative sense: is it lacking in those features which make for bad usability? Of course, good or bad features work together synergistically – what made the Titanic so user-hostile was not only that there were not enough lifeboats for all the passengers, but also that the watertight bulkheads did not extend to the top deck.
There’s a paradoxical relationship between quality, usability and utility. Its laws are so resilient that time and again projects founder on them like ships on submerged rocks. To defy them is almost like trying to defy the law of gravity. They deserve to be formulated as universal laws in their own right.
The first law of software development
The quality (Q) of a given piece of code is inversely proportional to its utility (T), h being a constant:
Q = h/T ....................(1)
A useful piece of code has generally been written in a hurry to meet an urgent need. So it is both good and bad – the concept is good, but the fabric is bad, often intended to be replaced at the earliest opportunity. To some people that signifies that you ought to rewrite it in a ‘proper’ language, like C++. But others point out that no ground-breaking app has ever been written in a ‘proper’ language, to ‘proper’ software engineering standards. People who write in languages like C++ are invariably rewriting something that exists already and is in widespread use. Because only then is the money available for what is a resource-hungry activity. It is typically performed only to keep abreast of new features of the platform, e.g. the next release of Windows and all the tricky new controls it comes with.
Henry Ford called accountancy ‘an extension of the banking conspiracy’. If so, then maybe Windows is ‘an extension of the programming conspiracy’?
The second law of software development
The usability (U) of a given piece of code is bounded by a ceiling proportional to the product of its quality (Q) and utility (T), k being a constant:
U < kQT ....................(2)
Corollary
A piece of code, rewritten to the highest quality standards, to be of the greatest possible use, never becomes any more user-friendly.
The second law and its important corollary are counter-intuitive. Although it stands to reason that usability (U) should be directly related to both utility (T) and quality (Q) via the expression QT, it is common experience that trying to improve T and/or Q never seems to improve U. The first law shows why. Substituting (1) in (2), the T’s cancel:
U < kQT = k(h/T)T = kh ....................(3)
which implies that usability (U) is limited by a ceiling (the constant kh) independent of both Q and T. It depends rather on parameters determined right at the outset, and no amount of conventional development effort is going to improve it in the slightest.
Usability is, strictly speaking, an aspect of quality. But what most people understand as code quality doesn’t necessarily result in a usable product. A product which is impossible to use obviously wouldn’t be at all useful. But often there’s a subtle threshold. It’s not easy to play table tennis, but a lot of people manage it. However if you try and sell a variation with two balls in play at the same time, your market will disappear. People actually like certain activities to be hard, provided they can do them well (but not too many other people can). However there’s a fine line between ‘hard’ and ‘too hard’, which only the creators of computer games really seem to understand.
The converse of this is that a popular yet hard-to-use software product (like CorelDraw) is really a sort of computer game to its skilled users. I don’t want to suggest for a moment that this is true of APL. But I have met programmers who prefer to code in such a way as to afford the maximum fun debugging the result, rather than write in a style which is likely to work first time, and won’t break with changes made elsewhere. Because that would be just too boring for words.
Perhaps the second law is best summed up as: ‘good usability doesn’t just happen’. It has to be a design objective right from the word ‘go’. Whether your original prototype was usable or not, all further development conspires to make it less so, not more. You could see this happening clearly with the Apple Lisa/Macintosh, in its various incarnations since 1983, not to mention Excel, which has come a long way from the original Multiplan, one of the two major VisiCalc competitors (the other was Lotus 1-2-3). But in the process, the limpid clarity of VisiCalc has been submerged in a byzantine plethora of menus and toolbars and an unguessable behaviour pattern of cursors and multiple selected cells.
The Internet and the World Wide Web
I turn reluctantly to this topic. Any discussion about how to make a million with software is going to be overshadowed by the Web for the next year or so. But I don’t believe that it brings any novel principles to bear. The original Netscape web browser had all the features of a killer app, but it was too easy to copy, and Microsoft did. Web millionaires are being created weekly, but very few of their businesses are likely to be sustainable. They all seem to be selling out, pocketing the money and scramming. As always, the recipe for making millions out of software seems to be maximum impact for minimum effort in the shortest term.
What of the future of the Web itself? Remember that the telephone came first, a splendid tool for peer-to-peer communication among free people. But the same technology gave rise to the loudspeaker, the instrument par excellence of centralised control. The World Wide Web is such an open system, so apparently all-embracing, the alleged cure for so many commercial ills, that it resembles cocaine in the late nineteenth century, when it was hailed as a wonder drug and freely available over the counter. Like all wonder drugs it will disappear from open sale in a year or two, to be replaced with expensive, closed, secretive, burdensome, intrusive networks, designed to enslave, not liberate. There are strong vested interests working for this, and only a handful of hippies opposing them. The prevalence of trojans and viruses, and the ease of flouting national laws, will be the reasons given for its suppression.
In a year or two we may look back on the Web and decide that its major defining ‘killer app’ was the Love Bug – which was simply a re-hash of the famous Christmas Card EXEC which paralysed IBM’s private VNET network in 1987.
As Santayana said: ‘Those who cannot remember the past are condemned to repeat it.’
Summary
Source of ideas. Select from hundreds, not just one. This means other people’s, not just your own. Infant mortality of new ideas is more than 99%. Nobody knows which one is going to live. But learn how to spot a doomed one, and know when to slit its throat if you have adopted it. Even big corporations with loads of experience get it wrong.
Choice of prototype language. Pick one that speaks to you. Let nobody tell you that there are ‘better’ languages. Can you make it do what you want, easily and quickly? Can you make changes rapidly and reliably? The acid test: can you do it while locked in conversation during a demo?
Choice of language for maintaining the successful product. You’ll be relying on the skills of the programmers you employ. What went for you goes for them too. If your blue-eyed boy shines in APL then that’s the language you’ll be going with. If you can’t get APLers, then who can you get as your chief programmer? His language will become the product’s language.
Funding. Start small, but not too small. Remember that if 3% of your prospects convert (typical for mail-order), you’ll need to approach 100 to expect to win just 3 customers. You need sufficient investment to do this, $100,000 rather than $10,000. Maybe your 1st-year income will barely be 50% of your expenditure. This is normal, but your backers must understand this and not panic. Once you look like a winner you’ll attract speculators.
Choice of customer. Aim at people who can afford your product, and have the power to buy it for themselves. A class of people whose needs don’t vary too much, between themselves and especially over time. P.G. Wodehouse said ‘Chumps make the best husbands.’ If so, then humdrum financial types make the best customers. Avoid those for whom the latest IT fashion is all-important, and avoid the put-upon employees of governments and large grey enterprises. Next year they’ll all be dancing to a different tune. Your product may no longer be relevant to them. If it ever was.
Maintenance and development. Adopt a consistent design methodology from the outset (they all work) and stick with it. The holes in your resource bucket are your newcomers – a ramshackle design wastes their time and everybody else’s whilst they learn about it. Once you’re successful, then your technical troubles really begin. Most will, alas, be traceable back to controversial design decisions in the prototype. But it’s hard enough coming up with a winning idea, let alone knowing in advance what you should have done differently. Remember the Hunchback of Notre Dame wouldn’t have amounted to much without his hunched back.
Ease-of-use. It’s like life assurance, or a healthy lifestyle. It doesn’t just happen – you have to work at it constantly. When you’re young and feeling well it doesn’t seem to matter. When you aren’t, it’s too late.
The Internet. A new, probably transient phenomenon. But nothing new in principle, where making money out of software is concerned. In every sense, it’s just got faster.
Endnote
Know when to cross your bridges and when to burn them. And maybe you’ll be one of the lucky 1%.
Most of the above (written in 2000) holds good today, except that my predictions of how the Internet would develop look unduly pessimistic in retrospect. This is thanks largely to one or two major players that have since emerged, notably Wikipedia, Amazon and Google. Their hallmark has been iconoclasm, visionary boldness and staggering altruism, plus a hard-line attitude to pleasing the user. Without them my sour predictions might well have come true. One wonders what will happen, though, when these firms are no longer led by their visionary founders. Will their successors adopt a slightly modified guiding principle: "Do Be Evil"?
Contributed by Ian Clark