Is a high-level dynamic language a one-size-fits-all solution for the community's problems, or do you think languages are likely to fragment further?
One of the biggest holes that didn’t get filled in computing is the idea of “meta” and what can be done with it. The ARPA/PARC community was very into this, and a large part of the success of this community had to do with its sensitivity to meta and how it was used in both hardware and software.
“Good meta” means that you can take new paths without feeling huge burdens of legacy code and legacy ideas.
We did a new Smalltalk every two years at PARC – three quite different designs in eight years – and the meta in each previous system was used to build the next one. But when Smalltalk-80 came into the regular world of programming, it was treated as a programming language (which it was) rather than a meta-language (which it really was), and very little change happened thereafter.
Similarly, the hardware we built at PARC was very meta, but what Intel, Motorola, etc., were putting into commercial machines could hardly have been less meta. This made it very difficult to do certain important new things efficiently (and this is still the case).
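[Editor's note: a minimal sketch of the "meta" idea discussed above – a running system that can inspect and rebuild its own definitions, so a new design can be built from inside the old one. Python stands in for Smalltalk here; the class names are invented for illustration.]

```python
# "Meta": class definitions are themselves ordinary objects that the
# running program can examine and use to construct new definitions.

class Point:
    """The 'old system': a simple 2-D point."""
    def __init__(self, x, y):
        self.x, self.y = x, y

def init3d(self, x, y, z):
    # Reuse the old system's constructor while extending it.
    Point.__init__(self, x, y)
    self.z = z

# type(name, bases, namespace) is an ordinary call: the 'new system'
# (a 3-D point) is built at runtime out of the old one's parts.
Point3D = type("Point3D", (Point,), {"__init__": init3d})

p = Point3D(1, 2, 3)
print(p.x, p.y, p.z)  # prints: 1 2 3
```

The point of the sketch is not the toy classes but the move itself: because the class machinery is reachable from inside the language, a next design can be bootstrapped from the previous one rather than carried as frozen legacy.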
As well as Smalltalk-80, you're often associated with inventing a precursor to the iPad, the Dynabook. Do you feel the personal computer has reached the vision you had in 1972, and where do you see it heading in the future?
The Dynabook was/is a service idea embodied in several hardware ideas, with many criteria for the kinds of services it should provide to its users, especially children. It is continually surprising to me that the service conceptions haven’t been surpassed many times over by now, but quite the opposite has happened, partly because of the unholy embrace between most people’s difficulties with “new” and what marketeers in a consumer society try to do.
What are the hurdles to those leaps in personal computing technology and concepts? Are companies attempting to redefine existing concepts or are they simply innovating too slowly?
It’s largely about the enormous difference between “News” and “New” to human minds. Marketing people really want “News” (= a little difference to perk up attention, but on something completely understandable and incremental). This allows News to be told in a minute or two, yet still be interesting to humans. “New” means “invisible”, “not immediately comprehensible”, etc.
So “New” is often rejected outright, or is accepted only by denaturing it into “News”. For example, the big deal about computers is their programmability, and the big deal about that is “meta”.
For the public, the News made out of programmability is simply to simulate old media people are already familiar with, making them a little more convenient on some dimensions and often less convenient in ones they don’t care about (such as the poorer readability of text on a screen, especially for good readers).
For most computer people, the News that has been made out of New eliminates most meta from the way they go about designing and programming.
One way to look at this is that we are genetically much better set up to cope than to learn. So familiar-plus-pain is acceptable to most people.
You have signalled a key interest in developing for children and particularly education, something you have brought to fruition through your involvement in the One Laptop Per Child (OLPC) project, as well as Viewpoints Research Institute. What is your view on the use of computing for education?
I take “Education” in its large sense of helping people learn how to think in the best and strongest ways humans have invented. Many of these “best and strongest” ways have come out of the invention of the processes of science, and it is vital for them to be learned with the same priorities that we put on reading and writing.
When something is deemed so important that it should be learned by all (like reading) rather than just by those who are interested in it (like baseball), severe motivation problems enter that must be solved. One way is to have many more peers and adults showing great interest in the general ideas (this is a bit of a chicken-and-egg problem). Our society generally settles for the next few lower rungs on the ladder (like parents’ lip service about “you need to learn to read”, often delivered from the couch while watching TV).
When my research community was working on inventing personal computing and the Internet, we thought about all these things, and concluded that we could at least make curricula with hundreds of different entry points (in analogy to the San Francisco Exploratorium or the Whole Earth Catalog), and that once a thread was pulled on, it could supply enough personal motivation to help get started.
At this point, I still think that most people depend so much on the opinions of others about what they should be interested in that we have a pop-culture deadly embrace, which makes it very difficult for those who want to learn to even find out that really good stuff exists. This is a kind of “Gresham’s Law for Content”.