Are there any aspects of the Smalltalk-80 language that you don't feel were fully developed or completed during your involvement?
Quite a bit of the control domain was unrealized, even with respect to the original plans. And also, the more general notions of what it was you were doing when you were programming did not get fleshed out as originally planned. My original conception of Smalltalk aimed to be a felicitous combination of a number of language ideas that I thought would be hugely powerful for both children and adults.
Besides the object ideas, I wanted the simplicity of LOGO, the higher levels of expression from Carl Hewitt’s PLANNER, the extensibility of Dave Fisher’s CDL, and ideas from my earlier FLEX language.
While this was happening, the famous “bet” led to a much simpler, more LISP-like approach to “everything”, which took a few weeks to invent and Dan Ingalls a month to implement. This provided a very useful working system just at the time that the Alto started working. We got into making a lot of personal computing ideas work using this system and never went back to some of the (really good) ideas for the early Smalltalk.
This was good in many ways, but did not get to where I thought programming should go at that time (or today). Doug Lenat at Stanford in the mid to late 70s did a number of really interesting systems that had much more of the character of “future programming”.
What contribution do you feel you made to successive programming languages like Objective-C and C++?
The progression from the first Smalltalk to the later Smalltalks was towards both efficiency and improved programming tools, not better expression. And I would term both Objective-C and especially C++ as less object-oriented than any of the Smalltalks, and considerably less expressive, less safe, and less amenable to making small compact systems.
C++ was explicitly not to be like Smalltalk, but to be like Simula. Objective-C tried to be more like Smalltalk in several important ways.
However, I am no big fan of Smalltalk either, even though it compares very favourably with most programming systems today (I don’t like any of them, and I don’t think any of them are suitable for the real programming problems of today, whether for systems or for end-users).
How about computer programming as a discipline?
To me, one of the nice things about the semantics of real objects is that they are “real computers all the way down (RCATWD)” – this always retains the full ability to represent anything. The old way quickly gets to two things that aren’t computers – data and procedures – and all of a sudden the ability to defer optimizations and particular decisions in favour of behaviours has been lost.
In other words, always having real objects always retains the ability to simulate anything you want, and to send it around the planet. If you send data 1000 miles you have to send a manual and/or a programmer to make use of it. If you send the needed programs that can deal with the data, then you are sending an object (even if the design is poor).
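The contrast can be sketched with a small, hypothetical example (Python standing in for Smalltalk; the class and method names are illustrative, not from any system Kay describes): a bare payload of numbers is meaningless without an accompanying "manual", whereas an object carries the behaviour needed to interpret its data along with the data itself.

```python
# Bare data: the receiver needs external documentation to know that
# these numbers are hourly temperature readings in Celsius.
raw_data = [21.5, 22.0, 23.1]

# An object bundles the data with the behaviour needed to use it,
# so the receiver only has to agree on the message forms.
class TemperatureLog:
    def __init__(self, readings_celsius):
        self._readings = readings_celsius

    def average(self):
        """Mean of the readings, in Celsius."""
        return sum(self._readings) / len(self._readings)

    def max_fahrenheit(self):
        """Hottest reading, converted to Fahrenheit."""
        return max(self._readings) * 9 / 5 + 32

log = TemperatureLog(raw_data)
print(round(log.average(), 2))   # the receiver never sees the raw format
```

Even a poorly designed `TemperatureLog` is still an object in this sense: whoever receives it can use it without also receiving a manual or a programmer.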
And RCATWD also provides perfect protection in both directions. We can see this in the hardware model of the Internet (possibly the only real object-oriented system in working order).
You get language extensibility almost for free by simply agreeing on conventions for the message forms.
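A minimal sketch of this point (again in Python, with illustrative names): once senders and receivers agree only on message forms, any object that answers those messages can stand in for any other, including a proxy that intercepts every message before forwarding it. This is loosely analogous to how Smalltalk's `doesNotUnderstand:` lets an object handle messages it never declared.

```python
# A proxy that participates purely by honouring the message convention:
# it intercepts every message sent to it, records the selector, and
# forwards the message unchanged to the real receiver.
class LoggingProxy:
    def __init__(self, target):
        self._target = target
        self.messages = []  # selectors seen so far

    def __getattr__(self, name):
        # Invoked only for messages the proxy itself doesn't define,
        # much like doesNotUnderstand: in Smalltalk.
        def forward(*args, **kwargs):
            self.messages.append(name)
            return getattr(self._target, name)(*args, **kwargs)
        return forward

class Counter:
    def __init__(self):
        self.value = 0

    def increment(self):
        self.value += 1
        return self.value

proxy = LoggingProxy(Counter())
proxy.increment()
proxy.increment()
print(proxy.messages)  # the proxy saw both messages
```

Nothing in `Counter` had to change to make it observable, cacheable, or remote-able; the extension lives entirely in an object that obeys the same message forms.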
My thought in the 70s was that the Internet we were all working on alongside personal computing was a really good scalable design, and that we should make a virtual internet of virtual machines that could be cached by the hardware machines. It’s really too bad that this didn’t happen.