No history of networking would be complete without an obligatory mention of the French and their optical networks of the late eighteenth century.
No doubt the technology was on show at Prince Albert’s Great Exhibition at the Crystal Palace in London’s Hyde Park in 1851 — making that show the first ever Interop.
But optical networks were abandoned by the 1850s, swept aside by the electric telegraph network. Even so, the new media had their sceptics: Sir William Preece, chief engineer of the British Post Office, famously dismissed the telephone with: “We have plenty of messenger boys.”
In the early days of computing, computers were not networked. They were great big computing machines that were flat out doing their times tables, let alone talking to each other.
That is, unless you count circuits as networking. Most people don’t. Circuits are live, dedicated point-to-point electric wires, connecting one computer with an external device such as a terminal.
Networking is all about shared resources, an interconnecting system where computers can communicate with any other computer on the network.
Packets are the key. A packet can be sent into the network with a header of some kind containing the address of its target computer, and a body containing whatever the message is.
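A toy packet makes the idea concrete. This sketch is purely illustrative: the 4-byte address, the 2-byte length field and the function names are invented for the example, not taken from any real protocol.

```python
# Illustrative sketch of a packet: a fixed-size header carrying the
# destination address and body length, followed by a variable-length body.
# Field sizes here are invented for the example, not from a real protocol.
import struct

HEADER_FORMAT = "!4sH"  # 4-byte destination address, 2-byte body length
HEADER_SIZE = struct.calcsize(HEADER_FORMAT)  # 6 bytes

def make_packet(dest_addr: bytes, body: bytes) -> bytes:
    """Prepend a header containing the target address and body length."""
    return struct.pack(HEADER_FORMAT, dest_addr, len(body)) + body

def parse_packet(packet: bytes):
    """Split a packet back into (destination address, body)."""
    dest_addr, body_len = struct.unpack(HEADER_FORMAT, packet[:HEADER_SIZE])
    return dest_addr, packet[HEADER_SIZE:HEADER_SIZE + body_len]

pkt = make_packet(b"\x0a\x00\x00\x02", b"hello, network")
addr, body = parse_packet(pkt)
```

Any switch or router along the way need only read the header to decide where the packet goes; the body is opaque freight.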
Packets were first described in theory by Leonard Kleinrock of MIT in 1961. Various tests were conducted in the years following — issues like handshaking were solved in 1969.
Then the US Department of Defense started taking networking seriously and set up ARPANET (the Advanced Research Projects Agency Network), which carried its first message in 1969. In 1970 the University of Hawaii set up AlohaNet, a radio-based packet network.
The technology behind AlohaNet was picked up by Bob Metcalfe of Xerox PARC in 1973, and Ethernet was born; the “ether” was a nod to the technology’s wireless origins. By then an internetwork of sorts already linked MIT, ARPANET, the University of Hawaii and a few others. The chief concern was the transmission protocol, and the answer of the day was NCP.
MIT led the development of NCP (Network Control Protocol). To one MIT technology leader, Richard Kalin of the network research group, success was inevitable.
“The central premise of this proposal [for NCP] is an insistence that all user-to-user connections be bi-directional. For those familiar with communication theory, this appears most reasonable,” Kalin wrote in 1970.
Vinton Cerf and Robert Kahn’s TCP protocol emerged in 1973 (IP was not split out till 1978 — the year Computerworld was born).
IBM responded to these open systems (Ethernet/NCP) with its proprietary SNA system in 1974, and later with Token Ring, a local area networking standard in direct competition with Ethernet, released in 1985.
The Internet switched to TCP/IP on January 1, 1983.
At this point, the foundations for the networks we have today were laid.
All that was left to resolve was the network operating system debate and the struggle with bandwidth. Cisco, Retix, Wellfleet, SynOptics, Cabletron . . . dozens of network infrastructure start-ups emerged, and one of them, Cisco, won the race convincingly.
Now everyone has switched enterprise networks, with routers guarding the way in and out — it is all basically Unix, whether it is labelled NT or Linux, running TCP/IP.
Geoff Johnson is VP and research director with Gartner. His watching brief is networking technologies.
For Johnson, the answer to the future is speed. First, the WAN bottleneck will be broken with WAN links moving to a 1Gbps standard in the near term. Telstra has been operating 1Gbps over its MAN (metropolitan area network) for the past 18 months.
Second, the big change — the enterprise will have to give way to user demand for video and streaming bandwidth.
“Instead of saying ‘no’ to bandwidth-intensive applications, IT departments are going to say ‘OK’, and get on with the job of facilitating it,” Johnson says.
In the US, CDN is the latest buzzword — content delivery networks. The focus is shifting from connecting computers, to delivering content to them.
“Enterprise network managers will overcome their phobias about bandwidth intensive content,” Johnson predicts.
The next big driver for change will be on the consumer front. An increasing appetite for digital entertainment will mean households will begin to demand 3-4Mbps links.
“Entertainment, then education, then business” will drive bandwidth consumption at the home. On-demand HDTV to two or three screens in any one house will mean much more than 4Mbps.
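The arithmetic is easy to sketch. The per-stream bitrate below is an assumed, illustrative figure; real HD bitrates vary widely with the codec used.

```python
# Back-of-envelope household bandwidth estimate.
# HD_STREAM_MBPS is an assumed, illustrative figure, not a measured one.
HD_STREAM_MBPS = 8   # assumed bitrate of one compressed HDTV stream
SCREENS = 3          # simultaneous screens in the house

household_mbps = HD_STREAM_MBPS * SCREENS
print(f"{household_mbps}Mbps needed, versus a 3-4Mbps link")
```

Even with generous compression, a few simultaneous streams leave a 3-4Mbps link far behind.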
On the drawing board Johnson sees VDSL and promised speeds of 50Mbps. Simultaneously, wireless technology will see the Internet become mobile. For example, CDMA and GPRS technology will eventually reliably deliver speeds of 2-4Mbps.
Let’s look way out to Computerworld’s 75th Anniversary Edition.
The ultimate network is, of course, mental telepathy. Many years ago, Johnson was part of a strategic think tank. Tasked with looking 50 years into the future, the group settled on mental telepathy.
“We are already approximating mental telepathy,” Johnson says.
Watch someone with a mobile phone hands-free kit walking down the street, seemingly talking to themselves.
Some researchers have found brainwaves that can trigger simple electronic responses from detectors, enabling disabled people to direct an electric wheelchair.
What if, in 50 years, all you had to do was think someone’s name and the phone, by then microscopic, would dial their number?
“Our networking technologies will ultimately enable us to communicate in a way that is effectively mental telepathy,” Johnson says.
It’s alright with me, although instantly uploading the 75th Anniversary Edition of Computerworld into my head won’t be nearly as much fun as actually reading it!