Wednesday evening, I flew in over the Sydney Opera House and,
after checking in, darted across the street for a quick bite of barbecued octopus. With a little time to kill, I strolled through neighboring King's Cross, the home of Sydney's red light district. Sipping a
Foster's, I watched the strange melange of prostitutes looking for
their first action of the night, world-weary backpackers trying to entertain themselves on a dollar, and a hip-looking hippie driving an
antique Jaguar. Several Hare Krishnas trooped by, reminding me
that it was time to go to work.
I was soon met by
Bob Kummerfeld,
chairman of the University
of Sydney's Computer Science department. We dodged raindrops
from my hotel over to the
Café Hernandez,
where we sat and talked
over a café latte and a cappuccino. Bob Kummerfeld was part of the
first wave of networking, sparked by the Landweber symposiums.
Bob attended his first one in Dublin, and ended up hosting the 1989
symposium in Sydney, the last one before the informal seminars
metamorphosed into the much larger INET conferences.
In the late 70s, armed with VAXen and PDPs, Australia, like everyplace else, wanted to connect its computers together. Since the machines ran an early version of UNIX, the obvious choice would have been to use the UUCP protocols. UUCP, however, had some features that not everyone liked. Its designers had intended UUCP as an interim solution,
but, as often happens with prototypes, it rapidly became entrenched.
Kummerfeld and his partner,
Piers Lauder,
decided that they
could do better and set about designing an alternative set of protocols. The result was the Sydney University Network, known as the
SUN protocols. The SUN protocol suite started out being used at
Sydney, was quickly picked up by Robert Elz in Melbourne, and, by 1988, 1,000 sites were on the network. Since the
acronym SUN was identified with the Stanford University Network,
the collection of computers that used Kummerfeld and Lauder's
protocols eventually became known as the Australian Computer Science network (ACSnet).
SUN is a protocol suite optimized for message handling. Unlike
TCP/IP, it was not designed for interactive services such as Telnet.
Message handling can be much more than e-mail, and the SUN protocols support services such as file transfer and remote printing.
The unit of transfer is the message, which can be infinitely long
(although in practice, very long messages are broken up into manageable pieces). While UUCP performs error correction as it transfers the data, SUN uses a streaming protocol to send the entire
message, identified as a series of packets. When the message is
transmitted, the receiving end sends back a bitmap of corrupted or
missing packets and those packets are then retransmitted.
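To make the scheme concrete, here is a rough sketch of how a bitmap-driven retransmission pass might work. The Python, the simulated lossy link, and every name in it are mine, not the SUN code; only the idea of streaming the whole message and then repairing it from a bitmap comes from Bob's description.

```python
import random

PACKET_SIZE = 1024      # SUN's default packet size

def split_message(message: bytes, size: int = PACKET_SIZE) -> list:
    """Break an arbitrarily long message into numbered packets."""
    return [message[i:i + size] for i in range(0, len(message), size)]

def transfer(message: bytes, loss_rate: float = 0.2) -> bytes:
    """Stream every packet, then keep retransmitting only the packets the
    receiver flags in its bitmap of corrupted or missing pieces."""
    packets = split_message(message)
    received = [None] * len(packets)
    needed = [True] * len(packets)              # the receiver's bitmap
    while any(needed):
        for seq, wanted in enumerate(needed):
            if wanted and random.random() > loss_rate:
                received[seq] = packets[seq]    # packet arrived intact
        # Receiver reports one bit per packet: corrupted or missing.
        needed = [chunk is None for chunk in received]
    return b"".join(received)

print(transfer(b"G'day" * 1000) == b"G'day" * 1000)   # prints True
```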
When a connection is established between two hosts, UUCP
only supports transfer of a single message in one direction. SUN,
by contrast, is multichannel and full duplex. Full duplex means that
messages can go in both directions at the same time. Multichannel means that each direction is divided into four logical channels of different priorities.
Channel 1 is the highest priority and is used for network management messages. An example would be a message to decrease
the packet size from the default of 1 kbyte down to the minimum of
32 bytes. On a dirty line, small packets reduce the amount of data
to be resent later since an error would affect a smaller chunk of the
message. It does so, of course, at the expense of throughput.
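The arithmetic behind that trade-off is easy to sketch. The bit-error rate and the per-packet overhead below are numbers I made up for illustration; only the 1-kbyte and 32-byte packet sizes come from Bob's description.

```python
BIT_ERROR_RATE = 1e-4     # assumed: one bad bit in ten thousand
OVERHEAD = 8              # assumed per-packet header cost, in bytes

def expected_wire_bytes(message_bytes: int, packet_bytes: int) -> float:
    """Bytes on the wire, counting headers and expected retransmissions."""
    packets = -(-message_bytes // packet_bytes)        # ceiling division
    wire = packet_bytes + OVERHEAD
    p_bad = 1 - (1 - BIT_ERROR_RATE) ** (wire * 8)     # packet hit by an error
    resends = p_bad / (1 - p_bad)                      # expected extra copies
    return packets * wire * (1 + resends)

for size in (1024, 32):
    print(size, round(expected_wire_bytes(100_000, size)))
```

On this imaginary dirty line, the 32-byte packets move a 100-kbyte message in little more than half the bytes that 1-kbyte packets need; on a clean line the headers dominate and the big packets win, which is presumably why the size is negotiated rather than fixed.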
The other three channels are used for different kinds of message
transfer. E-mail messages would typically travel on channel 2, allowing mail to get through even though a large file might be occupying channel 4. While it might take longer to get a large file
through a link, it does mean that other applications are not frozen
totally off the line.
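A toy model of the four outbound channels might look like the following. The strict-priority queueing is my guess at the simplest policy consistent with Bob's description, not the actual implementation, and the assignment of channel 3 is left vague because he left it vague.

```python
from collections import deque

class OutboundLink:
    """Four logical channels in one direction; lower number, higher priority."""
    def __init__(self):
        # Channel 1 carries network management; mail typically rides on 2;
        # a bulk file would occupy channel 4.
        self.channels = {n: deque() for n in (1, 2, 3, 4)}

    def enqueue(self, channel: int, packet: str) -> None:
        self.channels[channel].append(packet)

    def next_packet(self):
        # Always drain the highest-priority non-empty channel first, so a
        # short mail message never waits behind a huge file on channel 4.
        for n in (1, 2, 3, 4):
            if self.channels[n]:
                return self.channels[n].popleft()
        return None

link = OutboundLink()
link.enqueue(4, "file chunk 1")
link.enqueue(4, "file chunk 2")
link.enqueue(2, "short e-mail")
print(link.next_packet())     # the e-mail goes out ahead of the file
```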
With multiple channels sending data, a cut in the line can interrupt several different messages. Rather than sending the message again from the beginning, SUN allows the session to resume where it left off,
something that UUCP does not do.
Use of multiple channels also tends to improve line utilization.
When a message is received, the destination node must typically
perform some processing. In a single channel application, the line
would be idle during that time. With four channels, data transfer
can continue.
Riding on top of the message transfer facility are applications.
The suite has standard facilities such as file transfer and electronic
mail, and users can design their own protocols.
The file transfer protocol allows both senders and receivers to
initiate transfers. To send a file, for example, the recipient's username and host are specified. The file would be spooled onto the
target machine and e-mail sent to the recipient. The recipient could
then choose whether or not to accept the file, moving it into personal
disk space.
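A sketch of that flow, with a spool directory and a notification helper I invented for the purpose (this is not the real MHSnet spool layout):

```python
import os
import shutil

SPOOL_DIR = "/var/spool/acsnet-demo"           # hypothetical spool area

def notify(recipient: str, host: str, body: str) -> None:
    print(f"mail to {recipient}@{host}: {body}")   # stand-in for real e-mail

def deliver_file(path: str, recipient: str, host: str) -> str:
    """Spool the file on the target machine and tell the recipient it is there."""
    os.makedirs(SPOOL_DIR, exist_ok=True)
    spooled = os.path.join(SPOOL_DIR, f"{recipient}-{os.path.basename(path)}")
    shutil.copy(path, spooled)                 # stand-in for the network transfer
    notify(recipient, host, f"A file is waiting in {spooled}; accept or discard it.")
    return spooled

def accept_file(spooled: str, home_dir: str) -> None:
    """The recipient chooses to accept: the file moves into personal disk space."""
    shutil.move(spooled, os.path.join(home_dir, os.path.basename(spooled)))
```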
E-mail uses the standard Internet format based on RFC 822,
making a gateway to the TCP/IP mail system almost a trivial task.
E-mail with the SUN protocol suite allows the transfer of arbitrary
binary images, a feature that has been only recently added into
SMTP and RFC 822.
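An RFC 822 message is nothing more than text headers, a blank line, and a body, which is why the gateway is easy: both systems already agree on the format, and only the addressing and routing have to be rewritten. The addresses and date below are invented for illustration.

```python
# A minimal RFC 822 message; both SMTP and the SUN suite's mail application
# carry this kind of text (the addresses are made-up examples).
MESSAGE = (
    "From: someone@host-a.example\r\n"
    "To: someone-else@host-b.example\r\n"
    "Subject: SUN protocols\r\n"
    "Date: Wed, 15 Jan 92 19:30:00 +1100\r\n"
    "\r\n"
    "Headers, a blank line, then the body.\r\n"
)
print(MESSAGE)
```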
The architecture of messages and packets used in SUN allows a
wide variety of substrates. Direct lines, modems, and X.25 are all used, and the protocols can run on top of IP.
Bob even claimed that the protocol could run successfully on telegraph lines, although nobody had yet had a pressing desire to run
applications at 50 bps.
While Bob went on to describe features ranging from dynamic routing protocols to broadcast and multicast support, I began
to wonder why I was using UUCP on my home systems. The answer was fairly simple: UUCP was free and SUN wasn't.
Kummerfeld and his partner had formed a company called Message Handling Systems to market the software. The University of
Sydney had a one-third stake in the company and the software was
being peddled as MHSnet. Formed in 1988, the company was still
in its early stages, but Bob told me about a few recent successes.
One was a large insurance network that links brokers together. The
other was the Australian diplomatic corps, which would use
MHSnet to link its worldwide operations.
Both groups were attracted to MHSnet by its ability to run over
a wide variety of different transports, ranging from telegraph to IP.
The message-based paradigm is quite appropriate for environments
with only sporadic connectivity. The software is small and efficient,
weighing in at less than 100,000 lines of code.
Selling MHSnet to the government brought up the specter of
GOSIP.
"Didn't you have problems with OSI bigots?" I asked.
OSI was certainly an issue, he explained, but eventually it came
down to practical issues such as whether software existed that could
do the job. OSI as a general architecture certainly had appeal to
many, but a lean and mean OSI that could run on low-quality, low-speed lines was not readily available. Indeed, one could argue that
searching for such a beast was akin to looking for a hippopotamus
capable of doing the limbo.
To Bob and many others in the research community, OSI has actually had a negative effect. If he proposed to do research on message handling, for example, somebody would invariably suggest
that X.400 had already solved the issue and further research was
superfluous.
Bob's theory was that the OSI wave had finally crested and people were settling on an environment characterized by IP over Foo
and Foo over IP. OSI had promise, but while large groups had large
meetings in a large number of locations, the IP community had
mounted an equally ambitious but much more successful standards
effort.
The IP process was organized to suit the needs of the technical
people, not the desire to reach a political consensus among marketing representatives. The crucial difference between the two groups
was that implementation and testing was the cardinal rule for the
engineers. Bob cited the recent extensions to SMTP and RFC 822 to
handle different kinds of data as an example. By the time the extensions reached standards status, a half-dozen implementations would already exist and be deployed.
The success of the Internet community didn't necessarily mean
that there was no role for the more formal (i.e., slower) procedures
like those used at the ITU. Bob pointed to a low-level substrate,
such as the definition of a digital hierarchy of line speeds, as an
issue that needed to be codified, ratified, and digested by a hierarchy of committees.
Of course, if low-level issues are the province of the public
standards cartel in Geneva, that kind of leaves OSI out in the cold.
After all, OSI starts work at about the same level as TCP/IP does.
Since the OSI standards potatoes insist on defining theory first and
leaving the implementations to "the market," they have little to contribute to the upper layers of the network, where implementation
and testing are crucial.
I bid Bob Kummerfeld goodbye and went to my room to watch
Dallas. It struck me that the politics of the Geneva standards cartel
had an overwhelming resemblance to the goings-on at the Ewing
Ranch. Perhaps I could write a nighttime soap called "ISO," complete with a season finale of "Who Killed FTAM?"