Boulder

On the flight back to Boulder, I struck up a conversation with the prosperous-looking yuppie on my left. He worked for Trammell Crow, the quintessential Texas real estate developer. We talked about the current fiscal performance of Compaq, his Vail vacations, and the relative profit potential of commercial and residential development in the greater Houston area.

Over a second drink, I asked this seemingly normal modern businessman what sort of activity brought him to Boulder. A merger, perhaps, or an acquisition? Dividing virgin forest into poor imitations of Southern California?

"I've quit my job to open a Great Harvest outlet in Madison, Wisconsin," he explained, referring to the popular whole wheat and granola bakery located in a Boulder shopping mall.

Back in Boulder, I brought a box of Parisian chocolates over to my neighbor, a freelance wholesale carpet salesman and the proud owner of at least a dozen vehicles. The vehicles gave the place a bit of a hill-country look, but I didn't mind since it made it harder to tell that my house was deserted most of the time.

"Do you ever suffer from writers block?" he wanted to know. "How about jet lag?"

"Um, sort of," I said with a lack of conviction, pretty sure this wasn't just an idle question.

"Here," he said, handing me several packets of powders. "I've added a new line," pointing proudly to the pouches of Wow!š and Focus!TM ("Nutrients for the Brain"), products of Durk Pearson & Sandy Shaw®, the founders of Omnitrition International and the inventors of Designer Foodsš.

Pearson and Shaw have built quite a nice little business taking the hopes we all share for intelligence and vigor and peddling them nationwide in a hierarchical sales scheme of life extension that has taken in such world-famous businessmen as Jerry Rubin. I thanked my neighbor and told him I'd be in touch, remembering the advice my mother gave me as I left home: never fight with the woman who feeds your cat when you're going away soon.

Soon, in fact, was an understatement. I had less than two weeks to dump the text I had accumulated, sort out my finances, and get back on the plane for my third round-the-world. First on my list was to bring my primitive PC in for a new disk drive; the failing drive had kept it from functioning as my mail system for the last few weeks.

Next on the list was responding to a couple of hundred messages from people trying to understand why they couldn't get ITU standards anymore. Bruno was dead and engineers all over the world wanted to know all the grisly details. Most agreed that the situation was ridiculous and couldn't understand why the ITU would withdraw such an obviously useful service.

Most touching was a message from a blind engineer in Australia who had hoped that he could work with the ASCII text, piping it out to /dev/audio so he could hear the texts. The message reminded me of Milan Sterba, who told me about the pitiful number of copies of the standards that existed in Czechoslovakia. I forwarded both bits of information to Dr. Tarjanne at the ITU, inquiring if it was truly the policy of a major UN agency to deny access to the physically handicapped and those in developing countries. A cheap shot perhaps, but that's politics.

In the middle of the week, while trying to come up with 12 pithy columns for Communications Week, I checked my mail to see an urgent note from Mike Schwartz. David Farber, a grand old man of networking, was giving a lecture in an hour and a half, and nobody in Computer Science seemed to have been aware of the event.

When Mike found out from out-of-band sources, he promptly broadcast a message to those of us who would realize that this was a worthwhile way to spend an afternoon, and I promptly rearranged my schedule.

David Farber had actually been on my mind that morning. One problem with trying to write a technical travelogue is that you can't possibly see everything. I wasn't going anywhere in Africa, Central America, or South America, for example. I wasn't going to any of the supercomputer centers. I wasn't even going to visit Merit, the network operations center for the NSFNET.

And I couldn't figure out how to squeeze in visits with people like Ira Fuchs and David Farber, people instrumental in a host of key projects ranging from ARPANET to CSNET to BITNET to the NSFNET.

It was my luck that Boulder is close to enough ski areas to attract frequent guest lecturers, including David Farber. When I got to the lecture hall, it appeared that the computer science department wasn't as awed as I was. Only Mike Schwartz came from the department, the rest of the professors being unable to climb down from their towers in time.

Farber started by explaining how the gigabit testbeds came to be. People like Farber, Gordon Bell (the father of the VAX) and Bob Kahn from CNRI had spent several years working to get the project funded by NSF and DARPA. The genesis for the testbeds came when researchers started asking what would come after Broadband ISDN and ATM cell switching.

In particular, the researchers wanted to know what applications would run on these testbeds. Visits to places like Bellcore and Bell Labs didn't yield any answers: the telephone companies didn't see a market for gigabit networks and weren't paying attention to them yet.

Even if gigabit networks existed, the researchers saw a fundamental problem: the current technology would not evolve gracefully into gigabit or multi-gigabit speeds.

Switching technology, for example, had progressed up an evolutionary chain from T1 to T3 to the 622 Mbps speeds of Broadband ISDN. These switches had become increasingly complex, and many felt they had reached the edge of the technology. What DARPA called a "paradigm shift" would be needed to move to the next stage.

Government leadership was obviously in order here, and Kahn and Farber made a proposal to NSF for the gigabit testbeds. The proposal had a few unusual features to it. For one, the testbeds would depend extensively on participation from industry, and industry would have to pay its own way.

Money from NSF and DARPA would go to pay for the academics and program overhead, and this leverage would be used to bring in industrial participants. To prove that industry would indeed participate, Farber and Kahn produced letters from IBM, DEC, Xerox, Bell Labs, and Bellcore.

The other unusual twist to the proposal was that although government would start the program, the operation of the project should be left in the hands of what Farber called "an organization with a faster metabolism." Ultimately, this management group ended up being CNRI.

After a series of white papers, proposals, and intensive lobbying, the project was set up with U.S. $15 million in funding. Bob Kahn hit the road and used that seed money to raise another U.S. $100 million in facilities and staff commitments from industry.

All the resources were put into a pot and reorganized into the five gigabit testbed projects. In some cases, additional participants (in particular, telephone companies to provide fiber) were added to round out a testbed.

Most of the testbeds were set up to look at a combination of the underlying technology and applications. The CASA project in California and New Mexico, for example, would run three test applications over wide-area gigabit channels.

The Aurora project, the testbed that Farber had joined, was a bit different, concentrating on the underlying technology. The chief participants were a group from MIT led by David Clark, Farber's group at the University of Pennsylvania, and teams from Bellcore and IBM. Several telephone companies were also participating to provide up to 2.4 Gbps of wide-area bandwidth.

Typical of the research flavor of Aurora was the work being done in the switching substrate. Bellcore would look at cell-based switching using ATM technology. IBM would run an entirely different datagram substrate based on their PARIS project.

Farber and Clark were looking at the upper levels, the protocols that ran on the fat pipes. Both felt that current protocols, such as the TCP/IP suite, might not work on the next generation of networks.

Most protocols were based on many modules that exchanged messages, with protocols layered on other protocols. While this was architecturally convenient, the result was that software layers would endlessly transform data, copying it from one buffer to another.

The immediate problem was thus not the network itself; a gigabit was already more than most hosts could handle. Nor was the bottleneck simply the CPU or the programming model. The problem was the network software. Clark was advocating application-layer framing, collapsing the protocol stack and giving the application much better control over network operation. He saw many of the flaws in today's networks as the result of modules making decisions, in areas such as flow control, that didn't make sense in the context of the upper-layer user.
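To make the copying problem concrete, here is a toy sketch in Python, my own illustration and nothing like Clark's actual code: a conventional stack copies the data once per layer as each header is tacked on, while an application-layer-framing style path lets the application describe the whole frame once and touch the data a single time.

    def layered_send(payload: bytes) -> bytes:
        # conventional stack: each layer prepends its header, copying the data again
        frame = payload
        for header in (b"APP:", b"TRANSPORT:", b"NETWORK:", b"LINK:"):
            frame = header + frame
        return frame

    def alf_send(payload: bytes) -> bytes:
        # application-layer-framing style: the application describes the whole
        # frame once, so the payload is touched a single time
        headers = b"LINK:NETWORK:TRANSPORT:APP:"
        return headers + payload

    assert layered_send(b"data") == alf_send(b"data")

The bits on the wire are the same either way; what changes is how many times the host shuffles them from buffer to buffer before they get there.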

Farber was proposing an equally radical solution: doing away with the message abstraction of current protocols and moving toward a totally different way of looking at the network. Instead of messages, Farber wanted to look at the network as a memory-moving mechanism, a backplane.

Programmers on a single host, or even a multiprocessor, deal with memory as the main abstraction. Calling a subroutine by name, for example, is a simple way to refer to a segment of memory. If the network implements that same abstraction, programmers can continue to work within the model they are used to, and the network simply becomes an extension of that model.
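As a rough illustration of what that abstraction might feel like to a programmer, here is a small sketch of my own, with invented names like RemoteSegment and fetch_segment: the program indexes a segment as if it were ordinary local memory, while the fetch over the network happens quietly underneath.

    class RemoteSegment:
        """A segment of memory that may actually live across the network."""

        def __init__(self, segment_id, fetch_segment):
            self.segment_id = segment_id
            self.fetch = fetch_segment    # the hidden network request
            self.local_copy = None

        def __getitem__(self, index):
            if self.local_copy is None:   # a "fault": the segment is not here yet
                self.local_copy = self.fetch(self.segment_id)
            return self.local_copy[index]

    # The programmer keeps the familiar model: seg[12] reads a byte, wherever it lives.
    seg = RemoteSegment(7, fetch_segment=lambda sid: bytes(range(256)))
    assert seg[12] == 12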

With its roots in an early research project conducted by Farber and his student Paul Mockapetris, the concept first really took shape in a 1985 Ph.D. thesis by Gary Delp, another one of Farber's students. The system was called Memnet, and consisted of a bunch of PC ATs connected to a modified memory interface and a cache, connected in turn to a 600 Mbps token ring.

The LAN acted as a way for requesting, moving, and writing segments of memory. The caches on other hosts could keep copies of segments, alleviating the need for a network operation on frequently accessed pages.

Memnet demonstrated that a LAN could indeed be a simple extension of memory. It operated within the flat address space of DOS and enforced tight consistency guarantees on pages of memory through mechanisms such as only permitting one writer per segment.
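My own back-of-the-envelope sketch of that one-writer-per-segment rule, with invented names, might look something like this: any number of hosts can hold a cached copy of a segment, but a write takes exclusive ownership and invalidates everyone else's copy.

    class SegmentDirectory:
        def __init__(self):
            self.readers = {}   # segment id -> hosts holding a cached copy
            self.owner = {}     # segment id -> the one host allowed to write

        def read(self, segment, host):
            self.readers.setdefault(segment, set()).add(host)

        def write(self, segment, host):
            # taking write ownership invalidates every other cached copy,
            # so there is never more than one writer at a time
            self.owner[segment] = host
            self.readers[segment] = {host}

    d = SegmentDirectory()
    d.read(1, "hostA"); d.read(1, "hostB")
    d.write(1, "hostA")
    assert d.owner[1] == "hostA" and d.readers[1] == {"hostA"}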

By 1988, the Memnet concept had been moved up to software, running on an Ethernet LAN. A modified version of Sun's UNIX, developed by Ron Minnich, another Farber student, allowed several hosts to share memory. The software was actually put into production use at a supercomputer center, where it provided a useful mechanism for parallel processing on certain classes of applications.

It was clear to Farber and his students that Memnet was one example of a general class of networks. To extend Memnet to a wide-area environment, some changes would have to be made. A ring topology or a simple bus was not appropriate, and constraints like the flat address space would certainly have to be relaxed. Most importantly, looser consistency guarantees would be needed if the system was to scale successfully into a "national memory backplane."

Farber and his students were actively working on an implementation, called GobNet, that would eventually work on the Aurora testbed. On a gigabit testbed, the software certainly showed promise. The latency of a 3,000-mile gigabit network is roughly equivalent to a local page fault on a machine, meaning that the network could appear to the host as the equivalent of a memory segment that had been paged to disk.
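The arithmetic behind that comparison is easy enough to check, using rough numbers of my own rather than anything from the lecture: light in fiber travels at about two-thirds its speed in a vacuum, and a disk page fault in 1992 cost a few tens of milliseconds.

    distance_m = 3000 * 1609             # 3,000 miles in meters
    fiber_speed = 2.0e8                  # light in fiber, roughly 2/3 c, in m/s
    round_trip_s = 2 * distance_m / fiber_speed
    print(round(round_trip_s * 1000, 1), "ms round trip")        # about 48 ms

    disk_fault_s = 0.025                 # seek plus rotation, circa 1992 (my estimate)
    print(round(round_trip_s / disk_fault_s, 1), "times a local disk fault")

Within a factor of two or so, the claim holds up: fetching a page from across the country costs about what fetching it from a local disk does.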

While page faults were natural on a host, they were also something to be avoided. Virtual memory systems would preemptively cache memory segments to try to have the proper segment waiting when it was asked for. This was also going to be the key to good performance on a wide-area Memnet.

The switches in GobNet would have the caches. If a host requested a segment not in the cache, the cache would have to go and find the requested object. In a token ring or Ethernet, finding an object is simple: you ask for it.

In a WAN, though, the topology may be quite complex and finding the correct path is the real challenge. GobNet would use a flooding algorithm, similar to what a bridge would use in trying to decide where a host lay on an extended Ethernet.

For the first request on a segment, the request would be flooded, being sent out every available path on the network. Eventually, one hopes, the page would come back on a certain path. Each of the switches on that path would set up a segment-to-host binding so that future references to the segment would be easier to find.
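A small sketch of the flood-then-bind idea, again with invented names and a deliberately tiny tree-shaped network (a real flood over a mesh would also have to suppress duplicate requests): a switch with no binding asks every neighbor, and whichever port the page comes back on becomes the binding for next time.

    class Host:
        def __init__(self, segments):
            self.segments = segments          # segment id -> page of data

        def request(self, segment, came_from=None):
            return self.segments.get(segment)

    class Switch:
        def __init__(self):
            self.ports = {}                   # port name -> neighboring Switch or Host
            self.bindings = {}                # segment id -> port that leads toward it

        def request(self, segment, came_from=None):
            if segment in self.bindings:      # a binding already points the way
                return self.ports[self.bindings[segment]].request(segment, self)
            for port, neighbor in self.ports.items():    # otherwise, flood
                if neighbor is came_from:
                    continue
                page = neighbor.request(segment, self)
                if page is not None:
                    self.bindings[segment] = port        # remember the winning path
                    return page
            return None

    # A tiny tree: s1 -- s2 -- a host that owns segment 42.
    s1, s2, owner = Switch(), Switch(), Host({42: b"the page"})
    s1.ports["east"] = s2
    s2.ports["west"] = s1
    s2.ports["down"] = owner
    assert s1.request(42) == b"the page"
    assert s1.bindings[42] == "east"          # the next reference skips the flood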

Flooding a network to find an object and then setting up a binding for future references turned out to be a strategy that was good for more than just memory segments. Researchers at Bellcore were using the same technique in personal communications systems to find the current location of a portable phone anywhere in the country.

The view of the Internet as a national memory backplane was exciting, but what impressed me the most was the overall scope of the research being conducted in the gigabit testbeds. It was clear that this was money well spent and that we would all benefit from the incredible collection of talent that had been assembled.

After the lecture, with my column deadlines looming, I raced to the Trident, a coffee shop and bookstore, to try to find some pithy sayings to write. Munching a Blueberry Tahini bar, I walked around the bookstore hoping to get inspiration.

I picked up a copy of The Secret Books of the Egyptian Gnostics but couldn't find any parallels to my current subject. When The Aromatherapy Workbook didn't prove to be any help, I sat down and tried to grind out words of MIS wisdom while around me people laid out tarot cards and discussed the many paths to personal fulfillment.

<<<>>>