Alan G. Nemeth
Corporate Consultant, UNIX Architecture and Technology
"The Internet is dying." I feel quite confident you
will regularly see articles with this message in the industry and
general press over the next few years. The message won't be
as new as the authors of the articles might believe, and the work
to remove the most frequently identified problems was begun years
ago within the Internet Engineering Task Force (IETF). Internet
Protocol version 6 (IPv6) is a large family of protocols that
form the basis of the IETF response to a set of problems
identified in the early 1990s, problems made more urgent by
the explosion of Internet usage.
One of the major concerns about the current Internet is the
limited amount of address space. The underlying address for IP
endpoints is 32 bits wide, permitting a total of 4 billion
distinct addresses. Although this number seems large (and it
seemed truly gigantic in the early 1970s when the width was
selected), it is now a real, practical barrier to current
deployment patterns. Large users of Internet addresses can no longer get
the address space they need for assignments. Because the Internet has
run as a decentralized organization over the years, there is no
effective central administration to support competition for
scarce resources such as address space. Instead, the response of
the community is to provide resources sufficient to keep allocation
a low-overhead activity. So IPv6 defines an address space of
128 bits. This currently seems like a gigantic number!
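For perspective, the arithmetic behind these figures is
straightforward:

    2^32  = 4,294,967,296        (about 4.3 x 10^9 IPv4 addresses)
    2^128 = about 3.4 x 10^38    (IPv6 addresses)

an expansion by a factor of 2^96, or roughly 8 x 10^28.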
But limited address space is hard to build into a persuasive
case for change. End users are much more likely to be concerned
about the local problem of getting just "one more
address," rather than the problems of keeping the Internet
as a whole alive and functioning. So the IPv6 design deliberately
incorporates a set of functionality improvements that provide
attractive end-user capabilities. IPv6 includes much easier schemes for
assigning addresses, which will reduce the administrative burden
for users and their network managers. IPv6 provides a better framework
for encryption and an expectation that it will be widely
available and used. And IPv6 provides some systematic mechanisms
for describing requests for specific quality levels in the
service offered by the transport provider. These capabilities
will address some very real, practical problems that do afflict
individual end users of the Internet.
However, no one expects the entire population of Internet users
to switch to IPv6 simultaneously, or even over an extended
time period. IPv6 must interoperate with the installed base of
IPv4 protocols for an indefinite period.
This implies services that translate between the different
addresses (and address assignment approaches that ease mechanical
derivation of IPv6 addresses from IPv4 addresses), dual protocol
stacks to permit communication with both protocols depending on
the capabilities of the participants in the conversation, and
schemes to accommodate security mechanisms and quality of service requests.
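To make the mechanical derivation of IPv6 addresses from IPv4
addresses concrete, here is a minimal sketch in C. It uses the
modern POSIX interfaces inet_pton and inet_ntop and an
illustrative address; these are assumptions of the sketch, not
interfaces or addresses taken from the Digital UNIX
implementation described in the paper. The sketch constructs an
IPv4-mapped IPv6 address of the form ::ffff:a.b.c.d, one of the
transition formats defined in the IPv6 specifications:

#include <stdio.h>
#include <string.h>
#include <netinet/in.h>
#include <arpa/inet.h>

int main(void)
{
    const char *v4_text = "16.1.0.2";   /* illustrative IPv4 address */
    struct in_addr  v4;
    struct in6_addr v6;
    char buf[INET6_ADDRSTRLEN];

    if (inet_pton(AF_INET, v4_text, &v4) != 1)
        return 1;

    /* IPv4-mapped form: 80 zero bits, 16 one bits, then the
       32-bit IPv4 address, already in network byte order. */
    memset(&v6, 0, sizeof v6);
    v6.s6_addr[10] = 0xff;
    v6.s6_addr[11] = 0xff;
    memcpy(&v6.s6_addr[12], &v4.s_addr, 4);

    if (inet_ntop(AF_INET6, &v6, buf, sizeof buf) == NULL)
        return 1;

    printf("%s -> %s\n", v4_text, buf);  /* 16.1.0.2 -> ::ffff:16.1.0.2 */
    return 0;
}

A dual-protocol stack can use addresses of this form to present
IPv4 peers to applications written against the IPv6 interfaces
while the packets themselves still travel as IPv4.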
The entirety of IPv6 represents a large implementation effort
to be undertaken by many different organizations. The Internet
represents the largest example I know of a distributed
computation that has survived for 27 years. (I date from 1969
when the first ARPANET [Advanced Research Projects Agency Network]
nodes were installed.) With a few notable exceptions, this
computation has run continuously, despite constant changes in
hardware, software, implementers, and operators. It has survived explosive growth
far beyond the designs of its originators. It has done so with a volunteer
organization driving the development direction. The community spirit
has been crucial to making this work. IPv6 is an example of that
community at work; no one organization can implement it all,
either at a product level or at a deployment level.
The IPv6 paper in this issue describes the technical design
needed to build an IPv6 implementation for the core protocols
under the Digital UNIX operating system. Digital has been one of
the leading prototype builders of the design specifications as they
have evolved in the industry debates. At the time the Internet Protocol
Next Generation (IPng) Directorate officially adopted key
elements of the protocol, Digital's implementation was the
only one running to demonstrate that the design was indeed feasible.
But we don't believe that we can implement all the pieces of IPv6
as a single company. Therefore we chose to share the implementation experience
through this paper to aid others in their efforts to deal with
the implementation problems. We also don't claim
completeness; the full suite of specifications for IPv6 is
evolving, and the software to implement it is large. We fully expect
that portions of our ultimate product offerings will be developed
by others in the industry.
The long-term evolution of the Internet captured in the IPv6
implementation paper is but one example in this issue of the
extent to which computing now has a history that gives us much
insight into the future. Certainly the paper by Supnik and Burnet
is an explicit trip through computing history. The re-creation,
both physical and logical, of computing systems of the past can
only help remind us that the artifacts we create have a longer
life than we anticipate. As our programmers write new code, or
our hardware designers produce new architectural approaches, or
our storage designers push the boundaries on new media
technologies, do they consider the imponderables of running these
systems 25 or more years in the future? The view of archivists
trying to preserve this history reminds us of the difficulty of
preservation after the fact and of the amazing duration of design
decisions.
The paper on the evolution of Fortran is yet another example
of the rich history of computing. Here we see clearly the
evolution of a key language to accommodate the changing patterns
of system architectural designs and parallel programming concepts. The computer
industry frequently develops commercially important programs by evolution:
the 100,000-line program that 10 years later has become 10
million lines of code in an assortment of languages and computing
styles. Here the venerable Fortran (first proposed in 1954!) adds
support for some of the latest approaches to fast system interconnect
represented by MEMORY CHANNEL and the parallel architectures of clusters
of SMP systems.
MEMORY CHANNEL reappears in the paper about TPC-C performance
on TruCluster systems. This paper, one of a pair on the issues of
tuning a commercially important benchmark, presents an attractive model
for the performance benefits that can be derived from a very
fast interconnect and software structures to match. The performance
levels achieved shatter world records on a benchmark that has had extensive
attention and work.
The other paper on TPC-C performance with very large memory
(VLM) illustrates the truth of an old design maxim, "If
memory is getting cheaper, use more of it!" When Digital
first built a 2-gigabyte (GB) memory board, it took more than a
million dollars' worth of DRAM chips to populate the initial
instance. However, memory prices have continued to drop sharply,
and today over 40 percent of the AlphaServer 8400 systems ship
with 2 GB or more of memory. Memory prices will continue to
come down, and the insights offered in this paper will help in understanding
where additional memory can provide real benefits to customer workloads.
The final paper in the collection, on the AltaVista Forum
approach to collaboration among groups exploiting the Internet
and WWW technologies, brings us back to the initial
thoughts in this foreword. The ubiquitous nature of the Internet permits
and encourages tools such as this that utilize computer systems
in new ways. This approach builds on the fabric that we
emphasized in the IPv6 paper, but it sees the Internet as a tool
and a component of a larger solution, showing how to exploit
these capabilities to enable new ways of working. Using imagination and building on the
work of others are characteristic of the approach taken by those
who are catalysts in the industry. The paper demonstrates how
easy it is to build a system that would have been a major project
just five years ago. This ease of construction is a benefit of
the programming techniques and infrastructure investments, and a
spur to build more such systems.