William C. Blake,
Director, High Performance Technical Computing
Director, Core Technology Groups
You might think that the cover of this issue of the Digital
Technical Journal is a bit odd. After all, what could
be the relevance of those ancient alchemists in the
drawing to the computer-age topic of programming
languages and tools? Certainly, both alchemists and
programmers work busily on new tools. An even more
interesting metaphorical connection is the alchemist and
the compiler software developer as creators of tools that
transform (transmute, in the strict sense of alchemy) the
base into the precious. The metaphor does, however, break
down. Unlike the myth and folklore of alchemy, the
science and technology of compiler software development
is a real and important part of processing a new solution
or algorithm into the correct and highest performance set
of actual machine instructions. This issue of the Journal
addresses current, state-of-the-art work at Compaq
Computer Corporation on programming languages and tools.
Gone are the days when programmers plied their craft
"close to the machine," that is, working in
detailed machine instructions. Today, system designers
and application developers, driven by the pressures of
time to market and technical complexity, must express
their solutions in terms "close to the
programmer" because people think best in ways that
are abstract, language dependent, and machine
independent. Enhancing the characteristics of an abstract
high-level language, however, conflicts with the need for
lower-level optimizations that make the code run as fast as possible.
Computers still require detailed machine instructions,
and the high-level programs close to the programmer must
be correctly compiled into those instructions. This
semantic gap between programming languages and machine
instructions is central to the evolution of compilers and
to microprocessor architectures as well. The compiler
developer's role is to help close the gap by
preserving the correctness of the compilation and at the
same time resolving the trade-offs between the
optimizations needed for improvements "close to the
programmer" and those needed "close to the
machine."
To put the work described in this Journal into
context, it is helpful to think about the changes in
compiler requirements over the past 15 years. It was in
the early 1980s that the direction of future computer
architectures changed from increasingly complex
instruction sets (CISC) designed to support high-level
languages to much simpler, reduced
instruction sets (RISC). Three key research
efforts led the way: the Berkeley RISC processor, the IBM
801 RISC processor, and the Stanford MIPS processor. All
three approaches dramatically reduced the instruction set
and increased the clock rate. The RISC approach promised
performance improvements of up to a factor of five over CISC
machines built with the same manufacturing technology.
Compaq's transition from the VAX to the Alpha 64-bit
RISC architecture was a direct result of the new
architectural trend.
As a consequence of these major architectural changes,
compilers and their associated tools became significantly
more important. New, much more complex compilers for RISC
machines eliminated the need for the large, microcoded
CISC machines. The complexities of high-level language
processing moved from the petrified software of CISC
microprocessors to a whole new generation of optimizing
compilers. This move caused some to claim that RISC
really stands for "Relegate Important Stuff to
Compilers."
The introduction of the third-generation Alpha
microprocessor, the 21264, demonstrates that the shift to
RISC and Alpha system implementations and compilers
served Compaq customers well by producing reliable,
accurate, and high-performance computers. In fact, Alpha
systems, which have the ability to process over a billion
64-bit floating-point numbers per second, perform at
levels formerly attained only by specialized
supercomputers. It is not surprising that the Alpha
microprocessor is the most frequently used microprocessor
among the 500 largest supercomputing sites in the world.
After reading through the papers in this issue, you
may wonder what is next for compilers and tools. As
physical limits curtail the shrinking of silicon feature
sizes, there is not likely to be a repeat of the
performance gains at the microprocessor level, so
attention will turn to compiler technology and computer
architecture to deliver the next thousandfold increase in
sustained application performance. The two principal laws
that affect dramatic application performance improvements
are Moore's Law and Amdahl's Law. Moore's
Law states that performance will double every 18 months
due to semiconductor process scaling; Amdahl's
Law expresses the diminishing returns of enhancements
that speed up only part of a system. In the next 15 years, Moore's
Law may be stopped by the physical realities of scaling
limits. But Amdahl's Law will be broken as well, as
improvements in parallel languages, tool development, and
new methods of achieving parallelism positively
affect the future of compilers and hence application
performance. As you will see in the papers in this issue,
there is a new emphasis on increasing execution speed by
exploiting the multiple instruction issue capability of
Alpha microprocessors. Improvements in execution speed
will accelerate dramatically as future compilers exploit
performance-improvement techniques that take advantage of
new capabilities of the evolving Alpha architecture. Compilers will deliver new ways of
hiding instruction latency (reducing the performance gap
between vector processors and RISC superscalar machines),
improved unrolling and optimization of loops, instruction
reordering and scheduling, and ways of dealing with
parallel decomposition and data layout in nonuniform
memory architectures. The challenges to compiler and tool
developers will undoubtedly increase over time.
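For readers who want the formula behind the diminishing returns
mentioned above, Amdahl's Law is conventionally stated as follows
(the standard textbook form, included here for reference rather than
taken from the papers in this issue): if an enhancement speeds up a
fraction f of a computation by a factor s, the overall speedup is

    speedup = 1 / ((1 - f) + f/s),

which can never exceed 1 / (1 - f) no matter how large s becomes;
that bound is what makes the serial fraction, and hence parallel
languages and tools, so important.
Similarly, as a purely illustrative sketch of one of the loop
optimizations mentioned above, the C fragment below shows a simple
loop and a hand-unrolled version of the kind an optimizing compiler
might generate; the unrolled body exposes four independent
multiply-adds per iteration to a multiple-issue processor. The
function names and the unroll factor are hypothetical choices for
this example, not details of the Compaq compilers described in this
issue.

    #include <stddef.h>   /* for size_t */

    /* Original loop: one multiply-add per iteration. */
    void daxpy(double *y, const double *x, double a, size_t n)
    {
        for (size_t i = 0; i < n; i++)
            y[i] += a * x[i];
    }

    /* Unrolled by four, roughly what an optimizing compiler might
       produce: the four statements in the main body are independent,
       so a multiple-issue processor can schedule them in parallel. */
    void daxpy_unrolled(double *y, const double *x, double a, size_t n)
    {
        size_t i = 0;
        for (; i + 4 <= n; i += 4) {
            y[i]     += a * x[i];
            y[i + 1] += a * x[i + 1];
            y[i + 2] += a * x[i + 2];
            y[i + 3] += a * x[i + 3];
        }
        for (; i < n; i++)          /* clean up any leftover iterations */
            y[i] += a * x[i];
    }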
By not relying on hardware improvements to deliver all
the increases in performance, compiler wizards are making
their own contributions -- always watchful of correctness
first, then run-time performance, and, finally, speed and
efficiency of the software development process itself.
Bill Blake