The Digital Data Storage, or DDS, format for tape drives was developed in 1989 to meet the need for high-capacity, compact tape backup for network servers
and small multiuser systems. The DDS standard is based on the Digital Audio
Tape, or DAT, standard and has been extended as backup capacity requirements
have increased. DDS-2 drives have recently become available, and higher-capacity
DDS-3 and DDS-4 specifications have already been approved. DDS-2 drives can
store four gigabytes of data on a single cartridge, or typically eight gigabytes
with 2-to-1 data compression. The HP C1533A DDS-2 tape drive can record a full
DDS-2 cartridge in just over two hours, running at a data transfer rate of
510 kilobytes per second. This is almost an hour faster than typical DDS-2
drives. As explained in the article on page
6,
achieving this performance required
improvements in tape material, tape length, tape thickness, read and write
heads, drum design, and linearity measurement and adjustment.
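As a quick sanity check on those figures (a back-of-the-envelope sketch, not something from the article), the native capacity divided by the transfer rate does indeed come out to a little over two hours:

```python
# Rough check of the DDS-2 backup time quoted above (illustrative arithmetic
# only). Four gigabytes streamed at 510 kilobytes per second takes just over
# two hours; with 2-to-1 compression the same tape holds twice the host data.
capacity_kbytes = 4_000_000            # 4 GB native capacity, in kilobytes
transfer_rate_kb_per_s = 510           # drive transfer rate, kilobytes per second
hours = capacity_kbytes / transfer_rate_kb_per_s / 3600
print(f"{hours:.2f} hours")            # about 2.18 hours
```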
For many systems, eight gigabytes isn't enough, and it isn't convenient for
the typical user to change cartridges during the backup, which must often be
done at night. The HP C1553A DDS tape autoloader was developed to meet this
need. As Steve Dimond tells us in the article on page 12, the size constraints
gave the designers their major challenge. The autoloader had to fit into a
standard 5 1/4-inch peripheral enclosure (about 5.75 inches wide), incorporate
the four-inch-wide HP C1533A tape drive, hold as many tapes as possible, and
be reliable and ergonomic. "As many tapes as possible" turns out to be six,
giving the autoloader a typical capacity of 48 gigabytes with 2-to-1 data compression.
Different strategies for using this capacity for network backup are discussed
in the article. The complex retry algorithms required for controlling the autoloader
are defined in state tables, which are generated by an automatic tool that
greatly increases readability and maintainability, as explained in the article
on page 21.
In a companion article, on page 27, firmware designer Mark Sims
shares with us his experience using different approaches to the implementation
of state machines, exploring the advantages and disadvantages of each approach.
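As one small illustration of the idea (a sketch only, not the autoloader firmware or the approaches the articles compare), a table-driven state machine keeps the control logic in a table of transitions rather than in nested conditionals:

```python
# A minimal table-driven state machine (a sketch only, not the autoloader
# firmware). Each entry maps (current state, event) to (action, next state),
# so the control logic lives in data rather than in branching code.
def load():   print("load cartridge")
def eject():  print("eject cartridge")
def ignore(): pass

TRANSITIONS = {
    ("empty",  "insert"): (load,   "loaded"),
    ("loaded", "eject"):  (eject,  "empty"),
    ("loaded", "insert"): (ignore, "loaded"),   # redundant request: no action
}

def run(events, state="empty"):
    for event in events:
        action, state = TRANSITIONS.get((state, event), (ignore, state))
        action()
    return state

print(run(["insert", "insert", "eject"]))       # -> empty
```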
Debuggers are software tools that developers use to find bugs in programs
and to analyze program behavior. The debugger described in the
article on page 33 is called HP DDE, which stands for distributed debugging
environment, meaning that this debugger can debug programs running on remote
computers. It's an event-based debugger, which means that it responds to user-specified
events that occur during program execution. It consists of a main debugger
that communicates with several modules called managers, which handle dependencies
on specific languages, object code formats, target platforms, and user interfaces.
This modularity has made it easy to retarget the debugger to many different
languages and computer platforms, both HP and non-HP.
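A toy sketch of that structure (assumed names, not HP DDE's actual interfaces) is a core engine that fires user-registered event handlers and delegates language- and interface-specific work to pluggable manager objects:

```python
# A toy sketch of an event-based, manager-structured debugger (assumed names,
# not HP DDE's actual interfaces). The core engine only dispatches events;
# the managers supply language- and user-interface-specific behavior.
class CLanguageManager:
    def format_value(self, raw):
        return f"(int) {raw}"

class TerminalUIManager:
    def report(self, text):
        print(text)

class Debugger:
    def __init__(self, language, ui):
        self.language, self.ui = language, ui
        self.handlers = {}                 # user-specified event -> handler
    def on(self, event, handler):
        self.handlers[event] = handler
    def fire(self, event, data):
        if event in self.handlers:
            self.handlers[event](data)

dbg = Debugger(CLanguageManager(), TerminalUIManager())
dbg.on("breakpoint", lambda v: dbg.ui.report("stopped, x = " + dbg.language.format_value(v)))
dbg.fire("breakpoint", 42)                 # -> stopped, x = (int) 42
```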
Most engineers are familiar with Fourier analysis, in which a time-varying
voltage is expressed as the sum of a set of sinusoidal basis functions of different
frequencies and amplitudes. Wavelet analysis, described in the article on page 44,
is similar, but the basis functions, called wavelets, are not sinusoidal
and are localized in time and frequency. The properties of wavelets make them
useful for processing nonstationary signals such as a sum of gliding tones
or a sum of three signals that start at different times. The article gives
an overview of wavelet analysis and describes a software toolbox created by
HP Laboratories Japan to aid in the development of wavelet applications.
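To make the contrast with Fourier analysis concrete, here is a minimal sketch of one level of a Haar wavelet decomposition (Haar is used only because it is the simplest wavelet; the article and the toolbox cover far more). A transient shows up only in the detail coefficients near the time it occurs, which no single Fourier coefficient can do:

```python
import numpy as np

# One level of a Haar wavelet decomposition (the simplest possible example,
# not the HP Laboratories Japan toolbox). Each detail coefficient belongs to
# a specific stretch of time, so a transient appears only in the coefficients
# near where it happens.
def haar_step(x):
    x = np.asarray(x, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # smoothed, half-rate signal
    detail = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # localized differences
    return approx, detail

t = np.arange(64)
signal = np.where(t < 32, 0.0, np.sin(2 * np.pi * t / 8.0))   # tone that starts late
approx, detail = haar_step(signal)
print(np.abs(detail[:16]).max(), np.abs(detail[16:]).max())   # ~0 early, large late
```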
"Test vector" is a test engineer's term for a pattern of ones and zeros that an
automatic tester applies to the inputs of an integrated circuit or a printed
circuit assembly to make sure that it works. Generating and verifying test
vectors is a nontrivial process that's carried out with the aid of specialized
software tools and can take six months or so to complete. As explained in the
article on page 55, engineers at one HP laboratory have been able to reduce
this time to four months, thanks to five techniques that mainly verify that
the test access port is functioning properly.
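As a toy illustration of the term (not taken from the article), each vector pairs a pattern of input bits with the output bits a good part should produce, and the tester simply applies every pattern and compares:

```python
# A toy illustration of test vectors (not from the article). Each vector is a
# pattern of input bits plus the outputs a good part should produce; the
# tester applies every pattern and compares the actual response.
def device_under_test(a, b):            # stand-in for the real circuit: a half adder
    return (a ^ b, a & b)               # (sum, carry)

test_vectors = [                        # (input bits, expected output bits)
    ((0, 0), (0, 0)),
    ((0, 1), (1, 0)),
    ((1, 0), (1, 0)),
    ((1, 1), (0, 1)),
]

for inputs, expected in test_vectors:
    assert device_under_test(*inputs) == expected, f"fails on {inputs}"
print("all vectors pass")
```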
It's known that detecting and fixing software bugs early in the development
cycle is much less costly than dealing with them late in the cycle. Software
inspections are one means for early bug detection. One HP software laboratory
has used data collected during inspections and testing to estimate the value
(expressed as the return on investment) of inspections and early testing. Their
results show a return on investment of 787% for inspections, compared to 229%
for testing. Details are in the article on page 60.
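Return on investment here is the usual ratio of net savings to cost, expressed as a percentage. A sketch with hypothetical engineering-hour figures (not the article's data) shows how a number like 787% arises:

```python
# Return on investment expressed as a percentage of the effort invested.
# The hour figures below are hypothetical, chosen only to show the arithmetic;
# the article derives its 787% and 229% figures from the lab's own data.
def roi_percent(hours_invested, hours_saved):
    return 100.0 * (hours_saved - hours_invested) / hours_invested

print(f"{roi_percent(100, 887):.0f}%")   # hypothetical: 100 hours spent, 887 saved -> 787%
```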
The latest generation of high-performance microprocessor chips operates at
clock rates up to 150 megahertz and leaves little room for imprecision or uncertainty
in the clock delivery system. Using Intel's Pentium(TM) chip as an example
of this new class of processors, the article on page 68 shows that the jitter
tolerance for the clock delivery system can be as low as 50 picoseconds, making
it essential to use a low-jitter signal source such as the HP 8133A pulse generator
when making measurements on such systems. The HP 8133A's jitter specification
of five picoseconds (rms) ensures that most of the measured jitter comes from
the clock delivery system and not from the signal source.
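The reasoning behind that claim is that independent rms jitter contributions add roughly as a root sum of squares, so a source whose own jitter is a tenth of the tolerance inflates the measurement by well under one percent. A quick sketch of that arithmetic (under the root-sum-of-squares assumption; the article's jitter budget is more detailed):

```python
import math

# Independent rms jitter contributions add roughly as a root sum of squares
# (an assumption for this sketch). A 5 ps rms source therefore barely
# inflates a 50 ps measurement of the clock delivery system.
def combined_rms_jitter(dut_ps, source_ps):
    return math.hypot(dut_ps, source_ps)

total = combined_rms_jitter(50.0, 5.0)
print(f"{total:.2f} ps, {100 * (total - 50.0) / 50.0:.2f}% above the system's own 50 ps")
```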
The report on page 80 presents some recent results of an HP Laboratories project
aimed at modeling and simulating a manufacturing enterprise. The goal of this
ongoing research is to learn to predict the likely results of changes using
sound engineering principles and techniques. The results in this report are
from simulation experiments using a model called the Simple Model because of
its structural simplicity. Despite its simplicity, the model displayed complex
dynamic behavior and produced unexpected results. The author suggests that
application areas for enterprise modeling and simulation include estimating
the effects of incremental improvements, studying the impacts of process changes,
generating enterprise behavior information, and increasing the chances for
success of reengineering efforts.
R.P. Dolan,
Editor