We have called the Technological War the decisive war, and
have stated that the United States has not always done well in
its conduct of that war. The reasons for our repeated
failure in technological warfare -- despite the fact that we are
far and away the most advanced technological power and have
expended far more money, manpower, time and resources on military
technology than all other nations combined -- require careful
study. There is no reason why the United States cannot
maintain a decisive advantage in the Technological War, and,
moreover, do so with the expenditure of no more resources than
are now being used up in our present wasteful efforts.
In our national strategy far too much attention has been paid to current affairs and specific conflict situations. Instead of a real technological strategy we have had a series of unrelated decisions on specific problems. There have been attempts to integrate the individual decisions, but these attempts have often resulted in even more waste and inefficiency. Examples abound: consider the fanciful expectations for the TFX (FB-111), the joint-service fighter aircraft program; or the Sergeant York air defense gun, which, though originally a reasonable idea, was micromanaged, given impossible goals to meet, and eventually cancelled.
The fact is, we had no mechanism for generating a strategy of technology. The Joint Chiefs of Staff have been an inter-service negotiating board; and since the officers who serve the Joint Chiefs must depend on boards of officers drawn from their own branch of service for promotion, there has been little chance that anyone would or could develop loyalty to the Joint Chiefs as an institution.
In the late 1980's, the situation began to change. Under the urging of the Reagan Administration, the Commanders in Chief (CINC's) of the major operating forces -- SAC, EUCOM, PACOM, SOUTHCOM, SOCOM, and SPACECOM -- were given responsibility for generating requirements and for both advocating and defending programs. The struggle within the Joint Chiefs thus became one among the CINC's for resources, with the JCS, and especially the Vice Chairman, acting as adjudicators. The Services started to become responsible solely for personnel, R&D, logistics, and budget, and their role in operations began to disappear. However, there is no technological CINC, and no clear career path for the developing technological strategist within any branch of service.
In the pages below we open with an overview of Soviet technological strategy as it contrasts with ours. We will then give examples of U.S. successes and failures in four periods:
1950's: ICBM and the nuclear-powered airplane
1960's: SLBM, Apollo, space technology and satellites, and TFX
1970's: MIRV, new fighters, and the Shuttle
1980's: B-1, SDI, cruise missiles, MX, C3I, and B-2
We follow with more examples of Soviet achievements during the same time periods:
1950's: H-bomb, ICBM/IRBM, space boosters
1960's: Nuclear powered submarines, advanced fighters, tanks
1970's: Manned space program; MIRV
1980's: Mobile ICBM
We will then examine the lessons learned from these examples.
Although the Soviet Union begins from a lower technological and industrial base, some of its achievements in the Technological War have been impressive.
In contrast to the diffusion of effort, centralization of decision making, and micromanagement which characterize American technological strategy, the Soviets have a strategy of focusing their efforts, including basic and applied research. Central direction and control are key aspects of their use of technology. This means that discovery must be on schedule. The motivation of Soviet scientists has been an important factor in meeting goals, but sanctions and punishment are also an important part of the Soviet system. By focusing their efforts the Soviets allow those areas which they do not consider important to their strategy to atrophy.
The Soviet priority system places military technology and fundamental industry a long way ahead of any other aspects of technology. In part this neglect of other technology is then compensated for by purchase of nonstrategic goods and technical processes from the West; scientific exchange programs; industrial espionage and piracy; and general exploitation of Western achievements.
Arms negotiations to slow down the U.S. technological challenge by eliminating key weapons and technologies have always been a key part of the Soviet strategy of technology. The Soviets naturally seek to negotiate the elimination of technologies in which they are weak, and to retain those in which they are strong.
The INF treaty is a prime example. Under INF an entire class of weapons -- nuclear and non-nuclear -- was eliminated. Not only were the nuclear tipped IRBM's destroyed, but the non-nuclear systems, while not destroyed, cannot be improved by new technologies. The result was to increase, not decrease, the strategic imbalance in Europe, because the U.S.S.R. has no great need of IRBM systems, while the U.S. and NATO do not have a good substitute.
The Soviet commanders of the Technological War can afford to wait for consumer technology and goods, and concentrate their efforts on winning the decisive war. This remains true during the era of glasnost; although there is an emphasis on decentralization of civilian technology and the production of consumer goods, there has been little noticeable decrease in military spending; this remains true in late 1989, even after the fall of the Berlin Wall. Even given cuts in the overall Soviet military budget, it is highly likely that there will be little or no decrease in military R&D.
The Soviets concentrate their technical and engineering talent
on the decision and design phases of technology for those systems
which are most important to their strategic goals. This
permits them to weigh the relative merits of alternative
technical approaches to their strategic goals and use what they
have learned from Western technology to aid the production
process. Their strategy facilitates finding a near-optimum
approach to a variety of goals, and is designed to compensate for
their inferiority in overall technical resources. The
point is, despite the enormous Western superiority in total
quantity of technological resources, the U.S.S.R. has been able
to concentrate more effort than we have on selected portions of
weapons technology and to gain superiority in many phases of
military technology driven by strategy.
In their designs the Soviets make simplicity an important criterion for both production and operation. Success in achieving simplicity leads to low costs of production and, importantly, to high reliability of operation. Simplicity also allows them to operate the systems with personnel who have only rudimentary training and skills, and to reserve their limited supply of highly skilled technicians for research and development.
Because their deadlines are self-imposed, the Soviets can take their time about selecting designs. This was the pattern they followed in military computer technology. After making a survey of Western advances on a variety of fronts, they chose an optimum path to follow.
The West has a defensive strategy. Although we would welcome the disintegration of the Soviet Empire, we strive mostly to preserve the status quo. This imposes few deadlines on the Soviets, who can afford to take their time. Western achievements in the Technological War are not threats to Soviet national existence. The defensive nature of Western strategy prevents us from fully exploiting our advantages. However, there are ways in which we can force the Soviets to react to our initiatives.
Recently, through programs like SDI and high-precision weapons to target command posts, we have started to find ways to exploit our strengths and Soviet weaknesses. The new [1988] buzzword for the concept is "competitive strategies." The concept has had spectacular success in recent weeks.
This may be the place to note that the first edition of this book was written at a time when the US was NOT doing well in the technological war. That changed, partly -- some would say in large part -- due to this book's employment as a text in the military academies and War Colleges. Things change so quickly now that we cannot rewrite everything; there will of necessity be residual elements of our older polemic against US policy. The fact is, though, that much of what we advocate was adopted in the Reagan era. Alas, not all of it; hence this second edition.
The Soviet strategy in the Technological War would not be an
optimum strategy for the West, precisely because neither motives
nor resources are symmetrical. The West has vastly
superior resources, and can afford nonspecific research to find
unsuspected technological advantages. The West can afford
to decentralize a part of its decision-making process and employ
a variety of technological approaches, particularly during the
scientific and advanced engineering research phases of the
technological discovery process. Whereas the Soviet Union
can afford only one "center of gravity" for its
efforts, we can afford several.
As a consequence of the asymmetries of motive and resources, it would be foolish to copy the Soviet strategy for the Technological War. We can afford a more sophisticated strategy, and will have a far higher probability of success. What we cannot afford is the luxury of having no strategy at all.
By contrast with the Soviet strategy of focusing effort on the development of specific technological achievements, working on each problem until it is solved, and concentrating their technological forces on a carefully chosen center of gravity, the United States has had a number of projects, some successful and some not; there has been little or no overall technological strategy.
Our technological decision-making process is scattered throughout a number of agencies and departments of the government, most of which are not under the control of the Secretary of Defense and many of which are not represented on the National Security Council. For example, our civil space program under NASA, though supported by federal appropriations, has rarely been coordinated with military requirements, and can hardly be governed by our nonexistent strategy of technology.
When we wrote those words in 1969 it was all too true that there was no technological strategy. During the Reagan era that changed somewhat. Although a full reorganization that would have created a technological war plan was never implemented, at least the subject was taken seriously. General Daniel O. Graham's analysis of moving to space as a "strategic side step" spoke in explicit strategic terms, and had considerable influence on strategic thinking. After the low ebb of the McNamara era there was renewed interest in an overall strategy of technology. The decisive moment came in Iceland, when Gorbachev pleaded with Reagan to abandon SDI and strategic defenses; Reagan refused, and thereby brought about the collapse of the Soviet Union, although it was not apparent at the time that this would happen so quickly. The USSR was at that time spending far more of its national budget on weapons (hardly defense) than was admitted by the CIA or the Department of State. Possony, Pournelle, and Kane, along with General Graham, continued to insist that the USSR was spending some 30% of GNP on weapons and military power. We privately suspected that it was more (and in fact it was), but official opposition to our 30% estimate was surprisingly hostile. The official US estimate was under 20%.
The history of nuclear-propulsion aircraft illustrates the
problems inherent in the present system. In
an effort to advance nuclear technology while living within
budget limitations, the military tried to play scientific
politics. Because of the need to justify funds on the
basis of practical systems rather than their contribution to the
Technological War, at times the military tried to set up
requirements for nuclear-propulsion aircraft systems. These
requirements were beyond the realm of technological possibility
and resulted in opposition from the scientific community. At
other times, the military justified funds on the basis of
scientific experiment.
Here the generals
subjected themselves to the inevitable arguments and divisions
among scientists.
The decision fell to the timid.
There was never an attempt to analyze the problem in its strategic context, or even to consider it historically, for example by comparing it to the invention and development of the jet engine. If Whittle's work had been subjected to an experience similar to that of the nuclear engine, we would not have jet aircraft today. In addition to the arguments about technical feasibility, moreover, the question was raised: what can the nuclear aircraft do that the jet aircraft cannot do cheaper and faster? Inasmuch as no nuclear-propulsion aircraft existed and its ultimate capabilities were unknown, this question was hardly intelligent; and although its detractors admitted that the nuclear aircraft could stay aloft for long periods, the significance of this characteristic for our defensive strategy was not understood. More importantly, the far-reaching consequences of practical development of nuclear propulsion were never seriously analyzed.
A further difficulty was that some members of the military never quite understood the problem and some were ready to sacrifice the overall project for systems that could be made available earlier. Others wanted immediately an airplane with performance characteristics superior to those of our most modern jets -- as though an entirely new technology does not require lead time and as though a mature chicken jumps out of the egg.
The scientists should not really have mixed in the strategic debate, but they were in fact the only ones who argued the question. They broke up into several small groups, opposing or rejecting nuclear aircraft, nuclear-rocket propulsion, or nuclear ramjets, or dismissing nuclear propulsion altogether. The scientists who had the greatest impact on the negative decisions affecting the nuclear-propulsion aircraft were the graduates of one laboratory which was always opposed to this program -- for good or bad reasons. While they were instrumental in killing the plane, they did not appreciably advance the cause of the nuclear ramjet or rocket that they were in charge of developing and that they claimed was a more promising approach.
The politicians didn't understand the problem, either. One Secretary of Defense called the nuclear-propulsion aircraft "a shitepoke which could barely get off the ground."
As a result there were innumerable stop-and-go decisions. While it is true that about $1 billion was spent, at least one half was spent on waste motion. It is said that we have nothing to show, but this is not true. We do have the know-how to fly low-speed, experimental and test aircraft. This is precisely the one type of aircraft we could be flying now, and which someone will one day develop.
This experiment should have been the signal for the military to face up to the technological age, especially to prepare a technological strategy to meet the new Soviet challenge and to organize better ways to implement such a strategy.
In 1988 almost nothing remains of the nuclear propulsion experiments; and although nuclear aircraft may never play a role in the technological war, nuclear propulsion could in future be decisive in space. Unfortunately, the nuclear rocket programs, such as NERVA and DUMBO, were also mired in internecine warfare, and eventually closed down as well.
The mismanagement of the nuclear airplane project is a textbook example of how not to conduct a program.
By contrast, the IRBM and ICBM programs were well developed and well managed in the 1950's. As an example, the Thor IRBM was brought from conception to operational capability in just over three years. (Thor follow-on rockets are used for satellite launches to this day.) Instead of programs designed by scientists to investigate a technology, IRBM and ICBM systems were designed, fielded, and operational in a very short time period, largely because General Schriever instituted dramatically new management procedures, including concurrent development of the components and subsystems.
In this period Admiral Red Raborn married the nuclear submarine and ballistic missile in a "special project" which produced the Polaris, and later the Poseidon and Trident boats and Submarine Launched Ballistic Missiles.
The program was important not only because of its direct effect on strategic deterrence, but also because of its adoption of new management principles, and its demonstration that it was still possible to produce strategic weapons systems in a timely and cost-effective manner without micromanagement from the Pentagon.
The Apollo program of manned exploration of the Moon was certainly the outstanding achievement of this century. It is a landmark of what the U.S. could achieve given a challenge to its scientific and engineering community.
The Apollo program was also the most complex action ever undertaken by the human race. It is interesting to note that the second most complex activity in history was Overlord, the Allied invasion of Normandy in 1944. Although Apollo was accomplished outside the Department of Defense, it was no accident that many of the key leaders, such as General Sam Phillips, were highly experienced managers of advanced military technology programs.
The Apollo program was mission oriented. Its management structure closely resembled a military organization. Instead of micro-management from the top, there was delegation of authority. Tasks were narrowly defined, and responsibility for achieving them was spelled out in detail. As with the ICBM program, parallel processes were set up to investigate alternate ways of achieving critical tasks.
The result was that technology was produced on demand and on schedule. Setbacks and even tragedies such as the capsule fire did not halt the program. On July 20, 1969, the Eagle landed on the Moon, a little more than eight years after President Kennedy set a task which much of the scientific community said could not be accomplished in two decades.
In 1962 Project Forecast identified a requirement for new military aircraft. Systems designs began shortly thereafter.
Unlike the Apollo program, both the fighter and bomber programs were micromanaged from the top. There were endless reviews and appeals.
As a result, the first of the new generation of fighter aircraft was not rolled out until the mid-70's and was not in the operational inventory in numbers until considerably later; both the Navy and the Air Force are now flying aircraft whose basic designs are twenty years old.
The B-1 fared even worse. Not only was there micromanagement, review, and appeal, but the program itself was cancelled by political authorities. The first operational B-1 was delivered in 1983; we now have a full inventory of 100 B-1 bombers.
The B-1 bomber and the F-14, F-15, F-16, and F-18 fighters are probably the most advanced aircraft of their kind in the world; but the contrast between the 8 years from conception to operation of Apollo, and the 16 and more years from design to operation of these aircraft, is worth noting; particularly when contrasted with the rapid development and deployment of the P-51 and P-47 aircraft during World War II. Recall that the P-51, then the world's most advanced fighter, went from drawing board to combat operation in under a year.
Note also that the reviews and delays characterizing the development and procurement of the B-1 and the new fighters did not save money. The total program costs were considerably higher than they would have been had we set up a management structure similar to Apollo; indeed, the total costs of these programs exceeded that of Apollo, which was brought in on time and under budget.
The major technological developments with strategic implications for the 1970's were new techniques for increasing ICBM accuracy, and the capability for deploying Multiple Independently-targetable Reentry Vehicles (MIRV).
These capabilities stimulated spirited debate between the advocates of security through Arms Control and the military services.
Arms control advocates said that MIRV was inherently destabilizing: that is, if each missile had the capability for destroying a large number of enemy missiles, then there would be a military incentive to launch first in crisis situations.
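The arms controllers' arithmetic can be made concrete with a minimal sketch in Python; every number below is invented for illustration and is not an estimate of any real force:

    # Hypothetical first-strike arithmetic under MIRV.
    # All figures are invented for illustration only.
    attacker_missiles = 100
    warheads_per_missile = 3      # MIRVed payload (assumed)
    p_kill = 0.8                  # assumed chance one warhead destroys one silo

    # Assign one warhead to each target silo.
    silos_destroyed = attacker_missiles * warheads_per_missile * p_kill
    print(silos_destroyed / attacker_missiles)   # 2.4 enemy missiles per missile fired

Without MIRV the same attack destroys at most 0.8 enemy missiles per missile fired; with MIRV the exchange ratio exceeds two for one, and that is precisely the first-strike incentive the arms controllers feared.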
Strategic analysis gave a different answer: given the limited size of the U.S. missile force, any increase in numbers of Soviet systems would pose an increasing threat to the U.S. SOF, especially since it was known that the Soviet Union was developing new techniques for increasing the accuracy of its missiles. The threat to the SOF could be countered by three different means:
(1) Increase the numbers of missiles in the US SOF
(2) Increase the survivability of the SOF
(2.1) Hardening silos or other passive means
(2.2) Active defense
(3) Increase the effectiveness of U.S. missiles that survived a Soviet first strike.
Of these options, (1) was declared politically undesirable; (2.1) was extremely expensive and given increased Soviet accuracies would soon be impossible; and (2.2) was rejected on political grounds. There remained only (3), which in practice meant MIRV.
The MIRV system was accordingly built, and once the decision was actually made it was reasonably well implemented. However, we should note that the Senate Armed Services Committee tried to prevent the Minuteman III MIRV from becoming accurate enough to attack Soviet missile silos. These efforts delayed the deployment of accurate MIRV by several years.
The most spectacular program of the 70's was the Space Transportation System, popularly known as the Shuttle.
By 1968 it was clear that the Apollo program would perform its mission on schedule. At the same time, the Viet Nam war had created a budget crisis, leading to considerable opposition to the space program. NASA, concerned about retaining its large army of development scientists recruited for the Apollo program, searched for new missions to keep them on the payroll.
The Shuttle was originally proposed as a large reusable general-purpose system for putting heavy payloads into orbit. Simultaneously, the military needed a much smaller and more maneuverable system along the lines of the Dyna-Soar concept.
In order to obtain funds for the Shuttle, NASA combined these incompatible missions, and set out to kill all competing programs. Not only were the remaining fully operational and man-rated Saturn rockets laid on their sides as lawn ornaments, but all Saturn facilities were closed, and even the plans for the Saturn were ordered destroyed as "useless archives." NASA officials conducted a campaign to discredit all possible opposition to Shuttle.
The Shuttle became the "National Space Transportation System", able to meet all possible space missions. The Air Force had previously studied a mission in which an orbital surveillance vehicle would be launched into polar orbit from Vandenberg; overfly the Soviet Union; then reenter and land at Edwards AFB after one orbit. It was not a mission that inspired USAF enthusiasm, but the Air Force was bullied into supporting Shuttle, and this looked as good as anything.
Unfortunately, the specified mission required atmospheric maneuvering, which dictated that the Shuttle would have wings. The wings dictated horizontal landings. They also greatly complicated the system design. A smaller vehicle intended for this mission could have been built, but NASA insisted that Shuttle could do the entire job. Wings plus Shuttle's large payload requirements dictated increasingly large rocket engines to get the craft into orbit.
There were other design changes. The original concept of a spacecraft that would be "reusable like an airplane" disappeared; instead there would be a lengthy refurbishing period whose cost could only be estimated.
The original design for a reusable vehicle proposed liquid fueled booster engines as well as a liquid fueled main engine. The alternative was solid fuel boosters. Developing the liquid booster engines would have cost more money to begin with, but would make for great savings in operational costs; NASA chose to argue for the lower up-front costs, on the theory that once the commitment was made, Congress would have no choice but to appropriate the additional funds for Shuttle operation.
The solid fuel engines could have been designed in one piece; however, except for barges on the Intracoastal Waterway, there was no transportation system for shipping such large objects filled with high explosives. The only plant on the coastal waterway system capable of building the one-piece engines was Michoud in Louisiana. That plant had been closed, and re-opening it would require up-front money. There were also political considerations. The result was that the boosters were designed to be built in segments and made in Utah.
The Congress, partly in reaction to NASA's constant inability to meet either budgets or schedules, imposed funding limits and budget stretchouts. Since delaying a program never saves money, the overall costs grew accordingly. However, this was not the only reason for runaway costs in the program, as NASA continued to make design changes at every stage of the development process.
Shuttle program expenses grew until each Shuttle craft cost more than $2 billion. The first Shuttle flew on April 12, 1981, more than three years after it had originally been scheduled. During that time we lost Skylab, an operational space station, which could have been rescued had we retained the Saturn rockets which NASA deliberately destroyed.
No Shuttle ever met its design criteria for payload weight or refurbishing costs. Shuttle Challenger was destroyed by the failure of the joints in one of the segmented solid booster rockets.
The Reagan administration ordered the resurrection of the B-1 program which had been cancelled by President Carter. The procurement was turned over to a slimmed-down organization, and, with little interference from above, the full inventory of 100 aircraft was delivered on time and under budget, in under four years.
During the 1980's, the Strategic Defense program has clearly been the dominant area of competition in the Technological War. When the decade began, most scientists and military strategists believed that defense against the ICBM was impossible. How could you hit a bullet with a bullet?
Nevertheless, on March 23, 1983, President Reagan challenged the scientific community to develop a meaningful ballistic missile defense system. As happened with the ICBM and Apollo programs, the response was nearly incredible. Within two years a range of new applications of technology in the areas of propulsion, sensors, guidance, and even production was generated. By 1988 there were a number of alternate systems which could meet the challenge.
We will draw the lessons to be learned from these examples in later sections and chapters. First, we should examine the way technological planning is now conducted.
The assumptions that appear to govern our conduct of the
Technological War are shown on Chart 3.
They derive from a misunderstanding of the nature of war and
from a failure to appreciate the nature of technology. Because
these assumptions are based on an improper appreciation of the
real world, it is no surprise that despite our enormous
expenditures the United States has failed to exploit its
advantages to take a commanding lead in the Technological War.
As of 1988 there remains a window of vulnerability: new
advances in both defense and offense technologies now make it
possible for the U.S.S.R. to develop a Full First-Strike
Capability unless we act swiftly and skillfully.
Fortunately, the Soviets under Brezhnev were unable fully to exploit their opportunity; even so, they were able to construct a highly threatening ICBM force, and their lead in strategic nuclear forces continued to grow during the Brezhnev regime and beyond. Meanwhile, the Soviets began an extensive program of R&D into missile defense systems, and deployed some long-term components of a working continental missile defense system.
Although the present U.S. assumptions are based on a false picture of strategic and technological reality, they are all the assumptions we have, and they generate what little strategic direction our efforts are given. The assumptions, and the various directives which can be derived from them, therefore merit a great deal more study than has been given to them in the past.
CHART THREE
Other postulates, derived from the assumptions on Chart 3, include the proposition that since we are not at war, we do not need an overall technological strategy and should not seek technological surprise even if it is possible to obtain it; that since the U.S.S.R. is also interested in stabilizing the "arms race," we should not exploit our advantages by engaging in technological pursuit even if we could; and that since we can do anything we imagine and the technological explosion will inevitably produce anything we need, there is no necessity for an orderly accumulation of the building blocks to expand our military technological base.
If these propositions were put to the managers of our military technology in the explicit form given here, it is likely that many of them would disagree. Yet, an examination of the history of our technological management indicates that each of these factors is at work.
For example, the exploration of space, probably the most important military medium of the future, has been given to civilian agencies that are often unresponsive to military requirements. Worse is the artificial distinction imposed on development of space technology by the National Aeronautics and Space Act of 1958. This Act created a civil space agency, NASA, exclusively for "peaceful purposes" in space. The effect was to constrain the use of space for military missions.
NASA by law is not supposed to respond to military
requirements for space systems. Admittedly, various
pragmatic expedients have been followed to coordinate the
separate civil and military program requirements, such as the
Aeronautics and Astronautics Coordinating Board, and the Space
Task Group of 1969, but those efforts could never produce an
integrated national space program to execute a national
technological strategy for space applications. We
have yet to establish environmental laboratories in space to
develop the basic building blocks for making the use of space the
routine operation that a military mission must be.
Similarly, the National Defense Education Act doles out money
for technological training with no regard to whether those who
have received it will participate in or will hinder national
defense.
Many decisions on military technology have been centered in
the office of the Director of Defense Research and Engineering,
who is sometimes a scientist with no military training. When
we have achieved advances or breakthroughs in military
technology, we often halted short of exploiting them and
attempted to negotiate with the Soviet Union to put them back in
the bottle.
In general there has been little planning for technological
surprise, no integrated strategy of technology, and no
understanding of the meaning of technological pursuit.
The above analysis was written in 1970. By
1989 the situation had changed, although not as much as
it should. Our educational establishments have so
deteriorated that normative scores on both the Scholastic
Aptitude Test and the Stanford-Binet IQ Test have been
lowered; our space program was cut back to a single
Shuttle system which was then mismanaged, delayed, and
stretched out; and our manned space program was
non-existent through the last part of the 70's. Then, when the Shuttle Challenger was lost, instead of rethinking the situation and generating new means for routine access to space, we spent more than two years redesigning the launch vehicles we already had. By late 1989 the consequences of the 1983 SDI decision, coupled with the sheer weight of US economic power and the total incompetence of the Soviet economic system, brought about heavy pressures for change within the Soviet system. This has not changed the fundamentals of technological warfare. It has bought the West a respite. [1989] The respite was followed by the collapse of the USSR, giving the US a chance to rethink our strategy of technology. We are not making good use of this opportunity. The US is at present the only superpower, but this situation need not be permanent. Technology has a way of equalizing vast disparities. The Dreadnought made obsolete much of the naval establishment of 1900. Space weapons can do the same in the year 2000. [1997]
Of the present assumptions, probably the most dangerous is that it is sufficient simply to react to Soviet initiatives in the Technological War. By failing to seize the initiative, we place ourselves in a clearly impossible situation: either we must maintain such decisive superiority over the Soviets at any possible point of breakthrough so that we can concede to them a long lead time and still be able to counter their new weapons; or we must abandon superiority to them whenever we fail to do so.
Wealthy as we are, with enormous reserve power in the form of our industries and laboratories, we cannot keep this posture forever. The abandonment of the initiative is probably the most expensive mistake we have made in the Technological War.
Until SDI there was little conscious effort to use the initiative to drive the U.S.S.R. to decisions which add to our security. For example, we have announced that we will develop penetration aids for our missiles, and deploy those as needed to overcome the Soviet missile defense system. This strategy presupposes high confidence in our estimates of the characteristics and limits of their system, which is a dangerous assumption because the U.S.S.R. is a secretive society about which it is difficult to obtain reliable technological information; but that is not the only hazard. Since the Soviets proceed to exploit defense technology while we merely study endlessly whether or not to pursue what needs to be done, the chances are that they will understand defense far better than we; and understanding defense technology is at least as important to the designers of our penetration systems as it is to our defense systems designers. For lack of a sophisticated understanding of the nature of defense technology, we may fail to understand Soviet defense capabilities and limits.
By contrast, we could have deployed a series of penetration aids, some of which are quite inexpensive, forcing the Soviet Union to adapt their defenses to our offense. As they made such an adaptation, we could change the nature of our offensive weapons, engaging in technological pursuit and forcing them to waste their resources reacting to our initiative. Admittedly this kind of strategy is not simple, but the point is that it was not seriously considered.
In fact, though, we did nothing of the kind, but once again relied on negotiations and treaties. Under the 1972 Anti-Ballistic Missile (ABM) Treaty, both the United States and the Soviet Union agreed to build no more than one ballistic missile defense system; and that system was supposed to protect either missiles or the national command. The Soviets chose to protect Moscow and its national command; we soon abandoned our defensive systems entirely.
Fortunately, this policy was reversed after 1983; but it is instructive to understand the situation prior to the SDI effort. Most of this analysis was written prior to 1980.
Under the ABM Treaty, neither side was to build battle management defensive radars, or to test certain ballistic missile intercept systems. The Soviet phased-array radar at Krasnoyarsk, near Abalakovo, was clearly in violation of that treaty, as the Soviets themselves later admitted. As of 1989 the United States has not begun construction of the radars and other auxiliary equipment needed for a large-scale ballistic missile defense system, nor have we made any other move to seize the initiative in this phase of the Technological War.
We did announce our new policy of SDI. This will be discussed in more detail in a later chapter; for the moment, it is sufficient to note that although strategic defenses can be decisive in the Technological War, SDI is formally defined as a program of pure research, and is not integrated with any scheme for deployment. The United States remains utterly defenseless against nuclear ballistic missile attack.
We also could be devoting some of our technology to making life difficult for the U.S.S.R. in other theaters and areas of the world. It is unlikely to do any great harm if we manufacture small, short-range handguns of extremely inexpensive design and either scatter them broadside in Cuba or threaten to do so. This would, of course, be a diversionary move intended to force some kind of reaction from the other side and cause them to waste their resources. It has no great merit other than as an example; but nothing like it has even been discussed.
We have taken few military initiatives in space. The
list of Soviet 'firsts' in space is long. Our manned space program was in
trouble long before the Challenger disaster. Skylab, the
world's first operational space station, was launched (without
crew) on May 14, 1973. Key elements of the environmental
control system failed to deploy, but on May 25 the first Skylab
crew arrived and soon managed to make the space station
operational. On November 16, 1973, Skylab 4 carried Jerry
Carr, Ed Gibson, and Bill Pogue to the space station, where they
remained for 84 days. That was the last mission to Skylab; except for the Apollo-Soyuz flight of 1975, it was the last American manned mission until the flight of the Shuttle Columbia in 1981. On December 18, 1973, the Soviet
Union launched Soyuz 13. The crew remained in orbit only 7
days; but over the next fifteen years, the Soviets sent up Soyuz
flights of increasing duration, until on February 19, 1986, they
launched their MIR space station, and on March 15 Soyuz T-15 docked with MIR and placed a crew aboard. There have been many crew changes since, and MIR has been manned almost continuously from 1986 to the present.
Skylab was not visited again after the February 1974 return of Carr, Gibson, and Pogue. Manned space was utterly neglected during the Carter Administration. On July 11, 1979, the space station's orbit decayed and Skylab burned up in the atmosphere.
In 1982 in a speech at Edwards AFB, President Reagan announced an intention to "look aggressively to the future by demonstrating the potential of the shuttle and establishing a more permanent presence in space." On January 25, 1984, in his State of the Union address, President Reagan directed NASA to develop a permanently manned space station within a decade. After the initial excitement, it became known as "The Incredible Shrinking Space Station"; every year it was redesigned to have fewer capabilities while costing additional billions of dollars. The present design calls for a station smaller than Skylab.
Meanwhile, our efforts to investigate the military potential of man in space continue to languish. We have no serious program for making space a theater of military operations; instead, we require the military to describe their mission requirements in detail before they are given a chance to explore the space environment and discover its potential. Because they cannot solve this dilemma, we do not capitalize on the military potential of space.
This unfortunate state of affairs continues in the 80's, with the added new wrinkle that space installations are now said to be too expensive and too vulnerable. This will be discussed in detail in the chapter on space systems.
Our missile programs have not yet been designed to maximize the variety of threats and missions inherent in using the aerospace, so that the Soviets have had to do little in the way of wasting resources to be ready for what we might do. By abandoning the initiative we give the enemy the chance to concentrate upon his strategic plans entirely unmolested by the options that we do not take up; and where, by accident, we do achieve a breakthrough ahead of the Soviets, we do not develop the new technology at all.
Yet, a defensive strategy does not imply abandoning the initiative. Properly conducted, a defensive strategy can be stronger than the offensive, particularly if the defender enjoys resources superior to those of his opponent -- as we do. The essence of a good defense is not so much a good offense as planning for surprise -- which requires that the defender exercise initiative and ingenuity.
The foremost characteristic of a good defense is timing. The side which first achieves a new advance can gain significant advantages in the Technological War by exploiting it to the fullest, keeping the opponent uncertain of what may be developed and how it might threaten him, and forcing him always to guard against surprise. A major goal of strategy should always be to achieve surprise, regardless of whether the strategy is offensive or defensive. Weapons systems and scientific research programs should be designed not only for minimum cost, technological elegance, and logistic ease, but also to create maximum uncertainty in the mind of the opponent. Surprise may result from the proper use of technology, but its main impact is upon the enemy's mind.
Surprise may be achieved through the sudden unveiling of a secret weapon. It is more often achieved through the novel use of a familiar system, as in the use of the B-52 against the guerrillas in Vietnam. Surprise is still more often achieved by taking an action the enemy did not consider because, although he knew perfectly well you were capable of performing it, it was completely outside the doctrines he thought governed your actions. This miscalculation may result in a paralysis of thought, because now the enemy has no idea of what to expect next. If you were capable of doing that, what else might you do?
The first bombardment of North Vietnam could have been used to create such a state in the minds of the enemy, had we not gone to such pains to make him aware of just what limits we placed on our future actions. A classic example of surprise is Guderian's thrust through the Ardennes followed by deep penetration of France, producing the collapse of the "finest army in Europe".
Another common method of achieving surprise is through the exploitation of small advantages. Sometimes very small technological differences can be decisive; for example, in air combat during World War II, a speed differential of 20 miles per hour was crucial, even though it was only a small percentage of the total speed of the two airplanes involved. A 10 percent performance advantage in a radar can work a similar result. In war, there are very few prizes for having the second best equipment, even if it is almost as good as the enemy's; if before the combat you thought yours was better, the resulting surprise could be as disastrous as the actual inferiority.
Sometimes surprise can be achieved by deliberate manipulation of the expectations of the enemy, through the design of military equipment to maximize certain crucial variables at the expense of others. The Spitfire was designed to have a faster rate of climb and more firepower than the Messerschmitt, yet it was inferior in most other respects. It was then employed in an operational environment which made use of its advantages and minimized its disadvantages. The result was the disaster to the Luftwaffe that we call the Battle of Britain. Yet, to an aeronautical engineer or an aerodynamics scientist, the Messerschmitt was clearly the better airplane. German scientists and pilots alike were victims of a deliberate policy of technological surprise.
The above example is worth studying. In particular, it should be noted that victory was produced by the combination of aircraft design and strategy, which required careful analysis of far more than aerodynamics and engineering. The victory was won by military decisions, not scientific theories.
The Spitfire example is illustrative of the principle that science, computers, and systems analysis cannot make military decisions, although they can be greatly useful. It was not merely the Spitfire's advantages but the strategy which used them effectively that gave victory in the Battle of Britain. The art of war is the art not only of using your advantages to best account but also of creating advantages you did not previously have by inducing the opponent to make mistakes. It is rather difficult to simulate this on a computer.
The current miraculous substitute for military judgment and creativity is called systems analysis. The authors are familiar with the techniques of systems analysis and often employ them for certain limited purposes. When, however, these techniques are used as a substitute for strategic analysis the results are usually disappointing. One outstanding example is the TFX.
The problem of the TFX (FB-111) is not that it will not fly. Although its crashes have received spectacular publicity, as this is written (1970) the aircraft has in fact a better safety record, for this stage of introduction into the force and number of hours flown, than any attack bomber in recent history. The difficulty of the TFX is that it is not the best airplane for any mission it can fly, and was deliberately designed that way.
This difficulty is the result of trying to save money by designing the plane to do reasonably well at many different missions, at the sacrifice of performance in all. Thus we have an airplane which is a very good second best to the new MiG in the air superiority mission; and although useful in other missions, it is not as good as the aircraft we could have for those roles, yet it is costing more than the optimum plane for any single mission would.
In the first edition we did not argue against the continued introduction of the TFX into the force. If called the A-111 and used for the attack-interdiction mission, it remains a good airplane. During the bombing of North Vietnam, the FB-111 was so clearly superior to anything else we had that a sortie by three TFX gave results equivalent to strikes by up to 40 other aircraft, and at far less cost. (FB-111 was also effective in the strikes against Libya.) This illustrates the well-known principle that in general the most technologically advanced system is the cheapest system when it must actually be employed in war.
However, the TFX is not an optimum attack bomber. It costs far more than the attack bomber we should have built and must build in the 1990's. It suffers from design defects directly traceable to the effort to make it useful for other missions, and these defects contributed greatly to the much-publicized crash record of the TFX. For a lot less money we could have had not only a better attack bomber but a second airplane to give close support of ground troops -- something the TFX was also supposed to do but for which it was so badly designed that it was never attempted. It was also supposed to be able to fly from aircraft carriers; that too was never attempted, but the requirement delayed the aircraft and influenced its design.
Analysis of the TFX is compounded by the political interference with the military source selection boards, and the awarding of the production -- over the objection of eleven military boards -- to a Texas company instead of the greatly-favored Boeing, of Seattle. This was not, however, the crucial decision in the TFX mess. Given proper design, almost any competent airplane company can build a good airplane, although some will have more difficulties and charge more than others. The critical problem of the TFX was in the systems analysis-spawned concept of the airplane, not in the subsequent efforts of the engineers to build an airplane to a set of impossible specifications.
The original concept of the TFX was born during a visit by then President Kennedy to an aircraft carrier. The Navy, in a misguided attempt to impress the Commander In Chief, landed a variety of aircraft on the carrier, prompting Kennedy to ask Secretary of Defense McNamara why there were so many different kinds of military planes. McNamara did not know, and after a few moments of thought decided there was no reasonable cause, and that a great deal of money could be saved by building general purpose machines. Then, in a burst of insight, he promised the President that not only would there be a reduction in the number of kinds of aircraft, but that both the Navy and the Air Force could use it, thus reducing costs still further.
The interservice airplane was itself a questionable concept, inasmuch as the missions and roles of the two services differ greatly. However, it would be possible to create such an aircraft, provided that its purposes and intended missions were not impossibly contradictory. It would be highly difficult to do so, and an aircraft required to take off and land on carriers would almost inevitably have more performance restrictions than airplanes designed for use from Air Force land bases; but the savings in costs of construction, stores, inventories, etc., might be sufficient to justify degrading the performance of, say, an attack bomber or close-support airplane.
The really crucial decision came when Secretary McNamara decided that the TFX should be both an air superiority fighter and an attack bomber. Once these roles and missions were mixed, the airplane was doomed. Such a multi-mission aircraft looks extremely good to the budget-minded. By assigning proper numerical values to various levels of performance on different missions, adding them up, and calling that effectiveness, you get a figure which -- compared with the cost of producing several different types of airplanes each of which is optimum for a mission -- makes it the best airplane you could ever buy. The TFX will remain a wonderful general-purpose craft until it fights the airplane that takes first place in the air superiority mission. In war, there is no prize for second place.
In fact, the TFX was intended to perform not two but four incompatible missions, and to do so for both the Air Force and the Navy. In its original conception, the TFX was intended to be: (1) our general-purpose all-weather air-superiority (or dogfighting) fighter, with the possibility of being a continental defense interceptor as well, (2) a reconnaissance-strike attack bomber, (3) a long-range, deep-interdiction attack bomber for all-weather missions, and (4) a close-support, attack-weapons delivery platform for missions in combination with ground troops. We note here that the TFX is not a strategic bomber and was never intended to be one; attempts to call it that were for the political purpose of hiding the fact that our bomber force was approaching obsolescence in the 1970's.
TFX designers were therefore called upon to do the impossible. The requirements for missions 2 and 3 above are not completely incompatible, and cost considerations may well dictate a single airplane for these two purposes; the TFX could have been the best craft in the force for either of these missions. However, each of these two missions is incompatible with the air-superiority mission, so that after years of delay the Congress approved the design and construction of the F-14, F-15, and F-16. Because of the long lead times involved in airplane development, before the F-15 was operational the Soviets had at least one generation of fighters superior to anything we could put in the sky.
The TFX was not the best airplane we could have had for missions 2 and 3. It is too expensive, for a start. The compromises made in its design to make it useful as a fighter and a close-support weapons platform not only degrade its performance as an attack bomber but are extremely expensive. For a lot less money we could have an attack bomber as superior to the TFX as the TFX is to the older planes that are still the mainstay of the tactical air force.
Despite the incompatibility of missions, the proposed TFX designs were evaluated on the basis of a single number: the effectiveness of the proposed airplane for all four missions. This is similar to the point system for determining the winner of the Olympic Decathlon or the Modern Military Pentathlon, by which the winner of a single event may be ranked behind the man who has taken second or third place in all contests of the decathlon. A second criterion employed was the degree of commonality between the Air Force and Navy versions of the plane, that is, the percentage of parts the two versions had in common. This criterion compromised the aircraft design, and eventually was worse than useless because the completed airplane could not land on carrier decks. The Navy finally canceled its orders for TFX and began design of an aircraft suited to the Navy mission environment.
Thus, instead of bringing the heralded savings of billions of dollars, the completed aircraft cost more than would three separate airplanes optimized for individual missions; the Navy got no attack bomber at all; and the Air Force finds itself with an airplane useful only for the attack bomber mission, and not optimal for that. Finally, because of political interference in the selection of an airplane producer, the TFX was built by a company that had a reputation for delivering aircraft late and with high cost-overruns. At this time (1970), the airplane is grounded until studies can reveal the cause of the latest crashes. Instead of having a splendid general-purpose aircraft, the services are presently fighting a war with airplanes that were in the inventory when the TFX was designed.
Fortunately, the United States was never required to fight new Soviet MiG aircraft with the TFX.
The use of numbers to calculate effectiveness -- that is, taking a number of different missions or parameters and adding or otherwise combining them to get a single criterion measure -- was once known in engineering as the figure of merit fallacy. In the McNamara era and after, the civilian leaders in the Pentagon promoted the figure of merit fallacy to the major principle by which we chose new weapons. Most Congressional staffers continue to operate this way.
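The fallacy is easy to exhibit in a minimal sketch; the aircraft and the per-mission scores below are invented solely to make the point:

    # The figure-of-merit fallacy, shown with invented scores (0-10 scale).
    scores = {
        "specialist fighter": {"air superiority": 10, "interdiction": 2,  "close support": 2},
        "specialist bomber":  {"air superiority": 2,  "interdiction": 10, "close support": 3},
        "general-purpose":    {"air superiority": 7,  "interdiction": 7,  "close support": 7},
    }

    # A single figure of merit: sum the per-mission scores.
    for plane, s in sorted(scores.items(), key=lambda kv: -sum(kv[1].values())):
        print(plane, sum(s.values()))
    # The general-purpose plane wins the paper contest, 21 against 14 and 15;
    # in a real air-superiority engagement it meets the specialist fighter
    # at 7 against 10, and there is no prize for second place.

The single number rewards mediocrity across all missions; the combat that actually occurs is fought one mission at a time.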
If the weapons are to be chosen by scientists through scientific means, some such figures of merit will be necessary. Used properly, they are quite useful because they are not inherently misleading. What is misleading is the fallacy that military decisions can be made by scientific means.
The problem with scientific criteria and analyses is not that they are false or useless, but that they are incomplete. It is simply not enough to use cost-effectiveness or "most bang for the buck" as the means of choosing weapons. (One of the authors can remember when he was designing a small missile for use in defeating an enemy field army near friendly inhabitants. The nuclear physicist working with him was near to tears when he discovered that he had to design a very clean weapon with a rather low yield. "Why, for that much fissionable material and weight," he said, "I could give you a megaton." It took some patience to explain that a megaton delivered near the city would defeat the purpose of the weapon system.)
In other words, some systems that are militarily best are not necessarily the scientifically most elegant, as the Spitfire was hardly the "best" aircraft to the aerodynamicist, or the new MiG to the TFX systems analyst. It is the nature of the military decision that it has to take into account a large number of factors, most of them uncertain and in no way amenable to mathematical modeling.
In some cases, of course, scientific calculations are of immense value. If you are trying to discover how many missiles you must aim at the enemy to achieve a given probability of killing a target of (assumed) hardness, given an enemy attack of (assumed) effectiveness which will knock out a given portion of your force (surviving force to be calculated from assumptions), the systems analyst can be of great help in telling you how many more missiles you have to aim at the target because your own birds have a given reliability (calculated from insufficient testing data).
He can tell you what improvement you must make in this theoretical reliability to knock out the target system with the force you already have. He can even construct a little cost-effectiveness model in which he analyzes whether it is better to spend your money on improving the reliability of your present force or buying new missiles. His calculations will, of course, be based on assumptions about what the reliability improvement research will cost, and he will probably ignore Pournelle's Law of Costs and Schedules in the calculation, but he will come up with a recommendation which at least has the merit of letting you see where it came from and on what it is based.
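The arithmetic of such a model is easily sketched. The reliability, kill probability, and unit-cost figures below are invented for illustration only; the point is the shape of the calculation, not the numbers:

    import math

    def missiles_per_target(reliability, sspk, p_desired=0.9):
        # Missiles aimed at one target to reach the desired cumulative
        # kill probability; each missile kills with p = reliability * sspk.
        p = reliability * sspk
        return math.ceil(math.log(1 - p_desired) / math.log(1 - p))

    targets = 200
    need_now      = targets * missiles_per_target(0.70, 0.75)  # 4 per target
    need_upgraded = targets * missiles_per_target(0.90, 0.75)  # 3 per target

    # With invented $10M missiles, the upgrade pays if the reliability
    # program costs less than the 200 missiles it saves: $2,000M.
    print((need_now - need_upgraded) * 10e6)

As the text notes, everything here turns on the assumed reliabilities and costs; the model's one virtue is that those assumptions are visible.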
What systems analysis cannot do is tell the commander whether it might be better not to use this force against a particular target at all, but rather to attempt to achieve surprise or in some other way defeat the will of the opponent rather than his forces.
Strategy in the Technological War must be based on strategic analysis, not systems analysis. The decision process must employ an appreciation of the enemy, the operational environment now and in the future, and the principles of the art of war. It cannot simply be based on a highly artificial figure of merit.
Before we take up the nature of the technological decision process, it will be helpful to discuss some additional common fallacies. These are important not merely because they are common today but because they seem to be perennially attractive to technological planners. A list of common fallacies, not exhaustive but illustrative of the more attractive errors, is shown in Chart Four. Some of these have been touched upon above and require no detailed analysis.
CHART FOUR
[Chart Four, a list of common fallacies of technological planning, is not reproduced here.]
The first and last of the fallacies shown on the chart may seem to be self-canceling; that is, it may at first appear that no one could hold both simultaneously without being aware of the contradiction. This is in fact not true. It is possible to believe that technological progress can be halted through treaties and agreements, and yet also to imagine that advances are automatic; moreover, in our judgment, much of the technological policy of the past ten years has been based on these twin delusions.
The belief that technological progress can somehow be halted comes, we believe, from an imperfect understanding of the nature of technology, and in particular from failure to consider the interdependence of technological discoveries. There is no possibility of halting all scientific research or engineering development; yet you cannot predict in advance what the results of a particular discovery will be. For example, modern computer science, plus the development of complex mathematical models of the laws governing the combination of atoms to form molecules, have made it possible for the chemist to make "dry lab" experiments with new chemical processes, discover new compounds, and determine much about their nature, all without soiling a single test tube. The research is carried on entirely by computer simulation. This technique is adaptable to weapons technology for the discovery of propellants, war gases, nonlethal incapacitation agents, "psychological" gases, and dozens of other militarily useful agents. As nuclear forces are better understood, most weapons tests may be conducted in the same way. An agreement by all governments to halt research and development in military chemistry or nuclear physics simply cannot be enforced, even if the governments actively strive to do so.
Other examples of the interdependence of technology include the following: the utility of various fiberglassing techniques, developed for automobiles and boats, in rocketry and space warfare; the great increase in the accuracy of the ICBM from 1964 to 1968, not as a result of deliberate application of technology but merely through the reduction of International Geophysical Year data, which gave a better understanding of gravitational anomalies and thus reduced the largest single factor in the ICBM error budget; the military communications revolution brought on by the civilian invention of the transistor, which was also the prerequisite of the Minuteman. Unless you are determined to halt all technological progress -- which is inherently impossible -- you cannot stop the progress of military technology. No agreement can bind, because the stream of technology will flow on despite any effort to swim upstream.
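The error-budget point deserves a concrete illustration. Independent error sources combine roughly as the root sum of squares, so reducing the largest term buys far more accuracy than reducing a minor one; the component values below are invented:

    import math

    def total_error(components):
        # Independent errors combine approximately as root-sum-square.
        return math.sqrt(sum(c * c for c in components))

    gravity, guidance, fusing = 900.0, 400.0, 200.0   # meters, invented
    print(total_error([gravity, guidance, fusing]))   # ~1005
    print(total_error([450.0, guidance, fusing]))     # ~634: halve the big term
    print(total_error([gravity, guidance, 100.0]))    # ~990: halve a small one

Halving the dominant gravitational term cuts the total error by more than a third; halving a minor term is barely measurable. This is why better geophysical data alone could so improve ICBM accuracy.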
Information about technological progress in the United States and the Soviet Union is not symmetrical. Despite the expenditure of billions of dollars for intelligence, the United States has incomplete knowledge about the state of Soviet technology in many military fields. If we are determined not to exploit our technological advances, we cannot be sure the Soviets are not exploiting theirs; soon enough, we may find that they have been doing so, and that their exploitation has given them a decisive advantage.
We have mentioned above that centralized decision making is no substitute for a strategy. Indeed, in the absence of a strategy centralization of the decision process is the worst mistake possible because it suppresses innovation in discovery and application. The military services cannot themselves generate a technological strategy, cannot orchestrate our technological research, development, and procurement into a grand design; but they can pursue rational substrategies which may be the best we can obtain. Until we have a workable mechanism for making use of military inputs and conducting strategic analysis to generate workable policy guidelines for achieving a strategy of technology, decentralization is probably the best protection against paralysis at the top of the decision pyramid.
Even when strategic analysis is conducted regularly and a national strategy for the Technological War is generated, over-centralization of technological decision making is useless at best and can be disastrous. In World War II (The Great Patriotic War, according to the U.S.S.R.), the major weakness of the Soviet army was the tendency to make all decisions at the top, the generals going so far as to order the placement and deployment of individual companies. This is not strategy.
A strategy provides subordinate commanders with the information they need to make intelligent choices and trusts them to carry those choices out. The strategist may well be unable to determine the best approach to a particular technical problem, just as a brilliant staff officer may not be able to place a company of soldiers for maximum defensive effectiveness. Even though the strategist may know better how to command a rifle company than its present commander does, the strategist is not there. He cannot know the peculiar problems and strengths of this particular company; he cannot know that Privates Roe and Doe are individually worthless but nearly unbeatable in combination. The same is true of technological decisions. The human element in scientific management counts at least as much as the human element in military management. There is little to be said for the kind of centralization which concentrates all decisions at the top, saying in effect to those who must carry out the orders that they are untrustworthy; and there is much to be said against it, especially that over-centralization burdens the top.
The notion that small advantages cannot be decisive stems from an imperfect understanding of the military arts. There is no prize for second place in combat. A system that is second best in each of ten areas is excellent until the moment it must be used in combat; then it is nearly worthless. Many examples of small decisive advantages come to mind: for example, in an air battle conducted with air-to-air missiles at long ranges, a two-mile difference in radar ranges can result in one side being destroyed before it even detects the other. Small percentage improvements in missile accuracy can result in enormous increases in target kill probabilities. Moreover, if you have misgauged your position on the technological S-curve (see the section on the nature of the technological process), what is expected to be a marginal improvement may in reality be quite a large one. Refusal to make small improvements usually stems from lack of desire to improve the force at all; that is, from failure to conduct technological pursuit and exploit your advantages to leave the enemy well behind.
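The accuracy claim can be made concrete with the standard single-shot kill probability relation; the lethal radius and CEP values below are assumptions chosen for illustration:

    # SSPK = 1 - 0.5 ** (lethal_radius / CEP) ** 2, where CEP is the
    # radius within which half the warheads fall.
    def sspk(lethal_radius, cep):
        return 1 - 0.5 ** ((lethal_radius / cep) ** 2)

    LR = 300.0                         # lethal radius, meters (invented)
    for cep in (600.0, 480.0, 300.0):  # a 20%, then a 50%, accuracy gain
        print(cep, round(sspk(LR, cep), 2))   # 0.16, 0.24, 0.50

A 20 percent improvement in accuracy raises the kill probability by half; halving the CEP triples it. Against hardened targets, "marginal" accuracy improvements are anything but.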
Failure to exploit advantages, through technological pursuit or through a deliberate effort to achieve surprise, is often caused by the assumption of symmetry of motives and behavior. We are all too prone to believe the enemy will never do what we ourselves would not do, and if it is suggested that he would, we cannot understand why. This is the result of faulty intelligence and imperfect understanding of the enemy's objectives and philosophy. Similarly, we may be overconfident in our own analyses, believing that certain technological enterprises are worthless. We then refuse to believe the intelligence we do obtain when it shows the enemy is doing something we would not do. For years there was hard evidence of Soviet deployment of Anti-Ballistic Missile (ABM) systems; actual photographs showed installations employing radars not remotely useful for air defense and oriented such that they could only be part of an ABM system; yet the official word from the top was that they were air defenses or else mere sham. It had been proved that ABM was technologically impossible, thus the Soviet Union would not build them: ergo, they were something else.
The trouble with that kind of analysis is that the enemy may know something we don't. The Soviet operational tests of nuclear-tipped ABM systems in which they shot down several incoming RVs (reentry vehicles) and destroyed one of their Cosmos satellites with a nuclear interceptor may well have given them information which we could never gain, because shortly after their operational tests they induced us to sign the Treaty of Moscow (atmospheric test ban). If, for example, nuclear weapons in outer space have much greater kill effects than we think, and operate at longer ranges than we have postulated, Soviet deployment of ABM systems would be quite justified. Several physicists have attempted to prove to the authors (on purely theoretical grounds, since the United States never conducted any real tests designed to get empirical data on the effective range of nuclear weapons in space) that the ranges could not be greater than we have postulated. The scientists eventually conceded that something might be achieved in exotic ways, but then contended that the Soviets could not know about them and certainly could not have tested them. Yet the U.S.S.R. has continued to pour concrete and build an ABM system which we knew could not work. It would appear, from our present efforts, that the ABM effort is worthwhile after all; our blind refusal to believe the obvious cost us several years.
Incidentally, the Soviet ABM system, from its location and orientation, is obviously directed against the United States, not China; anyone familiar with the principles involved would know this. U.S. theorists simply cannot conceive that the U.S.S.R. might be willing to build a less-than-perfect defense system; therefore certain members of the technological community, finally convinced that the U.S.S.R. was in fact deploying ABM, decided to explain these efforts as China-oriented. Self-deception, once begun, can continue to absurd lengths.
The above was written in 1969. It has since become clear that there are numerous ways to intercept ballistic missiles. Regardless of what the Soviets knew then, they continued not only to search for, but to prepare for, technological breakthroughs. We did not begin serious research into new ABM technologies until 1983, and in 1988 we have yet to do serious preparation for implementing what the laboratories have discovered.
The "overkill" argument appears to us to be self-contradictory, especially when presented by an advocate of Mutual Assured Destruction. On the one hand, the greater the forces in the inventories on each side, the greater the destructiveness of war if it does occur -- something surely known to the leaders in both Washington and Moscow. Thus it is unlikely that anyone would deliberately engage in thermonuclear strikes against another's homeland. On the other hand, it is when one or both sides have more weapons than targets that wars can begin. Furthermore, the technological race inevitably makes previously invulnerable forces quite vulnerable as time goes on. Reliabilities of aged equipment are lower than those of new.
The best protection against losing one's second-strike force to an enemy first strike is constant updating of the force; but the second best protection is to keep in the inventory numbers that seem superfluously large, so that some marginal improvement in the enemy's counterforce will not result in a decisive advantage. The more weapons in inventory, the larger the surviving number of weapons, no matter what the respective percentages of kill may be; the larger the surviving force, the less likely the enemy is to strike in the first place.
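The arithmetic is elementary but worth displaying; the inventories and kill fractions below are invented:

    # Survivors of a counterforce strike = inventory * (1 - kill fraction).
    for inventory in (1000, 2000):
        for kill_fraction in (0.80, 0.90):
            print(inventory, kill_fraction, inventory * (1 - kill_fraction))

Even if the enemy improves his counterforce from 80 to 90 percent kill, a 2,000-weapon force still retains 200 weapons -- as many as a 1,000-weapon force would have kept against the weaker attack.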
Overkill is a good phrase, but, unless one assumes that military planners and political leaders are moral monsters and strategic idiots, it is unlikely that weapons of mass destruction will be accumulated simply for their own sake. To those who believe that motives of the services are in fact tinged with moral imbecility, no analytical work is likely to appeal.
A common argument against investment in technological weapons systems is the engineering maxim "If it works, it's obsolete." This is a hangover from the mobilization strategy of the thirties, and stems from misunderstanding the nature of the technological revolution in war. It is true that whatever system one deploys, it is likely that if one had waited a few years, a better one could have been constructed. If this were carried to its extreme, nothing would ever be built.
Technology is dynamic by nature. Whenever a new field of technology opens up, the people who will use it must learn how. They must be trained, and become operationally effective. In the case of aircraft there must be pilots. For space systems there must be satellite controllers.
Because of the long lag between generations of military bombers, the U.S. pilots of the B-1 and B-2 must be retrained. Because we have neglected manned space for years, military astronauts will have to be trained from scratch.
Had we waited until third-generation missiles were available before we constructed any (and had we also left the bomber force as it was), the world would not be as safe as it is today. A time comes when systems must be built, even though we know they will be obsolete in future years. Proper technological strategy will plan for such obsolescence, will seek systems of maximum salvage value, flexible enough for refit with the latest advances in technology. A proper strategy also forces the enemy to react to what you have done, so that he too must deploy hardware to avoid losing the Technological War.
The fallacy that prototypes and research are all that are needed should have been laid to rest by the experience of the French in 1939. The French army had -- and had possessed for quite a long time -- prototypes of aircraft, armor, and antitank weapons far better than those of the German army. The French did not have these weapons in their inventory because still better ones were coming. While they waited for the best weapons, they lost their country.
Military action must be routine; it cannot be extraordinary, planned months in advance like a space spectacular. Operational experience with a weapons system is required before operational employment doctrines can be perfected. Troops must be trained, logistics bases developed, maintenance routines learned, idiosyncrasies -- and modern technological gadgetry is full of them -- must be discovered. This cannot be done if the latest technology is confined to the drawing board or laboratory.
Clearly, all the above arguments doubly apply in the space era. Military space missions can only be routine when we have personnel experienced in performing them.
Finally, we come to the quaint notion that since the stream of technology moves on inexorably there is no point in wasting resources on developing military technology. It will come of itself, without effort. This is, of course, nonsense. It is true that technology has a momentum which cannot be halted; but the direction and timing can be changed drastically. The interdependence of technology will eventually produce improvements in weapons whether you want them or not; but it does not guarantee sufficient improvement when the enemy has been devoting considerable effort to his own improvements while you have been waiting for what will come inevitably. In keeping with our analogy of the stream, those who simply drift with it will be carried along with little effort, but those who swim with the current will be far ahead.
A force is at work that produces technological advances without regard to our intentions, but major specific advances in military capabilities result from deliberate human action. Technological discoveries may be self-generating in their own due time, but the timing can be speeded up. Advances not resulting from planned action cannot be fitted into an overall strategy, and often are not even recognized as militarily useful until long after they have been discovered. Although other advances are uncontrolled, their use is not.
Initial deployment of GPS NAVSTAR took place in the late 1970's, with partial operational capability to become available in the early 1990's and full capability later in the decade. Dr. Francis X. Kane, Col. USAF (ret.), was one of the original planners of GPS, and closely followed its career.
First, we must note that the GPS NAVSTAR satellite navigation system will revolutionize the way the world lives and operates. Although its applications are just beginning to be understood by the world at large, they have been known and forecast by strategic analysts for more than a quarter of a century. The reasons why it has taken so long to bring about the happy marriage of concept and technology provide a case history of how hard it is to introduce advanced technology into our military forces, and indeed into our society.
Any strategy of technology has to cope with the brakes on innovation applied through ignorance, bias, prejudice, lack of foresight, and short-term special interests. The planner's task is to overcome ignorance, bias, prejudice, and lack of foresight, and to fight special interests. Only by perseverance can he capitalize on the potential of new technology. History is replete with examples of this problem. GPS NAVSTAR is only one of them; but it is the one which may yet show that the problem can be solved.
In 1963-64, under General Bernard Schriever's leadership, USAF planners conducted a top-down analysis of the relationships between strategy, policy, military requirements, and advanced technology. The study was called Space Policy and Doctrine (SPAD). They studied the relationships between military functions -- offensive and defensive systems, communications, weather, reconnaissance, surveillance, and navigation -- on the one hand, and advanced space-based technologies and programs on the other. They soon found that our space-development program was not giving sufficient attention to space-based navigation, which, it appeared, could serve an almost infinite variety of military and civilian uses.
True, at that time the U.S. did have an operational satellite system: TRANSIT, which had been developed for the Navy by researchers at the Johns Hopkins Applied Physics Laboratory. The system did a good job of meeting the Navy's stated requirement: to determine the positions and locations of ships and submarines. Today, hundreds of thousands of agencies, units, and individuals, both civilian and military, use TRANSIT. The ships are from all countries, including the U.S.S.R. (to whom we provided a limited number of satellite receiver sets).
However, because of its design and performance, TRANSIT cannot be used by many others who need precise position-location information. TRANSIT is, to be sure, independent of the weather: navigators do not need clear skies in order to take sightings. But they do need several minutes to receive signals from the satellite and calculate position locations. TRANSIT does not provide instantaneous read-outs (needed, for example, by pilots of high-speed aircraft), and the calculations are too complex for a tank driver or a jeep driver to make, especially in rough terrain or under fire. These 'dynamic' users need a different type of system.
To overcome some of these problems, in the sixties the Navy developed a technology program called TIMATION. The objective was to develop and test orbiting clocks of unprecedented accuracy. That technology was supported by the Office of the Secretary of Defense.
At the same time, the USAF planners at Space Division together with Aerospace Corporation had developed and analyzed a new concept for a system called NAVSAT. This called for a constellation of four satellites at near geostationary orbit over the United States. One satellite was to be geostationary and the other three were to be in slightly inclined orbits, so that viewed from the ground, the three outer satellites appeared to rotate about the one at the center. These four satellites would be available at all times to navigators on the surface or in the air over the U.S.
The revolutionary aspect of NAVSAT was that the user, wherever he was, would be able to receive signals from the four satellites nearly simultaneously. He could then correlate the four signals and compute his position with unprecedented accuracy. Predictions at the time were for position-location accuracy on the order of 10 meters.
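The geometry behind the claim: four pseudoranges suffice to solve for three position coordinates plus the user's clock error. A minimal sketch of the computation, with invented satellite positions and idealized error-free signals (this is textbook trilateration, not the actual NAVSTAR algorithm):

    import numpy as np

    def fix_position(sats, pseudoranges, iters=10):
        # Solve for [x, y, z, clock bias] (meters) by Gauss-Newton
        # iteration on the linearized pseudorange equations.
        x = np.zeros(4)
        for _ in range(iters):
            rho = np.linalg.norm(sats - x[:3], axis=1)   # geometric ranges
            residual = pseudoranges - (rho + x[3])
            # Jacobian: unit vectors from satellites toward the receiver,
            # plus a column of ones for the clock-bias term.
            H = np.column_stack([(x[:3] - sats) / rho[:, None],
                                 np.ones(len(sats))])
            x += np.linalg.lstsq(H, residual, rcond=None)[0]
        return x[:3], x[3]

    # Invented geometry: four satellites high above a user near the origin.
    sats = np.array([[ 15e6,     0, 20e6],
                     [-15e6,     0, 20e6],
                     [    0,  15e6, 20e6],
                     [    0, -15e6, 22e6]], dtype=float)
    truth, clock_bias = np.array([1000.0, -2000.0, 500.0]), 300.0
    pseudoranges = np.linalg.norm(sats - truth, axis=1) + clock_bias
    position, bias = fix_position(sats, pseudoranges)
    print(position.round(1), round(bias, 1))  # recovers truth and the bias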
Extensive analysis was conducted to determine the number of users and examine the revolutionary effect of NAVSAT on military operations. The range of applications covered low-level bombing by fighter aircraft; high-level bombing; reconnaissance of targets with precise location known; strikes by aircraft or missiles; missile launches; anti-submarine warfare (ASW); surface-ship location; submarine navigation; amphibious landings, perhaps in remote regions; operation of aircraft from austere bases; en-route navigation by civil and military aircraft; helicopter operations; tank navigation; jeep and foot-soldier position-location; mapping; range operations; and even navigation by other satellites. There are thousands of potential users.
The NAVSTAR planners were certain that the world would unite to make a reality of that potential. Instead, the list of nay-sayers was as long as that of the users who stood to benefit from the system. The nay-sayers fell neatly into the four categories that are all too familiar to innovators: "Who needs it?"; "It won't work"; "It costs too much"; and "Even if it works, I don't want it."
In the Air Force's Research and Development Program, the budget for new concepts and new technologies was very small. However, a long internal struggle, characterized by numerous reviews and demands for more data, finally resulted in a funded program called simply "621B." This was a competition for concept formulation to cover military requirements, technical analyses, costs, program formulation, and organizational development -- all simultaneously.
The Air Force spent several million dollars on operational and systems analyses in order to determine the military requirements the system would have to meet. Almost every conceivable military operation was considered. Aircraft operations (weapons delivery and air defense) were high on the list. Among the ground targets were bridges, airfields, transportation, and hardened bunkers. The war in Vietnam provided data on types of targets and on force effectiveness in such operations. For example, the many nearly futile attacks on the Paul Doumer Bridge dramatically illustrated the effects of inaccurate weapons-delivery systems, in spite of the efforts of experienced and dedicated airmen. The accuracy that a global positioning satellite (GPS) system would have provided would have let the pilots "drop the spans" in only a few attacks with few weapons.
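The bridge arithmetic follows from the same kill-probability relation sketched earlier; the lethal radius and CEP figures here are invented, though the contrast is representative of unguided versus precision delivery:

    import math

    def weapons_needed(lethal_radius, cep, p_desired=0.9):
        # Weapons for a 90% cumulative chance of dropping a span.
        sspk = 1 - 0.5 ** ((lethal_radius / cep) ** 2)
        return math.ceil(math.log(1 - p_desired) / math.log(1 - sspk))

    print(weapons_needed(20, 150))  # unguided bombs, 150 m CEP: ~187 weapons
    print(weapons_needed(20, 15))   # GPS-level,       15 m CEP: 2 weapons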
Similarly, more accurate artillery fire, made possible through precise location of the Fire Control Center and individual pieces in the battery, could have produced dramatic improvements in "fire for effect."
Air refueling, rendezvous at sea, and concentration of ground forces and close air support demonstrated the utility of operating in a "common grid" and with very precise timing; reconnaissance and surveillance for tactical target location and eventual mapping with extreme accuracy would have provided that common grid.
Anti-submarine operations using a variety of sensors would have permitted accurate delivery of weapons by aircraft, surface ships, or submarines.
A virtually unique application was the potential use of the GPS system for air operations from austere bases, particularly bases in remote areas. If an airfield wasn't equipped with navigation and landing aids, GPS transponders located next to its runways would provide "differential navigation" with accuracy on the order of a few feet. This application would be equally useful for small civil airfields.
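The differential principle is simple: a receiver at a surveyed point sees the same signal errors a nearby user sees, and since its true position is known, the difference between measured and true range is a correction the user can subtract. A toy one-dimensional illustration with invented numbers:

    # Both receivers see the same shared bias (propagation, satellite
    # clock), so it cancels in the difference.
    true_range_ref = 20_000_000.0       # surveyed station to satellite, m
    shared_bias = 42.0                  # invented common error

    correction = true_range_ref - (true_range_ref + shared_bias)  # -42.0

    true_range_user = 20_000_350.0
    measured_user = true_range_user + shared_bias
    print(measured_user + correction)   # exactly 20_000_350.0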
Precision location of satellites on orbit and ballistic missiles on test ranges would also be possible. In brief, knowledge of precise location and time would permeate all aspects of military operations and have equally dramatic civil applications.
It seemed strange, then, that with so many potential beneficiaries, the answer to the question "Who needs it?" was "No one." No program, whether it was the F-15 or the F-16 or a satellite system, wanted to sponsor any project that would disrupt its own plans, increase its costs, and (worst of all) give anyone else a free ride. Like many public programs which in theory belong to everybody, in practice the NAVSAT program belonged to no one. In fact, 621B was a rival for funds and a potential threat to every existing R&D and operational navigation project.
In the end, after many different lengthy field tests of NAVSAT technology, the individual and combined opposition of the services was overruled by the Office of the Secretary of Defense.
In the technical analysis, the story was much the same. The challenges were numerous. "It won't work because of ionospheric effects." "It won't work because you're in the wrong frequency band." Various distinguished groups agonized over such issues. They usually reached the same conclusion: "The theoretical analysis appears sound but there are very few data to support it." At no time did anyone say "I know the answer to the ionospheric effects" or "You should be in 'L' band because..."
The only sensible answer to these objections was the one that prevailed: to conduct tests and to collect data on technical performance and military effectiveness. Even that process was a slow one that met with constant opposition. Finally, though, R&D satellites were approved and developed; twelve were launched into orbit. Prototype receivers were built for a limited set of users: a fighter-bomber, a helicopter, a ship, and an individual foot-soldier. Literally hundreds of tests were conducted, their time and duration being determined by the four satellites' presence in the proper locations.
Satellite positions were a problem because the test birds were in lower orbits rather than the near-geostationary orbits of the original NAVSAT design. If the original system design had been followed, four satellites would have been constantly "in view" over the U.S., and the tests could have been run whenever the user platforms were available. That option might have accelerated the program; nevertheless, a different constellation was used for a number of reasons, primarily survivability and the lower transmitter power required at lower altitudes.
The original NAVSAT study identified nearly 30,000 potential military users. The total number of military and civil users was and is in the hundreds of thousands. The users were classified according to their level of performance, and thus according to the kinds of electronics they needed. Obviously, high-speed aircraft, particularly fighters, had the most stringent requirements. It became clear early in the technical design phase that a combination of inertial navigation systems, GPS receivers, and computers would work in concert to meet pilots' needs. At the other end of the requirement scale were the surface users: trucks, tanks, and foot soldiers. Instead of the signals from at least four satellites, the low-speed users could do very well with the signals from three or even two of them, since a surface user who knows his altitude has fewer unknowns to solve for.
Because the system was to be a global one, the users and satellites had to be linked by a ground network that would control the satellites and keep them in position. A prototype was built at Vandenberg Air Force Base. The user tests were conducted principally at the Yuma Test Range, but ships in the San Diego area were also involved. The results proved conclusively that the technical nay-sayers were dead wrong.
Perhaps the biggest brake on the development and deployment of the GPS system was its overall cost. The cry "It costs too much!" went on for years. In fact, principally at the insistence of the Congress, a novel control mechanism was imposed on the program. The Congressional budget legislation stipulated that GPS had to show that it would cost no more than the money saved by phasing out other navigation aids (LORAN, OMEGA, and TRANSIT). Naturally, the sponsors of these programs had no intention of letting them be de-activated in order to pay for the GPS system. In the end, however, such a schedule was drawn up.
That Congressional constraint was followed by another, which proved almost fatal to the program: make the non-DoD users pay. A scheme was developed that involved designing an integrated circuit (microchip) that contained the essential codes. The chip would be changed periodically; in order to use the system, users would have to buy updated chips. The impracticality of this idea fortunately led to its demise.
Finally, there were the nay-sayers whose attitude was "Even if it works and I can afford it and it improves my operations, I still don't want it." Their argument was that satellites would always be vulnerable, and therefore the GPS system could operate only in peacetime. They had no intention of depending on it for military operations. To meet this objection, the constellation was changed so that the satellites were deployed in six planes, with three satellites per plane and three on-orbit spares. The satellites would be hardened to resist the effects of radiation. Last, three more satellites were to be procured to replace any birds that were lost for any reason, including hostilities or direct attacks.
Another set of objections came from the "guidance mafia": the people who make inertial guidance systems for ballistic missiles. Typical was the comment of an internationally renowned scientist who chaired an adversary group: "Don't develop the GPS system; spend the money on inertial guidance." That resistance still remains.
The most discouraging attitude, however, was that of some of the principal users. During the early phase of the program, NAVSTAR planners made an extensive analysis of air operations in Vietnam, comparing the actual performance of weapons-delivery systems in a large number of raids with the improvement in effects which would result from GPS-level accuracy. The analysis showed not only more target destruction, but also lower aircraft and crew losses, and an overall cost reduction. When the results were released, the reaction was "You don't understand the war. We're not destroying targets. We're flying sorties and dropping bombs."
Furthermore, the GPS system fell victim to the "18-month rule" of the Vietnam War, our counterpart of the British "ten-year rule" that had prevailed in the thirties: there will be no war for ten years; therefore if this program takes more than ten years to develop, we can well afford to wait. The same approach held in Vietnam: if it takes more than 18 months to field the system, we won't need it. Obviously GPS would have taken more than 18 months to implement; therefore...
The long struggle to deploy the NAVSTAR GPS system culminated in another bureaucratic innovation: multi-year procurement of the entire constellation of 24 satellites. Just as it seemed the positioning revolution would finally begin, the program met another setback. The satellites were scheduled to be launched into orbit by the Shuttle. The Challenger disaster and the resulting hiatus in launches have delayed those operations for at least two years. Nevertheless, the revolution will still begin in the 1990's when the full constellation has been placed in orbit and thousands of receivers will be in the hands of the operating military forces. Civilian applications such as surveying, oil exploration, and navigation will be commonplace. Before the end of the century NAVSTAR will have affected everyone's life, perhaps in ways we can still only guess at.
NAVSTAR illustrates both the positive benefits of strategic analysis -- the system was invented that way -- and the difficulties that bog down or halt the actual deployment of systems relevant to a strategy of technology.
Most of these difficulties stem from insufficient study of the technological process. We turn now to a description of the march of technology.