Cyberwarfare
Of the seven forms of information warfare, cyberwarfare -- a
broad category that includes information terrorism, semantic
attacks, simula-warfare, and Gibson-warfare -- is clearly the least
tractable because it is by far the most fictitious, differing only
in degree from information warfare as a whole. The global information
infrastructure has yet to evolve to the point where any of these
forms of combat is possible; such considerations are akin to
Victorian-era discussions of what air-to-air combat would be like.
And the infrastructure may never evolve to enable such attacks.
The dangers or, better, the pointlessness of building the
infrastructure described below may become visible well before the
opportunity to build it presents itself.
Information Terrorism
Although terrorism is often understood as the application of random
violence against apparently arbitrary targets, when terrorism works
it does so because it is directed against very specific targets,
often by name. In the early days of the Vietnam War, the Viet Cong
terrorized specific village leaders to coerce their acquiescence.
Done well, threats can be effective, even if carried out
infrequently; targeted officials can be forced to accede to
terrorists and permit their reach to spread. As the term is used
here, information terrorism is a subset of computer hacking, aimed
not at disrupting systems but at exploiting them to attack
individuals.
What would be the information-war analogy to that kind of
terrorism? (Note 59) Targeting
individuals by attacking their data files requires certain
presuppositions about the environment in which those individuals
exist. Targeted victims must have potentially revealing files on
themselves stored in public or quasi-public hands (e.g., TRW's
credit files) in a society where the normal use of these files is
either legal or benign (otherwise, sensitive individuals would take
pains to leave few data tracks). Today, files cover health,
education, purchases, governmental interactions (e.g., court
appearances), and other data. Some are kept manually or are
computerized but inaccessible to the outside, yet in time most will
reside on networks. Tomorrow, files could include user-built agents
capable of interacting with net-defined services and therefore
containing a reliable summary of the user's likes, dislikes, and
predilections. (Note 60)
The problem in conducting information terrorism is knowing
what to do with the information collected. Many people, for
instance, might be embarrassed if the information in their
collected datasphere were opened to public view; but that does not
necessarily make them good objects for blackmail. Similarly, the
hassle created by erroneous entries in a person's files might be
significant, but threatening to put them there has only limited
coercive appeal: a person so threatened could limit the damage by
requesting repeated backups of existing data to archival media and
demanding that all incoming data be authenticated.
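A minimal sketch of that defense, assuming a record-keeping
service that fingerprints existing files for tamper-evident
archiving and rejects unauthenticated updates; the key, record
format, and function names here are all hypothetical:

    import hashlib
    import hmac
    import json

    SHARED_KEY = b"key-known-only-to-authorized-sources"  # hypothetical

    def snapshot(records):
        """Fingerprint the current file so later tampering is detectable."""
        canonical = json.dumps(records, sort_keys=True).encode()
        # In practice, records and digest would go to write-once archival media.
        return hashlib.sha256(canonical).hexdigest()

    def append_record(records, entry, tag):
        """Accept an incoming entry only if its authentication tag verifies."""
        canonical = json.dumps(entry, sort_keys=True).encode()
        expected = hmac.new(SHARED_KEY, canonical, hashlib.sha256).hexdigest()
        if not hmac.compare_digest(expected, tag):
            return False  # forged or altered entry: reject it
        records.append(entry)
        return True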
If information terrorism is to succeed, a more plausible
response than fear of compromise might be anger at the institutions
that permitted files to be mishandled. Before a systematic reign of
computer terror could bring about widespread compromise of enough
powerful individuals, it would probably lead to restrictive (perhaps
welcome) rules on the way personal files are handled.
Semantic Attack
The difference between a semantic attack and hacker warfare is that
the latter produces random, or even systematic, failures in
systems, which then cease to operate. A system under semantic attack
operates, and will be perceived as operating, correctly (otherwise
the semantic attack has failed), but it generates answers at
variance with reality.
The possibility of a semantic attack presumes certain
characteristics of the targeted information systems. Systems, for
instance, may rely on sensor input to make decisions about the real
world (e.g., a nuclear power plant that monitors seismic activity).
If the sensors can be fooled, the systems can be tricked (e.g., into
shutting down in the face of a nonexistent earthquake). Safeguards
against such failure might lie in, say, sensors made redundant by
type and distribution, aided by a wise division of decisionmaking
power between humans and machines.
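A toy illustration of that redundancy safeguard (a sketch only;
the threshold, sensor count, and function names are assumed for
the example): shutdown is triggered only when a majority of
independent sensors agree.

    SHUTDOWN_THRESHOLD = 5.0  # hypothetical ground-motion limit

    def should_shut_down(readings):
        """Shut down only if a majority of independent sensors exceed the limit."""
        alarms = sum(1 for r in readings if r > SHUTDOWN_THRESHOLD)
        return alarms > len(readings) // 2

    # A single spoofed sensor cannot force a shutdown by itself:
    print(should_shut_down([0.2, 0.3, 9.9]))   # False: lone outlier is ignored
    print(should_shut_down([6.1, 5.8, 9.9]))   # True: genuine consensus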
Future systems may try to learn from their info-sphere. A
health server might poll participating physicians to collect
histories, on the basis of which the server would constantly
compute and recompute the efficacy of drugs and protocols. A
semantic attack on this system would feed the server bad data,
perhaps discounting the efficacy of one nostrum or creating false
claims for another. Similarly, a loan server could monitor the
world's financial transactions for continuing guidelines about
which financial instruments merit trust. If banking server systems
work the way bankers do, a rush of business to a particular
institution could confer legitimacy upon it; and if that rush of
business were phony and the institution a Potemkin savings and
loan, the ensuing rush of legitimate business, by bytes and wire,
could rapidly drain the assets of supporting banks. This scenario
is similar to what allowed Penn Square Bank in Oklahoma to buffalo
many other banks that should have known better. In cyberspace,
fraud can occur more quickly than human oversight can detect it.
Is a semantic attack a worrying prospect? Few servers like
those just described exist. By the time they do, enough thinking
should have gone into developing appropriate safeguards, such as
digital signatures to repel spoofing and enough built-in human
oversight to weed out data that computers accept as real but a
human eye would reject as phony.
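A minimal sketch of that signature safeguard for the health
server described above, assuming each participating physician
holds a signing key and the server keeps the matching public keys
(this uses the third-party Python cryptography package; the
identifiers and report format are invented for illustration):

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric import ed25519

    # Registration: the server stores each physician's public key.
    physician_key = ed25519.Ed25519PrivateKey.generate()
    registry = {"dr_jones": physician_key.public_key()}  # hypothetical ID

    # A physician signs an efficacy report before submitting it.
    report = b"drug=nostrum-A outcome=improved n=42"
    signature = physician_key.sign(report)

    def accept_report(sender, data, sig):
        """Admit a report into the efficacy statistics only if it verifies."""
        try:
            registry[sender].verify(sig, data)
            return True
        except (KeyError, InvalidSignature):
            return False  # unknown sender or spoofed data stays out

    print(accept_report("dr_jones", report, signature))         # True
    print(accept_report("dr_jones", b"forged data", signature)) # False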
Simula-warfare
Real combat is dirty, dull, and, yes, dangerous. Simulated conflict
is none of those. If the fidelity of the simulation is good enough
-- and it is improving every year -- the results will be a
reasonable approximation of conflict. Why not dispense with the
real thing and stick to simulated conflict? Put less
idealistically, could fighting a simulated war prove to the enemy
that it will lose?
The dissuasive aspect of simulation warfare is an extension,
in a sense, of the tendency to acquire weapons more for
demonstration than for use, the battleship being perhaps a prime
example. Had the United States possessed more atomic weapons during
World War II, it might have chosen to set the first one off over
Tokyo harbor for effect rather than over Hiroshima for results. The
use of
single champions rather than full armies to conduct conflict has
both Biblical and Classical antecedents, even if the practice has
now fallen into disuse. The gap between these practices and
simulated conflict, with both sides agreeing to accept the result,
would be a chasm.
Unfortunately, the realities of war and the fantasies of
simulation make poor bedfellows. Environments tailor-made for
simulation are composed of individual elements, each of which can
be characterized by behavior but whose interaction is complex; for
this reason, wind tunnels simulate well. In tomorrow's hide-and-seek
conflict, it will be almost impossible to characterize the
attributes of combat. Much of warfare will depend on each side's
ability to fool the other, to learn systematically from what works
well and what poorly, to disseminate the results into doctrine,
and, by so doing, to move up the sophistication of the game notch
after notch. These operations are precisely the ones least
amenable to simulation.
Needless to add, even in the unlikely event that both sides
owned up to the capabilities and numbers of their systems and the
strategies by which these are deployed, would the hiding or finding
qualities of these systems be honestly portrayed? Mutual simulation requires
adversaries to agree on what each side's systems can do. The reader
may be forgiven for wondering whether two sides capable of this
order of trust could be even more capable of resolving disputes
short of war.
The attractiveness of today's simulation technology is its
ability to model the battlefield from the viewpoint of every
operator. Marrying operators and complex platforms in simulation is
being promoted just when operators and their complex platforms are
shuffling off the combat stage. Information systems and over-the-
horizon weaponry are more and more what war is about, and they are
largely self-simulating systems.
A less ridiculous version of the game -- and one that forgoes
computer simulation -- tests the hiding and finding systems in the
real world but replaces real munitions with virtual ones -- e.g.,
laser-tag equivalents. Private war games and the National Training
Center do this. That no war in memory has ever been replaced by a
war game casts doubt on whether, despite great advances in
simulation, any future war will be either.
Gibson-warfare
The author confesses to having read William Gibson's
Neuromancer (Note 61) and,
worse, to having seen the Disney movie "TRON." In both, heroes and
villains are transformed into virtual characters who inhabit the
innards of enormous systems and there duel with others equally
virtual, if less virtuous. What these heroes and villains are doing
inside those systems or, more to the point, why anyone would wish
to construct a network that would permit them to wage combat there
in the first place is never really clear.
Why bring up Gibson's novel and the Disney movie? Because, to
judge by what otherwise sober analysts choose to include as
information warfare -- such as hacker warfare or esoteric versions
of psychological warfare -- the range of what can be included in
its definition is hardly limited by reality.
The Internet and its imitators have produced virtual
equivalents of the real world's sticks and stones. Women have
complained of virtual stalkers and sexual harassers; flame wars in
the global village are as intense and maybe as violent as the
village gossip they have supplanted; agent technology, coming soon,
permits a user to launch a simulacrum into the net, armed with its
master's wants and needs, to make reservations, acquire goods, hand
over assets, and, with work, to negotiate terms for enforceable
contracts. What conceptual distance separates an agent capable of
negotiating terms from one capable of negotiating concepts and,
hence, of conducting a discussion? What will prevent an agent from
conducting an argument? Arguments may require the support of
allies, perhaps other agents wandering the net, who may be
otherwise engaged in booking the best Caribbean vacation but who
have spare bandwidth available for engaging in sophomoric colloquy.
Allies might then form on the other side. The face-off of allies
and adversaries, of course, equals conflict, and perhaps even a
disposition of goods and services that depends on the outcome.
Thus, war, in the guise of information war, even while the
originators of the argument are fast asleep.
Possible? Actually, yes. Relevant to national security? Not
soon.