Cyber warfare — jamming on steroids

WORLD NAVAL DEVELOPMENTS — MAY 2014, by Norman Friedman
IN MAY, the US Government circulated a ‘wanted’ poster showing five members of a shadowy Chinese cyber-espionage unit. No one expects any of them to turn up in a US courtroom, but the object of the publicity was twofold.

First, it was intended to show the Chinese that the US Government takes their operations seriously, and that it can and will retaliate in some unspecified way. It is as pointless to ask the Chinese (and many others) to abandon cyber-espionage as it would be to seek an international treaty barring any other kind of spying – the spies would stay in business, but some naive governments would abandon counter-espionage and cease any spying of their own.

The second and probably much more important object was to make US companies more aware that their trade secrets are being stolen via cyber-espionage. Details of some techniques were released. For example, considerable publicity was given to the practice of ‘spear-phishing,’ in which the attacker sends what appears to be an internal company notice by e-mail – say, the agenda of an upcoming meeting. When the attachment is opened, the spear-phisher gains access to whatever the unsuspecting employee can reach via his company e-mail account. To avoid such attacks, companies are being encouraged to sever connections between their internal e-mail systems and the open Internet.

In the past, US policy has been to help companies quietly, on the theory that they will be more likely to admit that they have been penetrated if doing so does not cause them public trouble – say, a collapse of their stock prices. Thus word of the penetration of Nortel (formerly Northern Telecom), a major technology developer, emerged only as that company went bankrupt (for reasons apparently unrelated to cyber-attack). The break-in appears to have begun when several executives visited China, taking their laptops with them. The change in policy seems to have been driven by more widespread cyber-attacks, and possibly also by companies’ unwillingness to take security seriously enough.

Cyber-defense typically concentrates on schemes to block access to unauthorized people via encryption and firewalls. That might be described as an engineering solution. The question is how far such solutions can be overcome by human action. For example, the most extreme form of security is probably to restrict access to those whose fingerprints or other physical signatures (such as retina patterns) a system recognizes. Readers may remember a James Bond movie in which the criminal gained access to a nuclear storage facility by wearing contact lenses which mimicked the retina pattern of a legitimate user. As for the usual defense by password, notoriously, senior managers impatient with security fail to adopt difficult passwords, or do not change them often enough. Some have successfully demanded access to their systems via laptops or even smaller portable devices, which are either stolen or otherwise compromised. The worst published case of this type was the Indian strategic command and control system. Several laptops used by system developers were stolen, to be returned later minus their hard drives.

The Snowden case should remind us that there are always individuals who either turn bad or who can be bought. What is new is the sheer volume one individual can steal. Before Snowden, probably the worst US espionage case was that of Jonathan Pollard, who stole thousands of documents. That theft took time, and it was physically daunting; the sheer bulk involved made Pollard vulnerable to detection. A few thousand documents probably amount to well under a million pages, each of which might equate to a few thousand bytes, so Pollard’s entire haul was probably no more than a gigabyte. Moreover, to identify the documents he requested and then copied, he or an accomplice had to spend time perusing classified catalogs. The process of requesting documents one by one doubtless slowed Pollard’s progress.

Consider Snowden. He created web-crawling software robots which searched vast archives for anything fitting the specifications he laid down. These robots automatically collected material and downloaded it for him. He in turn loaded his product onto thumb drives which he carried out of his office in his pocket. Unless he had been physically searched each time he left the office, no one could have noticed what he was carrying. It was not like searching someone’s briefcase to see whether it contained a fat secret document. Nor did Snowden require a mass of cameras or photocopiers. He never had to return his documents to avoid being detected; he had perfect digital copies.

The current standard thumb drive, which you can buy in a neighborhood copy shop, has a capacity of 32 gigabytes – in the terms above, up to 32 million pages of double-spaced copy (in reality fewer pages, because memory is now so inexpensive that computer files are relatively inefficient). The difference between digital and physical files is enormous: it just does not take much effort to clean out an entire library. Moreover, the operation generally leaves no evidence.
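The back-of-envelope arithmetic above can be made explicit. The figures below (documents stolen, pages per document, bytes per page) are illustrative assumptions chosen to match the estimates in the text, not measured values:

```python
# Rough comparison of Pollard's paper haul with one 32 GB thumb drive.
# All inputs are illustrative assumptions, not measured values.

PAGES_PER_DOCUMENT = 100          # assumed average document length
BYTES_PER_PAGE = 2_000            # "a few thousand bytes" per page of text

pollard_documents = 5_000         # "thousands of documents"
pollard_pages = pollard_documents * PAGES_PER_DOCUMENT
pollard_bytes = pollard_pages * BYTES_PER_PAGE

drive_bytes = 32 * 10**9          # one off-the-shelf 32-gigabyte drive
drive_pages = drive_bytes // BYTES_PER_PAGE

print(f"Pollard: ~{pollard_pages:,} pages, ~{pollard_bytes / 10**9:.1f} GB")
print(f"Drive:   ~{drive_pages:,} pages of plain text")
print(f"One drive holds roughly {drive_bytes // pollard_bytes}x Pollard's haul")
```

Under these assumptions, a single pocket-sized drive carries roughly thirty times Pollard’s entire haul – the scale shift the paragraph describes.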

The only saving grace, if it is one, is that by using computers we now generate (and duplicate) far more files than ever before. Snowden’s cyber-crawler would collect everything fitting a specification. Some human or humans had to sift that mass of material to find what was useful. Many years ago someone jokingly commented that if we simply declassified everything, no spy would have the time to find what he really needed. Computers do simplify searching, but anyone who has used a search engine knows that before very long the engine produces far more junk than real material. Certainly classification allows a searcher to winnow what is available. For example, historical researchers know that files which were never classified are massive and nearly useless, whereas the formerly classified ones generally include what they want.

The other remarkable side of cyber-espionage is that the spy need never get physically near his target. The five Chinese on the FBI poster never had to leave China in order to penetrate US companies. On the Internet, there is no apparent distance between users. Someone in Shanghai might penetrate a US company in San Diego as easily as a hacker a block from its gate, and the target computer probably could not tell the difference. The FBI announcement suggests that in reality location makes just enough of a difference so that the attacker’s computer, or at least its location, can often be identified.

What should we be doing about all this? Obviously we should be conducting our own cyber-espionage campaigns, but right now foreign companies generally are not creating technology we badly want: the United States leads the world in industrial research and development. We probably should be much more aware of the consequences of successful large-scale cyber-espionage. Our new weapons and other military technology are likely to get into hostile hands much more quickly than in the past. That is inescapable.

It seems, then, that classification of information has a limited lifetime. We have to shorten development and production cycles. That is very difficult for the physical parts of our systems, but software can be modified quickly. The US Navy took an important step in that direction about fifteen years ago with its ARCI (Acoustic Rapid COTS Insertion) program. ARCI began as a way of providing submarines with better computer processors in step with the fast development cycle offered by industry (typically 18 months, sometimes faster). Probably more important, the new hardware made it possible to insert new software offering new capabilities (the software cycle adopted by ARCI was about twice as fast as the hardware cycle). The ARCI idea has since spread to the surface fleet and to aircraft. ARCI was conceived as a way of improving submarine capabilities without costly changes to massive sonar arrays (better signal processing was a much more potent form of upgrade), but it has gone much further.

If you think of cyber-espionage as signals intelligence on steroids, active cyber-warfare is jamming on steroids. The most prominent example was the Stuxnet virus, inserted into Iranian industrial computers to destroy the centrifuges the Iranians were using to enrich uranium for their bomb program. The Russians used a simple form of cyber-attack (denial of service, flooding a system with requests) during their war in Georgia. Some cyber-criminals use ransomware, software which freezes a target computer unless the victim pays blackmail to receive a decryption key. Presumably the same software could attack key military computer systems. Note that cyber-attack often leaves the cyber-weapon in the victim’s hands, where it can be adapted to some new target. The Stuxnet virus was carefully tailored to its particular target, but presumably its design principles can be deduced and applied to some other target.

Cyber-attack (non-kinetic warfare) may seem to be an entirely new capability, but it is not too different from the potential offered in the past by successful code-breaking. Once you can read someone’s mail, there must be an enormous temptation to send misleading messages. In the past, the counter-argument, at least in the United States and the United Kingdom, was that the ability to read the enemy’s mail was so important that nothing should be done to tip him off. It is not clear that the Soviets harbored similar fears: they seem to have been far more willing to exploit the fruits of their own code-breaking efforts. It may be sobering to reflect that the Chinese learned much of their military practice from the Soviets, before the Sino-Soviet split of the early 1960s. Would that make them more willing to chance cyber-attacks against crucial US military and civilian targets?

A look back at code-breaking history is also sobering. No one whose code has been broken has, it seems, reacted to evidence that the information secured in that way is being used operationally. That goes most famously for the Germans and Enigma in the Battle of the Atlantic, but it also went for British convoy codes in that battle (the US Navy was responsible for the British decision to change codes) and for US naval messages in the era of John Walker, when the Soviets were reading our mail. The victim always seems to find reasons why tactical failure had some other cause; the code-makers always think their product is good enough. They are always painfully aware of the cost of changing codes or coding systems. How relevant is this history to the future of cyber-attack?

Jamming also seems analogous to cyber-attack. In the past it has often seemed a very attractive alternative to the physical destruction of, say, an incoming missile. In theory a single jammer can defeat numerous missiles, whereas a single defensive missile can be used only once. The rub is that jamming usually requires detailed knowledge of the target; it would not do, for example, to attract an enemy missile instead of repelling it. In 1968 the US Navy was investing heavily in jammers as an alternative to new defensive missiles, when someone pointed out that it knew virtually nothing about the bulk of the threat missiles it planned to neutralize. That is why the SLQ-32 countermeasures set was conceived as a minimum-cost device; it was accepted that it could not defeat all comers.

Like the Chinese, we almost certainly conduct cyber-espionage. At the least, our own efforts can help us thwart theirs. The real question is whether we should go further into the world of non-kinetic attack. As in jamming, if we are completely familiar with the enemy’s command and control system, we can predict the effect of whatever we are doing. That seems to have been the case with Stuxnet, which was designed to deal with a particular industrial control system. There is probably no simple Chinese equivalent.

What we really want is to be able to, say, turn off an enemy’s air defense system. There have been claims that such an attack preceded the war against Iraq. In the Iraqi case, it may have been relatively easy to discover the details of the system, because it was designed by foreigners whom we may later have subverted. A truly indigenous system would be a very different proposition – unless it turns out that cyber-espionage makes it possible to probe and thus to model it. How could we be sure that we truly understood a complex foreign command and control system well enough to be sure of the effect of our own attack? Moreover, how could those designing such an attack verify that they understood? It would be a bad joke if an attack intended to turn off someone’s air defenses launched his strategic missiles instead. It may be worth our while to press this point in public, lest our enemies fail to think it through.

*Norman Friedman is author of The Naval Institute Guide to World Naval Weapon Systems
*Norman Friedman’s columns are reproduced by kind permission of the Editor of Proceedings, the journal of the United States Naval Institute.
