U.S. Outgunned in Hacker War


WASHINGTON—The Federal Bureau of Investigation’s top cyber cop offered a grim appraisal of the nation’s efforts to keep computer hackers from plundering corporate data networks: “We’re not winning,” he said.


Shawn Henry, who is preparing to leave the FBI after more than two decades with the bureau, said in an interview that the current public and private approach to fending off hackers is “unsustainable.” Computer criminals are simply too talented and defensive measures too weak to stop them, he said.

His comments weren’t directed at specific legislation but came as Congress considers two competing measures designed to buttress the networks for critical U.S. infrastructure, such as electrical-power plants and nuclear reactors. Though few cybersecurity experts disagree on the need for security improvements, business advocates have argued that the new regulations called for in one of the bills aren’t likely to better protect computer networks.

Mr. Henry, who is leaving government to take a cybersecurity job with an undisclosed firm in Washington, said companies need to make major changes in the way they use computer networks to avoid further damage to national security and the economy. Too many companies, from major multinationals to small start-ups, fail to recognize the financial and legal risks they are taking—or the costs they may have already suffered unknowingly—by operating vulnerable networks, he said.


“I don’t see how we ever come out of this without changes in technology or changes in behavior, because with the status quo, it’s an unsustainable model. Unsustainable in that you never get ahead, never become secure, never have a reasonable expectation of privacy or security,” Mr. Henry said.

James A. Lewis, a senior fellow on cybersecurity at the Center for Strategic and International Studies, said that as gloomy as Mr. Henry’s assessment may sound, “I am actually a little bit gloomier. I think we’ve lost the opening battle [with hackers].” Mr. Lewis said he didn’t believe there was a single secure, unclassified computer network in the U.S.

“There’s a kind of willful desire not to admit how bad things are, both in government and certainly in the private sector, so I could see how [Mr. Henry] would be frustrated,” he added.

High-profile hacking victims have included Sony Corp., which said last year that hackers had accessed personal information on 24.6 million customers on one of its online game services as part of a broader attack on the company that compromised data on more than 100 million accounts. Nasdaq OMX Group Inc., which operates the Nasdaq Stock Market, also acknowledged last year that hackers had breached a part of its network called Directors Desk, a service for company boards to communicate and share documents. HBGary Federal, a cybersecurity firm, was infiltrated by the hacking collective called Anonymous, which stole tens of thousands of internal emails from the company.

Mr. Henry has played a key role in expanding the FBI’s cybersecurity capabilities. In 2002, when the FBI reorganized to put more of its resources toward protecting computer networks, it handled nearly 1,500 hacking cases. Eight years later, that caseload had grown to more than 2,500.

Mr. Henry said FBI agents are increasingly coming across data stolen from companies whose executives had no idea their systems had been accessed.

“We have found their data in the middle of other investigations,” he said. “They are shocked and, in many cases, they’ve been breached for many months, in some cases years, which means that an adversary had full visibility into everything occurring on that network, potentially.”

Mr. Henry said that while many company executives recognize the severity of the problem, many others do not, and that has frustrated him. But even when companies build up their defenses, their systems are still penetrated, he said. “We’ve been playing defense for a long time. …You can only build a fence so high, and what we’ve found is that the offense outpaces the defense, and the offense is better than the defense,” he said.

Testimony Monday before a government commission assessing Chinese computer capabilities underscored the dangers. Richard Bejtlich, chief security officer at Mandiant, a computer-security company, said that in cases handled by his firm where intrusions were traced back to Chinese hackers, 94% of the targeted companies didn’t realize they had been breached until someone else told them. The median number of days between the start of an intrusion and its detection was 416, or more than a year, he added.

In one such incident in 2010, a group of Chinese hackers breached the computer defenses of the U.S. Chamber of Commerce, a major business lobbying group, and gained access to everything stored on its systems, including information about its three million members, according to several people familiar with the matter.

In the congressional debate over cybersecurity legislation, the Chamber of Commerce has argued for a voluntary, non-regulatory approach to cybersecurity that would encourage more cooperation and information-sharing between government and business.

Matthew Eggers, a senior director at the Chamber, said the group “is urging policy makers to change the ‘status quo’ by rallying our efforts around a targeted and effective information-sharing bill that would get the support of multiple stakeholders and come equipped with ample protections for the business community.”

The FBI’s Mr. Henry said there are some things companies need to change to create more secure computer networks. He said their most valuable data should be kept off the network altogether. He cited the recent case of a hack on an unidentified company in which he said 10 years’ worth of research and development, valued at more than $1 billion, was stolen by hackers.

He added that companies need to do more than just react to intrusions. “In many cases, the skills of the adversaries are so substantial that they just leap right over the fence, and you don’t ever hear an alarm go off,” he said. Companies “need to be hunting inside the perimeter of their network,” he added.

Companies also need to get their entire leadership, from the chief executive to the general counsel to the chief financial officer, involved in developing a cybersecurity strategy, Mr. Henry said. “If leadership doesn’t say, ‘This is important, let’s sit down and come up with a plan right now in our organization; let’s have a strategy,’ then it’s never going to happen, and that is a frustrating thing for me,” he said.

Write to Devlin Barrett at devlin.barrett@wsj.com

A version of this article appeared Mar. 28, 2012, on page B1 in some U.S. editions of The Wall Street Journal, with the headline: U.S. Outgunned in Hacker War.


New Layer of Genetic Information Discovered | www.ucsf.edu

By Jason Bardi on March 28, 2012

A "Found" Ribosome

Represented here by a tomato and a rope, ribosomes are central to all life on Earth because they help translate genetic information into proteins. Image by Gene-Wei Li/UCSF

A hidden, never-before-recognized layer of information in the genetic code has been uncovered by a team of scientists at the University of California, San Francisco (UCSF). The discovery was made possible by a technique developed at UCSF called ribosome profiling, which enables the measurement of gene activity inside living cells — including the speed with which proteins are made.

By measuring the rate of protein production in bacteria, the team discovered that slight genetic alterations could have a dramatic effect. This was true even for seemingly insignificant genetic changes known as “silent mutations,” which swap out a single DNA letter without changing the ultimate gene product. To their surprise, the scientists found these changes can slow the protein production process to one-tenth of its normal speed or less.

As described today in the journal Nature, the speed change is caused by information contained in what are known as redundant codons — small pieces of DNA that form part of the genetic code. They were called “redundant” because they were previously thought to contain duplicative rather than unique instructions.

This new discovery challenges half a century of fundamental assumptions in biology. It may also help speed up the industrial production of proteins, which is crucial for making biofuels and biological drugs used to treat many common diseases, ranging from diabetes to cancer.

“The genetic code has been thought to be redundant, but redundant codons are clearly not identical,” said Jonathan Weissman, PhD, a Howard Hughes Medical Institute Investigator in the UCSF School of Medicine Department of Cellular and Molecular Pharmacology.

“We didn’t understand much about the rules,” he added, but the new work suggests nature selects among redundant codons based on genetic speed as well as genetic meaning.

Similarly, a person texting a message to a friend might opt to type, “NP” instead of “No problem.” They both mean the same thing, but one is faster to thumb than the other.

How Ribosome Profiling Works

The work addresses an observation scientists have long made: that the process of protein synthesis, so essential to all living organisms on Earth, is not smooth and uniform, but rather proceeds in fits and starts. Some unknown mechanism seemed to control the speed with which proteins are made, but nobody knew what it was.

Ribosome structure

The structure of a ribosome. Image by Dale Muzzey/UCSF

To find out, Weissman and UCSF postdoctoral researcher Gene-Wei Li, PhD, drew upon a broader past effort by Weissman and his colleagues to develop a novel laboratory technique called “ribosome profiling,” which allows scientists to examine, across the whole genome, which genes are active in a cell and how fast they are being translated into proteins.

Ribosome profiling takes account of gene activity by pilfering from a cell all the molecular machines known as ribosomes. Typical bacterial cells are filled with hundreds of thousands of these ribosomes, and human cells have even more. They play a key role in life by translating genetic messages into proteins. Isolating them and pulling out all their genetic material allows scientists to see what proteins a cell is making and where they are in the process.

Weissman and Li were able to use this technique to measure the rate of protein synthesis by looking statistically at all the genes being expressed in a bacterial cell.

They found that proteins made from genes containing particular sequences (referred to technically as Shine-Dalgarno sequences) were produced more slowly than identical proteins made from genes with different but redundant codons. They showed that they could introduce pauses into protein production by introducing such sequences into genes.

What the scientists hypothesize is that the pausing exists as part of a regulatory mechanism that ensures proper checks — so that cells don’t produce proteins at the wrong time or in the wrong abundance.

A Primer on DNA Codons

All life on earth relies on the storage of genetic information in DNA (or in the case of some viruses, RNA) and the expression of that DNA into proteins to build the components of cells and carry out all life’s genetic instructions.

Every living cell in every tissue inside every organism on Earth is constantly expressing genes and translating them into proteins—from our earliest to our dying days. A significant amount of the energy we burn fuels nothing more than this fundamental process.

The genetic code is basically a universal set of instructions for translating DNA into proteins. DNA genes are composed of four types of molecules, known as bases or nucleotides (often represented by the four letters A, G, T and C). But proteins are strings of 20 different types of amino acids.

To code for all 20 amino acids, the genetic code calls for genes to be expressed by reading groups of three letters of DNA at a time for every one amino acid in a protein. These triplets of DNA letters are called codons. But because there are 64 possible ways to arrange three bases of DNA together — and only 20 amino acids used by life — the number of codons exceeds the demand. So several of these 64 codons code for the same amino acid.
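The arithmetic behind this redundancy is easy to check. The short sketch below is our own illustration, not part of the study; it enumerates the 64 possible codons and uses a handful of entries from the standard genetic code to show several codons mapping to one amino acid:

```python
# Illustrative sketch of codon redundancy (our own example, not the paper's).
from itertools import product

BASES = "ACGT"
codons = ["".join(c) for c in product(BASES, repeat=3)]
print(len(codons))  # 64 possible triplets, but only 20 amino acids

# A few entries from the standard genetic code, enough to show that
# several "redundant" codons specify the same amino acid.
CODON_TABLE = {
    "CTT": "Leu", "CTC": "Leu", "CTA": "Leu", "CTG": "Leu",
    "TTA": "Leu", "TTG": "Leu",   # leucine: six redundant codons
    "ATG": "Met",                 # methionine: a single codon
}
leu_codons = [c for c, aa in CODON_TABLE.items() if aa == "Leu"]
print(len(leu_codons))  # 6
```

Leucine alone has six synonymous codons in the standard code — exactly the kind of seemingly interchangeable choice the UCSF team found is not neutral after all.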

Scientists have known about this redundancy for 50 years, but in recent years, as more and more genomes from creatures as diverse as domestic dogs and wild rice have been decoded, scientists have come to appreciate that not all redundant codons are equal.

Many organisms have a clear preference for one type of codon over another, even though the end result is the same. This raised the question the new research answers: if redundant codons do the same thing, why would nature prefer one to the other?

The article, “The anti-Shine-Dalgarno sequence drives translational pausing and codon choice in bacteria,” by Gene-Wei Li, Eugene Oh, and Jonathan S. Weissman, was published by the journal Nature on March 28. 

This work was supported by the Helen Hay Whitney Foundation and by the Howard Hughes Medical Institute.

UCSF is a leading university dedicated to promoting health worldwide through advanced biomedical research, graduate-level education in the life sciences and health professions, and excellence in patient care.

The ‘living’ micro-robot that could detect diseases in humans | KurzweilAI


Cyberplasm Vehicle

A tiny prototype robot that functions like a living creature is being developed that one day could be safely used to pinpoint diseases within the human body.

Called “Cyberplasm,” it will combine advanced microelectronics with the latest research in biomimicry (technology inspired by nature). The aim is for Cyberplasm to have an electronic nervous system and “eye” and “nose” sensors derived from mammalian cells, as well as artificial muscles that use glucose as an energy source to propel it.

The intention is to engineer and integrate robot components that respond to light and chemicals in the same way as biological systems. This is a completely innovative way of pushing robotics forward.

Sea Lamprey

Sea lamprey mouth, close-up (credit: Great Lakes Fishery Commission)

Cyberplasm is being developed over the next few years as part of an international collaboration funded by the Engineering and Physical Sciences Research Council (EPSRC) in the UK and the National Science Foundation (NSF) in the USA.

The UK-based work is taking place at Newcastle University. The project originated from a “sandpit” (idea gathering session) on synthetic biology jointly funded by the two organizations.

Mimicking the sea lamprey

Cyberplasm will be designed to mimic key functions of the sea lamprey, a creature found mainly in the Atlantic Ocean. It is believed this approach will enable the micro-robot to be extremely sensitive and responsive to the environment it is put into. Future uses could include the ability to swim unobtrusively through the human body to detect a whole range of diseases.

The sea lamprey has a very primitive nervous system, which is easier to mimic than more sophisticated nervous systems. This, together with the fact that it swims, made the sea lamprey the best candidate for the project team to base Cyberplasm on.

Lamprey Mouths

Sea lamprey mouths (credit: Great Lakes Fishery Commission)

Once it is developed, the Cyberplasm prototype will be less than 1cm long. Future versions could potentially be less than 1mm long or even built at the nanoscale.

“Nothing matches a living creature’s natural ability to see and smell its environment and therefore to collect data on what’s going on around it,” says bioengineer Dr Daniel Frankel of Newcastle University, who is leading the UK-based work.

Micro-robot design

Cyberplasm’s sensors are being developed to respond to external stimuli by converting them into electronic impulses that are sent to an electronic “brain” equipped with sophisticated microchips. This brain will then send electronic messages to artificial muscles telling them how to contract and relax, enabling the robot to navigate its way safely using an undulating motion.

Similarly, data on the chemical make-up of the robot’s surroundings can be collected and stored via these systems for later recovery by the robot’s operators.

Cyberplasm could also represent the first step on the road to important advances in advanced prosthetics; living muscle tissue might be engineered to contract and relax in response to stimulation from light waves or electronic signals.

“We’re currently developing and testing Cyberplasm’s individual components,” says Daniel Frankel. “We hope to get to the assembly stage within a couple of years. We believe Cyberplasm could start being used in real-world situations within five years.”

Free Wireless Broadband for the Masses – Technology Review

When you think of basic human rights, access to wireless broadband Internet probably isn’t at the top of the list. But a new company backed by a Skype cofounder disagrees, and plans to bring free mobile broadband to the U.S. later this year under the slogan “The Internet is a right, not a privilege.”

Called FreedomPop, the service will give users roughly a gigabyte of free high-speed mobile Internet access per month on Clearwire’s WiMAX network and forthcoming LTE network. It will offer other low-cost prepaid plans that provide access to more data.

FreedomPop vice president of marketing Tony Miller gave few specific details about the company’s offerings and how it plans to make money—and won’t yet name executives or founders—but says he expects the service to roll out in the U.S. sometime between July and September and to eventually branch out to other countries as well.

FreedomPop’s arrival coincides with the rapid rise in smart-phone users and rollout of 4G networks as wireless carriers try to keep up with the growing demand for mobile data. The company is not the only one that sees an opportunity to launch a free 4G service: NetZero recently rolled out its own free and low-cost plans. But while NetZero offers 200 megabytes of free wireless data per month, FreedomPop will offer about five times that amount—more than most data users currently consume in a month.

“In our minds, the access piece is already a commodity we’re looking to further commoditize, in the same way Skype did with voice,” Miller says.

Miller says the company’s founders are friends with Skype cofounder Niklas Zennstrom, who has long wanted to work on a startup related to free Internet access. Zennstrom is a backer and an advisor, Miller says, but he is not an active manager at the company.

Miller says that, similar to Skype, FreedomPop will follow a “freemium” model where users receive some aspects of the service for free and must pay for more. After users surpass their monthly allotment, they will be charged a fee for going over that allotment (Miller says the overage charges will be “cheap”—probably about a penny per megabyte, though maybe a bit lower for prepaid customers—since FreedomPop wants to encourage use).

Turing’s Enduring Importance – Technology Review

When Alan Turing was born 100 years ago, on June 23, 1912, a computer was not a thing—it was a person. Computers, most of whom were women, were hired to perform repetitive calculations for hours on end. The practice dated back to the 1750s, when Alexis-Claude Clairaut recruited two fellow astronomers to help him plot the orbit of Halley’s comet. Clairaut’s approach was to slice time into segments and, using Newton’s laws, calculate the changes to the comet’s position as it passed Jupiter and Saturn. The team worked for five months, repeating the process again and again as they slowly plotted the course of the celestial bodies.

Today we call this process dynamic simulation; Clairaut’s contemporaries called it an abomination. They desired a science of fundamental laws and beautiful equations, not tables and tables of numbers. Still, his team made a close prediction of the perihelion of Halley’s comet. Over the following century and a half, computational methods came to dominate astronomy and engineering.

By the time Turing entered King’s College in 1931, human computers had been employed for a wide variety of purposes—and often they were assisted by calculating machines. Punch cards were used to control looms and tabulate the results of the American census. Telephone calls were switched using numbers dialed on a ring and interpreted by series of 10-step relays. Cash registers were ubiquitous. A “millionaire” was not just a very rich person—it was also a mechanical calculator that could multiply and divide with astonishing speed.

All these machines were fundamentally limited. They weren’t just slower, less reliable, and dramatically poorer in memory than today’s computers. Crucially, the calculating and switching machines of the 1930s—and those that would be introduced for many years to come—were each built for a specific purpose. Some of the machines could perform manipulations with math, some could even follow a changeable sequence of instructions, but each machine had a finite repertoire of useful operations. The machines were not general-purpose. They were not programmable.

Meanwhile, mathematics was in trouble.

In the early 1920s the great German mathematician David Hilbert had proposed formalizing all of mathematics in terms of a small number of axioms and a set of consistent proofs. Hilbert envisioned a technique that could be used to validate arbitrary mathematical statements—to take a statement such as “x + y = 3 and x – y = 3” and determine whether it was true or false. This technique wouldn’t rely on insight or inspiration on the part of the mathematician; it had to be repeatable, teachable, and straightforward enough to be followed by a computer (in Hilbert’s sense of the word). Such a statement-proving system would be powerful stuff indeed, for many aspects of the physical world can readily be described as a set of equations. If one were able to apply a repeatable procedure to find out whether a mathematical statement was true or false, then fundamental truths about physics, chemistry, biology—even human society—would be discoverable not through experiments in the lab but by mathematicians at a blackboard.

But in 1931, an Austrian logician named Kurt Gödel presented his devastating incompleteness theorem. It showed that for any useful system of mathematics, it is possible to create statements that are true but cannot be proved. Then came Turing, who drove the final stake through Hilbert’s project—and in so doing, set the path for the future of computing.

As Turing showed, the issue is not just that some mathematical statements are unprovable; in fact, no method can be devised that can determine in all cases whether a given statement is provable or not. That is, any statement on the blackboard might be true, might be false, might be unprovable … and it is frequently impossible to determine which. Math was fundamentally limited—not by the human mind but by the nature of math itself.

The brilliant, astonishing thing was the way Turing went about his proof. He invented a logical formalism that described how a human computer, taught to follow a complex set of mathematical operations, would actually carry them out. Turing didn’t understand how human memory worked, so he modeled it as a long tape that could move back and forth and on which symbols could be written, erased, and read. He didn’t know how human learning worked, so he modeled it as a set of rules that the human would follow depending on the symbol currently before her and some kind of internal “state of mind.” Turing described the process in such exact detail that ultimately, a human computer wasn’t even needed to execute it—a machine could do it instead. Turing called this theoretical entity the “automatic machine” or a-machine; today we call it a Turing machine.

In a 1936 paper, Turing proved that the a-machine could solve any computing problem capable of being described as a sequence of mathematical steps. What’s more, he showed that one a-machine could simulate another a-machine. What gave the a-machine this power was that its tape could store both data and instructions. In the words of science historian George Dyson, the tape held both “numbers that mean things” and “numbers that do things.”
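An a-machine is simple enough to sketch in a few lines: a tape, a head, a current state, and a rule table. The toy simulator below is our own illustration (the rule table shown, which inverts a binary string, is an assumed example, not one of Turing's):

```python
# A minimal sketch of Turing's "a-machine" (our own illustration).
def run(rules, tape, state="start", blank="_", max_steps=1000):
    tape = dict(enumerate(tape))  # tape as a sparse, extendable dict
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, blank)              # read the cell under the head
        write, move, state = rules[(state, symbol)]  # look up the rule
        tape[head] = write                           # write, then move left/right
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape)).strip(blank)

# Rule table: (state, read symbol) -> (write symbol, move, next state).
# This machine flips every bit, then halts at the first blank cell.
invert = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}
print(run(invert, "1011"))  # 0100
```

The key point from the text survives even in this toy: the rule table is just data, so one machine can read another machine's table off the tape and simulate it.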

Turing’s work was transformative. It made clear to the designers of early electronic computers that calculating machines didn’t need a huge inventory of fancy instructions or operations—all they needed were a few registers that were always available (the “state of mind”) and a memory store that could hold both data and code. The designers could proceed in the mathematical certainty that the machines they were building would be capable of solving any problem the humans could program.

These insights provided the mathematical formulation for today’s digital computers, though it was John von Neumann who took up Turing’s ideas and is credited with the machines’ design. Von Neumann’s design had a central core that fetched both instructions and data from memory, performed mathematical operations, stored the results, and then repeated. The machine could also query the contents of multiple locations in memory as necessary. What we now call the von Neumann architecture is at the heart of every microprocessor and mainframe on the planet. It is dramatically more efficient than the a-machine, but mathematically, it’s the same.
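That fetch-execute core can also be sketched in a few lines. Everything here — the instruction names, the memory layout — is our own toy illustration, not any real machine's design; the point is only that one memory array holds both instructions and data:

```python
# A toy stored-program machine (our own sketch of the von Neumann idea).
def run(memory):
    pc, acc = 0, 0                              # program counter, accumulator
    while True:
        op, arg = memory[pc], memory[pc + 1]    # fetch instruction + operand
        pc += 2
        if op == "LOAD":                        # execute
            acc = memory[arg]
        elif op == "ADD":
            acc += memory[arg]
        elif op == "STORE":
            memory[arg] = acc
        elif op == "HALT":
            return memory

# Code and data share one memory: cells 0-7 are the program, 8-10 data.
mem = ["LOAD", 8, "ADD", 9, "STORE", 10, "HALT", 0, 2, 3, 0]
print(run(mem)[10])  # 5
```

Because the program lives in the same memory as the data, a program could in principle read or overwrite its own instructions — the property Turing's tape made mathematically respectable.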

Incidentally, this essential feature of computers helps explain why cybersecurity is one of the most troubling problems of the modern age. For one thing, Turing showed that all a-machines are equivalent to one another, which is what makes it possible for an attacker to take over a target computer and make it run a program of the attacker’s choosing. Also, because it’s not always possible to discern what can be proved, a Turing machine cannot—no matter how much memory, speed, or time it has—evaluate another Turing machine’s design and reliably determine whether or not the second machine, upon being given some input, will ever finish its computations. This makes perfect virus detection impossible. It’s impossible for a program to evaluate a previously unseen piece of software and determine whether it is malicious without actually running it. The program might be benign. Or it may run for years before it wipes the user’s files. There is no way to know for sure without running the program.
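A weaker but concrete version of that limit is easy to demonstrate: any analyzer that merely simulates a suspect program for a fixed number of steps can be fooled by a program that waits one step longer before misbehaving. The sketch below is our own illustration, not a real malware detector:

```python
# Our own illustration of why bounded analysis can't settle the question.
def bounded_verdict(program, budget):
    """Simulate `program` step by step; judge it benign if nothing bad
    happens within `budget` steps."""
    steps = program()                        # generator: one action per step
    for _ in range(budget):
        action = next(steps, None)
        if action is None:
            return "benign"                  # finished cleanly within budget
        if action == "wipe files":
            return "malicious"
    return "benign (budget exhausted)"       # forced to guess

def sleeper(delay):
    """A 'sleeper' program that looks harmless for `delay` steps."""
    def program():
        for _ in range(delay):
            yield "idle"
        yield "wipe files"                   # then does the damage
    return program

print(bounded_verdict(sleeper(5), budget=100))     # malicious
print(bounded_verdict(sleeper(1000), budget=100))  # benign (budget exhausted)
```

Turing's result says something stronger: no budget, however generous, and no cleverer static analysis can close this gap for every possible program.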

In 1938 Turing began working with the British government and ultimately helped design a series of machines to crack the codes used by the Germans in World War II. The best source for that story is Andrew Hodges’s biography Alan Turing: The Enigma. Unfortunately, some details about Turing’s wartime work were not declassified until 2000, 17 years after Hodges’s book (and nearly 50 years after Turing committed suicide). As a result, his full contributions have not been well told.

Many histories of computing give the impression that it was a straightforward set of engineering decisions to use punch cards, then relays, then tubes, and finally transistors to build computing machines. But it wasn’t. General-purpose machines required Turing’s fundamental insight that data and code can be represented the same way. And keep in mind that all of today’s computers were developed with the help of slower computers, which in turn were designed with slower computers still. If Turing had not made his discovery when he did, the computer revolution might have been delayed by decades.

TR contributing editor Simson L. Garfinkel is an associate professor of Computer Science at the Naval Postgraduate School. His views do not represent the official policy of the United States government or the Department of Defense.

Drones in American airspace need to be regulated


The drone threat — in the U.S.

Drone proliferation raises an issue that has received too little attention: the threat that they could be used to carry out terrorist attacks.

The Switchblade

The Switchblade is a self-guided cruise missile designed to fit into a soldier’s rucksack. (AeroVironment / March 26, 2012)

By John Villasenor

March 27, 2012

President Obama signed a sweeping aviation bill in February that will open American airspace to “unmanned aircraft systems,” more commonly known as drones. Much of the recent discussion about the coming era of domestic drones, which will include those operated by companies and individuals, has been focused on privacy questions. However, drone proliferation also raises another issue that has received far less attention: the threat that they could be used to carry out terrorist attacks.

The technology exists to build drones that fit into a backpack and are equipped with a video camera and a warhead so they can be flown, cruise missile style, into a target. In fact, in September 2011 it was announced that the U.S. Army had signed a nearly $5-million contract with a California company, AeroVironment Inc., for the purchase of its Switchblade drones. A Switchblade launches from a tube roughly 2 feet long, sprouts wings immediately after exiting the tube and is then controlled by an operator who looks into a shoe-box-shaped viewer displaying video from the drone. It is equipped with an electric motor that is quiet even when running, and that can be switched off to enable a completely silent glide in the final moments of an approach.

Although reasonable people can disagree on how long it would take terrorists to build or acquire weaponized drones that can be guided by video into a target, there’s really no dispute that it is a question of when and not if. The day will come when such drones are available to almost anyone who wants them badly enough.

In fact, there is ample evidence that terrorist groups have already experimented with drones. As far back as the mid-1990s — practically ancient history in drone terms — the Japanese Aum Shinrikyo sect that carried out the sarin gas attack in the Tokyo subway reportedly considered drones. So too have Al Qaeda and the Colombian insurgent group FARC.

Nations with a record of close ties to terrorists are another concern. Iran unveiled a drone in August 2010 that President Mahmoud Ahmadinejad managed to describe as an “ambassador of death” and a “message of peace and friendship” in the same sentence.

So what can we do to reduce the risk? One good place to start is the “model aircraft” provision in the new aviation law, which allows hobbyists to operate drones weighing up to 55 pounds with essentially no governmental oversight. The law allows recreational drones to be operated in accordance with “community-based” safety guidelines established by a “nationwide community-based organization.” The inclusion of this language was a lobbying victory for model airplane enthusiasts. But is it really in the broader national interest?

It is not. One of the hallmarks of an effective national antiterrorism policy is consistency. The hobbyist exception is glaringly inconsistent with our overall approach to antiterrorism. By what logic, for example, do we prevent airline passengers from taking 8-ounce plastic water bottles through security checkpoints, while permitting anyone who so desires to operate a 50-pound, video-guided drone, no questions asked?

The overwhelming majority of the people in the model airplane and drone hobbyist community would never consider carrying out a terrorist attack. Yet the same could be said for the overwhelming majority of airline passengers, all of whom are subject to the same rules about what can be taken through airport security checkpoints.

Given the realities of the world we live in, it doesn’t seem unreasonable to require all civilian U.S. operators of drones capable of carrying a significant payload to obtain a license. A useful model can be found in fishing licenses, which provide an inexpensive, non-burdensome way for government agencies to know who is fishing.

A licensing program obviously wouldn’t eliminate the threat of drone terrorism. After all, terrorists won’t necessarily feel compelled to get a license. But the federal government has a legitimate national security interest in monitoring domestic drone use. Today, its ability to do so is inadequate. A licensing program would help plug a critical gap in the government’s knowledge regarding who should — and shouldn’t — be operating drones.

In September 2001, the United States was caught by surprise. Few of us had pondered the possibility that commercial airliners could be turned into weapons of terrorism. With drones, we know now — before any attack on U.S. soil has occurred — that they could be used for terrorism. There is no excuse for not doing our best to make sure that it never happens.

John Villasenor is a nonresident senior fellow at the Brookings Institution and a professor of electrical engineering at UCLA.

Copyright © 2012, Los Angeles Times



The Principality of Sealand – Become a Lord, Lady, Baron or Baroness

The History Of Sealand

History image

During World War II, the United Kingdom decided to establish a number of military bases, the purpose of which was to defend England against German air raids. These sea forts housed enough troops to man and maintain artillery designed to shoot down German aircraft and missiles. They were situated along the east coast of England on the edge of the English territorial waters. One of these bases, consisting of concrete and steel construction, was the famous royal …