Microsoft Tech to Control Computers With a Flex of a Finger

In the future, Microsoft apparently believes, people may simply twitch their fingers or arms to control a computer, game console or mobile device.

Microsoft applied for a patent on electromyography (EMG) controlled computing on Thursday, suggesting that a future smart wristwatch or armband might simply detect a user’s muscle movements and interpret them as gestures or commands. The “Wearable Electromyography-Based Controller” could also use a network of small sensors attached to the body, all communicating wirelessly with a central hub.

Patents typically take years to be granted, and Microsoft’s application doesn’t mean that the PC of tomorrow will replace the mouse or keyboard with a network of sensors stuck to the body with adhesive tape. Microsoft also showed off a prototype of an EMG controller in 2010, and it has filed complementary EMG controller patents, as well as a patent covering the gestures used to control them.

Biometric Control

Researchers have long looked for alternative methods of controlling devices biometrically, with varying degrees of success. Microsoft first treated the human body as just another input device when it launched the Kinect sensor, which tracks a user’s face and body via an onboard camera. Computing via brainwaves has also been proposed as an alternative method of input. Finally, EMG-controlled devices, such as prosthetics, have been talked about for some time. Still, all three methods have their challenges.

But EMG-based computing does imply several interesting possibilities: the ability to type without a keyboard; wiggling a finger, rather than an arm, to provide fine-grained Kinect controls; or new ways to control “waldos” and other robotic appendages. Microsoft even suggests that a glove-based version of the EMG controller might be used to automatically translate American Sign Language into written or spoken English or other languages. That’s pretty cool.

Microsoft’s patent application claims that an EMG sensor is a “universal” method of controlling any computing device, such as a television or light sensor. That may be true, although something like voice commands could also control just about anything. However, it does offer some sneaky, secret-agent possibilities:

“Further, it should also be appreciated that the control and interface capabilities provided by the Wearable Electromyography-Based Controller are potentially invisible in the sense that a user wearing one or more such controllers can remotely interact with various devices without anyone else being able to see or hear any overt actions by the user to indicate that the user is interacting with such devices,” the patent application says.

How Does It Work?

Muscles are connected to the skeleton via tendons. When the brain tells a muscle to contract, motor neurons fire, transmitting tiny electrical signals down to the muscle fibers. An EMG sensor can read the electrical impulses these so-called “motor units” produce. What Microsoft’s patent proposes is to interpret those electrical signals directly, without the need for a physical interface.
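
To make that pipeline concrete, here is a minimal sketch (in Python, with made-up sampling rates and thresholds; none of it comes from Microsoft’s patent) of how software might turn a raw EMG trace into a detected twitch: rectify the signal, smooth it into an activation envelope, and flag anything that crosses a threshold.

```python
# Minimal sketch: rectify a raw EMG window, smooth it into an
# activation envelope, and flag a twitch when the envelope crosses
# a threshold. Sampling rate and threshold are illustrative.
import numpy as np

def detect_twitch(samples, fs=1000, window_ms=50, threshold=0.3):
    """Return True if the EMG window contains a muscle activation."""
    rectified = np.abs(samples - samples.mean())   # remove DC offset, rectify
    win = int(fs * window_ms / 1000)
    envelope = np.convolve(rectified, np.ones(win) / win, mode="same")
    return bool(envelope.max() > threshold)

# Example: a burst of motor-unit activity riding on baseline noise
rng = np.random.default_rng(0)
trace = rng.normal(0, 0.02, 2000)
trace[800:1000] += rng.normal(0, 0.5, 200)   # simulated finger twitch
print(detect_twitch(trace))                  # True
```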

These sensors could be built into any number of devices or articles of clothing: watches or armbands, anklets, potentially even a Spandex undergarment.

The problem, and one Microsoft’s application goes to great lengths to address, is calibration. Fat, skin, and other muscles interfere with the minute electrical signals the sensors need to detect, so one of two things must happen: either a single sensor must be well placed, or a network of sensors must work together to interpret the signals. The most likely approach is for the sensor to provide some sort of feedback: a tone to indicate that the sensor is properly aligned, a voice prompt (“Keep turning! Keep turning! Got it!”), or some other way of notifying the user. Microsoft also suggests a variety of other methods, including using RF signals to position the sensors, or even applying minute electrical shocks so that the sensors can calibrate themselves.
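
A hedged sketch of what that feedback loop might look like in software, assuming a simple signal-to-noise estimate stands in for whatever placement metric Microsoft actually uses:

```python
# Sketch of the placement-feedback loop: prompt the wearer until the
# sensor's signal-to-noise ratio clears a quality bar. The SNR
# estimate and threshold are assumptions, not the patent's method.
import numpy as np

def snr_db(window, noise_floor=0.02):
    """Crude SNR estimate: RMS of the window vs. an assumed noise floor."""
    rms = float(np.sqrt(np.mean(window ** 2)))
    return 20 * np.log10(rms / noise_floor)

def placement_feedback(read_window, min_snr_db=10.0, max_tries=20):
    for _ in range(max_tries):
        if snr_db(read_window()) >= min_snr_db:
            print("Got it!")        # sensor now sits over the motor units
            return True
        print("Keep turning!")      # audible cue, as in the patent's example
    return False

# Demo: signal amplitude grows as the simulated sensor rotates into place
amplitudes = iter([0.01, 0.03, 0.08, 0.2])
placement_feedback(lambda: next(amplitudes) * np.ones(100))
```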

The EMG sensors would most likely connect wirelessly to a hub, which would serve as the central point for communicating with external devices. Microsoft suggests that this hub could be something like a smartphone, or an integrated device like a wristwatch, with the “watch” portion serving as the hub and sensors built into the band itself – a truly “smart watch.”

Twitch Gestures

You don’t always think about it when using a keyboard or mouse, but a physical device provides haptic, tactile, or visual feedback. A muscle twitched to surreptitiously control an electronic device may not be recognized at all, and the user may simply not know whether the command was accepted.

Microsoft’s armband – the preferred EMG controller, based on the number of times this example is used – would include a ring of vibrating elements around the edge of each armband, which would buzz the user when a command was accepted. The system could be programmed for any number of muscle movements or twitch gestures, defined as “a finger movement of any type,” Microsoft says. “A gesture may also be for a specific type of movement, for example, lift, press, bend, etc.”
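
A gesture-to-command table with a haptic acknowledgment could be as simple as the following sketch; the gesture names, commands, and buzz() call are all hypothetical, not drawn from the patent:

```python
# Illustrative gesture-to-command table with haptic acknowledgment.
# Gesture names, command strings, and buzz() are hypothetical.
GESTURE_COMMANDS = {
    ("index_finger", "lift"):  "media.play_pause",
    ("index_finger", "press"): "media.next_track",
    ("wrist", "bend"):         "volume.up",
    ("both_hands", "squeeze"): "controller.toggle_power",  # on/off gesture
}

def handle_gesture(muscle, movement, buzz):
    command = GESTURE_COMMANDS.get((muscle, movement))
    if command is not None:
        buzz()   # vibrating elements confirm the command was accepted
    return command

print(handle_gesture("both_hands", "squeeze", buzz=lambda: print("bzzt")))
```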

And yes, Microsoft even suggests that the system be turned off and on through a standardized movement, such as squeezing both hands simultaneously. And that’s good news, or else “talking with your hands” might take on a whole new meaning.

 


FDA OK’s ingestible sensor chip | KurzweilAI

Chip tracks adherence to oral medications, can report to caregiver
July 31, 2012


Ingestible sensor chip for electronically confirming adherence to oral medications. (a) A closer view of a sensor chip; (b) Sensor attached directly to a tablet. (c) Sensor co-encapsulated with a drug product using a sensor-enabled capsule carrier. (Credit: Proteus Digital Health)

Proteus Digital Health, Inc. announced Monday that the U.S. Food and Drug Administration (FDA) has cleared its ingestible sensor for marketing as a medical device.

The ingestible sensor (formally referred to as the Ingestion Event Marker or IEM) is part of the Proteus digital health feedback system, an integrated, end-to-end personal health management system designed to help improve patients’ health habits and connections to caregivers.

“The FDA validation represents a major milestone in digital medicine,” said Dr. Eric Topol, professor of genomics at The Scripps Research Institute and author of The Creative Destruction of Medicine: How the Digital Revolution Will Create Better Healthcare. “Directly digitizing pills, for the first time, in conjunction with our wireless infrastructure, may prove to be the new standard for influencing medication adherence and significantly aid chronic disease management.”


Wearable health monitor communicates with ingested edible sensors to collect physiologic data such as activity and heart rate. (a) Placement of a wearable health monitor on the body. (b) First-generation clinical version of a wearable health monitor. (c) Second-generation clinical version. (Credit: Proteus Digital Health)

The Proteus ingestible sensor can be integrated into an inert pill or other ingested products, such as pharmaceuticals. Once the ingestible sensor reaches the stomach, it is powered by contact with stomach fluid and communicates a unique signal that determines identity and timing of ingestion.

This information is transferred through the user’s body tissue to a patch worn on the skin that detects the signal and marks the precise time an ingestible sensor has been taken. Additional physiologic and behavioral metrics collected by the patch include heart rate, body position and activity.

The patch relays information to a mobile phone application. With the patient’s consent, the information is accessible by caregivers and clinicians, helping individuals to develop and sustain healthy habits, families to make better health choices, and clinicians to provide more effective, data-driven care.
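
As a rough illustration of that data flow, here is what a patch-side ingestion record might look like; the field names and JSON payload are assumptions made for the example, since Proteus has not published its formats:

```python
# Illustrative patch-side record of an ingestion event plus the
# physiologic metrics the patch collects; field names and the JSON
# payload are assumptions, as Proteus's formats are not public.
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class IngestionEvent:
    pill_id: str          # unique signal identifying the ingested sensor
    ingested_at: str      # time the patch detected the signal
    heart_rate_bpm: int   # metrics collected alongside the event
    body_position: str
    activity_level: str

def record_ingestion(pill_id, heart_rate, position, activity):
    event = IngestionEvent(
        pill_id=pill_id,
        ingested_at=datetime.now(timezone.utc).isoformat(),
        heart_rate_bpm=heart_rate,
        body_position=position,
        activity_level=activity,
    )
    return json.dumps(asdict(event))   # payload relayed to the phone app

print(record_ingestion("IEM-1234", 72, "upright", "resting"))
```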

The Strange Neuroscience of Immortality – The Chronicle Review – The Chronicle of Higher Education

By Evan R. Goldstein

Cambridge, Mass.


Illustrations by Harry Campbell for The Chronicle Review

In the basement of the Northwest Science Building here at Harvard University, a locked door is marked with a pink and yellow sign: “Caution: Radioactive Material.” Inside, researchers buzz around wearing dour expressions and plastic gloves. Among them is Kenneth Hayworth. He’s tall and gaunt, dressed in dark-blue jeans, a blue polo shirt, and gray running shoes. He looks like someone who sleeps little and eats less.

Hayworth has spent much of the past few years in a windowless room carving brains into very thin slices. He is by all accounts a curious man, known for casually saying things like, “The human race is on a beeline to mind uploading: We will preserve a brain, slice it up, simulate it on a computer, and hook it up to a robot body.” He wants that brain to be his brain. He wants his 100 billion neurons and more than 100 trillion synapses to be encased in a block of transparent, amber-colored resin—before he dies of natural causes.

Why? Ken Hayworth believes that he can live forever.

But first he has to die.

“If your body stops functioning, it starts to eat itself,” he explains to me one drab morning this spring, “so you have to shut down the enzymes that destroy the tissue.” If all goes according to plan, he says cheerfully, “I’ll be a perfect fossil.” Then one day, not too long from now, his consciousness will be revived on a computer. By 2110, Hayworth predicts, mind uploading—the transfer of a biological brain to a silicon-based operating system—will be as common as laser eye surgery is today.

It’s the kind of scheme you expect to encounter in science fiction, not an Ivy League laboratory. But little is conventional about Hayworth, 41, a veteran of NASA’s Jet Propulsion Laboratory and a self-described “outlandishly futuristic thinker.” While a graduate student at the University of Southern California, he built a machine in his garage that changed the way brain tissue is cut and imaged in electron microscopes. The combination of technical smarts and entrepreneurial gumption earned him a grant from the McKnight Endowment Fund for Neuroscience, a subsidiary of the McKnight Foundation, and an invitation to Harvard, where he stayed, on a postdoctoral fellowship, until April.

To understand why Hayworth wants to plastinate his own brain you have to understand his field—connectomics, a new branch of neuroscience. A connectome is a complete map of a brain’s neural circuitry. Some scientists believe that human connectomes will one day explain consciousness, memory, emotion, even diseases like autism, schizophrenia, and Alzheimer’s—the cures for which might be akin to repairing a wiring error. In 2010 the National Institutes of Health established the Human Connectome Project, a $40-million, multi-institution effort to study the field’s medical potential.

Among some connectomics scholars, there is a grand theory: We are our connectomes. Our unique selves—the way we think, act, feel—are etched into the wiring of our brains. Unlike genomes, which never change, connectomes are forever being molded and remolded by life experience. Sebastian Seung, a professor of computational neuroscience at the Massachusetts Institute of Technology and a prominent proponent of the grand theory, describes the connectome as the place where “nature meets nurture.”

Hayworth takes this theory a few steps further. He looks at the growth of connectomics—especially advances in brain preservation, tissue imaging, and computer simulations of neural networks—and sees something else: a cure for death. In a new paper in the International Journal of Machine Consciousness, he argues that mind uploading is an “enormous engineering challenge” but one that can be accomplished without “radically new science and technologies.”

“There are those who say that death is just part of the human condition, so we should embrace it. I’m not one of those people.”

That is not a prevailing view. Many scholars regard Hayworth’s belief in immortality as, at best, an eccentric diversion, too silly to take seriously. “I’m going to pretend you didn’t ask me that,” J. Anthony Movshon, a professor of neural science and psychology at New York University, snapped when I raised the subject.

But to Hayworth, science is about overturning expectations: “If 100 years ago someone said that we’d have satellites in orbit and little boxes on our desks that can communicate across the world, they would have sounded very outlandish.” One hundred years from now, he believes, our descendants will not understand how so many of us failed for so long to embrace immortality. In an unpublished essay, “Killed by Bad Philosophy,” he writes, “Our grandchildren will say that we died not because of heart disease, cancer, or stroke, but instead that we died pathetically out of ignorance and superstition”—by which he means the belief that there is something fundamentally unknowable about consciousness, and that therefore it can never be replicated on a computer.

Hayworth knows he’s courting ridicule. Talk of immortality has long been banished to the margins of intellectual life, to specialized circles on the Internet and places like Scottsdale, Ariz., home of the Alcor Life Extension Foundation, a focal point of the cryonics movement. (Hayworth has been a member of Alcor, if a skeptical one, since the mid-1990s.) In the popular mind, the quest to defeat death has become the stuff of late-night comedy (if you don’t know about Ted Williams’s head, Google it), not serious science.

So where does that leave Hayworth, an iconoclast with legitimate research credentials? Academe is an uneasy fit. His ideas are taboo and often ignored. (Just try getting a grant to study mind uploading.) Harvard has distanced itself from Hayworth, and a colleague at the Howard Hughes Medical Institute’s Janelia Farm Research Campus, in Ashburn, Va.—a center of connectomics scholarship, where Hayworth recently started as a senior scientist—told him that his interest in brain preservation and mind uploading is “a significant negative,” to the point that it delayed his hiring.

But Hayworth seems unfazed, confident in the march of progress. “We’ve had a lot of breakthroughs—genomics, space flight—but those are trivial in comparison to mind uploading,” he told me recently. “This will be earth-shattering because it will open up possibilities we’ve never dreamed of.” Perhaps sensing my skepticism, he added, “Other neuroscientists will come around when they see the massive amounts of connectome data that we’re generating, and they’ll say, ‘Wow, the future has arrived.'”

***

Connectomics is a new way of looking at an old idea. Since the mid-19th century, scientists have known that the brain comprises a dense web of neurons. Only recently, however, have they been able to get a detailed glimpse. The view is daunting. A piece of human brain tissue the size of a thimble contains around 50 million neurons and close to a trillion synapses. Scientists compare the task of tracing each connection to untangling a heaping plate of microscopically thin spaghetti.

In 1986, researchers did manage to map the nervous system of a millimeter-long soil worm known as C. elegans. Though the creature has only 302 neurons and 7,000 synapses, the project took a dozen years. (The lead scientist, Sydney Brenner, who won a Nobel Prize in Physiology or Medicine in 2002, is also at Janelia Farm.) The C. elegans connectome remains the only one ever completed. According to one projection, if the same techniques were used to map just one cubic millimeter of human cortex, it could take a million person-years.

In 2010, Jeff Lichtman, a professor of molecular and cellular biology at Harvard and a leading light in connectomics, and Narayanan Kasthuri, also of Harvard, published a small paper full of big numbers. Based on their estimates, a human connectome would generate one trillion gigabytes of raw data. By comparison, the raw data of the entire Human Genome Project fits in just a few gigabytes. A human connectome would be the most complicated map the world has ever seen.
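
The scale is easy to check with back-of-envelope arithmetic using the article’s own figures; the drive size below is just an illustrative assumption:

```python
# Back-of-envelope check on the scale comparison, using the article's
# figures. The 10 TB drive size is an illustrative assumption.
connectome_bytes = 1e12 * 1e9   # one trillion gigabytes ~ a zettabyte
genome_bytes = 3e9              # a few gigabytes of raw genome data

print(f"connectome-to-genome ratio: {connectome_bytes / genome_bytes:.0e}")  # ~3e11
print(f"10 TB drives needed: {connectome_bytes / 10e12:.0e}")                # ~1e8
```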


Yet it could be a reality before the end of the century, if not sooner, thanks to new technologies that “automate the process of seeing smaller,” as Sebastian Seung puts it in his new book, Connectome: How The Brain’s Wiring Makes Us Who We Are (Houghton Mifflin Harcourt). “Neuroscience has not yet been able to deliver on the idea of understanding the brain as a bunch of neurons because our tools have been too crude,” he explains in an interview. “But now there’s a new optimism that we can deliver on that promise.”

One source of optimism resides on a counter in a small room in the corner of the Harvard lab. It’s about the size of a sewing machine, and it’s called an ultramicrotome. Such devices have been in use for decades. This one, however, is tricked out with enhancements that Hayworth began developing years ago in his garage.

Perched on a stool, he talks me through how it works. A tiny diamond blade shaves tissue samples into slices as thin as 30 nanometers—thousands of times thinner than a human hair. The slices are then brought to another part of the lab and imaged in an electron microscope. Stack up a few hundred or a few thousand of these pictures and you get a high-resolution, three-dimensional view of a neural network—the building blocks of a connectome.
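
In software terms, the reconstruction step amounts to stacking registered 2D images into a 3D array. A minimal sketch, with placeholder image sizes and an assumed in-plane resolution:

```python
# Minimal sketch of the reconstruction step: stack 2D electron-
# microscope images of successive 30 nm slices into a 3D volume.
# Image size and in-plane pixel resolution are placeholders.
import numpy as np

SLICE_THICKNESS_NM = 30   # z-resolution set by the diamond blade
PIXEL_SIZE_NM = 4         # assumed in-plane microscope resolution

def stack_slices(images):
    """Stack same-sized 2D slice images into a (z, y, x) volume."""
    return np.stack(images, axis=0)

# 500 simulated slices of 1024 x 1024 pixels
volume = stack_slices([np.zeros((1024, 1024), dtype=np.uint8)
                       for _ in range(500)])
depth_um = volume.shape[0] * SLICE_THICKNESS_NM / 1000
print(volume.shape, f"~{depth_um:.0f} um of tissue depth")   # (500, 1024, 1024) ~15 um
```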

Ultramicrotome slices have traditionally been collected manually, which was slow going and error-prone. Hayworth transformed the process. He shows me how slices are now automatically affixed to a spool of white, carbon-coated tape. Along with Richard Schalek, a colleague at Harvard’s Center for Brain Science, Hayworth started a company, Synaptoscopics, to spread this innovation to other labs.

Thus far, tape-collection devices have been installed at several institutions, including Columbia, MIT, Harvard Medical School, and the Albert Einstein College of Medicine, at Yeshiva University. Hayworth and Schalek recently traveled to Vienna to meet with engineers from the microscope maker Leica Microsystems, which is interested in adding Hayworth’s device to its product line.

The idea for the machine came to Hayworth when he was working at the Jet Propulsion Laboratory. It was his first job after graduating from the University of California at Los Angeles with a degree in computer science. He designed gyroscopes to orient spacecraft. Most of his dozen or so patents—he can’t recall the exact number—originated at NASA. Don’t be too impressed, he says: “I’ve made zero money on all of them so far.”

Hayworth left the space agency in 2003 to begin graduate studies at the University of Southern California. “It was immediately clear to everyone in the department that Ken is an extraordinary engineer, scientist, and deep, creative thinker,” says Irving Biederman, a professor of neuroscience there. But when Hayworth shopped his design for an automated brain slicer among the faculty, no one was interested. So he joined Biederman’s lab, conducting fMRI research on the human visual system. At night and on weekends, he tinkered in his garage. Six months and $10,000 of his own money later, he had a crude but functional prototype, which Biederman took to calling “the apple peeler.”

Hayworth put up a slapdash Web site with photos and, as he puts it, “a grand vision of where I wanted this to go.” He also sent a blind e-mail to John Fiala, then a research assistant professor of neuroscience at Boston University. Fiala had written a paper in 2002 calling for fresh approaches to imaging entire brains—fly, mouse, eventually human. It made a big impression on Hayworth. Fiala passed a link to Hayworth’s Web site to Jeff Lichtman, at Harvard. Not long after, out of the blue, Hayworth received a call. “It was crazy,” he says. “Jeff wanted to fly me out to Harvard to give a talk.”

It didn’t go well. “Ken’s ideas met with skepticism and even derision,” recalls Fiala, who was at the meeting. But Lichtman flew Hayworth out a second time to meet with a group of electron-microscope specialists. “I’d never even used an electron microscope,” Hayworth says with a snort. “I’d read all the papers and tried to figure out what was required, but I had no leg to stand on.” He shakes his head. “It was intimidating.”

Hayworth says that Lichtman remained skeptical but agreed to jointly apply for a grant from the McKnight Endowment Fund for Neuroscience. In 2005 they received $200,000 to further develop the brain-slicing prototype. More funds soon followed. Hayworth began to split his time between Los Angeles and Boston, between USC and Harvard, between cognitive neuroscience and connectomics. His wife and two young sons remained in Los Angeles. “Ken did the equivalent of two dissertations in two different fields,” says Biederman.

Hayworth received his Ph.D. in 2009 and joined Lichtman’s lab full time. Discussing their collaboration with The New York Times a few years ago, Lichtman said, “Ken reminded me of Scotty from Star Trek. I just kept asking for more juice—pushing him like a psychopath to slice thinner and thinner.” (Lichtman did not respond to e-mails requesting an interview.)

You have to wonder: Why didn’t Fiala dismiss Hayworth as a naïve crank? After all, at the time, Hayworth was an unknown grad student working out of his garage. Asked to explain, Fiala, who has since left academe, quotes Eric Kandel’s autobiography, In Search of Memory, in which the Nobel-winning neuroscientist attributes his own early success to plucking the “low-hanging fruit.” As Fiala told me, “If Ken achieves a comparable amount of success, it will be because he went after the highest fruit in the tree at the start of his career.” To put it another way, Hayworth was too audacious to ignore.

***

After a tour of the Harvard lab, Hayworth and I end up in a quiet room. Just a laptop, another ultramicrotome, and some tools strewn across two workbenches. Hayworth closes the door and pulls up two chairs. We talk about the one-bedroom apartment he rents near the campus, his Roman Catholic upbringing and later embrace of atheism, and his hope to one day visit Mars.

Then he leans in close. “I’m pissed at the human condition. We have a very short life span. Maybe there are strong people who say, ‘That’s just the human condition, we should embrace it.’ I’m not one of those people.”

He goes on: “You’re going to become old and frail. At some point you’ll be so old and so frail that you’ll stop caring. You’ll say”—his voice rises a register—”‘Death, I don’t know why I didn’t embrace you a long time ago.'” He takes a deep breath. “I want to give people the option to hit pause. It’s not suicide,” he says, stressing each syllable. “It’s a pause. You can tell your family members, ‘I’m pretty sure I’ll see you on the other side.’ That’s the difference.” He thrusts a finger into the air. “This isn’t cryonics, where maybe you have a .001 percent chance of surviving. We’ve got a good scientific case for brain preservation and mind uploading.”

That case is deeply speculative. Here’s how Hayworth envisions his own brain-preservation procedure. Before becoming “very sick or very old,” he’ll opt for an “early ‘retirement’ to the future,” he writes. There will be a send-off party with friends and family, followed by a trip to the hospital. “I’m not going in for some back-alley situation. We need to get the science right to convince the medical community. It’s a very clear dividing line: I will not advocate any technique until we have good proof that it works.”

After Hayworth is placed under anesthesia, a cocktail of toxic chemicals will be perfused through his still-functioning vascular system, fixing every protein and lipid in his brain into place, preventing decay, and killing him instantly. Then he will be injected with heavy-metal staining solutions to make his cell membranes visible under a microscope. All of the water will then be drained from his brain and spinal cord, replaced by pure plastic resin. Every neuron and synapse in his central nervous system will be protected down to the nanometer level, Hayworth says, “the most perfectly preserved fossil imaginable.”

His plastic-embedded brain will eventually be cut into strips, perhaps using a machine like the one he invented, and then imaged in an electron microscope. His physical brain will be destroyed, but in its place will be a precise map of his connectome. In 100 years or so, he says, scientists will be able to determine the function of each neuron and synapse and build a computer simulation of his mind. And because the plastination process will have preserved his spinal nerves, he’s hopeful that his computer-generated mind can be connected to a robot body.

“This is not something everyone would want to do,” Hayworth allows. “But it’s something everyone should have the right to do.”

***

Hayworth might sound arrogant, but he doesn’t come across that way in person. His demeanor is a strange mix of intellectual hubris and personal modesty. Indeed, he has a winningly wry, self-deprecating charm. He sometimes prefaces his out-there statements with a disclaimer: “This is going to sound grandiose, I apologize.” He jokes that his ideas seem like they were cooked up late at night on Art Bell’s radio show.

It’s a funny line that suggests a serious question: Should we take Hayworth seriously? Mainstream science, it seems, does not. But the skepticism of his colleagues isn’t what most frustrates him. It’s their indifference. “People should be skeptical,” he says, “but why aren’t we firing papers back and forth arguing about why brain preservation or mind uploading won’t work?” He drops his head in his hands. “This is what shakes me the most. I’m a big believer in the scientific process—peer review, grant procedures—and to see these questions go unpursued is just, is just … ” He trails off. “Something is wrong with this situation.”

A few years ago, in an effort to pry open the scientific mind, Hayworth founded the Brain Preservation Foundation. Its central mission is to promote research into whole-brain preservation and ensure that any breakthroughs are legally available. The foundation has published a Brain Preservation Bill of Rights on its Web site. “It is our individual unalienable right to choose death, or to choose the possibility of further life for our memories or identity, as desired,” the document declares.

“Under the appropriate conditions, it must also be our right to choose to undergo an uncertain medical procedure which may indeed shorten our life, but which we believe has the possibility of greatly extending it, in quality as well as in duration.” In a footnote, Hayworth likens the struggle to legalize brain preservation to the battle for abortion rights.

The foundation’s most significant initiative is a cash prize—currently $106,720, and growing as donations come in—for the first individual or team to preserve the connectome of a large mammal. Announcing the prize in Cryonics magazine, published by Alcor, Hayworth challenged his generation of scientists to “re-evaluate what is possible, to move beyond the expectations of their parents and grandparents and look at the problem with a fresh perspective.”

For now there are two top contenders for the prize: 21st Century Medicine, a cryobiology company in California, and Shawn Mikula, a postdoc at the Max Planck Institute for Medical Research, in Heidelberg, Germany, a major center of connectomics scholarship. A dependable brain-preservation protocol is possible within five years, Hayworth says. “We might have a whole mouse brain preserved very soon.”

Current methods of preserving brain tissue, an intensely fragile substance, top out at around one cubic millimeter—far, far short of an entire human brain. At the Harvard lab, Hayworth and his colleagues use a technique that has been around for decades. They cut open the chest of a live mouse and insert a needle into the left ventricle. A series of solutions and chemicals are injected into the mouse’s vasculature. The operation is well established but still unpredictable. “When we do brain-tissue perfusions, there is a stack of five mice that have gone bad,” Hayworth says. “They just didn’t work for one reason or another.”


The advisory board of the Brain Preservation Foundation includes several prominent thinkers. Among them are Sebastian Seung; Olaf Sporns, a neuroscientist at Indiana University Bloomington, who, in 2005, gave connectomics its name; Gregory Stock, a former director of UCLA’s Program on Medicine, Technology and Society; David Eagleman, a neuroscientist at Baylor College of Medicine; and Michael Shermer, founding publisher of Skeptic magazine, a keep-them-honest quarterly that tries to distinguish science from pseudoscience.

Some board members insist that they don’t share all of Hayworth’s views. Sporns, for example, says he joined the group because there is a need in neuroscience for better brain-preservation techniques. About mind uploading, however, he’s dubious. “It’s a fundamentally flawed idea,” he says, running through a litany of technical objections. But after a few minutes, Sporns takes a different tack: “Science has tremendous self-correcting mechanisms. Truly crazy ideas never go far, but unconventional ideas do sometimes push forward the boundaries of knowledge. So I salute Ken’s courage and hope he continues to push the envelope.”

W. Patrick McCray, a historian of science at the University of California at Santa Barbara, has a name for people like Hayworth: visioneers. In a forthcoming book from Princeton University Press, McCray describes visioneers as technology-minded, entrepreneurial futurists who propose radical, even heretical ideas. What distinguishes a visioneer from your average arm-waving crank—or politician—hollering about moon colonies? Expertise and credibility, says McCray. “When a skilled physicist offers a detailed design and shows you the numbers, you might conclude that space colonies are not economically or politically possible, but maybe they’re technically possible.”

He adds: “Visioneers have ideas that stand out there as something to be looked at, maybe shot down, proven or disproven, but they are part of the process of staking out where the frontier of science is.”

These days Hayworth has a more quotidian concern on his mind: money. The Brain Preservation Foundation has no headquarters, operating budget, or endowment, just a handful of volunteers and a sticker on Hayworth’s home mailbox. A few months ago, the IRS granted the foundation nonprofit status. The process took longer than expected. “They seemed confused by the concept,” Hayworth explains with a grin. Donations have not been rolling in.

He has tried to rally deep-pocketed allies to the cause. Before creating the foundation he met with Peter Diamandis, founder of the X Prize Foundation, which offers cash to entrepreneurs who achieve big goals, like building a mobile device that would allow people to diagnose their own diseases. Hayworth, who tried to persuade him to establish a brain-preservation X Prize, came away with the impression that the issue is “too hot” for Diamandis’s corporate sponsors. More recently Hayworth met with representatives of Peter Thiel, a co-founder of PayPal and an investor in biotech ventures. Hayworth was told that Thiel is interested in his ideas, but thus far not much has come of it.

“This is not a cheap endeavor,” Hayworth says, noting the strain on his personal finances. “It’s a bottomless pit.” (Most of the foundation’s prize-money fund comes from an anonymous $100,000 pledge.) But given his druthers, he adds, “I would take every bit of money I have and can get my hands on and throw it at some science project.”

***

“Mind uploading is part of the zeitgeist,” says Sebastian Seung. “People have become believers in virtual worlds because of their experience with computers. That makes them more willing to consider far-out ideas.” We’re in Seung’s lab at MIT. He’s dressed in black sneakers and a white T-shirt with a sentence from his book, Connectome, emblazoned across the chest: “The neuron is my second favorite cell.” (Seung’s favorite cell: sperm.)

Connectome comes adorned with praise—”page-turner,” “path-breaking neuroscientist”—from big-name scholars like Michael S. Gazzaniga and Steven Strogatz. The book is full of bold speculations: that mapping connectomes might cure mental disorders, improve cognition, explain how memories form. Connectomes, Seung writes, will “dominate our thinking about what it means to be human.”

The last two chapters—”To Freeze or to Pickle?” and “Save As …”—are devoted to what he calls the “logical extreme” of connectomics: cryonics and mind uploading. “There is only one truly interesting problem in science and technology,” Seung writes, “and that is immortality.”

His tone, in the book and in conversation, is that of an open-minded skeptic. Of brain preservation, he says simply, “it’s possible” but not imminent. As for immortality, he’s quite sure that he’ll die, just as we all will. The discussion about these issues has reached an impasse, he explains. Until someone dead is brought back to life, “it’s just your word against mine, a philosophical debate.” But connectomics can provide a way forward, he says. “We can’t prove immortality is possible, but we can disprove it. And once you provide the potential for an idea to be disproved, it can become part of scientific discourse.”

Seung proposes a two-part test. First, is it true that we are our connectomes? Second, does cryonics or chemical brain preservation keep the connectome intact? If either statement is false, then freezing or uploading can’t work. If both statements are true, immortality isn’t in the offing, he cautions, but it’s at least plausible. “Some colleagues may think this is all kind of crazy,” he says, “but these questions can be addressed in an intellectually rigorous way.”

For now, however, “I am my connectome” is an untestable hypothesis, and some within the field bristle at the notion. “There is a relationship between aspects of connectivity and personality and behavior, but I am not my connectome,” says Olaf Sporns. “If I had a complete map of your circuits, I wouldn’t be able to read it like a book.”

The speculative frenzy in connectomics reminds him of the buzz about genomics in the 1980s and 90s: “People thought it would explain who you are, but it didn’t turn out that way.” Genomics did revolutionize science, however, and connectomics might do the same. “It’s difficult to imagine doing biology without knowledge of the genome,” says Sporns, who is writing a book about connectomics for MIT Press. “We have a similar need in neuroscience for a foundation to ask better questions. The connectome could be that foundation.”

J. Anthony Movshon, of NYU, takes a dimmer view. More than 25 years after the C. elegans connectome was completed, he says, we have only a faint understanding of the worm’s nervous system. “We know it has sensory neurons that drive the muscles and tell the worm to move this way or that. And we’ve discovered that some chemicals cause one response and other chemicals cause the opposite response. Yet the same circuit carries both signals.” He scoffs, “How can the connectome explain that?”

Movshon, who has debated Seung in public, sums his argument up like this: “Our brains are not the pattern of connections they contain, but the signals that pass along those connections.”

The Baylor neuroscientist David Eagleman offers an analogy. Suppose that an alien creates a perfect street map of Manhattan. Would that explain how Manhattan functions and what makes it unique? To answer those questions, he says, the alien would need to see what’s happening inside those buildings, how people are interacting. Dropping the analogy, Eagleman says, “Maybe we need to know the states of the individual proteins, their exact spatial distributions, how they articulate with neighboring proteins, and so on.” To understand the brain well enough to copy it, neurons might be insufficient.

“Neuroscience is obsessed with neurons because our best technology allows us to measure them,” Eagleman says. “But each individual neuron is in fact as complicated as a city, with millions of proteins inside of it, trafficking and interacting in extraordinarily complex biochemical cascades.”

***

Hayworth left Harvard in April. He says he was lured to Janelia Farm by a new approach to brain imaging that uses a focused ion beam instead of a diamond blade. “This technology could, with an Apollo or Manhattan Project budget, map an entire human brain,” he says. “It’s capable of a resolution for mind uploading.” The immediate goal is more modest. Hayworth and his colleagues want to find the connectome of a fly.

Asked if he’d rather have stayed on at Jeff Lichtman’s lab at Harvard, Hayworth says there was no discussion of moving him into a permanent role. “It’s not easy to transition directly from a postdoc into an academic position at Harvard.” Even so, his departure raises a question: Did his unorthodox views diminish his job prospects? He doesn’t think so.

“I tried to not get Jeff in trouble,” he says, insisting that he kept his brain-preservation work separate from his official duties at the lab. But there was the potential for awkwardness. Last year Hayworth agreed to participate in a Discovery Channel show about immortality. The interview had to take place in Los Angeles because Lichtman would not allow Hayworth to be filmed on the campus or be identified with Harvard.

Hayworth says that he is uninterested in being a public figure, but that someone needs to be out front pushing these ideas. “It is not yet possible for a scientist to say that one of the end goals of connectomics is mind uploading, and one of the possible applications of chemical brain preservation is as a medical technique to preserve people after they die,” he tells me. “I’m willing to put my career on the line to create space for dialogue and research on this issue.”

My conversations with Hayworth took place over several months, and I was struck by how his optimism often gave way to despair. “I’ve become jaded about whether brain preservation will happen in my lifetime,” he told me at one point. “I see how much pushback I get. Even most neuroscientists seem to believe that there is something magical about consciousness—that if the brain stops, the magic leaves, and if the magic leaves, you can’t bring the magic back.”

I asked him if the scope of his ambitions ever gives him pause. If he could achieve immortality, might it usher in a new set of problems, problems that we can’t even imagine?

“Here’s what could happen,” he said. “We’re going to understand how the brain works like we now understand how a computer works. At some point, we might realize that the stuff we hold onto as human beings—the idea of the self, the role of mortality, the meaning of existence—is fundamentally wrong.”

Lowering his voice, he continued. “It may be that we learn so much that we lose part of our humanity because we know too much.” He thought this over for a long moment. “I try not to go too far down that road,” he said at last, “because I feel that I would go mad.”

Evan R. Goldstein is managing editor of The Chronicle Review.

The End of Chinese Manufacturing and Rebirth of U.S. Industry – Forbes


There is great concern about China’s real-estate and infrastructure bubbles.  But these are just short-term challenges that China may be able to spend its way out of. The real threat to China’s economy is bigger and longer term: its manufacturing bubble.

By offering subsidies, cheap labor, and lax regulations, and by rigging its currency, China was able to seduce American companies into relocating their manufacturing operations there. Millions of American jobs moved to China, and manufacturing became the underpinning of China’s growth and prosperity. But rising labor costs, concerns over government-sponsored I.P. theft, and production time lags are already causing companies such as Dow Chemical, Caterpillar, GE, and Ford to start moving some manufacturing back to the U.S. from China. Google recently announced that its Nexus Q streaming media player would be made in the U.S., putting pressure on Apple to follow suit.

But rising costs and political pressure aren’t what’s going to rapidly change the equation. The disruption will come from a set of technologies that are advancing at exponential rates and converging.

These technologies include robotics, artificial intelligence (AI), 3D printing, and nanotechnology. They have been advancing slowly so far, but are now beginning to advance exponentially, just as computing does. Witness how computing has advanced to the point at which the smartphones we carry in our pockets have more processing power than the supercomputers of the ’60s—and how the Internet, which also has its origins in the ’60s, went on an exponential growth path about 15 years ago and rapidly changed the way we work, shop, and communicate. That’s what lies ahead for these new technologies.

The robots of today aren’t the androids or Cylons of science-fiction movies, but specialized electro-mechanical devices controlled by software and remote controls. As computers become more powerful, so do the abilities of these devices. Robots are now capable of performing surgery, milking cows, doing military reconnaissance and combat, and flying fighter jets. And DIYers are lending a helping hand. There are dozens of startups, such as Willow Garage, iRobot, and 9th Sense, selling robot-development kits to university students and open-source communities, which are creating ever more sophisticated robots and new applications for them. Watch, for example, this video of the autonomous flying robots that University of Pennsylvania professor Vijay Kumar created with his students.

The factory assembly that the Chinese are performing is child’s play for the next generation of robots—which will soon become cheaper than human labor. Indeed, one of China’s largest manufacturers, Taiwan-based Foxconn Technology Group, announced last August that it plans to install one million robots within three years to do the work that its workers in China presently do. It found Chinese labor to be too expensive and demanding. The world’s most advanced car, the Tesla Model S, is also being manufactured in Silicon Valley, which is one of the most expensive places in the country. Tesla can afford this because it is using robots to do the assembly.

Then there is artificial intelligence (AI)—software that makes computers do things that, if humans did them, we would call intelligent. We left AI for dead after the hype it created in the ’80s, but it is alive and kicking—and advancing rapidly. It is powering all sorts of technologies. This is the technology that IBM’s Deep Blue computer used in beating chess grandmaster Garry Kasparov in 1997, and that enabled IBM’s Watson to beat Jeopardy! champions in 2011. AI is making it possible to develop self-driving cars, voice-recognition systems such as Apple’s Siri, and the face-recognition software Facebook recently acquired. AI technologies are also finding their way into manufacturing and will allow us to design our own products at home with the aid of AI-powered design assistants.

How will we turn these designs into products? By “printing” them at home, or at a modern-day Kinko’s: shared public manufacturing facilities such as TechShop, a membership-based manufacturing workshop, using new manufacturing technologies that are now on the horizon.

A type of manufacturing called “additive manufacturing” is making it possible to cost-effectively “print” products. In conventional manufacturing, parts are produced by humans using power-driven machine tools, such as saws, lathes, milling machines, and drill presses, to physically remove material until the desired shape is obtained. This is a cumbersome process that becomes more difficult and time-consuming as complexity increases. In other words, the more complex the product you want to create, the more labor and effort is required.

In additive manufacturing, parts are produced by melting successive layers of materials based on 3D models—adding materials rather than subtracting them. The “3D printers” that do this use powdered metal, droplets of plastic, and other materials—much like the toner cartridges that go into laser printers. This allows objects to be created without any tools or fixtures. The process produces no waste material, and there is no additional cost for complexity. Just as a laser-printed page filled with graphics costs little more than one with plain text, a 3D printer can produce a sophisticated 3D structure for about the cost of a brick.
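
The no-cost-for-complexity point can be seen in toy arithmetic: an additive build deposits the same stack of layers whether each layer’s outline is simple or intricate. The numbers below are illustrative:

```python
# Toy arithmetic: an additive build deposits the same stack of layers
# whether each layer's outline is simple or intricate, so geometric
# complexity adds no extra cost. All numbers are illustrative.
part_height_mm = 50.0
layer_height_mm = 0.1
seconds_per_layer = 12   # assumed printer speed

layers = int(part_height_mm / layer_height_mm)
print(f"{layers} layers, ~{layers * seconds_per_layer / 3600:.1f} h print time")
```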

3D printers can already create physical mechanical devices, medical implants, jewelry, and even clothing. The cheapest 3D printers, which print rudimentary objects, currently sell for between $500 and $1,000. Soon we will have printers at this price that can print toys and household goods. By the end of this decade, we will see 3D printers doing the small-scale production of previously labor-intensive crafts and goods. It is entirely conceivable that in the next decade we will start 3D-printing buildings and electronics.

Super-fast Google Fiber for Kansas City | KurzweilAI


Google has announced Google Fiber, to be installed first in Kansas City.

Google Fiber is 100 times faster than today’s average broadband.

Imagine: instantaneous sharing; truly global education; medical appointments with 3D imaging; even new industries we haven’t yet dreamed of, all powered by a gig.

Google has divided Kansas City into small communities called “fiberhoods.” To get service, each fiberhood needs a critical mass of its residents to pre-register. The fiberhoods with the highest pre-registration percentages will get Google Fiber first.

Households in Kansas City can pre-register for the next six weeks, and they can rally their neighbors to pre-register, too. Once the pre-registration period is over, residents of the qualifying fiberhoods will be able to choose from three different packages (including TV).
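
The rollout ordering Google describes amounts to a simple ranking. A sketch with invented data:

```python
# Sketch of the rollout ordering: rank fiberhoods by pre-registration
# percentage and serve the highest first. The data is invented.
fiberhoods = {
    "Fiberhood A": (820, 1000),   # (pre-registered, total households)
    "Fiberhood B": (640, 700),
    "Fiberhood C": (150, 900),
}

ranked = sorted(fiberhoods.items(),
                key=lambda kv: kv[1][0] / kv[1][1], reverse=True)
for name, (pre, total) in ranked:
    print(f"{name}: {pre / total:.0%} pre-registered")
```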

While high-speed technology exists, the average Internet speed in the U.S. is still only 5.8 megabits per second (Mbps)—only slightly faster than the maximum speed available 16 years ago, when residential broadband was first introduced.
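
Quick arithmetic shows what those numbers mean in practice; the file size is an illustrative example:

```python
# Quick arithmetic behind the speed comparison: a gigabit fiber link
# vs. the 5.8 Mbps U.S. average, downloading an illustrative 4 GB file.
avg_mbps, fiber_mbps = 5.8, 1000
file_megabits = 4 * 8 * 1000   # 4 GB expressed in megabits

print(f"fiber is ~{fiber_mbps / avg_mbps:.0f}x faster")                 # ~172x
print(f"at the U.S. average: {file_megabits / avg_mbps / 60:.0f} min")  # ~92 min
print(f"on fiber: {file_megabits / fiber_mbps:.0f} s")                  # 32 s
```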

“Access speeds have simply not kept pace with the phenomenal increases in computing power and storage capacity that’s spurred innovation over the last decade, and that’s a challenge we’re excited to work on,” says the Official Google Blog.