Cray unveils Cray XC30 supercomputer, capable of scaling to 100 petaflops | KurzweilAI

Cray XC30 supercomputer (credit: Cray Inc.)

Cray Inc. has launched the Cray XC30 supercomputer, previously code-named “Cascade,” designed to scale high performance computing (HPC) workloads to more than 100 petaflops, with more than one million cores.

Cray did not specify whether the 100 petaflops was Rpeak or Rmax, or when a 100 petaflops installation might be planned.

China’s Guangzhou Supercomputing Center also recently announced the development of a supercomputer capable of 100 petaflops peak performance: the Tianhe-2 supercomputer, due to launch in 2015.

Developed in conjunction with the U.S. Defense Advanced Research Projects Agency, the Cray XC30 combines the new Aries interconnect, Intel Xeon processors, Cray’s fully-integrated software environment, and innovative power and cooling technologies.

Die shot of the Aries interconnect chip (credit: Cray Inc.)

Several leading HPC centers have signed contracts to purchase Cray XC30 supercomputers, including:

  • The Swiss National Supercomputing Centre (CSCS) in Lugano, Switzerland
  • The Pawsey Centre in Perth, Australia, owned by CSIRO and operated by iVEC
  • The Finnish IT Center for Science Ltd. (CSC)
  • The Department of Energy’s National Energy Research Scientific Computing Center (NERSC) in Berkeley, Calif.
  • The Academic Center for Computing and Media Studies (ACCMS) at Kyoto University in Kyoto, Japan
  • The University of Stuttgart’s High Performance Computing Center Stuttgart (HLRS) in Germany

The Cray XC30 will utilize the Intel Xeon processors E5-2600 product family. Future versions of the Cray XC family of supercomputers will be available with the new Intel Xeon Phi coprocessors and NVIDIA Tesla GPUs based on the next-generation NVIDIA Kepler GPU computing architecture.

Early shipments of the Cray XC30 are starting now, and systems are expected to be widely available in the first quarter of 2013.

“Cray is a leader in the high-end of the supercomputing industry, and the Cray XC30 system promises to continue the Company’s strong standing in the market for designing, building and installing leadership-class supercomputers, such as the ‘Titan’ system at Oak Ridge National Laboratory and the ‘Blue Waters’ supercomputer at the University of Illinois’ National Center for Supercomputing Applications,” said Earl Joseph, IDC program vice president for HPC. “The Cray XC30 supercomputer also advances Cray’s Adaptive Supercomputing vision, which aims to boost application performance for their customers by exploiting hybrid processing.”

The Cray XC30 supercomputer is made possible in part by Cray’s participation in the Defense Advanced Research Projects Agency’s (DARPA) High Productivity Computing Systems program.

Are you elderly and having memory or concentration problems? | KurzweilAI

(Credit: Wikimedia Commons)

They might be caused by common medications used to treat insomnia, anxiety, itching or allergies, according to Dr. Cara Tannenbaum, Research Chair at the Institut universitaire de gériatrie de Montréal (IUGM, Montreal Geriatric University Institute) and Associate Professor of Medicine and Pharmacy at the University of Montreal (UdeM).

Up to 90 percent of people over the age of 65 take at least one prescription medication. Eighteen percent of people in this age group complain of memory problems and are found to have mild cognitive deficits. Research suggests there may be a link between the two.

Dr. Tannenbaum recently led a team of international researchers to investigate which medications are most likely to affect amnestic (memory) or non-amnestic (attention, concentration, performance) brain functions. After analyzing the results from 162 experiments on medications with potential to bind to cholinergic, histamine, GABAergic or opioid receptors in the brain, Dr. Tannenbaum concluded that the episodic use of several medications can cause amnestic or non-amnestic deficits.

The 68 trials on benzodiazepines (which are often used to treat anxiety and insomnia) that were analyzed showed that these drugs consistently lead to impairments in memory and concentration, with a clear dose-response relationship. The 12 tests on antihistamines and the 15 tests on tricyclic antidepressants showed deficits in attention and information processing. Dr. Tannenbaum’s findings support the recommendation, issued in the Revised Beers Criteria published in spring 2012 by the American Geriatrics Society, that all sleeping pills, first-generation antihistamines, and tricyclic antidepressants should be avoided at all costs in seniors. (See list below.)

However, “despite the known risks, it may be better for some patients to continue their medication instead of having to live with intolerable symptoms,” says Dr. Tannenbaum. “Each individual has a right to make an informed choice based on preference and a thorough understanding of the effects the medications may have on their memory and function.”

Research summary

MEDLINE and EMBASE were searched for randomized, double-blind, placebo-controlled trials of adults without underlying central nervous system disorders who underwent detailed neuropsychological testing prior to and after oral administration of drugs affecting cholinergic, histaminergic, GABAergic or opioid receptor pathways. Seventy-eight studies were identified, reporting 162 trials testing medication from the four targeted drug classes. Two investigators independently appraised study quality and extracted relevant data on the occurrence of amnestic, non-amnestic or combined cognitive deficits induced by each drug class. Only trials using validated neuropsychological tests were included. Quality of the evidence for each drug class was assessed based on consistency of results across trials and the presence of a dose-response gradient. This research was conducted in collaboration with researchers at the University of Sydney, the University of Calgary and the University of Iowa College of Public Health.

Drugs to avoid

At KurzweilAI’s request, Dr. Tannenbaum provided the following list of the most dangerous drugs shown to affect memory (generic names):

Benzodiazepines and non-benzodiazepine sedative hypnotics:

Midazolam
Triazolam
Temazepam
Oxazepam
Lorazepam
Alprazolam
Clonazepam
Diazepam
Flurazepam
Clorazepate
Zolpidem
Zopiclone
Zaleplon

Tricyclic antidepressants

Amitriptyline
Imipramine

First-generation antihistamines

Hydroxyzine
Diphenhydramine
Triprolidine
Promethazine

Intel working on 48-core chip for smartphones, tablets – Computerworld

Computerworld – Intel researchers are working on a 48-core processor for smartphones and tablets, but it could be five to 10 years before it hits the market.

“If we’re going to have this technology in five to 10 years, we could finally do things that take way too much processing power today,” said Patrick Moorhead, an analyst with Moor Insights and Strategy. “This could really open up our concept of what is a computer… The phone would be smart enough to not just be a computer but it could be my computer.”

Enric Herrero, a research scientist at Intel Labs in Barcelona, said the lab is working on finding new ways to use and manage many cores in mobile devices.

Today, some small mobile devices use multi-core chips. However, those multi-cores might be dual- or quad-core CPUs working with a few GPUs. Having a 48-core chip in a small mobile device would open up a whole new world of possibilities.

At this point, researchers are working to see how to best use so many cores for one device.

“Typically a processor with one core would do jobs one after another,” Herrero told Computerworld. “With multiple cores, they can divide the work among them.”

He explained that with many cores, someone could, for instance, be encrypting an email while also working on other power-intensive apps at the same time. It could be done today, but the operations might drag because they’d have to share resources.

Tanausu Ramirez, another Intel research scientist working on the 48-core chip, said that if someone was, for example, watching a high-definition video, a 48-core chip would be able to use different cores to decode different video frames at the same time, giving the user a more seamless video experience.
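What Herrero and Ramirez are describing is ordinary data parallelism. Here is a minimal sketch of the frame-parallel idea in Python; `decode_frame` is a hypothetical stand-in for a real codec routine, and a real decoder would also have to respect inter-frame dependencies that this toy ignores.

```python
# Sketch of frame-parallel decoding: each worker core decodes a
# different frame at the same time instead of one after another.
from multiprocessing import Pool

def decode_frame(encoded_frame: bytes) -> bytes:
    # Placeholder: a real decoder would do the heavy bitstream work here.
    return encoded_frame[::-1]

def decode_video(encoded_frames, cores=48):
    # A pool of workers spreads the frames across the available cores.
    with Pool(processes=cores) as pool:
        return pool.map(decode_frame, encoded_frames)

if __name__ == "__main__":
    frames = [bytes([i]) * 16 for i in range(10)]
    print(len(decode_video(frames, cores=4)), "frames decoded")
```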

Ramirez also said that instead of one core working at near top capacity and using a lot of energy, many cores could run in parallel on different projects and use less energy.

“The chip also can take the energy and split it up and distribute it between different applications,” he added.

Justin Rattner, Intel’s CTO, told Computerworld that a 48-core chip for small mobile devices could hit the market “much sooner” than the researchers’ 10-year prediction.

“I think the desire to move to more natural interfaces to make the interaction much more human-like is really going to drive the computational requirements,” he said. “Having large numbers of cores to generate very high performance levels is the most energy efficient way to deliver those performance levels.”

Rattner said functions such as speech recognition and augmented reality will push the need for more computational power.

“If it’s doing speech recognition or computer vision… that’s very computationally intensive,” he added. “It’s just not practical to just take sound and pictures and send it up to the cloud and expect that some server is going to perform those tasks. So a lot of that will be pushed out to the client devices.”

Rob Enderle, an analyst with the Enderle Group, said that being able to have different device functions, as well as apps, all running on their own cores would be a great advance.

Intel researchers at the company’s lab in Barcelona are using a prototype of a single chip cloud computer to figure out how best to use a many-core chip in smartphones and tablets. (Photo: Sharon Gaudin / Computerworld)

World’s largest offshore wind farm generates first power » London Array – Harnessing the power of offshore wind

DONG Energy, E.ON and Masdar today (29/10/12) announced that the first power had been produced at the London Array Offshore Wind Farm.

The 630MW scheme, located in the Thames Estuary, will be the world’s largest offshore wind farm. The development has been under construction since March 2011 and 151 of the 175 turbines have now been installed, with construction on schedule to be finished by the end of the year.

The 175 turbines will produce enough power to supply over 470,000 UK homes with electricity.
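The homes figure is consistent with back-of-envelope arithmetic; the capacity factor and per-household consumption below are illustrative assumptions, not numbers from the announcement.

```python
# Rough check on the "over 470,000 homes" claim.
capacity_mw = 630              # Phase One nameplate capacity, from the release
capacity_factor = 0.39         # assumed typical for UK offshore wind
household_kwh_per_year = 4500  # assumed average UK household consumption

annual_kwh = capacity_mw * 1000 * capacity_factor * 8760  # 8760 hours/year
print(f"~{annual_kwh / household_kwh_per_year:,.0f} homes")  # ~478,000
```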

Laura Sandys, MP for South Thanet, said: “Locally we are extremely pleased that London Array is now producing energy for homes across the South East. We are very proud that London Array is based in Thanet and that we are host to the largest offshore wind farm in the world. To have a world class wind farm maintained from Ramsgate is great for the local community and local business. We very much hope that the company will be able to realise its plans to develop Phase Two, adding an additional 240MW.”

Benj Sykes, Wind UK Country Manager at DONG Energy, said: “With its 630 MW the London Array project will be the first of the next generation of larger offshore wind farms and we are pleased to have reached first power. Being able to efficiently develop large offshore wind farms and harvest the scale advantages in both construction and operation is an important element in our continuous efforts to bring down costs of energy of offshore wind.”

Dr. Tony Cocker, CEO of E.ON UK, said: “This is not only a very important milestone for the London Array development but also a major landmark for the global renewables sector.  We firmly believe that electricity from renewable sources has a vital part to play in helping us to deliver energy in a way that is sustainable, affordable and secure and this is why we are aiming to reduce the costs of offshore wind by 40% by 2015.”

Dr. Sultan Al Jaber, CEO of Masdar, said: “The London Array offshore wind project is a landmark achievement for Masdar, its partners and the United Kingdom. We are proud to be making a significant contribution to the UK’s renewable energy portfolio and targets. The London Array development is an example of the true potential and commercial viability of renewable energy. It is also a model of the collaboration and action required to implement large-scale clean energy projects in an effort to sustainably meet our growing energy demands.”

London Array is being built around 20km off the coasts of Kent and Essex. The wind farm will be installed on a 245km2 site and will be built in two phases. Phase One will cover 90km2 and include 175 turbines with a combined capacity of 630MW. The consortium plans to complete the first phase by the end of 2012. If approved, the second phase will add enough capacity to bring the total to 870MW.

The project consortium partners have the following shareholdings: DONG Energy owns 50%, E.ON has 30% and Masdar has a 20% stake.

Preserving the self for later emulation: what brain features do we need? | KurzweilAI

(Credit: iStockphoto)

Let me propose to you four interesting statements about the future:

1. As I argue in this video, chemical brain preservation is a technology that may soon be validated to inexpensively preserve the key features of our memories and identity at our biological death.

2. If either chemical or cryogenic brain preservation can be validated to reliably store retrievable and useful individual mental information, these medical procedures should be made available in all societies as an option at biological death.

3. If computational neuroscience, microscopy, scanning, and robotics technologies continue to improve at their historical rates, preserved memories and identity may be affordably reanimated by being “uploaded” into computer simulations, beginning well before the end of this century.

4. In all societies where a significant minority (let’s say 100,000 people) have done brain preservation at biological death, significant positive social change will result in those societies today, regardless of how much information is eventually recovered from preserved brains.

These are all extraordinary claims, each requiring strong evidence. Many questions must be answered before we can believe any of them. Yet I provisionally believe all four of these statements, and that is why I co-founded the Brain Preservation Foundation in 2010 with the neuroscientist Ken Hayworth. BPF is a 501(c)(3) nonprofit, chartered to put the emerging science of brain preservation under the microscope. Check us out, and join our newsletter if you’d like to stay updated on our efforts.

I’ll occasionally review and report evidence and arguments relevant to the statements above, try to explain why I’m optimistic about these technologies, and enlist your help in pushing forward their validation or falsification as fast as feasible. If validated, I’ll be pitching to you for help in making brain preservation accessible and affordable to the world as fast as possible. To these ends, thank you for any frank and constructive feedback you can leave in the comments.

In this post, I’d like to try to provisionally answer a question relevant to the first three statements above:

To preserve the self for later emulation in a computer simulation, what brain features do we need?

We can distinguish three distinct information processing layers in the brain[1]:

  1. Electrical Activity (“Sensation, Thought, and Consciousness”)
    These brain features are stored from milliseconds to seconds, in electrical circuits.
  2. Short-term Chemical Activity (Short- and Intermediate-term Learning — “Synapse I”)
    These brain features are stored from seconds to a few days in our neural synapses (synaptome), by temporary molecular changes made to preexisting neural signaling proteins and synapses.
  3. Long-term Molecular Changes (Long-term Learning — “Nucleus and Synapse II”)
    These are stored from years to a lifetime in our neurons’ connectome, nucleus (epigenome) and synaptome, by permanent molecular changes to neural DNA, the synthesis of new neural proteins and receptors in existing synapses, and the creation of new synapses.

At present, it is a reasonable assumption that only the third layer, where long-term durable molecular changes occur, must be preserved for later memory and identity reanimation. The following overview of each of these layers should help explain this assumption.

1. Electrical Activity (“Sensation, Thought, and Consciousness”)

Our electrical brain includes short-distance ionic diffusion in and between neurons and their supporting cells (e.g., calcium wave communication in astrocytes), action potentials (how neurons send signals from their dendrites to their synapses), synaptic potentials (how signals cross the gaps between neurons), circuits (loops and networks) and synchrony (neurons that fire in unison, though they are widely separated). Electrical features operate at very fast timescales, from milliseconds to a few seconds, and are variable (not exact), volatile, and easily disrupted.

Neural synchrony — our leading model of higher perception and consciousness (credit: Daniel Senkowski et al./Trends in Neurosciences)

These features certainly feel very important to us. They include our sensations (sensory memory) and current thoughts (commonly called “short-term” memory by neuroscientists).

Recurrent loops, special electrical circuits that cycle back on themselves, hold our current thoughts (when you rehearse some information to avoid forgetting it, you are literally keeping it “in the loop”). Neural synchrony creates our conscious perceptions, and when it happens in the self-modeling areas of our brain, it gives us self-aware consciousness.

Yet electrical features are also fleeting. When you sleep, or are knocked unconscious, or are given an anesthetic, your consciousness disappears, only to be “rebooted” later, from more stable parts of your brain. Our memories aren’t even recalled with precision but are rather recreated, as volatile electrical processes, from these molecular long-term stores, in ways easily influenced by our mental state and cognitive priming (what else is on our mind). That’s why eyewitness testimony is so variable and unreliable.

The electrical features of our self are thus like the “foam” on the top of the wave of our long-term memories and personality. They make us unique for a moment, as they hold only our most immediate thinking processes[2]. Amazingly, people who undergo special surgeries that stop their heart, and some who drown in very cold water, can have no detectable EEG (electrical patterns) for more than thirty minutes, and their brains successfully reboot after they are rewarmed.

Essentially, these individuals are recovering from clinical brain death. Not only do they not have consciousness during this period, they have no unconscious thoughts. Yet because their deeper layers aren’t too disrupted, they can restart their electrical activities.

An excellent book about neural spikes, loops, and synchrony is Rhythms of the Brain, Gyorgy Buzsaki, 2006. It explains the emergent properties and integrative functions of these “highest order” electrical features of our brain. My late mentor at UCSD, Francis Crick, and his Caltech collaborator, Christof Koch, call this topic the search for the Neural Correlates of Consciousness.

It’s a great phrase. Consciousness is not a mystery we’ll never solve, but according to a number of neuroscientists it is a physical process of neural synchrony, in particular regions of your brain. These brief, rhythmic synchronizations share information between groups of neurons in distant regions of the brain by tightening up (“binding”) their interdependent sequences of action potentials.

The synchronizations are controlled by the inhibitory neurons in our brain, which use the GABA neurotransmitter. Disrupt gamma synch, as with anesthesia, and you take away consciousness. Give a drug like zolpidem, which activates GABA neurons and increases gamma synch, to patients who are in a persistent vegetative state, and you wake 60% of them up from their comas, to varying degrees!

Wikipedia doesn’t yet have a good explanation of the gamma synchrony model of consciousness, but they will in a few more years. Laura Colgin at Kavli has found two reliable gamma synch mechanisms in rat hippocampus. She speculates that slow gamma makes stored memories available to current consciousness, and fast gamma integrates sensations to create conscious perceptions. Though neuroscientists don’t yet all agree on the details, many have found neural correlates of sensations, thoughts, emotions, and consciousness in the electrical features of our brains. These features, in conjunction with the short-term chemical changes we will describe next, represent the moment-by-moment updates to our long-term memory, self, and intelligence.

2. Short-term Chemical Activity (Short- and Intermediate-term Learning — “Synapse I”)

Short-term chemical activity is the next layer down. It involves all our short- and intermediate-term learning and memory, everything beyond our sensations, current thoughts, and consciousness, but not including our long-term memories. We can call this layer “Synapse I.”

As your electrical experiences and thoughts race around the various circuits in your head, you make a number of short-term learning changes in your neural networks to capture, for the moment, what you’ve learned. These involve changes to preexisting proteins in your preexisting synapses (communication junctions), changes that last for minutes (short-term) to days (intermediate-term).

These are changes in both the mechanics of neurotransmitter release and short-term facilitation (strengthening) or depression (weakening) of synaptic effectiveness. Synapses are modified by the precise timing and frequency of electrical signals (action potentials) received by the postsynaptic neuron, a process called spike-timing dependent plasticity.
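A minimal sketch of the spike-timing rule may help; the amplitudes and time constants below are illustrative, not measured values. The synapse strengthens when the presynaptic spike precedes the postsynaptic one, and weakens when the order is reversed.

```python
# Toy spike-timing dependent plasticity (STDP) rule.
import math

A_PLUS, A_MINUS = 0.01, 0.012  # assumed learning amplitudes
TAU_MS = 20.0                  # assumed decay time constant, milliseconds

def stdp_delta_w(t_pre_ms: float, t_post_ms: float) -> float:
    dt = t_post_ms - t_pre_ms
    if dt > 0:   # pre fired before post: facilitation (strengthen)
        return A_PLUS * math.exp(-dt / TAU_MS)
    else:        # post fired before pre: depression (weaken)
        return -A_MINUS * math.exp(dt / TAU_MS)

weight = 0.5
weight += stdp_delta_w(t_pre_ms=10.0, t_post_ms=15.0)  # strengthens
weight += stdp_delta_w(t_pre_ms=30.0, t_post_ms=22.0)  # weakens
print(round(weight, 4))
```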

There are short-term changes in signaling molecules (neurotransmitters, cAMP, Ca++, CamKII, PKA, MAPK), and membrane receptors (NMDA). Phosphorylation states (chemical tags) are altered on some of these molecules, and a temporary equilibrium between kinases (enzymes that add phosphates to key molecules) and phosphatases (enzymes that take them away) is established in the synapse.

[Note: On Oct 15, 2012, Ye et al. showed in Aplysia how precise spatiotemporal signaling in the synapse involving PKA holds short-term memories in synaptic electrochemical networks, and the interaction of PKA and MAPK holds intermediate-term memories in these networks, in a process called synaptic facilitation.]

If the short- or intermediate-term learning or memory is to become long-term, communication with the cell nucleus must now occur, and new membrane proteins and synapses are then built, involving new or altered circuits in the connectome. If not, the new memory dies out[3].

Every night, when we sleep, our short- and intermediate-term brain writes important parts of its experiences to our long-term memory, building durable new synaptic connections, where this learning can now stay with us for years to life, in a process called memory consolidation. This process moves a subset of our recent learning and memories, apparently the most relevant parts, from temporary spatiotemporal signaling states to permanent new synaptic structures, anchored to the cytoskeleton of each neuron.

We can think of these new proteins, synapses, and circuits established in neural synapses and nuclei in a way that is very roughly like DNA, as they are long-term stable structures, encoded in a partly digital form, that will endure all the flux and variability of the biochemistry within each neuron, over a lifetime. It is these unique synaptic and epigenetic networks that we must preserve, scan, and upload in creating neural emulations, as we will discuss.

Long-term memory formation happens best when we are in slow wave (deep and dreamless) sleep, which we get in cycles during the night (and especially well if our sleeping room is dark and quiet) and also during a good nap (a great way to “lock in” what you’ve learned, after a demanding learning period that will naturally make you sleepy).

Neural dendrites, cell body, action potential, and synapses (credit: Gallant’s Biology Stuff)

All our neurons work in circuits, and strengthen or weaken their connections based on chemical and electrical activity, in a process called Hebbian learning. Just like your muscles, which come in two sets that oppose each other around every joint, neural circuits are both excitatory and inhibitory at many decision points in the network.

Perhaps the most important decision point is the cell body of each neuron, where the nucleus is. The electrochemical current from all the dendrites (“roots”) of a neuron flows toward its cell body, and action potentials (current waves) flow from the cell body to its synapses (“branches”), along the neuron’s single axon (“trunk”).

Glutamate is the main neurotransmitter we use to send excitatory current from a synapse to the dendrite of the next neuron in a circuit (the postsynaptic neuron). Glutamatergic synapses are thus called “positive” in sign, and they promote electrical activity throughout the brain. GABA is the main neurotransmitter we use to let inhibitory current leak out of a postsynaptic dendrite. GABAergic synapses are thus called “negative” in sign, and they depress circuits throughout the brain.

Each neuron sums the net result of the positive and negative inputs it receives from its dendrites, over milliseconds to seconds. If the current exceeds that neuron’s threshold, it sends an action potential (depolarizing electrochemical signal) to all its synapses. As the brain learns, our synapses enlarge or shrink, giving them greater or lesser excitatory or inhibitory effect, and we may grow new synapses or lose existing ones. With few exceptions, each neuron also uses just one type of neurotransmitter (e.g., glutamate or GABA), or the same small set of neurotransmitters, at all its synapses.
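That summation-and-threshold behavior is simple enough to sketch directly; the weights and threshold here are illustrative values, not biological measurements.

```python
# Each input is a (weight, sign) pair: sign +1 for excitatory (glutamatergic)
# synapses, -1 for inhibitory (GABAergic) ones. The neuron fires only if
# the summed current crosses its threshold.

def neuron_fires(inputs, threshold=1.0):
    total = sum(weight * sign for weight, sign in inputs)
    return total >= threshold

# Three excitatory synapses outweigh one inhibitory synapse here.
print(neuron_fires([(0.6, +1), (0.5, +1), (0.4, +1), (0.3, -1)]))  # True
# Stronger inhibition keeps the neuron below threshold.
print(neuron_fires([(0.6, +1), (0.5, +1), (0.9, -1)]))             # False
```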

The architecture of memory, thought, emotion, and consciousness may thus be reducible to a surprisingly simple set of algorithms, connections, weights, signaling molecules and electrical features in each neuron, working together in a massively parallel way to create computational networks that are far more complex than the individual parts.

Hippocampus and frontal lobes (credit: NIH)

In higher animals, the neurons in our hippocampi (c-shaped organs, one in each hemisphere of our brain), and the connections they make to the rest of our cerebral cortex (especially to our frontal cortex), store all kinds of episodic (experiential) and declarative (fact-based) information, all from our last few days of life.

At the same time, neurons in our cerebellum (a more primitive, “little brain” at the base of our skull) store procedural learning and memory (how to move our bodies in space).

Experiments with rats and primates tell us that each hippocampus makes perhaps tens of thousands of new neurons every day, from neural stem cells. Other than for repair after certain kinds of injury, no other part of the adult brain is able to use stem cells in detectable numbers, as far as we know.

The rest of our brain is postmitotic (unable to use cell division to maintain its structure), as neuroscientists learned in an elegant experiment in 2006. Our neurons must be maintained by our immune and repair systems, and as they die via natural aging, or kill themselves in apoptosis, memories start to die.

Hippocampal dendritic spines (credit: Fiala & Harris/J Am Med Inform Assoc)

Our hippocampal neurons have the very tough job of temporarily holding, in their uniquely dense synapses, and via their connections to the rest of the cortex, much of the new information we have learned over the last day or two, during our entire adult life.

Here is a picture of a computer reconstruction of a small section of ten columns of synapse-rich “spiny dendrites,” from the CA1 (input) region of the hippocampus. CA1 contains areas like place cells, imprinted genetically with detailed maps of 3D space.

Like the digestive cells lining our gut, and the skin cells at our fingertips, certain hippocampal neurons appear to get worn out on a regular basis by this demanding short-term memory holding function, and so some neuroscientists think new ones must regularly grow and mature to replace them.

People who have had both hippocampi surgically removed, like the memory disorder patient Henry Molaison, who had this done at the age of 27, can’t update their long-term episodic and declarative memories. H.M.’s long-term memory was mostly “frozen” at 27. He could occasionally add bits of new information to long-term memories of the same type he’d built before the surgery, and he could learn new procedural (spatial and muscle) memories in his cerebellum, but he had no cerebral knowledge that he’d added these memories.

H.M.’s amazing life suggests that if the brain preservation process damaged the hippocampus, but not the rest of our brain, we’d come back without our most recent experiences (retrograde amnesia), but our older memories and personality would still be intact. Ted Berger at USC managed to build a simple version of an artificial electronic hippocampus for mice in 2005, so there’s a good reason to believe that this part of our brain, though important, isn’t irreplaceable.

As long as you could install an artificial hippocampus in the computer emulation constructed from your scanned brain, you’d be back in business as a learning organism, with only some of your more recent memories and learning erased. This all helps us understand that what Daniel Dennett would call our center of narrative gravity, our most unique self, is our long-term memory.

The fact that only special areas of our hippocampus can add new cells during life exposes a harsh reality about our biological brains. We are all born with a very large but fixed long-term memory capacity, and this capacity gets increasingly used up, pruned and potentiated, the older we get. Anyone over 40, like myself, knows they are considerably less flexible at learning new things than they were at 20.

It’s far easier for older people to add more twigs to branches of knowledge we’ve previously built in our “tree of experience” than to form new branches. We can do it, but it gets progressively tougher and slower the older we get.

This means, if we want to be lifelong learners in a world of accelerating technological and job change, it is critical to get an early education that is as categorically complete (global, cosmopolitan, and scientific), moral (socially good, positive sum) and evidence-based as possible.

Our children need the best mental scaffolds they can get early on, or they’ll spend the rest of their lives trying to prune away harmful and untrue thoughts and beliefs acquired in their youth. Psychologists have long known that it is much easier to add increasing specificity to a neural network than it is to unlearn (depress) any branch, once it’s built. We need to be careful about what we allow into our memory palaces.

That said, children also benefit greatly from freedom, early on in life, to study what they themselves desire to learn, and to have a good degree of control over learning outcomes and style. This freedom, and appropriate rewards for effort of any kind, induce them to build intricate mental specializations in areas they are personally passionate about.

For those who want to know how to implement a 50/50 balance of broad, state-mandated learning in future-critical STEM fields, analytical thinking, and civics (the “hilt of the sword”), and a personalized program of student-directed specialized learning, creativity, and play in the other half of the time (cutting into and mastering whatever they can convince their teachers is worth studying, or the “blade of the sword”), I strongly recommend The Finland Phenomenon, 2010.

This film, and to a lesser extent Tony Wagner’s book Creating Innovators, 2012, demonstrate key elements of the future of learning for enlightened societies, in my opinion. It may take 20 years for the evidence to be incontrovertible and for this model to be implemented in the US, but you can give it to your child now, if you find it appealing.

MyCyberTwin – Virtual Assistants Will Be Useful for Many of Us by the Early 2020s

It is also liberating to realize that while our biological brains are less able to learn fundamentally new things as they age, all the digital technologies we use, technologies which will bring our emulations back at an affordable price later this century, will continue to get exponentially more powerful every year.

Most of us don’t realize this, but everyone who uses a social network, email, or any other technology to capture things they say, see, and write about is also creating a digital simulation of themselves. By 2020 we’ll all be talking to and with our best search engines in complex sentences (the conversational interface), and shortly thereafter, we’ll all be able to use simple software agents, cybertwins, which will have crude maps of our interests and personality, so they can serve us better.

Computational linguists know that if you capture what a person says for just two years, we are so repetitive about what we care about that a cybertwin could whisper into our ear the word that natural language processing algorithms predict we want when we are having a senior moment, and they’ll be right most of the time.
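A toy version of that prediction idea, assuming nothing fancier than a bigram model over a person’s past text (real systems are far more sophisticated; this just shows why repetitive speech is easy to predict):

```python
# A bigram "cybertwin": guess someone's next word from their own history.
from collections import Counter, defaultdict

def train_bigrams(corpus: str):
    counts = defaultdict(Counter)
    words = corpus.lower().split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, prev_word: str):
    followers = counts.get(prev_word.lower())
    return followers.most_common(1)[0][0] if followers else None

history = "call my sister tonight . call my sister tomorrow . email my accountant"
model = train_bigrams(history)
print(predict_next(model, "my"))  # 'sister' -- the habitual choice wins
```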

That’s how repetitive we are, and how good web search will be by 2020. As I wrote in 2005, people who don’t run cybertwins will be much less productive, so cybertwins will be very popular, even though they’ll bring lots of new social problems in their first generation.

These simulations won’t be turned off by our loved ones when we die. Our children will use them to interact with a simulation of us, and to keep the best of our thoughts, experiences and personalities accessible to them. Teaching our children and ourselves to be digital natives and digital activists is thus a very important way for us to build an ever more capable cybertwin, even as our biological self naturally slows down and simplifies (prunes away branches of knowledge and memories we once had ready access to) with advancing age.

Now we arrive at our truest self, the part we care most about preserving and sharing with our loved ones and society. It is this self that I expect will later merge with the cybertwin that many of us will leave behind, as strange as that might sound.

3. Long-term Molecular Changes (Long-term Learning – “Nucleus and Synapse II”)

Experience-based learning (credit: Graham Paterson/Children’s Hospital Boston)

The production of long-term memory, personality, and identity requires all the short-term synaptic changes above, plus permanent molecular changes in the neuron’s Nucleus (DNA and its histones, or wrapping proteins), and the permanent creation of new cellular proteins, synapses, and circuits (Synapse II).

Here’s a brief summary of our understanding of the process[4]:

Nucleus (“Genome, Transcriptome, and Epigenome”)

1. Retrograde transport and signaling from the synapse to the nucleus
2. Activation of nuclear transcription factors and induction of gene expression
3. Chromatin alteration and epigenetic changes in gene expression (gene-protein networks)

Synapse II (“Connectome and Synaptome”)

4. Synaptic capture of new gene products, local protein synthesis, and seeding of new synaptic sites
5. Permanent synaptic changes, activation of preexisting silent synapses, formation of new synapses.

We used several “-ome” words above. Let us briefly consider each. They are very roughly ordered below in terms of their likely contribution to our unique self, from least to most important:

The Genome. These are inherited genes and gene regulatory networks that control instinctual behaviors. Our genome includes the unique alleles we received from our parents. It is easy to preserve, as it is the same in all cells. With one tissue sample we can create a clone later, either physically, or far more likely, in a computer simulation. But this clone has only our inherited uniqueness. We’ll need contributions from the next four “omes” to add our life memories and learning to the emulation.

The Transcriptome. This is the set of proteins made (transcribed) by cells. While proteomics (another “ome” word) is in its infancy, scientists estimate each of our cells has the DNA to express ~20,000 basic protein types. Each type can be further modified after creation by adding or removing chemical tags like phosphate, methyl, ubiquitin, and other small molecules, so that more than a million protein subtypes may exist in a typical human body.

Fortunately, each of our ~220 cell types only uses around 5,000 of these 20,000, and perhaps less than 2,000 of the 5,000 are unique to each cell type. Neurons and glia, the cell types we are most interested in, may use just a few hundred protein types to store our higher learning and memory in the nucleus and synapses. The other proteins are there to keep all of our cells alive, which is a critical precondition to being able to store long-term memories in a special subset of neural structures.

All this suggests the proteomics of memory and identity, and of later memory and identity reconstruction from scanned brains, are not impossibly complex, but rather highly challenging, fascinating and eventually solvable problems.

The Epigenome. These are learning-based changes in gene-protein networks that happen in the nucleus of each neuron, mostly during the life of the organism. The Dutch famine of 1944 and the Överkalix study in Sweden tell us that some epigenetic changes can be inherited in humans, so we all should seek good nutrition and avoid toxin exposure, as we may pass some of that to our children in the form of compromised and undermethylated epigenomes.

But there is a lot more to the epigenome story still to be uncovered, as this 2011 article on epigenetic regulation in learning and memory in Drosophila makes clear. Our epigenome is a gene-regulatory layer that involves chemical changes, mostly methylation, to DNA and to the histone proteins that wrap and expose DNA in the cell nucleus. These changes determine how DNA, RNA, and protein are expressed in the nucleus, and they may affect how the cell body integrates incoming electrical signals and manages its synapses.

The Connectome. This is a map of our neural cell types, and how they connect. Our connectomes and much of our dendrite structure are very similar in all of us. This shared developmental structure makes it easy for us to communicate as collectives, for ideas or “memes” to jump from brain to brain. Yet with 100 billion neurons making an average of 1,000 connections to other neurons, and most of these not being developmentally controlled, we’ve got the ability to make 100 trillion connections, the large majority of which will be unique to each individual.

The Synaptome. These are key features of the ~1,000 synapses that each neuron makes to others. They are the particular long-term molecular features that determine the strength and type of each synapse, its signaling states and electrical properties, as we’ve described them above. The synaptome is the weight and type of the 100 trillion connections described above, and this information may be the most important “recording” of our unique self.
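For a rough sense of scale, here is a back-of-envelope estimate of what a raw synaptome recording might occupy; the bytes-per-synapse figure is purely an assumption for illustration.

```python
# Storage estimate for ~100 trillion synaptic weights and types.
synapses = 100e12      # ~100 trillion connections, as above
bytes_per_synapse = 8  # assumed: weight, sign/type, and compressed addressing
petabytes = synapses * bytes_per_synapse / 1e15
print(f"~{petabytes:.1f} PB")  # ~0.8 PB under these assumptions
```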

Fortunately, because memories are stored in a highly redundant, distributed, and associative manner in our synaptic connections, our synaptome is to some degree fault tolerant to cell death. Both artificial and biological neural networks experience graceful degradation (partial recall, incremental death) of higher memories as individual neurons die.

We also know the molecular code of long term memory is fault tolerant to the noise, deformations, and chaos of wet biology. The feedback loops between the electrical and gene-protein network subsystems interact somehow to stabilize long term memories in a special subset of durable molecular changes, in spite of all the other biochemistry furiously going on to keep the cell alive.
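A tiny illustration of that fault tolerance, using a made-up redundancy scheme rather than anything biological: store each bit of a “memory” in several units, let a couple of units die, and recall by majority vote.

```python
# Graceful degradation: majority voting over redundant copies survives
# the loss of a minority of storage units.
import random

random.seed(1)
memory = [1, 0, 1, 1, 0, 1, 0, 0]
copies = [memory[:] for _ in range(7)]  # redundant storage in 7 "neurons"

for dead in random.sample(range(7), 2):  # two units die and become noise
    copies[dead] = [random.randint(0, 1) for _ in memory]

recalled = [round(sum(c[i] for c in copies) / 7) for i in range(len(memory))]
print(recalled == memory)  # True: the memory survives partial cell death
```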

I am sure the distinguished futurist and technologist Ray Kurzweil will have a lot more to say about these topics in his next book, How to Create a Mind, which comes out next month. You can preorder a copy here.

To understand how these subsystems interact in a living organism, let’s start in as simple a model organism as we can find, single-celled animals, organisms that don’t even have nervous systems as we know them. Wetware, Dennis Bray, 2009 is a great tour of these animals.

Single-celled eukaryotes like Stentor, Paramecium, and Amoeba do complex information processing, and hold short-term memories in their chemical networks.

In 2008, we learned that Amoeba remember and anticipate cold shocks, for example. These networks include the cell’s genome, epigenome, cellular proteins, cytoskeleton, receptors, and cell membrane. They are true computational networks, with both neural-network like and Boolean logic properties.

Genes and proteins integrate signals from other genes and proteins, and selectively switch and transmit signals, just like neurons do. The genes in each cell, via RNA, determine which proteins are made, when and where.

Most protein changes are part of the short term computation being done in a cell, but a special few will lead to lasting changes in the epigenome and the cytoskeleton and receptors in and on the surface of the cell. These long-term changes are the ones we care most about, as they store the cell’s unique memory and identity.

Single-celled animal (credit: Anthony Horth)

Until computational neuroscience[5] can predictively model how the gene-protein networks in a Paramecium allow these animals to evaluate options, assign priorities, regulate their moment-by-moment computational attention, continually vary strategies for chasing prey and avoiding toxins, and chemically store their representations, habituations, and memories in an intracellular environment, all without a proper nervous system, the field will be missing its Rosetta Stone.

Electrical waves exist in these single-celled animals, but with the exception of mitochondrial energy production, they are of the most primitive, diffusion-based kind. All the considerable intelligence in these animals is coursing, moment by moment, through their gene-protein networks.

In multicellular organisms with neurons, the cytoskeleton and receptors have specialized into the synaptome, the pre- and post-synaptic molecular modification of our synapses, including phosphorylation of switching proteins like calmodulin kinase II. While there are over 50 known neuromodulators and 14 neurotransmitters in our brain, only six neurotransmitters have been regularly implicated in long term learning and memory in our synaptome.

It is these and their partner molecules in the synapse and nucleus that are probably most important to understand and model to crack the long-term memory code.

C. elegans connectome (credit: OpenWorm.org)

Fortunately, even with our very partial molecular and functional maps today, we have still managed to work out some basics of neural network interaction in very small neural ensembles, like the stomatogastric nervous system (~30 neurons) in lobsters.

We’ve even created early maps of very small whole-animal neural systems, like the nematode worm C. elegans, with its 302 neurons and ~6,000 synapses. We mapped the C. elegans connectome in 1986, but we still know just pieces of its synaptome and transcriptome, and even less about its epigenome.

Fabio Piano et al. give us an overview of the state of C. elegans gene-protein network knowledge in 2006. Note their subtitle is “A Beginning.” Jeff Kaufman has recently summarized the very early status today of whole brain emulation in nematodes.

David Dalrymple in Ed Boyden’s lab at MIT is working on C. elegans simulation, and he is optimistic about new tools in neural state recording, optogenetics, and viral tagging for characterizing each neuron’s function. As Derya Unutmaz reports in a blog post that sounds like science fiction, Sharad Ramanathan et al. at Harvard can now take control of C. elegans locomotion by firing precisely targeted lasers at individual neurons in an optogenetically modified worm’s brain, controlling its chemotactic behavior and convincing it that food is nearby.

A small international collaboration exists to emulate the C. elegans nervous system, called OpenWorm. There’s even a Whole (Human) Brain Emulation Roadmap, started in 2007 by Anders Sandberg and Nick Bostrom at Oxford, and a few other visionary folks in biology, computer science, and philosophy. These important projects are quite early and extremely underfunded at present. The biggest problem today is getting more funded people working on them.

To emulate how the neural networks of C. elegans, Drosophila, Aplysia, Danio, Mus, and other model organisms actually work, and to begin to extract even crude and partial memories from their scanned brains, we’ll need a better understanding of behavioral plasticity, and the way the synapse, the nucleus, and neuromodulators bias the pattern generators in neural circuits into a particular set of behavioral patterns.

This may require not only better neural circuit maps, but better maps of several still partly-hidden intracellular systems involved in long-term memory formation: gene regulatory networks, the transcriptome, and the epigenome[6]. There are gene-protein networks controlling human neural development, neural evolution, and our long-term learning and memory. A special few of these regulatory networks, their proteins, and the epigenomic changes these networks store during a lifetime of human learning may be as important as the synapse, if not more, in determining how our brain encodes and stores useful information about the world.

A great textbook on gene regulatory networks is The Regulatory Genome: Gene Regulatory Networks in Development and Evolution, Eric Davidson, 2006. It will amaze you how much Davidson’s group has learned about these networks, primarily by studying the evolutionary development of one simple organism, the sea urchin, over several decades.

Last month, Isabelle Peter and others in Davidson’s group at Caltech published the first highly predictive model of how these networks control all the steps in sea urchin embryo development over the first 30 hours of its life. 50 genes are involved, and their regulatory interactions can be fully described in Boolean logic.

Now they want to model all of development, and some of the networks controlling its variational processes. Consider the magnitude of their achievement: Davidson et al. have reduced an incredibly complex biochemical process down to a far simpler algorithm. This is what must happen in long-term memory, if we are to use scanned brains to abstract the key subsets of molecular structures that reliably encode it in our neurons.
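To see what “fully described in Boolean logic” means in practice, here is a minimal sketch of a Boolean gene regulatory network; the three toy rules are invented for illustration and are not real sea urchin genes.

```python
# Each gene is ON or OFF, and its next state is a Boolean function
# of its regulators -- the formalism behind the Davidson lab's model.

def step(state):
    return {
        "signal": state["signal"],                       # external input, held constant
        "geneA": state["signal"],                        # activated by the signal
        "geneB": state["geneA"] and not state["geneC"],  # activated by A, repressed by C
        "geneC": state["geneB"],                         # activated by B (feedback loop)
    }

state = {"signal": True, "geneA": False, "geneB": False, "geneC": False}
for t in range(5):
    print(t, state)
    state = step(state)  # the repressive feedback makes geneB oscillate
```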

Protein microarrays — an exciting new tool (credit: Eye-Research.org)

Neural proteomics and the transcriptome are entering an exciting new phase as we use DNA and RNA microarrays, and now protein microarrays to catalog neural transcriptomes and compare them to other types of human cells, and to other primate and mammal neurons.

In August, Genevieve Konopka and colleagues published an exciting paper comparing human, chimpanzee, and rhesus monkey neural transcriptomes. We’re finding genes and proteins unique to particular areas in human brains, especially our frontal lobes. We’re building our first maps of the critical differences in the gene and protein regulatory networks that allowed us to wake up, make tools, and walk out of Africa less than two million years ago.

Epigenome (methylated DNA and modified histones) cartoon (credit: RoadmapEpigenomics.org)

We recently learned that what was long called “Junk” DNA, the 98% of each cell’s non-exonic DNA (DNA that doesn’t code directly for proteins), participates at various levels in gene regulatory networks, and through epigenomics these networks can change to some degree over the life of the cell. We’re learning now to map gene-protein interactions in these networks, including epigenomic changes, using tools like chromatin immunoprecipitation sequencing (ChIP-seq).

Unfortunately, this work is also seriously underfunded. We’ve known about the importance of the epigenome for over a decade. Epigenomic changes can be inherited (watch what you do with your body, as your kids will inherit a record of some of your bad or good life habits in their epigenome!), and thus record unique learning in each cell over its lifetime, in ways we are still uncovering.

The NIH started a Roadmap Epigenomics Project for mapping the human epigenome in 2008, but the funding is a pittance, roughly $40 million a year. There is also a global collaborative research database, ENCODE, for sharing what is presently known about all the functional elements in the human genome. We give it roughly $20M/year, barely life support.

There are also various Human Proteome Projects under way, but no one seems to be funding any of these seriously, either. None of the politicians or key philanthropists who could make the Human Proteome and Epigenome into national research priorities have proposed any big initiatives, as far as I know. Even our science documentaries don’t adequately convey the promise of these fields. The scientific community is tooling along as best it can, in spite of the fact that the public still hasn’t grasped how much better medicine would be in ten years if we were spending a whole lot more money on this right now.

Recall by contrast the Human Genome Project, which began with fanfare in 1990 and was completed in rough draft in 2000, for $3 billion, a price gladly paid by the U.S. and four other motivated nations. The Human Genome Project was, to put it in proper perspective, our planet’s Moon Shot of the 1990s, our species’ latest great leap into “inner space.”

As those who’ve read my Race to Inner Space post know, I think understanding the machinery of life and intelligence, and nanotechnology in general, is a destination far, far more valuable to us than outer and human-scale (as opposed to cell- and molecule-scale) space. We need an international Human Proteome and Epigenome Project race. With good funding and leadership, we might nail our first good maps of the neural gene-protein interaction layer in a decade. With business-as-usual, it will likely take much longer.

As we learn the languages of gene regulatory networks, the transcriptome, and the epigenome in coming years, we should learn how to influence these networks in many powerful ways. Do you think the trillion-dollar global pharmaceutical industry is big now? Wait for the therapeutics that may start to arrive in the late 2020s, as we begin to learn how to intervene in these networks.

I think it is only when we have good maps of these gene-protein networks that we can finally expect medical advances like better learning and memory formation, elimination of a vast range of diseases including cancer and Alzheimer’s, immune system boosting, aging reduction (epigenomics repair), and perhaps even the uncovering of genetically latent skills like tissue regeneration and hibernation.

We are not talking about gene modification (inserting new genes in the germline, or in an adult), but rather about improving dysfunctional gene network regulation, and learning how to assay and minimize important parts of the network dysregulation that goes wrong in each of us as we get older and get various diseases.

Ken Hayworth

There’s a nice analogy here, pointed out by my Brain Preservation Foundation co-founder, Ken Hayworth. The Human Genome Project gave the world affordable gene sequencing in the mid-2000s, and ten years later, we are beginning to see the major fruits: the uncovering of the previously hidden worlds of gene regulatory networks, the transcriptome, and the epigenome.

Likewise, the Human Connectome Project and the still-unfunded Human Proteome and Epigenome Projects could get us affordable neural circuit tracing and functional gene regulatory network modeling in the late 2010s.

Just as the Human Genome Project showed us we had a lot fewer genes than we thought (~21,000 rather than 100,000), the Human Epigenome Project may tell us that our gene regulatory networks are simpler than we currently think, and that of the ~5,000 proteins in a typical cell, there are just a handful that matter to our long-term self.

With luck, the remaining hidden layers of the neural transcriptome and epigenome will be functionally understood in the late 2020s. In that exciting time, our ability to understand memory and learning, to read memories from the scanned brains of model organisms, and to build biologically-inspired computer models, will all be greatly enhanced.

So to answer our original question, we need to find out if both chemical preservation and cryopreservation will preserve the connectome, the synaptome, and any long-term memory-related changes in the epigenome in a living brain.

Our Brain Preservation Technology Prize, which focuses on the connectome and many but not all features of the synaptome, is an important start down this road. As we understand better what molecular features in the synaptome and epigenome need to be preserved to capture and later retrieve memories, we’ll also need to find out if either chemical or cryopreservation, or ideally both, will reliably preserve those structures at the end of our biological lives, and whether it will be possible for future scanning algorithms to repair any damage done by the preservation process.

It is too early to answer such questions today, but it is encouraging to remember that long-term memory is a very redundant, resilient and distributed system. Extensive neural destruction can occur in brains via Alzheimer’s, stroke, and other diseases before our memories are substantially erased and cognitive reserve is no longer available.

Sixty years of histology practice tells us that good perfusion of special chemical fixatives such as formaldehyde and glutaraldehyde at death will immediately preserve everything we can see by electron microscopy in neurons.

A great book on how this works is John Kiernan’s Histological and Histochemical Methods: Theory and Practice, 4th Ed., 2008. Kiernan has been publishing since 1964, and is a leader in the theory and practice of chemical fixation. There are even a few published fixation methods for whole mice brains.

Here’s a 2005 paper by Kenneth Eichenbaum et al. demonstrating a whole brain fixation technique that claims “complete preservation of cellular ultrastructure,” “artifact-free brain fixation” and “no signs of cellular necrosis” in an entire mouse brain.

Presumably these methods also protect DNA methylation and histone modification in the epigenome, the phosphorylation of dendritic proteins like CamKII, the anchoring of AMPA receptors in the synapse, and any other elements of long-term memory formation. Presumably these molecules are protected today for years just by aldehyde fixation, if kept at low temperature (4°C).

Companies like Biomatrica have even developed ways to store human and bacterial DNA and RNA at room temperature for years. Long-term storage of whole brain connectomes, synaptomes and epigenomes at room temperature, an ideal outcome for simplicity and affordability, may work today via additional chemical fixation steps like osmium tetroxide fixation, which crosslinks fats and cell membranes, and plastination, which draws all the water out of a preserved brain and replaces it with resin.

But all this remains to be proven. If you know of experts who have done work in this area who would be willing to help BPF write position papers on these topics, and who can envision research projects that will answer these questions more definitively, please let me know, in the comments or by email at johnsmart{at}gmail{dot}com. Thanks.

Footnotes:

1. There is a much older layer of unique learning in each of us that is also important: the intelligent behaviors that gene networks have recorded over evolutionary time as instinctual programs, and the unique assortment of gene variants we each received at birth.

Such networks determine our inherited neural programs: instincts and behaviors that are executed mostly unthinkingly and robustly, and during which other forms of learning, like short-term learning, often do not even occur. To preserve this layer, we just need a DNA sample of the preserved person, and that particular uniqueness can be incorporated into any future emulation, assuming future computers are up to the task.

2. Some scientists working on brain emulation, like BPF Advisor Randal Koene, suspect that measuring and modeling the brain’s electrical processes, a topic called Computational Neurophysiology, will give us powerful new insights into artificial intelligence. There are new tools emerging for in situ functional recording of electrical features of the neuron.

These may be critical for establishing the “reference class” of normal electrical responses (the class of electrical representations of information) for each type of neuron and neural architecture. But if the model I’ve presented here is correct, we won’t need to record any electrical features of individual brains in order to successfully reanimate them later. We’ll see.

3. In Aplysia (a sea slug), serotonin (5-HT) released onto the sensory neuron binds to its receptors and activates adenylyl cyclase (AC) inside the cell to make the second messenger cAMP, causing short-term facilitation (STF): a strengthening of the sensory-to-motor neuron connection. The sensory neuron releases more of the excitatory neurotransmitter glutamate onto its follower motor cells, and Aplysia pulls away harder from a shock.

The sensory neuron is also sensitized: K+ channels are depressed, more Ca++ enters the presynaptic terminal, and the action potential spike broadens. Kinases and phosphatases (enzymes that add and remove phosphate groups), including cAMP-dependent protein kinase (PKA), PKC, and CamKII, control the duration and strength of these changes. In facilitation, the spike broadens temporarily, as both pre- and postsynaptic Ca++ and CamKII make molecular changes that temporarily strengthen the electrical signal across the synapse.

In short-term depression (STD), the same machinery temporarily weakens the signal. If water is gently shot at Aplysia’s gills ten times in a row, it temporarily learns not to withdraw them, via synaptic depression of its motor circuits. This short-term memory lasts about ten minutes, and involves a short-term reduction in the number of glutamate vesicles docked at presynaptic release sites in sensory neurons (undocked vesicles can’t be immediately used).

Repeat this training four times and the slug turns this into an intermediate-term memory, with chemical and electrical changes in the synapse that now last for three weeks. Again, all of this involves changes only to preexisting proteins and synaptic connections in neurons.
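
These dynamics are simple enough to caricature in code. Below is a minimal Python sketch of the facilitation/depression picture above; the single-scalar “strength,” the exponential recovery, and every constant are illustrative assumptions, not measured values:

```python
import math

# Toy model of Aplysia-style short-term plasticity: synaptic "strength"
# is a single scalar that stimuli push away from baseline and that
# relaxes back with a time constant. All numbers are illustrative.

BASELINE = 1.0

def recover(strength, minutes, tau):
    """Relax strength back toward baseline with time constant tau (minutes)."""
    return BASELINE + (strength - BASELINE) * math.exp(-minutes / tau)

# Short-term depression: ten gentle water jets deplete the pool of
# docked glutamate vesicles, weakening the gill-withdrawal reflex.
strength = BASELINE
for _ in range(10):
    strength *= 0.9  # each jet leaves fewer docked vesicles for the next

print(f"after habituation: {strength:.2f}")                      # ~0.35
print(f"10 minutes later:  {recover(strength, 10, tau=5):.2f}")  # ~0.91, mostly recovered

# Intermediate-term memory: repeating the training session is modeled
# here, crudely, as the same depression with a recovery time constant
# of weeks rather than minutes.
print(f"1 day later (ITM): {recover(strength, 60 * 24, tau=60 * 24 * 21):.2f}")  # still depressed
```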


4. In rat and human hippocampus, the primary excitatory neurotransmitter is glutamate. Its binding causes Ca++ influx through NMDA receptors at postsynaptic membranes, and activation of CamKII, PKC, and MAPK. Lasting synaptic changes begin with early LTP, which includes increased insertion of AMPA receptors into the membrane and phosphorylation of proteins that change the channels’ properties.

These receptors are anchored to the neural cytoskeleton, so they have reliable long-term effects. Late LTP involves the recruitment of pre- and postsynaptic molecules to create new synaptic sites. A few key gene-regulatory networks are involved, with transcriptional and translational control at both the nucleus and the synapse, via control molecules including BDNF, mTOR, CREB, and CPEB.

We’ve recently found a memory-encoding master control gene, Npas4, which encodes a transcription factor (a protein that controls the copying of other genes into messenger RNA) acting in hippocampal neurons to encode episodic memory. When Npas4 is knocked out in mice, they can’t learn. We’ve also found RNA-binding proteins, like Orb2, that bind messenger RNAs involved in long-term memory.

A great and reasonably current text on the molecular basis of memory and learning is Mechanisms of Memory, David Sweatt, 2009. We’re still figuring out the epigenomic regulation that occurs in long-term learning and memory, so you’ll need to go to the journals for most of that story, like this 2011 PLoS Biology paper on epigenetic regulation of learning and memory in Drosophila.

The full size of the memory puzzle is becoming clearer every day. Now we just need to fund the work to complete it. We could put this knowledge to all kinds of good uses today, if we had it. Here’s a cartoon of long-term memory formation in both Aplysia and rat hippocampus:

longtermmemoryformation

Long-term memory formation in Aplysia and rat hippocampus, from Learning and Memory, John Byrne (Ed.), 2008 (Vol. 4, David Sweatt, p. 14)

5. Computational Neuroscience seeks to model brain function at multiple spatiotemporal scales. The brain uses a vast range of different schemes for representing and manipulating information, and it passes some of this information from one system to another all the time.

Consider the way neurons integrate signals from the receptors at their dendrites; the timing and shape of their action potentials; the way synapses interact with the postsynaptic dendrites of other neurons; and how neurons encode and store associative memory, specialize for perceiving and storing certain types of information (edge detection, grandmother cells), do inference and other calculations, work in functional subunits like cortical columns, and organize receptive fields. It all seems formidably complex, but useful simplifications exist, as we’ve described above.
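
One of the most common simplifications is the leaky integrate-and-fire neuron, which collapses all of a cell’s channel dynamics into a single leaky voltage and a spike threshold. Here is a minimal sketch in Python; the parameters are generic textbook values, not tied to any particular cell type:

```python
import numpy as np

# Minimal leaky integrate-and-fire (LIF) neuron: membrane voltage
# integrates input current, leaks toward rest, and emits a spike
# when it crosses threshold. Parameters are generic textbook values.
dt       = 0.1    # ms, Euler integration step
tau_m    = 10.0   # ms, membrane time constant
v_rest   = -65.0  # mV, resting potential
v_thresh = -50.0  # mV, spike threshold
v_reset  = -70.0  # mV, post-spike reset
r_m      = 10.0   # megaohms, membrane resistance

v = v_rest
spike_times = []
i_input = 1.8  # nA, constant injected current

for t in np.arange(0, 100, dt):  # 100 ms of simulated time
    # dV/dt = (-(V - V_rest) + R * I) / tau_m
    v += dt * (-(v - v_rest) + r_m * i_input) / tau_m
    if v >= v_thresh:
        spike_times.append(t)
        v = v_reset

print(f"{len(spike_times)} spikes in 100 ms -> {len(spike_times) * 10} Hz")
```

Reduced models like this trade biophysical detail for tractability; whether they retain the features that matter for memory is exactly the question at issue in this essay.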

6. Most folks in the neural emulation community don’t talk much about modeling gene regulatory networks, or the epigenome and its interaction with the synaptome, and I think that’s their loss. Some focus only on what is easiest to see, like electrical features, and assume that might be enough to get a predictive model.

But I think that’s like looking for your keys under the streetlight when they are in the shadows. If spikes, loops, and synchrony are a network layer that has grown on top of cell morphology and gene-protein networks, the way single-celled organisms eventually gave rise to animals with neurons, we may learn surprisingly little by measuring and modeling electrical features alone.

Attempting to do so may be like trying to infer the structure of the hidden layers of a very large neural network [genome, epigenome, connectome, synaptome, and electrical features] by analyzing just its input/output layer, the electrical features. We need all the hidden layers if we expect to have enough computational complexity to predictively characterize learning, memory, and behavior.
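
A toy example makes the underdetermination concrete. In the sketch below (linear layers are an illustrative simplification; real networks are nonlinear), two networks with completely different hidden weights produce exactly the same input/output behavior, so no amount of input/output data can distinguish them:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two-layer linear network: output = W2 @ (W1 @ x).
W1 = rng.normal(size=(5, 3))  # input -> hidden weights
W2 = rng.normal(size=(2, 5))  # hidden -> output weights

# Insert any invertible mixing matrix M and its inverse between the
# layers: every hidden weight changes, the input/output map does not.
M = rng.normal(size=(5, 5))   # invertible with probability 1
W1_alt = M @ W1
W2_alt = W2 @ np.linalg.inv(M)

x = rng.normal(size=(3,))
print(np.allclose(W2 @ (W1 @ x), W2_alt @ (W1_alt @ x)))  # True

# The two networks agree on every possible input, yet their hidden
# layers are completely different: the hidden structure must be
# observed directly, not inferred from input/output behavior.
```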

Boeing Non-kinetic Missile Records 1st Operational Test Flight

CHAMP high-powered microwaves degrade or destroy electronic targets without collateral damage


HILL AIR FORCE BASE, Utah, Oct. 22, 2012 — A recent weapons flight test in the Utah desert may change future warfare: the missile successfully defeated electronic targets with little to no collateral damage.

Boeing [NYSE: BA] and the U.S. Air Force Research Laboratory (AFRL) Directed Energy Directorate, Kirtland Air Force Base, N.M., successfully tested the Counter-electronics High-powered Microwave Advanced Missile Project (CHAMP) during a flight over the Utah Test and Training Range that was monitored from Hill Air Force Base.

CHAMP, which renders electronic targets useless, is a non-kinetic alternative to traditional explosive weapons that use the energy of motion to defeat a target.

During the test, the CHAMP missile navigated a pre-programmed flight plan and emitted bursts of high-powered energy, effectively knocking out the target’s data and electronic subsystems. CHAMP allows for selective high-frequency radio wave strikes against numerous targets during a single mission.

“This technology marks a new era in modern-day warfare,” said Keith Coleman, CHAMP program manager for Boeing Phantom Works. “In the near future, this technology may be used to render an enemy’s electronic and data systems useless even before the first troops or aircraft arrive.”

CHAMP is a multiyear, joint capability technology demonstration that includes ground and flight tests.

A unit of The Boeing Company, Boeing Defense, Space & Security is one of the world’s largest defense, space and security businesses specializing in innovative and capabilities-driven customer solutions, and the world’s largest and most versatile manufacturer of military aircraft. Headquartered in St. Louis, Boeing Defense, Space & Security is a $32 billion business with 61,000 employees worldwide. Follow us on Twitter: @BoeingDefense.

# # #

A related image is available at boeing.mediaroom.com.
A video feature on CHAMP is available at www.boeing.com/bds.

Contact:

Randy Jackson
Phantom Works
Office: 314-232-7906
Mobile: 314-435-7588
randy.jackson@boeing.com

Deborah VanNierop
Phantom Works
Office: 314-232-1624
Mobile: 210-454-2656
deborah.a.vannierop@boeing.com