Computational Neuroscience: Theoretical Insights into Brain Function (Progress in Brain Research)

Furthermore, depending on the state of the cell, the same neuron can compute several quite different functions of its inputs. The "state" Adams referred to can be regulated both by chemical modulators impinging on the cell and by the cell's past history. To add another catalog of reasons why the brain-as-digital-computer metaphor breaks down, Adams described three types of electrical activity that distinguish nerve cells and their connections from passive wires. One type of activity, synaptic activity, occurs at the connections between neurons.

A second type, called the action potential, represents a discrete logical pulse that travels in a kind of chain reaction along a linear path down the cell's axon (see Box 9). The third type of activity, subthreshold activity, provides the links between the other two.

Each type of electrical activity is caused by "special protein molecules, called ion channels, scattered throughout the membrane of the nerve cells," said Adams. These molecules act as a sort of tiny molecular faucet, he explained, which, when turned on, "allows a stream of ions to enter or leave the cell" (Figure 9). The molecular faucet causing the action potential is a sodium channel.


"When this channel is open, sodium streams into the cell," making its voltage positive. This voltage change causes sodium faucets further down the line to open in turn, leading to a positive voltage pulse that travels along the neuron's axon to the synapses that represent that cell's connections to other neurons in the brain.

This view of "the action potential," he said, "was established 30 years ago and is the best-founded theory in neurobiology." The other types of activity, synaptic and subthreshold, "are more complex and less well understood," he said, but it is clear that they, too, involve ion channels. These ion channels are of two types, which either allow calcium to stream into the cell or allow potassium to leave it. Adams has explored the mechanisms in both types of channel.

Calcium entering a neuron can trigger a variety of important effects: causing chemical transmitters to be released at the synapses, triggering potassium channels to open that initiate other subthreshold activity, or producing the long-term changes at synapses that underlie memory and learning. The subcellular processes that govern whether, and how rapidly, a neuron will fire occur on a time scale of milliseconds but can be visualized using a scanning laser microscope. One of these calcium-triggered potassium channels quickly terminates the sodium inrush that fuels the action potential.

The other responds much more slowly, making it more difficult for the cell to fire spikes in quick succession. Yet other types of potassium channels are triggered to open not by calcium, but rather in response to subthreshold voltage changes. Called "M" and "D," these channels can either delay or temporarily prevent firing.

Where does the brain's network of neurons come from? Is it specified by DNA, or does it grow into a unique, albeit human, pattern that depends on each person's early experiences?

These alternative ways of framing the question echo a debate, often labelled nature vs. nurture, that has raged throughout the philosophical history of science since early in the 19th century. There is an inherent limit, however, to how much of the brain's wiring the DNA can be responsible for. Since there are many more synaptic connections—"choice points"—in a human brain network during its early development than there are genes on human chromosomes, at the very least genes for wiring must specify not synapse-by-synapse connections, but larger patterns. The evolution of the nervous system over millions of years began with but a single cell sensing phenomena near its edge, and progressed to the major step that proved to be the pivotal event leading to the modern human brain: the development of the cerebral cortex.

Koch described the cortex as a "highly convoluted, 2-millimeter-thick sheet of neurons." If unfolded, it would be roughly the size of a medium pizza.


A cross section of this sheet reveals up to six layers, in each of which certain types of neurons predominate. Koch also noted the enormous density of neurons beneath each square millimeter of cortical surface. Although there are many different types of neurons, they all function similarly. And while certain areas of the brain and certain networks of neurons have been identified with particular functions, it is the basic activity of each of the brain's 10 billion neurons that constitutes the working brain.


Neurons either fire or they do not, and an electrical circuit is thereby completed or interrupted. Thus, how a single neuron works constitutes the basic foundation of neuroscience. The cell body, or soma, seems to be the ultimate destination of the brain's electrical signals, although the signals first arrive at the thousands of dendrites that branch off from each cell's soma.

The nerve cell sends its signals through a different branch, called an axon. Only one main axon emerges from each cell body, although it may branch many times into collaterals to reach all of the cells to which it sends its electrical signal.

Electricity throughout the nervous system flows in only one direction: A small electric current begins at the soma and travels down the axon, branching without any loss of signal as the axon branches. The axon that is carrying the electric charge can run directly to a target cell's soma, but most often it is the dendrites branching from the target cell that meet the transmitting axon and pick up its signal.

These dendrites then communicate the signal down to their own soma, and a computation inside the target cell takes place.


Only when the sum of the inputs from the hundreds or thousands of the target cell's dendrites achieves a threshold inherent to that particular cell will the cell fire. These inputs, or spikes, are bursts of current that vary only in their pulse frequency, not their amplitude. Each time a given nerve cell fires, its signal is transmitted down its axon to the same array of cells. Every one of the roughly 10¹⁵ connections in the nervous system is known as a synapse, whether joining axon to dendrite or axon to cell body. One final generalization about the brain's electricity concerns the two ways a cell's signal can vary. First, it can emit a more frequent pulse.

Second, when the action potential reaches the synaptic connections that the axon makes with other cells, it can either excite the target cell's electrical activity or inhibit it. Every cell has its own inherent threshold, which must be achieved in order for the cell to spike and send its message to all of the other cells to which it is connected. Whether that threshold is reached depends on the cumulative summation of all of the electrical signals—both excitatory and inhibitory—that a cell receives in a finite amount of time.
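The threshold rule just described can be sketched in a few lines of code. This is an illustrative toy, not a claim about any particular neuron: the numbers are invented, with excitatory inputs positive and inhibitory inputs negative.

```python
# Toy model of the threshold rule: a cell fires only if the summed
# excitatory (positive) and inhibitory (negative) inputs arriving
# within its integration window reach the cell's inherent threshold.

def cell_fires(inputs, threshold):
    """inputs: signed input strengths (arbitrary units)."""
    return sum(inputs) >= threshold

print(cell_fires([2.0, 3.0, 4.0], threshold=8.0))        # True: 9.0 >= 8.0
print(cell_fires([2.0, 3.0, 4.0, -2.5], threshold=8.0))  # False: inhibition vetoes
```

Note how a single inhibitory input can hold an otherwise sufficient set of excitatory inputs below threshold, exactly as the passage describes.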

The fluid surrounding a neuron resembles dilute seawater, with an abundance of sodium and chloride ions and dashes of calcium and magnesium. The cytoplasm inside the cell is rich in potassium ions and many charged organic molecules. The neuron membrane is a selective barrier separating these two milieus. In the resting state only potassium can trickle through, and as it escapes it leaves the cytoplasm with a net negative charge.

The potential energy represented by this difference in polarity for each cell is referred to as its resting potential and can be measured at about a tenth of a volt. Twenty such cells match the electrical energy in a size-D flashlight battery. On closer inspection, the cell membrane actually consists of myriad channels, each of which has a molecular structure configured to act as a gate for particular ions. When the nerve is in its resting state, most of these gates are closed.
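The roughly tenth-of-a-volt figure can be recovered from the potassium gradient with the Nernst equation, a standard electrochemistry result not named in the text; the ion concentrations below are textbook-typical assumptions, not values from this article.

```python
import math

def nernst_mV(c_out, c_in, z=1, T=310.0):
    """Equilibrium (Nernst) potential in millivolts for an ion of
    charge z at body temperature T (kelvin), given outside/inside
    concentrations in mM."""
    R, F = 8.314, 96485.0  # gas constant (J/mol/K), Faraday constant (C/mol)
    return 1000.0 * (R * T) / (z * F) * math.log(c_out / c_in)

# Assumed textbook-typical potassium concentrations: ~5 mM outside, ~140 mM inside.
print(round(nernst_mV(c_out=5.0, c_in=140.0)))  # -89 (mV), about a tenth of a volt
```

The negative sign matches the text: as potassium escapes, the cytoplasm is left negative relative to the outside.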

The nerve is triggered to fire when a critical number of its excitatory synapses receive neurotransmitter signals that tip the balance of the cell's interior charge, causing the gates on the sodium channels to open. Sodium ions near these gates rush in for simple chemical reasons: the single positive charge each ion carries, left by its missing outer-shell electron, is attracted to the negative charge inside the cell.

This event always begins near the cell body.


However, as soon as the inrushing sodium ions make the polarity more positive in that local region, nearby sodium gates further down the axon are likewise triggered to open. This process of depolarization, which constitutes the electrical action potential, is self-perpetuating. No matter how long the journey down the axon—and in some nerve cells it can be more than a meter—the signal strength is maintained. Even when a damaged region of the cell interrupts the chemical chain reaction, the signal resumes its full power as it continues down the axon beyond the compromised region.

The duration of the event is only about a millisecond. The foregoing sketch of the chemistry explains the excitatory phase and why cells do fire. Conversely, many synapses are inhibitory: instead of decreasing the cell's negative polarity, they actually increase it, in part by opening chloride gates. This explains, in part, why cells do not fire. This picture of how neurons fire also explains when a given cell will begin the action-potential chain reaction: only when its polarity changes sufficiently to open enough of the right kind of gates to depolarize it further will the process continue.

The necessary threshold is achieved by a summation of impulses, which occurs for either or both of two reasons: a single input arriving repeatedly in quick succession, or many inputs arriving at once. The result is the same in either case. Once begun, this sequence continues throughout the axon and all of its branches, and thereby transforms the receiving nerve cell into a transmitting one, whose inherent electrical signal continues to all of the other cells connected to it.
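The two routes to threshold, rapid repetition and simultaneous arrival, can be sketched with a leaky-integrator toy model. All constants here (resting potential, threshold, leak factor, input sizes) are invented for illustration.

```python
# Leaky-integrator sketch of summation toward threshold. Between inputs
# the membrane potential decays back toward rest; inputs that arrive
# quickly enough pile up and cross threshold.

def integrate(events, v_rest=-70.0, v_thresh=-55.0, leak=0.9):
    """events: total input (mV) arriving at each time step."""
    v = v_rest
    for t, inp in enumerate(events):
        v = v_rest + (v - v_rest) * leak + inp  # decay toward rest, add input
        if v >= v_thresh:
            return t  # time step at which the cell fires
    return None  # threshold never reached

print(integrate([4.0] * 10))          # 4: rapid repetition sums to threshold
print(integrate([4.0, 0, 0, 0] * 3))  # None: the same inputs, spread out, decay away
```

The same total input either does or does not fire the cell depending purely on its timing, which is the point of the summation rule above.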

Thus, from one neuron to another, a signal is transmitted and the path of the electricity defines a circuit. Adams and his colleagues have shown that many of these calcium and potassium channels are regulated by "neuromodulators," chemicals released by active synapses. Working energetically from the bottom up, "the cellular or molecular neurobiologist may appear to be preoccupied by irrelevant details" that shed no light on high-level functions, Adams conceded.

"Nevertheless," he asserted, "it is clear that unless such details are just right, disaster will ensue." Adams concluded by giving "our current view of the brain" as seen from the perspective of an experimental biophysicist. The complexity of the brain's structure and its neurochemical firing could well stop a neural net modeler in his or her tracks.

This very pragmatic question actually provides a lens through which to examine the development of neuroscience throughout the computational era, for the advent of computers ignited a heated controversy fueled by the similarities, and the differences, between brains and computers, between thinking and information processing. Adams believes that neuroscience is destined for such skirmishes and growing pains because, relative to the other sciences, it is very young.

Throughout this history, an underlying issue persists. Much of the criticism of computer models throughout the computational era has included the general complaint that models do not fairly reflect the brain's complexity, and current neural networkers are sensitive to this issue. Their conclusion was that "even with the remarkable increase in computer power during the last decade, modeling networks of neurons that incorporate all the known biophysical details is out of the question.

Simplifying assumptions are unavoidable" (Koch and Segev). They believe that simplifications are the essence of all scientific models. In particular, the brain's complexity beggars duplication in a model, even while it begs for a metaphorical, analogical treatment that might yield further scientific insight. Metaphors have laced the history of neuroscience, from Sherrington's enchanted loom to British philosopher Gilbert Ryle's ghost in the machine, a construct often employed in the effort to embody emergent properties of mind by those little concerned with the corporeal nature of the ghost that produced them.

But Koch and his colleagues "hope that realistic models of the cerebral cortex do not need to incorporate explicit information about such minutiae as the various ionic channels [that Adams is studying]." They point to the analogy of Boyle's gas law, where no mention is made of the 10²³ or so molecules making up the gas. Writing with Sejnowski and the Canadian neurophilosopher Patricia Churchland, Koch has made the point that scientists have already imposed a certain intellectual structure onto the study of the brain, one common in scientific inquiry: analysis at multiple levels of scale. In neuroscience, this has meant the study of systems as small as the ionic channels on the molecular scale, measured in nanometers, or as large as working systems made up of networks and the maps they contain on the systems level, which may extend over 10 centimeters.

Spanning this many orders of magnitude requires that the modeler adapt his or her viewpoint to the scale in question. Evidence suggests that the algorithms that are uncovered are likely to bridge adjacent levels. Further, an algorithm may be driven by the modeler's approach. Implementing a specific computational task at a particular level is not necessarily the same as asking what function is performed at that level. The former focuses on the physical substrate that is accomplishing the computation, the latter on what functional role the system is accomplishing at that level.

In either case, one can look "up" toward emergent brain states, or "down" toward the biophysics of the working brain. Regardless of level or viewpoint, it is the computer—both as a tool and as a concept—that often enables the modeler to proceed. Shortly after the early computers began to crunch their arrays of numbers and Turing's insights began to provoke theorists, Norbert Wiener's book Cybernetics was published, and the debate was joined over minds vs. machines.

Most of the current neural net modelers developed their outlooks under the paradigm suggested above; to wit, the brain's complexity is best approached not by searching for some abstract unifying algorithm that will provide a comprehensive theory, but rather by devising models inspired by how the brain works—in particular, how it is wired together, and what happens at the synapses.

Pagels included this movement, called connectionism, among his new "sciences of complexity." He dated the schism between what he called computationalists and connectionists back to the advent of the computer, though these terms have only assumed their current meaning within the last decade. At the bottom of any given revival of this debate was often a core belief as to whether the brain's structure was essential to thought—whether, as the connectionists believed, intelligence is a property of the design of a network.


Computationalists wanted to believe that their increasingly powerful serial computers could manipulate symbols with a dexterity and subtlety that would be indistinguishable from those of humans; hence the appeal of the Turing test for intelligence and the birth of the term, and the field, artificial intelligence.

McCulloch and Pitts demonstrated that networks of simple McCulloch-Pitts neurons are Turing-universal: suitably connected, they can carry out any computation a conventional computer can. Canadian neuroscientist Donald Hebb produced a major study on learning and memory suggesting that neurons in the brain actually change, strengthening through repeated use, and that a network configuration could therefore "learn," that is, be enhanced for future use. In the decades that followed, the competition for funding and converts continued between those who thought fidelity to the brain's architecture was essential for successful neural net models and those who believed artificial intelligence need not be so shackled.
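A McCulloch-Pitts unit is simple enough to state in a few lines: binary inputs, fixed weights, and a hard threshold. The sketch below, with illustrative weights chosen by hand, shows such units computing elementary logic gates, the building blocks behind the Turing-universality claim.

```python
# A McCulloch-Pitts unit: binary inputs, fixed weights, hard threshold.

def mp_unit(inputs, weights, threshold):
    return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

# Hand-chosen weights implementing basic logic gates.
AND = lambda x, y: mp_unit([x, y], [1, 1], threshold=2)
OR  = lambda x, y: mp_unit([x, y], [1, 1], threshold=1)
NOT = lambda x:    mp_unit([x],    [-1],   threshold=0)

print([AND(x, y) for x in (0, 1) for y in (0, 1)])  # [0, 0, 0, 1]
print([OR(x, y)  for x in (0, 1) for y in (0, 1)])  # [0, 1, 1, 1]
print([NOT(x)    for x in (0, 1)])                  # [1, 0]
```

Since NOT and AND compose into NAND, and NAND suffices to build any digital circuit, networks of these units can in principle compute anything a logic circuit can.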

In Frank Rosenblatt's Principles of Neurodynamics, a neural net model built around something called perceptrons curried much favor and attention (Allman). When Marvin Minsky and Seymour Papert published a repudiation of perceptrons, the artificial intelligence community forged ahead of the neural net modelers. The Turing machine idea empowered the development of increasingly abstract models of thought, where the black box of the mind was considered more a badge of honor than a concession to ignorance of the functioning nervous system.

Nearly a decade passed before it was widely recognized that the valid criticism of perceptrons did not generalize to more sophisticated neural net models. Meanwhile one of Koch's colleagues, computer scientist Tomaso Poggio at the Massachusetts Institute of Technology, was collaborating with David Marr to develop a very compelling model of stereoscopic depth perception that was clearly connectionist in spirit. Although Marr advocated the top-down approach, his procedures were "so tied to biology" that he helped to establish what was to become a more unified approach.
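The substance of the Minsky-Papert critique can be seen in a few lines of code: a single perceptron learns any linearly separable function, such as AND, but no setting of its weights can capture XOR. The training constants below are arbitrary illustrative choices.

```python
# Perceptron learning rule on two toy problems. AND is linearly
# separable and is learned; XOR is not, and no single linear
# threshold unit can get all four cases right.

def train_perceptron(data, epochs=25, lr=0.1):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), target in data:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def accuracy(data, w, b):
    return sum((1 if w[0] * x1 + w[1] * x2 + b > 0 else 0) == t
               for (x1, x2), t in data) / len(data)

AND_data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
XOR_data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

w, b = train_perceptron(AND_data)
print(accuracy(AND_data, w, b))   # 1.0: AND is learnable
w, b = train_perceptron(XOR_data)
print(accuracy(XOR_data, w, b))   # below 1.0: XOR never is
```

The later, more sophisticated models escaped this limit by adding hidden layers between input and output, which is why the criticism did not generalize.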

Although always concerned about the details of neurobiology, Marr was strongly influenced by the theory of computation as it developed during the heyday of artificial intelligence. According to Churchland et al., Marr's approach represented an improvement on a pure artificial intelligence approach, but his theory has not proven durable as a way of analyzing how the brain functions.

As Sejnowski and Churchland have pointed out, "when we measure Marr's three levels of analysis against levels of organization in the nervous system, the fit is poor and confusing" (quoted in Churchland et al.). His scheme can be seen as another in a long line of conceptualizations too literally influenced by a too limited view of the computer. As a metaphor, it provides a convenient means of classification. As a way to understand the brain, it fails.

Nobelist and pioneering molecular biologist Francis Crick, together with Koch, has written about how this limited view has constrained the development of more fruitful models (Crick and Koch). "It is a painful business to try to translate the various boxes and labels, 'files,' 'CPU,' 'character buffer,' and so on occurring in psychological models, each with its special methods of processing, into the language of neuronal activity and interaction," they pointed out.

In the history of thinking about the mind, metaphors have often been useful, but at times misleading. Consider some of the distinctions between brains and computers. The computer has its own version of a "brain," a central processing unit (CPU), where the programmer's instructions are codified and carried out. The entire process of programming relies on logic, as does the architecture of the circuitry where the addresses can be accessed. In fact this is the definition of a computer algorithm, according to Churchland.

"Then take that and do this, and so on. This is exactly what computers happen to be great at," said Koch. The computer's CPU uses a set of logical instructions to reply to any problem it was programmed to anticipate, essentially mapping out a step-by-step path through the myriad addresses to where the desired answer ultimately resides. The address for a particular answer usually is created by a series of operations or steps that the algorithm specifies, and some part of the algorithm further provides instructions to retrieve it for the microprocessor to work with.

A von Neumann computer may itself be specified by an algorithm—as Turing proved—but its great use as a scientific tool resides in its powerful hardware. Scientists trying to model perception eschew the serial computer as a blueprint, favoring instead network designs more like that of the brain itself. Perhaps the most significant difference between the two is how literal and precise the serial computer is. By its very definition, that literalness forbids the solution of problems where no precise answer exists.
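One concrete example of a network design that computes without a precise serial recipe is an attractor network in the style of Hopfield; the pattern and weights below are my illustration, not a model from the text. The stored pattern acts as an attractor, and the network collectively relaxes a corrupted input back toward it.

```python
# Hopfield-style attractor network: store a pattern of +1/-1 units,
# then let the network relax a corrupted version back to the original.

def train(patterns):
    n = len(patterns[0])
    W = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    W[i][j] += p[i] * p[j] / len(patterns)
    return W

def recall(W, state, sweeps=5):
    s = list(state)
    for _ in range(sweeps):
        for i in range(len(s)):  # asynchronous unit-by-unit updates
            h = sum(W[i][j] * s[j] for j in range(len(s)))
            s[i] = 1 if h >= 0 else -1
    return s

stored = [1, 1, -1, -1, 1, -1]
W = train([stored])
noisy = [1, -1, -1, -1, 1, -1]     # one unit flipped
print(recall(W, noisy) == stored)  # True: the system settles on the answer
```

No single unit holds the answer; the completed pattern emerges from the interactions, in contrast with the serial computer's literal, address-by-address retrieval.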

Sejnowski has collaborated on ideas and model networks with many of the leaders in the new movement, among them Churchland. A practicing academician and philosopher, Churchland decided to attend medical school to better appreciate the misgivings working neuroscientists have developed about theories that seem to leave the functioning anatomy behind. In her book Neurophilosophy, she offers a picture of how such a system decides: the neurons in the network interact with each other, and the system as a whole evolves to an answer.

"Then, introspectively, we say to ourselves: I've decided" (Allman). Critiques like Crick's of the misleading influence the serial computer has had on thinking about the brain do not generalize to other types of computers, nor to the usefulness of von Neumann machines as a tool for neural networkers. Koch and Segev wrote that in fact "computers are the conditio sine qua non for studying the behavior of the model[s being developed] for all but the most trivial cases" (Koch and Segev). They elaborated on this point: "In fact, over the last decades we have witnessed a profound change in the nature of the scientific enterprise."

The tradition in Western science, they explained, has been a cycle that runs from hypothesis to prediction, to experimental test and analysis, a cycle that is repeated over and again and that has led, for example, "to the spectacular successes of physics and astronomy. The best theories, for instance Maxwell's equations in electrodynamics, have been founded on simple principles that can be relatively simply expressed and solved."


As firmly stated by Pagels in his treatise, and as echoed throughout most of the sessions at the Frontiers symposium, "the traditional Baconian dyad of theory and experiment must be modified to include computation," said Koch and Segev. This new triad of theory, computation, and experiment leads in turn to a new [methodological] cycle. Another of the traditional ways of thinking about brain science has likewise been influenced by the conceptual advances now embodied in computational neuroscience.

Earlier the brain-mind debate was framed in terms of the sorts of evidence each side preferred to use.


The Vision

In addition, funding will be set aside to encourage active visiting faculty and exchange programs.

This allocation will ensure that the Center remains informed about the latest developments in the field and realizes its potential of becoming an internationally renowned hub for innovative research. The administrative, fiscal, and academic independence of the Center will help attract the finest researchers, ensuring its continued excellence in research and education. Research at the new Center will focus on five broad areas of inquiry, each differing in the level of investigation and associated research tools but all aimed at uncovering the mechanisms by which the brain generates behavior and cognition.

Research ranges from the level of genes, molecules and single neurons to research examining advanced behaviors such as thinking, decision-making and emotions. In between, teams of scientists conduct studies of neuronal circuits (groups of interconnected neurons) and of interconnected brain areas and structures, which contain many neuronal circuits acting in coordination.

Specific behaviors, such as sensations and movements, emerge when many neuronal circuits act in coordination.

Teams of experts in computational neuroscience apply theoretical approaches and use experimental data to test and construct theoretical models of the brain. These models are critical because they make specific sets of predictions about the brain that can then be tested in experimental settings. The five divisions below are largely conceptual and do not entail administrative boundaries between the groups.

What makes the new Center unique is not only the cutting-edge research within each topic, but the interaction and communication between these areas of research. For example, experiments on neuronal circuits will inform research at the level of single neurons, motor function, cognitive neuroscience, and computational neuroscience.

Since a deeper understanding of the brain requires a knowledge of its working at all levels—from the single molecule up to higher-order emergent properties like consciousness—what is truly needed is a comprehensive scientific program that unifies research approaches.


Research groups in this area will study the properties of molecules and neurons to better understand how they generate structures and function. Researchers in molecular genetics will examine how the information contained in genes affects neuronal connections and functional interactions between neurons. A greater knowledge of how genes, molecules and neurons form specialized neuronal structures (such as synaptic connections between neurons), as well as a greater knowledge of how these structures operate in the neuronal circuit, is critical to our understanding of brain function, both in health and disease.

These research groups will also work together to develop new cellular and molecular tools that will allow them to improve future studies of neuronal circuits in the living brain. Importantly, research in this area will contribute to the uncovering of cellular mechanisms at work in normal and pathological cases such as epilepsy, spinal cord injury and tremors. The Center plans two new appointments in these areas. Scientists are only just beginning to understand the incredibly dynamic and plastic nature of the brain.

For example, in the brain of a blind person, the area that is normally dedicated to vision will not go to waste; rather, it may be dedicated to another function such as audition. Research in this area will examine the plasticity, structure, and development of neuronal circuits. How are new neuronal processes integrated into existing circuitry, and what is the process by which new neurons are formed? How are neuronal circuits modulated by chemicals and hormones in the brain? A deeper understanding of brain plasticity will have important applications for cognition, learning and disease.

Researchers will use molecular genetic, electrophysiological and modern imaging techniques, including two-photon microscopy and tools that allow stimulation of specific cells, in pursuit of answers to these fundamental problems. Research groups will also quantitatively characterize the neuronal circuits and how they are anatomically connected to each other. It is anticipated that work in this area will revolutionize traditional brain anatomy. The development of novel electronic and electro-optic probes and interfaces for the brain is growing rapidly and holds great promise.

This research will be the nucleus for fruitful collaborations between the Center and the recently established School of Engineering and Center for Nanotechnology at the Hebrew University. Studies of neuronal circuits often focus on their interactions at a cellular level. Within a larger framework, however, neuronal circuits cause complete behaviors, such as feeling a sensation or generating an action.

Researchers in this area will study how neurons and neuronal circuits interact in sensory, motor and cognitive functions. For example, researchers will examine the learning and control of movements as well as reward-based behavioral decisions. Other groups will investigate neuronal circuits underlying sensory processing (auditory, olfactory, and visual) using a variety of physiological, optical and behavioral tools.


The emergence of powerful molecular genetic tools calls for a substantial investment in the study of neuronal circuits of rodents and simpler animals, using modern multi-electrode arrays, telemetry, two-photon imaging, and molecular genetic tools, combined with novel and quantitative behavioral paradigms. Cognitive neuroscience will be a core area of activity that bridges the gap between animal studies and studies of human perception and cognition. The research will cover perception, including multi-sensory perception and plasticity in normal and blind humans, as well as attention and memory.

Researchers will conduct experiments that relate to perception and learning disabilities, such as investigating visual and auditory processing in normal as compared to dyslexic individuals. Additional studies will examine rationality and decision-making in both humans and animals and the brain mechanisms of language and speech processing.

This group will use a broad range of technologies for monitoring human brain activity, such as fMRI, MEG, TMS, and high-resolution EEG, as well as single-neuron recordings in human patients, and will develop sophisticated methods for the characterization of cognitive and behavioral phenomena.