
Is This the Most Interesting Idea in All of Science?

source link: https://join.substack.com/p/is-this-the-most-interesting-idea


Neurons might contain something incredible within them.

Randy Gallistel is a remarkable scientist who argues that neurons contain something incredible within them: an accessible-to-computation mechanism that allows the brain to store numbers in memory and then to retrieve these numbers from memory.

This would sound outlandish to most mainstream neuroscientists.

But Gallistel points to a fascinating experiment (that was done with a ferret) to support the idea of intraneuronal memory-storage. This ferret-experiment demands engagement/response from the scientific community. Surely the blackout on this ferret-experiment cannot last forever.

See below my interview with Dr. Gallistel. I edited the interview to make it more conversational, and I added hyperlinks.

1) What are the most exciting things that you’re currently engaged in?

I’m trying to redirect neuroscientists’ search for the physical basis of memory.

Discovering how the things we have learned are encoded in our brains is as important to neuroscience as the discovery of the physical basis of heredity (DNA) has been to biology. It will transform the field beyond recognition. It is the key to an eventual understanding of how brains compute.

2) What are the most exciting things that you know about that others are engaged in? 

One of my principal collaborators, Peter Balsam, has done a series of experiments on basic learning (Pavlovian conditioning). These experiments confirm what a well-known former-colleague, Bob Rescorla (recently deceased), showed more than 50 years ago: that temporal-pairing has nothing whatsoever to do with associative learning.

Right now, almost all neuroscientists base their search—for the physical basis of memory (the engram)—on the assumption that temporal-pairing causes learning. They are dedicated to this assumption—even though, as Rescorla pointed out 50 years ago, experimental attempts to define temporal-pairing have always failed. This failure is as striking now as it was 50 years ago. Anything that gets neuroscientists to abandon the idea that temporal-pairing is a useful scientific concept is a step toward discovering the physical basis of memory.

3) How does the “ferret experiment” work, what were the results, and why are the results significant?

The ferret-experiment shows that the measuring of—and then storage of—a maximally-simple experiential-fact (the duration of the interval between two simple events) occurs within a single huge cell (neuron) in the cerebellum. It also shows that subsequent single-spike input to this cell triggers the reading-out of this memory into a simple behavior: an appropriately-timed blink.

The blink occurs in response to a simple stimulus (a touch on the paw) that warns of a soon-to-occur threat to the eye (a shock to the skin near the eye). The interval-duration between the warning-signal and the shock itself has been stored in the engram inside this huge cell in the cerebellum—as a result, the brain can time the blink so that the eye is closed (hence protected) at the moment when the predicted threat occurs.

Fredrik Johansson has also identified the first molecular stage in a sequence of molecular events inside this huge neuron. Somewhere in that sequence is the molecular substance that encodes the duration of that interval. It performs the same function as the memory-registers in a conventional computer.

Biological molecules are tiny interconnected machines. Johansson identified the first in a sequence of these machines. The sequence must lead to the engram. Johansson has done for molecular biologists what Ariadne did for Theseus when she handed him the ball-of-thread. Theseus used the ball-of-thread to find his way through the labyrinth to the Minotaur and back out again.

Each neuron contains billions of (almost) incomprehensibly-tiny molecular machines. Molecular biologists have developed an astonishing array of techniques for visualizing/manipulating the actions of these little machines. These techniques will allow molecular biologists to follow the machines inside this huge neuron to the engram—to the tiny machine that encodes the experience-gleaned facts so that these learned/remembered facts can inform later behavior. 

4) How hard has it been to shine the spotlight on the ferret-experiment?

Very hard.

5) What explains the difficulty?

The difficulty is due to the mental energy required to climb out of intuitive Aristotelian energy-pits. The current approach to the engram is founded on an intuition that dates back to Aristotle.

The intuition is that memory consists of associations—conductive connections—between primitive sensations (color, texture, shape, smell, etc.). This intuition was the foundation of behaviorist psychology, which dominated theory/experiment in psychology during the first half of the previous century. The associative theory of learning is still widely taught in introductory psychology courses/textbooks. And it’s been given new life by the grossly-misleading hype surrounding deep learning.

Neuroscientists embrace Aristotle’s theory. But the most basic fact about memory is that memory is full of learned facts (time, distance, duration, probability, numerosity) that are unrelated to simple sensory-experience. And Aristotle’s theory makes no attempt to explain this basic fact.

This Aristotelian theory directs neuroscientific experiment/theorizing about learning/memory. It would be a far-reaching transformation to abandon this theory and instead focus on how brains encode maximally-simple abstract facts.

In the history of most sciences (physics, chemistry, physiology), the most important stage was when scientists finally abandoned the intuitive-but-useless conceptions that Aristotle left us with. Aristotle’s highly-intuitive natural science became the foundation of medieval philosophy/science. (There was no distinction between philosophy and “natural science” in the Middle Ages.)

In the early history of almost all sciences, it’s striking how difficult it was for thinkers to abandon these intuitively-appealing concepts in favor of much less-intuitive conceptions. Consider caloric theory.

6) But what exactly is so hard to understand?

Mainstream brain-scientists have no idea how brains could store a number, retrieve that number and other relevant numbers, operate on pairs of numbers arithmetically, and return the results to memory. That whole way of thinking (the computational theory of mind) is not part of their conception regarding how the brain works.
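To make this picture concrete, here is a minimal Python sketch of the store/retrieve/operate cycle being described. The dictionary standing in for memory, and all the names in it, are purely illustrative assumptions; nothing here is a claim about how a neuron actually implements these steps.

```python
# A toy illustration of the read/operate/write cycle described above.
# The dictionary standing in for "memory" and all names are illustrative only.

memory = {"trip_duration_min": 35, "detour_min": 12}

def compute_and_store(mem, key_a, key_b, result_key):
    """Retrieve two stored numbers, combine them arithmetically,
    and write the result back into memory."""
    a = mem[key_a]           # retrieve the first number
    b = mem[key_b]           # retrieve the second number
    mem[result_key] = a + b  # operate on the pair, return the result to memory
    return mem[result_key]

print(compute_and_store(memory, "trip_duration_min", "detour_min", "total_min"))  # 47
```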

And a memory-code is not part of their conception either. It’s like how biochemists had no conception of a code that was realized in molecular structure until the revelation from Watson, Crick, and Franklin.

7) How long will it take the ferret-experiment to break through? 

Too early to tell. I’m doing my best to get molecular biologists to realize the immense opportunity that Johansson’s discovery opens up for them.

8) We know the first step on the way to the actual storage-site of the number, but how hard will it be to follow the “ball of thread” to the storage-site? 

Molecular biologists have repeatedly accomplished astonishing things. So I think they can do it. I judge that it can be done with the tools molecular biologists already have.

It won’t be easy. It could take 10 years—maybe 20—once molecular biologists really start to work on it.

9) What do we know about the storage-site itself and about the mechanism that allows the number to be transmitted to—and retrieved from—that storage-site? 

Nothing.

10) Are both the storage-site and the mechanism equally mysterious? 

11) Is the idea that every neuron in the human brain has a storage-site, or just some of them? 

Every neuron.

12) It’s totally open as to what this storage-site will actually be or actually look like, correct?

With one caveat: whatever it looks like, it has to be apparent that its form gives it the functional properties of the polynucleotides (the class of molecules that DNA belongs to). That is, it has to be clear how the substance in question (or combination of substances) can be made to store “information” (in the scientific sense).

And this information-preservation function should not take much energy, because brains store tremendous quantities of information. Thumb-drives store huge quantities of information without any energy-input required—you don’t have to plug your thumb-drive in at night. Likewise, the engram must be able to preserve the information inscribed in it with little or no energy-input.

13) What is known about the system that brings the information to this storage-site and then brings the information out of this storage-site? 

Nothing.

Except the first step in the series of steps that does this: the receptor-molecule that Johansson showed to be critical. That is the equivalent of the ball-of-thread Ariadne gave Theseus.

The ball did not tell Theseus the route through the labyrinth. But it gave him the means to find the route.

14) Ultimately, the idea is that there is some little storage-site in the cell that can store numbers, correct? 

The idea is that there are numerous storage-sites—synthesized on demand, as new information comes in. Storage at the level of molecular structure would enable megabytes of information to be stored in a single cell.
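As a rough illustration of why molecular-level storage would be so dense, here is a back-of-envelope calculation. The assumptions (four distinguishable states per monomer, as in a polynucleotide, and a polymer a few million monomers long) are chosen purely for the arithmetic, not as a claim about what the engram actually is.

```python
import math

# Back-of-envelope capacity estimate; the numbers are illustrative assumptions.
states_per_monomer = 4                              # e.g., the four nucleotides
bits_per_monomer = math.log2(states_per_monomer)    # = 2 bits per monomer

monomers = 4_000_000                                # a hypothetical 4-million-unit polymer
total_bits = monomers * bits_per_monomer
megabytes = total_bits / 8 / 1_000_000

print(f"{megabytes:.0f} MB")                        # 1 MB from a single such polymer
```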

Following Ariadne’s thread to find one such site would finally enable us to identify a physically-realized memory in a brain. That would be the start of a profound revolution.

We know from modern computer-technology that a machine that can store/process numbers can encode anything that we know how to encode. A modern computing-machine stores many things that we do not think of as being in any way numerical. When you dig down to the physical-memory level, it’s all numbers—e.g., images are just arrays of numbers.
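A tiny example of the point about images: at the level of physical memory, a picture is nothing but an array of numbers. The 2×3 “image” below is invented for illustration.

```python
import numpy as np

# A 2x3 grayscale "image": in physical memory it is just an array of numbers
# (here, 8-bit brightness values between 0 and 255).
image = np.array([[  0, 128, 255],
                  [ 64, 192,  32]], dtype=np.uint8)

print(image.shape)            # (2, 3)
print(list(image.tobytes()))  # the same picture as a flat list of byte values
```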

15) Other than the ferret-experiment, which experiments are being neglected?

There’s a vast experimental literature that shows that so-called “associative” learning does not depend on Aristotelian associations—it depends on stored facts.

It also shows that the facts about time (durations, times of day) are among the most important facts determining how rodents behave in neuroscientists’ experiments. This is hard for neuroscientists to digest because they’re committed to the Aristotelian idea that there is nothing in the mind that was not first in the senses.

The problem is that there are no sensory receptors for times of day and for interval-durations. A duration doesn’t feel like anything—it’s ineffable. But we learn new durations every day. Every time you travel to some new location, you learn how long the trip took (whether you know it or not). And you also learn the time of day at which you made the trip (even if you don’t wear a wristwatch).

For decades, neuroscientists resisted the overwhelming behavioral evidence about clocks in the brain. Some of my fellow graduate-students in behavioral neuroscience at Yale in the mid-1960s heaped scorn on this very suggestion when I tried to call their attention to the behavioral evidence for it. (This evidence came mostly from zoologists, particularly German zoologists who studied bees.)

There were relatively few attempts to theorize about the material form that the hypothesized clocks might take. In those few attempts, a circuit-level structure was posited. In these theories, the clock depended on neuronal interactions mediated by “spikes” (electrical signals). Neuroscientists still assume very strongly that learning/memory only arise due to neuronal interactions.

But—finally—Seymour Benzer (a physicist-turned-geneticist) and his graduate student discovered one of the most important genes for directing the construction of the circadian clock (the brain-clock that tells the time of day). The circadian clock is now understood to be one of those multi-component intraneuronal molecular machines. Chapters on the circadian clock appeared in neuroscience-textbooks, and the clock-skeptics fell silent.

It’s basically impossible to get neuroscientists to pay any attention to what behavioral scientists have discovered in the last 50 years about the simplest forms of “associative” learning. Information-theory plays no role in the search for the engram. And no role in computational neuroscientists’ efforts to model Pavlovian/instrumental learning.

But the lessons from molecular biology should not be ignored—“information” (in the sense that physicists and communications-engineers use the term) was discovered to be the foundation of life.

The search for the engram doesn’t include the notion of a code. But the notion of a code is at the core of information-theory and molecular biology.

And computational theorists never attempt to specify what the code might be that makes it possible to store a fact. They theorize about the material substance of the engram, but they don’t ask what code might enable that substance to encode a simple fact. A simple fact like how long it takes to boil an egg.

We rarely think about simple quantitative facts, but our knowledge of them constantly shapes our everyday behavior. These facts are simple because you can represent them with single numbers.

The logical/arithmetic manipulation of numbers is the foundation of any effective computing-machine. Psychologists and other cognitive scientists have come to understand in the last 50 years that brains are dazzlingly-good computational machines. This transformative insight is called the computational theory of mind.

The key to an effective computational machine is its memory—the place where the facts are stored. No memory can store a fact without a code. And all codes are written in numbers (as communications-engineers understand). It’s numbers all the way down in any computing-machine.

16) Would mainstream neuroscientists raise their eyebrows at the idea that numbers are somehow stored inside cells and retrieved from inside cells? 

Most of them would think it’s about the craziest, stupidest, and most implausible idea they ever heard suggested.

Despite the fact that they all know that the polynucleotides that are abundant inside every cell can store huge amounts of information at negligible energetic cost. They know that. But they don’t think that it’s relevant to thinking about how the brain works.

Not a few neuroscientists think that the scientific concept of information is irrelevant to neuroscience: that it has no useful role to play in understanding how brains do what they do.

17) How does the ferret-experiment determine for certain that numbers—which are the essence of computation—are actually (somehow) stored inside individual cells? 

It depends on what one takes to be “for certain”. That’s very hard to achieve in science.

The experiment shows that the engram for the interval-duration is inside that big neuron. (I cannot elaborate further on this part of the argument without getting technical.)

In a computing-machine, a number is understood to be a physically-realized symbol (it’s what we call a “numeral” when we write such a symbol on paper, or carve such a symbol in stone, or enter such a symbol into computer-memory).

The symbol must stand for an empirical quantity—e.g., how many people will come for dinner or how much wine we will want to serve.

And the symbol must also be subject to arithmetic manipulation by the fundamental operations of arithmetic.

We know that the engram for duration is processed arithmetically. The blink-latency is proportional to the remembered interval-duration. That implies two operations from arithmetic: subtraction and the ordering-operation. You need to subtract the currently-elapsed interval from the remembered interval. And you need the ordering-operation (“≥”) to ensure that the blink isn’t triggered while the elapsed time is still far from the target time, i.e., while the subtraction-result is greater than or equal to some threshold-value.
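Here is a minimal sketch of that subtraction-plus-ordering rule. The time units and the threshold are made up for illustration; this shows the arithmetic being claimed, not the actual cerebellar mechanism.

```python
# Minimal sketch of the subtraction + ordering rule described above.
# The remembered interval and threshold are arbitrary illustrative values.

remembered_interval_ms = 300   # stored warning-to-threat duration
threshold_ms = 50              # "close enough to the target" margin (made up)

def should_blink(elapsed_ms):
    remaining = remembered_interval_ms - elapsed_ms   # subtraction
    return not (remaining >= threshold_ms)            # ordering gates the blink

for t in (100, 200, 260, 300):
    print(t, should_blink(t))  # 100 False, 200 False, 260 True, 300 True
```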

And we know that subjects don’t usually respond to the warning-signal the first few times they observe that the warning-signal predicts the threat at a predictable interval. How many observations does it take before they respond? That depends on the signal’s informativeness—the ratio that you get when you divide the average inter-threat interval by the warning-to-threat interval. The greater this ratio, the more information the warning conveys and the sooner subjects respond. This well-established behavioral fact implies that the subject’s brain represents both intervals and also represents the ratio. And because one of the intervals is an average, the brain must compute the average of a widely-varying sequence of inter-threat intervals. That’s one of Peter Balsam’s unappreciated discoveries. Averaging requires addition and also division. “Taking a ratio” is another name for dividing.
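To make the informativeness computation concrete: it is the mean inter-threat interval divided by the warning-to-threat interval, which requires addition, division, and the taking of a ratio. The interval values below are invented for illustration.

```python
# Informativeness of the warning signal, as described above.
# The interval values are invented for illustration.

inter_threat_intervals_s = [90, 150, 60, 200, 120]   # widely varying intervals between threats
warning_to_threat_s = 10                             # fixed warning lead time

mean_inter_threat = sum(inter_threat_intervals_s) / len(inter_threat_intervals_s)  # addition + division
informativeness = mean_inter_threat / warning_to_threat_s                          # "taking a ratio"

print(mean_inter_threat)  # 124.0
print(informativeness)    # 12.4 -- the larger this ratio, the sooner subjects respond
```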

If something walks like a duck and talks like a duck, it’s a duck. If a physical entity represents quantities and is ordered, added, subtracted, multiplied, and divided—then it’s a number.

You could think of the bit-patterns in (one physical form of) computer-memory as just a series of charged/uncharged capacitors. But to think of a bit-pattern that way is to miss the whole point. Conceptually, that sequence of charged/uncharged capacitors is just as much a numeral as, say, “3” or “IV” are when you write them down on a sheet of paper in order to add, subtract, or multiply them.
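The same point in code: a row of charged/uncharged capacitors, written here as 1s and 0s, is read by place value and is thereby just as much a numeral as the “3” we write on paper. The particular bit-pattern is arbitrary.

```python
# One byte of "hardware": a row of charged (1) and uncharged (0) capacitors.
capacitor_states = [0, 0, 0, 0, 0, 0, 1, 1]   # an arbitrary bit-pattern

value = 0
for bit in capacitor_states:   # standard binary place-value reading
    value = value * 2 + bit

print(value)   # 3 -- the number we would write on paper as the numeral "3"
```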

18) How does the cell know when to send information to the storage-site or when to extract information from the storage-site? 

The machinery for addressing (finding/reading) stored memories is itself one of the (many) mysteries. But if we could actually find a simple physically-realized memory, then we could begin to find the answers to these other closely-related mysteries.

The analogy to the history of molecular biology after Watson, Crick, and Franklin is very instructive. DNA’s structure made two things obvious to any observer.

First, the broad outlines of how cells could copy the information stored in the structure. Conventional biochemistry pre-1953 had no answer to offer to that mystery. DNA’s structure prompted one to imagine what the requisite biochemistry might look like. (Molecular biologists have spent the last 60+ years working out the many, many details in the very complex story about how the copying works.)

Second, that there must be something like a code. Several physicists/mathematicians immediately offered hypotheses about the “code”. At first, “code” was in scare-quotes because one fairly-obvious suggestion was that the structure offered templates on which proteins were synthesized. One can argue about whether a template is really a code. But it fairly soon became clear that DNA was not a template. Over a span of almost 20 years, it slowly emerged that DNA-structure constituted a code in the most unarguable sense of the term—a completely-arbitrary mapping between the code’s letters (nucleotides), the words in the code (strings of three nucleotides), and the protein-constituents (amino acids) that the words refer to. Some code-words even turned out to be punctuation—they indicated where the reading of the code is supposed to start and stop.
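To make the “arbitrary mapping” concrete, here is a small fragment of the standard genetic code expressed as a lookup table, with a stop codon playing the role of punctuation. Only a handful of codons are included, and the short mRNA string is invented for illustration.

```python
# A fragment of the standard genetic code: an arbitrary mapping from
# three-letter codons to amino acids, plus punctuation (a stop codon).
CODON_TABLE = {
    "AUG": "Met",   # methionine; also serves as the "start" signal
    "UUU": "Phe",   # phenylalanine
    "GGC": "Gly",   # glycine
    "GAA": "Glu",   # glutamate
    "UAA": "STOP",  # punctuation: where reading ends
}

mrna = "AUGUUUGGCGAAUAA"                # an invented message
protein = []
for i in range(0, len(mrna), 3):        # read the message in three-letter words
    amino_acid = CODON_TABLE[mrna[i:i + 3]]
    if amino_acid == "STOP":
        break
    protein.append(amino_acid)

print("-".join(protein))  # Met-Phe-Gly-Glu
```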

It also emerged that there were molecular machines whose function was to read the sentences in the code and carry those sentences to other machines that actually assemble the proteins. The protein-assembling machines use special code-translating molecules. One end of the code-translating molecule recognizes the word that codes for a given protein-element (amino acid)—the other end grabs that amino acid. It was shown that one could change the code by building one of these translating-molecules with a given word-recognizing end but a different element-grabber attached to the other end. It was also revealed that there were other molecular machines whose job it was to edit and correct the translations.

It turned out that there is extraordinarily elaborate/diverse/complex molecular machinery to translate the coded information into proteins—huge molecules with many working-parts. No one could have begun to guess any of the details about this machinery, even in the immediate aftermath of the DNA-revelation. It has taken more than 50 years of massive scientific effort to reveal the molecular machinery that reads genes and builds proteins. But proteins are themselves merely the bricks that (mostly) build cellular-level machinery. And cells are merely the bricks that build tissues, and tissues are merely the stuff that builds organs, and organs are merely the constituents of animals/plants. So even when we understand how the gene-stored information is translated into proteins, the huge question remains how we get from the proteins to complexly-structured multicellular organisms. That part of the story is still being worked out. There remain huge mysteries there—although enormous progress has been made.

I think the above history tells us what to expect when it comes to finding the physical basis of memory. The key is to first find the machinery that actually stores the information. That is the low-hanging fruit. That will be the simple part of the story—just as DNA is the simple, easy-to-understand part of molecular biology.

Knowing the engram and the code will be what enables us to address the other questions you pose. Those are all good questions. We have no clue to their answers at this time. But all of that unknown machinery has as its function to access/read the stored information. And convert it into neuronal signals (spike-trains) that convey the information to the sites where it’s useful for behavior.

So the logical first goal in what will be a long, long, long scientific enterprise is to find the engram itself—the gene-equivalent substance that stores the information. The engram differs functionally from a gene only in that it stores acquired information, whereas genes store inherited information. Since we were able to discover the physical realization of the gene, we can also find the physical realization of the engram. But to find it we have to understand that the function of the engram is to store information in the scientific sense of that word. We are looking for something that has the same function as the bit-registers in a conventional computing-machine.

19) What is your strategy for how to get brain-scientists to engage with the ferret-experiment and other experiments?

My strategy is to spread the word as best I can in the hope that it reaches the right ears.

And to try to get the neuroscientists looking for the engram to realize that they are basing their current efforts on a fundamental error. This error explains why they’re not succeeding.

I’m not the only one who thinks they’re not succeeding. I know several other very well-informed colleagues—several of whom teach the neuroscience of learning/memory at the graduate level—who agree that this field is getting nowhere because the story they’re attempting to tell does not make sense. The findings do not come together into anything that resembles a coherent story.

20) The public would like to think that scientists look at data and take data seriously. What do brain-scientists say when you show them this data? It’s disturbing to think that they refuse to look at data, since data is supposed to be what science is all about. 

Scientists are human. Like all humans, they’re prisoners of preconceptions. When a preconception takes strong hold, it becomes almost unshakable. Max Planck is often quoted as saying: “Science progresses one funeral at a time.”

21) Why does the experiment use a ferret, of all animals? 

The answer to that is lengthy and boring.

22) If every neuron has storage-sites within it, then what makes the Purkinje-cell (in the cerebellum) at all special? 

Just look at it! It’s a really dramatic-looking cell. It’s huge, with an enormous, flat dendritic tree on which 200,000 different inputs synapse. It looks like the biological realization of a computer-chip. And it’s the sole output of the cerebellar cortex.

We can demonstrate with strong experimental evidence that this cell stores a specified, maximally-simple experiential fact. The Purkinje-cell is the only currently-known neuron for which we know the end of Ariadne’s thread. We can’t yet do that for any other type of cell.

And it’s also the only currently-known neuron for which we know the start of Ariadne’s thread—the first molecular machine in a string of molecular machines that leads to a substance that encodes a specified fact (the duration) that we can easily manipulate.

This particular cell has unique, and enormously important, properties for any researcher who wants to follow the thread into the intraneuronal molecular labyrinth. This neuron is huge (a technical advantage). And it’s immediately below the surface of the brain (another technical advantage). Johansson and a collaborator have recently shown that you can carve out a cerebellum-slice that contains one of these cells and manipulate the inputs in a Petri dish and get evidence that this neuron—which is no longer actually in the brain—can still form the engram for the interval between those two inputs.

23) If you had a billion dollars, which experiment would you do? 

I lack the knowledge/skills/laboratory required to do what needs to be done. I would set up a competition in which molecular-biology labs around the world could get a billion dollars in funding for the best, most detailed proposal about how to follow the thread.

24) What is the fastest/cheapest/easiest experiment that would prove that this “treasure trail” is real even more conclusively than the ferret-experiment already has?

There is no fast/cheap way to do it, so far as I know.

25) How many parts of the sequence need to be found before it’s undeniable that the “treasure trail” is real and that the engram lies at the end of this “treasure trail”? 

No way to guess.

26) We’ve talked about the engram itself and about the mechanism that moves information to/from it. What does the neuron (and also the brain generally) do with this number once it’s retrieved? Does this number move between synapses? What is this number’s whole story as it’s retrieved from the engram and then moved outside the neuron? What is the number’s whole journey, just to complete the picture? 

On my story, this number—and the information that it carries—is shared among many other neurons.

On my story, that’s one of the things that makes the brain so robust. It’s the same thing that makes the internet (“the cloud”) so robust. I back up to the cloud (like all sensible people whose work depends on their computer). The information on my laptop is in the memories of servers around the world and can be retrieved from the cloud even if someone steals my laptop—or I spill my coffee on my laptop and fry its innards (I’ve been there).

This number is used in the computations that some of the other neurons perform, just like my data is shared with other researchers who do their own computations with this shared information.

