Neuroscience

Neuroscience: The Inside Story
Most of what you know is probably wrong

Before I started Mastering Artificial Intelligence I thought I knew enough about neuroscience to understand AI in the way I wanted, and to the depth I wanted.

I felt that over the years I had gleaned enough by reading books on AI that covered aspects of neuroscience (e.g. Ray Kurzweil’s How To Create A Mind), by regularly following a range of high-quality technology media which publish the odd neuroscience-related article, and by watching probably a few too many YouTube videos on the subject.

Because the messages I was receiving across this range of sources were fairly consistent and static, I assumed that I already knew enough about the AI-related aspects of neuroscience.

I thought I understood the basic functional behaviour of neurons, how the brain processes information in a hierarchical structure – similar to a computer-based neural network – and how neurons in the brain are arranged in cortical columns, each of which acts like a pattern detector that focuses on recognizing a particular unit of information, like the horizontal line in a capital letter ‘A’.

And like everyone else, I knew that the brain is vastly complex and that many important details of its operation – like how memory is stored and the reason for the apparent lack of a central processing function – remain unknown.

So I guess the feeling was that I basically knew enough.

How wrong I was.

To see just how wrong, let’s review the common explanation for how a biological neuron works:

The neuron receives pulses of electrical energy from many other neurons that are connected to its input end via synaptic connections. Each of these neurons is dedicated to recognizing a particular ‘unit’ of information, for instance the central part of the letter ‘X’ or the idea ‘I need to eat’.

The electrical signals coming from these upstream neurons are then added together to create a combined signal which, when it reaches a certain threshold, causes the neuron to ‘fire’, thereby sending a burst of electricity onward to all of the neurons connected to its output end.
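
To make the simplification concrete, here is a minimal sketch in Python of the model just described – a weighted sum plus a firing threshold, which is essentially the ‘unit’ used in artificial neural networks. The weights and threshold are made-up values, purely for illustration.

```python
# A minimal sketch of the 'textbook' neuron described above: weighted inputs
# are summed, and the neuron 'fires' if the total crosses a threshold.
# (Illustrative values only - this is the simplification the Module critiques,
# not a claim about how real neurons work.)

def summing_junction_neuron(inputs, weights, threshold=1.0):
    """Return 1 ('fire') if the weighted sum of the inputs reaches the threshold."""
    combined = sum(x * w for x, w in zip(inputs, weights))
    return 1 if combined >= threshold else 0

# Example: three upstream neurons, two of which are currently firing.
upstream_signals = [1, 0, 1]          # which upstream neurons are firing
synaptic_weights = [0.6, 0.9, 0.5]    # strength of each synaptic connection

print(summing_junction_neuron(upstream_signals, synaptic_weights))  # -> 1 (fires)
```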

Does this roughly match your own understanding?

If so then you’re in the same place that I was before I started digging into neuroscience.

Interestingly, if you have a background in digital signal processing or communications engineering then you might have spotted something rather odd about the above explanation – a hint that something is going on here that is far more complex than might first appear (Hint: in the first paragraph I’ve implied that different classes of information can be processed by the same neuron. Think about this for a bit…). We’ll get into the significance of this in this Module.

In fact, scientists who actually work in neuroscience at a senior level, and who understand how complex the functional behaviour of a single neuron really is, will simply roll their eyes at the above explanation – just as they roll their eyes when people say they can model the human brain in a computer.

I’d also like to say that I have tremendous respect for anyone pursuing a career in neuroscience, let alone people who have dedicated the best part of their professional lives to a quest to understand more about the brain and how it works.

Let me put it this way: the brain represents an extremely advanced type of technology, one that is operating multiple intellectual levels above where science is right now.

Whether you believe that this technological marvel is the result of natural processes (Darwinists), Divine intervention (theists) or a higher intelligence (proponents of intelligent design) is not important.

What’s important is that the software that defines how the end result works in all its detail is far, far more complex than anything we can currently understand.

Neuroscientists are in effect trying to reverse engineer an extremely advanced form of computational technology without having access to the source code or even any understanding of the computational paradigm that it was created within.

Anyone who has tried to reverse engineer machine code but doesn’t even know what language the source code was written in can perhaps get a sense of the monumental scale of this challenge.

That neuroscience has managed to work out as much as it has represents a stunning intellectual achievement.

But I am quite certain that most neuroscientists would agree that, compared with what there is to know, we have barely even started.

Models need to capture key ideas embodied in the system being modeled

It’s not just that the conventional explanation for how the neuron works lacks detail – that by itself wouldn’t be a major problem. The problem is that it completely fails to capture the essence of what real biological neurons actually do.

To put this into some sort of perspective, modelling a neuron as a simple summing junction, which is what the explanation given above is doing, is like trying to model an F-35 fighter jet as a child’s tricycle – on the ludicrous basis that they are both used to transport a single human.

While technically correct, such a comparison amounts to missing the point by a country mile.

Because mathematics is fundamentally limited when it comes to modelling complex systems (see Module 3), we are forced to make simplifying assumptions in order to build practical models – meaning models we can understand and can run on a computer.

But in order for the resulting model to be useful it must capture the key operational principles that define how the real system works.

The task of deciding which parts of the system make it into the model and which parts can be neglected is more art than science – but this is what the world’s leading scientists do all the time.

I think it is safe to say that simplifying the behaviour of a biological neuron and modelling it as a simple ‘summing junction’ is an abject failure in the art of modelling that leads to analyses that are grossly disconnected from what real neurons actually do.

Now there are several important software tools that can be used to model neurons and networks of neurons – for instance NEURON (a short sketch of its Python interface follows the list below).

Most of these tools allow researchers to add non-linear functions to the basic model of a neuron. So it is theoretically possible to model biological neural networks using neurons that are far more sophisticated than the simple summing junction. But there are three major problems with this:

  • Firstly, even these ‘closer to reality’ non-linear neural models are still far too simple because they focus on what is happening at the physical layer (e.g. action potentials) and do not attempt to model how information is represented by those action potentials.
  • Secondly, even putting this to one side, the resulting models are too computationally expensive to execute. Even models of biological neural networks built from very simple neurons can only be run on a supercomputer, and models that attempt to capture the known non-linearities in biological neurons are simply too complex even for today’s generation of supercomputers.
  • Thirdly, we do not know enough about how neurons are connected together to really know how to model a complete functional block in the brain, let alone how it can – in certain situations – change roles to process a different category of information altogether. The physical mechanisms that allow such behaviours clearly exist inside real biological neurons, but we do not know what they are, and so they do not form part of our current models.
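
To give a flavour of what these tools look like in practice, here is a minimal sketch using NEURON’s Python interface: a single-compartment cell with Hodgkin-Huxley channels, driven by a brief current pulse. Note that it models only the physical layer (membrane voltage and action potentials) – exactly the limitation described in the first point above – and the parameter values are illustrative rather than taken from any particular study.

```python
# Minimal sketch using NEURON's Python interface (illustrative values only).
# A single-compartment 'soma' with Hodgkin-Huxley channels, driven by a brief
# current pulse - this captures the physical layer (membrane voltage, spikes)
# and says nothing about how information is represented.
from neuron import h
h.load_file('stdrun.hoc')            # standard run library (provides continuerun)

soma = h.Section(name='soma')
soma.L = soma.diam = 20              # a 20 micrometre compartment
soma.insert('hh')                    # Hodgkin-Huxley sodium/potassium channels

stim = h.IClamp(soma(0.5))           # current clamp at the middle of the soma
stim.delay, stim.dur, stim.amp = 5, 1, 0.5   # ms, ms, nA - enough to trigger a spike

v = h.Vector().record(soma(0.5)._ref_v)      # record membrane voltage
t = h.Vector().record(h._ref_t)              # record time

h.finitialize(-65)                   # start from a resting potential of -65 mV
h.continuerun(40)                    # simulate 40 ms
print(f"peak membrane voltage: {v.max():.1f} mV")  # an HH spike peaks near +40 mV
```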

In this module you’ll be exposed to solid scientific evidence that proves beyond any doubt that a neuron is far, far more than a simple summing junction.

So complex is the neuron’s way of processing information that I have come to believe that it may be using some form of intelligent process.

In other words, human intelligence might be quantized down to the level of individual neurons.

Furthermore, the way the neuron processes information suggests that it is using a physical process or mechanism that is not presently known to science.

You certainly won’t find any serious scientific textbook or research paper floating this idea in such a dramatic way, but we will show in this Module that this is, indeed, the logical inference from the facts of the situation.

I should say that this is my personal viewpoint, and not something that is part of mainstream neuroscience – but when we get into the detail I hope you’ll see that it has a strong logical basis that is hard to rebut.

It came as something of a shock to realise that my prior understanding of basic neuroscience was so out of whack with reality. But far worse was the creeping realisation that the consensus understanding of what neurons do is fundamentally wrong.

This encouraged me to dig deeper into how the brain works. The result was that I found more examples where my prior understanding of brain function was disconnected from reality.

Deeper still – and something that we cover in Module 1 – is the festering dispute between people who believe that there is only matter in the universe (materialists) and others (dualists) who believe that there is something else as well, perhaps a soul or some form of energy field.

The vast majority of credentialed scientists, philosophers and mainstream media outlets take a materialist position, while the dualist camp includes a minority of scientists, fringe media and people who hold spiritual beliefs.

Mainstream science has absolutely decided that reality is materialist and that dualists are deluding themselves if they believe that there is anything else.

But my own position on this has changed, based mainly on what I’ve discovered in researching Mastering Artificial Intelligence.

I’m now very open to the idea that there is something else – subject to that ‘something else’ being framed as a scientific question where the goal is to identify and define a real physical mechanism using the scientific method.

In case you’re wondering, I am an atheist who rejects the idea of Creation as set out in the Bible, and I do not subscribe to any form of organised belief system, religious or otherwise.

The argument that there is no evidence at all for the existence of ‘something else’ is, I think, a weak one, because the full intellectual horsepower of mainstream science has been forbidden from looking for it.

Science made its materialist determination long ago, at a time when our understanding of physics was still very basic. But once that determination had been made nobody dared ask the question again because doing so would constitute scientific heresy and result in the immediate implosion of an otherwise promising career.

Therefore, given that we’ve never properly considered the possibility that reality might be dualist, how can we be so sure that it’s not?

The findings of this Module, combined with those of Module 1, amount to a very convincing scientific argument in support of the dualist perspective, so you should, I think, be open-minded when evaluating the facts and prepared to go where logic and inference lead – which is surely what science is all about?

Why is neuroscience important for people interested in AI?

For one thing, it means that the neurons in the neural networks we are building today in software (which are actually called ‘units’ by AI researchers) are like a child’s tricycle, when the real thing looks more like an F-35.

This means that although they are very powerful, computer-based deep neural networks are intrinsically far less capable than the equivalent neural networks in the human brain, because of a dramatic difference in the functional behaviour of the lowest-level blocks used to build the two kinds of network.

Translation: It may well be technically impossible to replicate human intelligence using our current implementation of artificial neurons and neural networks.

As an aside, you can reach this same conclusion not by looking at the difference between real and artificial neurons, but by looking at the underlying mathematical theory, which is one of the topics covered in Module 3.

Given that my own views on this have done a 180 – after having been exposed to new information – I think it is incumbent on anyone working on AGI, or holding strong opinions about it, to take a close look at the facts and hard science – as well as the logical implications – that are contained in this Module.

Another reason why you should understand the core neuroscience covered in this Module is that it is only by getting into a fair bit of detail that you will clearly see why there is a massive, conceptual difference between human intelligence and artificial intelligence.

Without giving the game away, if human intelligence is real (whatever you infer from the word ‘real’), then AI is something else entirely which bears almost no relationship at all to what is going on inside the brain.

But, once again, I’d reiterate that in spite of the ‘fakery’, some of the most advanced AI systems that have been developed are extremely powerful, even though, as we will show many times during this programme, they have no ‘real’ intelligence at all.

What sort of detail are we going to get into?

The material in this section has been researched and prepared specially for people who are interested in AI.

I am definitely not aiming at people who are studying biology or basic neuroscience as part of a formal, accredited course.

I have tried very hard to get straight to the point, avoid academic niceties and describe the various processes in a way that ‘brings them to life’ – by using analogies, making constant comparisons with how computer systems work, and injecting insights and ideas that you’ll probably not have been exposed to before.

Another very important point is that we will not simply be re-stating ideas and material that you can commonly find in any good textbook on neuroscience or even online.

We’ll be cutting some corners and omitting lots of detail to focus on the aspects of cognitive neuroscience that are highly correlated with how intelligence might work in the brain and what implications this has for the field of artificial intelligence.

No textbook that I know of takes this approach.

Throughout the module there are many vivid analogies between how a neuron works and how microprocessors, transistors and semiconductor memory structures work.

Here are a few examples of how we will convey important ideas using powerful visual analogies:

  • We will build a powerful visual model of a neuron that you’ll never forget – by using a football, garden hosepipe and some bonsai trees…
  • We will build a model of a transistor in the form of a house made from oranges (silicon atoms) and then compare the size of the resulting structure with the size of the neuron’s axon.
  • We will calculate the equivalent number of bits needed to match the memory space that can be addressed by the simplest neuron (the answer is 10,000 bits) – see the back-of-the-envelope sketch after this list.
  • We will then calculate the energy needed to power a cloud storage system that could be used to access all of the information states that can be represented by 10,000 bits.
  • We will use striking images created by Jeff Lichtman to show where the oft-quoted number of ‘500 trillion synaptic connections’ comes from and work out how many connections would be possible if we replaced the grey matter in the brain (about 40% of its volume) with 5nm transistors at the same volumetric density that they are realised on a microprocessor.
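
As a taste of the arithmetic behind the 10,000-bit bullets above, here is a back-of-the-envelope sketch of just how large the state space addressed by 10,000 bits is (the 10,000-bit figure is the one quoted in the list; the energy calculation itself is left for the Module):

```python
# Back-of-the-envelope sketch: how many distinct states can 10,000 bits represent?
# The 10,000-bit figure is the one quoted above; the rest is plain arithmetic.
import math

n_bits = 10_000
decimal_digits = int(n_bits * math.log10(2)) + 1   # number of digits in 2**n_bits

print(f"2^{n_bits} is a number with about {decimal_digits:,} decimal digits")
# -> roughly 3,011 digits, i.e. around 10^3010 distinct states - compare that
#    with the ~10^80 atoms estimated to exist in the observable universe.
```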

The whole point of this Module, and of analogies like those listed above, is to shed light on the field of AI, as opposed to trying to explain classical neuroscience in the way that you can find in hundreds of excellent textbooks or other online resources.

But, that said, we will be getting into some of the nitty-gritty:

  • Detailed explanation of how a neuron actually works, down to the level of sodium-potassium ion pumps, ion channels, dendrites, action potentials and more.
  • Different types of neurons, what neurons actually look like and how densely packed together they are.
  • Advanced research findings that reveal many additional dimensions of functional behaviour of biological neurons.
  • Why the cortical column model so favoured by some AI researchers as a way to explain the behaviour of artificial neural networks is at best a very loose model that poorly reflects reality.
  • Structure and roles of the 20+ regions of the brain and how they may be connected together.
  • How memory is thought to work, and how it might actually work.
  • Structure and operation of the cerebral cortex, cerebellum and visual cortex.