AI Explained In Plain English
Mastering Artificial Intelligence is an upskilling programme that will take your understanding of AI up to the level of a ‘strategic expert’: someone who commands the respect of peers and can help conceptualize, frame and direct AI projects.
Have fun learning as you listen – knowing you’ve got a soft copy of the important bits.
Sign up below to access the Podcasts and eBooks, and receive emails when new material is published:
Why Mastering Artificial Intelligence?
Mastering Artificial Intelligence is the result of my personal quest to deeply understand AI, which I see as the defining technology of our age:
Human advancement has always relied on the invention of tools. Millennia ago, the great tool inventions included fire, the wheel and the irrigation pump. Today, they include the transistor, fiber optics, software and artificial intelligence.
And as the economy moves inexorably from one focused on the manufacture and distribution of physical goods to one increasingly built on the harvesting and analysis of information, we will need new tools to extract value from that information.
Artificial intelligence is the most promising of these information-domain tools, and while it is still at a very early stage, AI holds tremendous promise for the future: the concept of replicating and commoditising certain aspects of human intellect is utterly compelling and will change the world in a profound way.
Before starting Mastering Artificial Intelligence, I was using a range of online sources in an attempt to identify and understand the core ideas that underpin the AI field.
Examples of questions that interested me were:
- How state-of-the-art AI systems actually work
- What sorts of real-world problems can be uniquely solved with AI
- The underlying scientific foundation of AI
- Whether there is a limit to how powerful AI systems can become
I was also interested in a number of metaphysical questions, such as what is intelligence and is AI real intelligence?
My problem was that I was having trouble finding resources that were light on jargon, math and code and heavy on insight.
I knew that the answer to every question I had was out there somewhere on the web – and available for free – but finding those answers was increasingly feeling like looking for a needle in a haystack.
It became clear that there was no single place online that provided the sort of understanding I wanted, and I think that remains the case today:
- Media and trade outlets: These do a good job of covering the latest developments in AI, but they cannot be expected to explain those developments at a detailed level. I often read articles in elite publications like Wired, The Economist, New Scientist or MIT Technology Review – which are all excellent – but feel at the end that I now know more, but understand less.
- Courses: There are plenty of excellent, formal education courses focused on the technical aspects of AI, but they understandably ignore the non-technical aspects and do not dig into the theoretical foundations of the topic, which are simply taken as fact. And so these courses don’t completely work for me either.
- Celebrity commentators: Certain people seem adept at throwing out very bold statements about AI, which are lapped up by the media, but I usually find that such statements come with a distinct lack of supporting evidence – at least none that I find satisfactory.
- Books: There are a number of good books on AI but I’ve not so far found one that takes a truly holistic perspective and whose mission is to convey to the reader a deep understanding of AI as a whole.
Maybe you’ve come to the same conclusion?
Why is AI so hard to explain clearly?
I think that the main reason is that AI spans so many disciplines: computer science, mathematics, physics, electronics, information theory, philosophy, psychology, neuroscience, business and economics.
You really need to understand what each of these fields has to say about AI before a complete picture begins to emerge.
But most people just don’t have the time to invest the thousands of hours needed to cover this amount of intellectual real estate.
Another reason is that many of the contentious questions in AI do not actually have clear answers.
This is very unusual in engineering: normally, practicing engineers are dealing with ideas and quantities that are precisely defined using the language of mathematics and whose existence and behaviour can be experimentally verified. But more than that, they know that their field has a solid scientific foundation.
But AI isn’t like this: we’ll see later that the behaviour of deep neural networks is not understood mathematically.
This means that we have not yet discovered the mathematical structure that connects the semantic content of an image (e.g. the set of ideas that define a ‘gorilla’) with the condition of the artificial neurons that make up the network (e.g. the set of bias and weight values that are programmatically determined when the network is trained).
We can build a neural network that can reliably recognize a ‘gorilla’ but we do not understand why it works. The best we can do is offer a hand-waving explanation of what might be going on, but we really don’t understand it.
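To make this concrete, here is a minimal sketch (in Python, with made-up weight and bias values – not from any real trained model) of a single artificial neuron. The point is that a trained network is ultimately just collections of numbers like these, and nothing in the numbers themselves tells us why, in aggregate, they add up to ‘gorilla’:

```python
import math

def neuron(inputs, weights, bias):
    # Weighted sum of the inputs, plus a bias, squashed through a sigmoid.
    # After training, `weights` and `bias` are just opaque numbers: nothing
    # about them reveals the concept the network has learned to recognize.
    activation = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-activation))

# Hypothetical values of the kind training might produce:
print(neuron([1.0, 2.0], weights=[0.5, -0.3], bias=0.1))  # → 0.5
```

A deep network is millions (or billions) of such numbers stacked in layers – which is exactly why a ‘why it works’ explanation is so elusive.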
As an analogy, Faraday’s Law of Induction and Newton’s Laws Of Motion are intellectual masterpieces that defined whole fields of physics and engineering.
But we just don’t have anything like this for AI – yet.
One of several underlying reasons for this is that “intelligence” is not actually a scientific idea. Another is that, so far, nobody knows how to accurately codify abstract ideas like ‘analyse’ or ‘think’ in a way that properly reflects how such behaviors are realised in the brain. We will discuss these and related points in great detail in Module 3.
So what does this mean?
For one thing, it means that the question as to whether or not we will be able to build a computational structure that is “intelligent” is a question for philosophers, sci-fi writers and the mainstream media – but not serious science.
The bottom line here is that if you want to really get to grips with AI you need to develop your own personal view of the field – because there is currently no overall ‘general theory’ of AI.
I’m sure that someday we will have a General Theory of Intelligence – but it will take a massive intellectual leap and require a mind in the class of Albert Einstein, Isaac Newton or Michael Faraday to provide AI with a proper scientific foundation.
I very much look forward to that day and hope it happens in my lifetime!
But the best we can do right now is to:
- Think broadly, but not too broadly – while being prepared to go really deep when and where you need to.
- Think clearly and objectively – while being prepared to explore where the thinking leads, even if that means a place you’d rather not go.
But this is not how thinking in AI is developing: most people who work in AI conceptualise the subject using a narrow, well-conditioned mindset that has been defined by years or decades of intellectual pursuit in one discipline.
This is true of journalists, bloggers, AI researchers, software developers, economists and business people who are involved with AI.
For example, credentialed AI researchers will have very specific, deep knowledge about one aspect of AI but they might have minimal or no awareness of the latest developments in other areas of AI – which might seem to an outsider to be closely related.
As another example, most engineers who are hard at work building practical neural networks do not know how real biological neurons work (be sure to check out Module 2 which gets into the neuroscience aspects of AI).
Another example would be where people who work in computer vision might not have thought carefully about the connection between the software they are creating and what this means for how images are represented in the brain.
Some really brilliant people do have that special sort of measured, 360-degree understanding, but they tend to keep a rather low profile and tend not to have ready access to large media platforms. If you want a few examples, take a look at:
- Carver Mead: a really brilliant engineer and conceptual physicist who thinks Einstein was right after all about quantum uncertainty
- Geoffrey Hinton: a co-inventor of backpropagation who is happy to admit that while backprop is core to every neural network in existence, it is insufficient to take AI to the next level
- Judea Pearl: a highly respected computer scientist who believes AI is stuck and that there is a limit to how far we can go with probabilistic learning
- Michael Jordan: a professor of computer science with 30 years’ experience in AI who has a mature and reasoned position on where AI is today
You’ll see that these people are thinking in a direction that is somewhat different to, or even orthogonal to, the prevailing consensus. When trying to understand a complex field it is as important to look at the established theory as it is to probe the edges, where you will find interesting people who are not seduced by buzz and hype.
I have a sense that there is too much vertically-focused thinking going on in AI right now, and far too little of the horizontal, or ‘joined-up’ thinking that the field’s breadth demands.
The purpose of Mastering Artificial Intelligence is to deliver the particular type of insights that can only come from thinking within and across multiple disciplines.
AI needs more ‘joined up’ thinking
The great Richard Feynman once said that an idea that cannot be explained in plain English is not properly understood.
I suspect that if he were alive today Feynman would take a dim view of how well understood AI is: you do not have to look for long to clearly see that too many AI practitioners cannot provide plain English answers to simple questions.
Too many word salads.
Of course, sometimes we just don’t know the answer and that’s OK: it’s better just to come clean and say you don’t know or aren’t sure, than it is to try to convey the impression you understand something that you don’t.
Worse, the field of AI seems to be resolving itself into rival camps with the ‘AI will kill us all’ people and the ‘rainbows and lollipops’ people digging themselves in on either side of an intellectual chasm.
This sort of polarization always ends badly, with high-profile people like Elon Musk and Mark Zuckerberg resorting to taunting each other on Twitter.
I guess the same thing happened in the 1920s and ’30s when Niels Bohr and Albert Einstein faced off over the idea of quantum uncertainty (Bohr won that debate, by the way, but I wonder whether that was more because Einstein found his sheer force of personality overwhelming).
But the media loves a public slanging match between two high-profile figures – and actively works to whip things up. All in the name of attracting readers.
The feeling I have is that in spite of the truly remarkable experimental results we’re seeing with the latest AI systems like IBM Project Debater and Google / DeepMind AlphaGo, there is still a lot of intellectual wheel-spinning going on with many – but by no means all – people who work in AI not truly understanding their own field.
Now I realise those words will wind some people up, and I’m sorry for that.
But I’d ask you to try to think objectively as you may at least find an element of truth in what I’m saying?
And I hope my words are not taken as disrespecting academics who are working on AI.
By necessity, mainstream academia is divided up into a multitude of very narrow fields of study and the reason for that is that the deeper you go the more specialised you have to become. It’s just the laws of information theory applied to academic research.
This is totally OK: we need our academics and researchers to be focused!
But, again, have a think about it: are you certain that your own research project would not benefit if you had a broader perspective on AI? And are you sure that your assumptions about aspects of AI that fall outside of your specific domain of expertise are correct?
Have you checked to find out, or are you trusting others?
There’s another problem here, which occurs when someone who has a narrow perspective on AI, but strong opinions about it – gains access to a large media platform that is on the look-out for controversial one-liners which can be used to scare people, or make them angry in order to attract traffic – which is sadly how much of the ad-supported web works these days.
The result is that ordinary people – people like us – then form opinions about hard questions in AI on the basis of one-liners thrown out by people who themselves don’t fully understand their subject – or, if they do, have done a bad job of explaining it.
So not only is AI a very complex field that requires a multi-disciplinary approach, I think it is accurate to say that many major media outlets are actively creating and distributing a lot of information noise which serves only to further complicate the topic and polarise opinion.
So I decided to teach myself – and you can join me, if you like
Around the middle of 2017, I came to the realization that the only way I was going to obtain the sort of understanding I sought – that is, ‘deep understanding’ – would be for me to figure things out for myself.
I am a firm believer in Richard Feynman’s idea that even the most complex ideas can be explained in plain English without using technical jargon.
Given that my particular sort of multi-disciplinary experience seemed pretty well-suited to tackling a multi-faceted area like AI, and I seem to be pretty good at writing and explaining complex ideas to others, I thought it should be possible for me to understand the difficult ideas in AI and then explain those in plain English.
In order to provide some focus I decided to structure my own personal learning journey and summarise it in the form of a series of Podcasts that others can listen to when they have time, which has led to Mastering Artificial Intelligence.
Our focus will be the axiomatic foundation of AI, not the math built on top
AI is a very technical field, and it is clearly impossible to understand how the low-level detail works without the requisite academic training: you really do need to be comfortable with mathematical ideas and have a good feel for electronics, computer science and software.
Here’s an example: you can explain the idea of matrix multiplication to someone in English, but if you want to actually multiply two matrices together and get an actual result then you have to do that using the language of mathematics.
But you do not need an M.Sc. in computer science or a Ph.D. in machine learning to understand the idea of a matrix, and it is this deep understanding that Mastering Artificial Intelligence is all about.
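As a quick illustration of that distinction (a sketch in plain Python, nothing specific to any AI library): the idea in English is “each entry of the result is a row of the first matrix dotted with a column of the second” – and the code is just that sentence written precisely:

```python
def matmul(A, B):
    # Each result entry is the dot product of a row of A with a column of B.
    # zip(*B) iterates over the columns of B.
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

print(matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # → [[19, 22], [43, 50]]
```

Once you hold the idea, the notation is just bookkeeping – which is exactly the level at which this programme operates.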
Mastering Artificial Intelligence is focused on the ideas that AI is based on, rather than the mathematical language that connects those ideas together.
Make no mistake, we will be getting into the technical nitty-gritty – for example, how deep neural networks, convolutional neural networks (CNNs) and neuromorphic chips work – but we’ll be doing that in a way that focuses on the underlying ideas, not the math and code.
Remember that the math is not why something works. Math is simply a language humans have invented to describe certain phenomena.
In the interest of objectivity, I should say that some scientists, like Max Tegmark, believe that reality is ultimately just math which means that the parts of our reality that we perceive to be physical are really just an illusion.
He might be right about this, but if he is I would say that the math that human minds have discovered is just a small and incomplete part of a far wider and richer math that our minds may never be able to fully comprehend. So I’d still say that “human math” is not the “why”, but simply the “what.”
Just as the ideas embodied in a novel are conveyed using the English language, so the ideas embodied in a neural network are conveyed using the language of mathematics.
What I’ve found is that if you spend the time getting a firm grip on the ideas that underpin AI systems, you can quickly see ways to improve on the state of the art: in spite of its power, AI technology is still very crude, there are many ways to make it even more powerful, and we’re going to get into some of that in this programme.
In summary, if you seek a deeper understanding of AI but don’t want AI explained superficially in terms of math, code or specialist jargon then you could be in the right place.
I am confident that Mastering Artificial Intelligence will equip you with the deep insights needed to contribute positively to any AI-related project or conversation – without feeling out of your depth or lacking the confidence to challenge experts.