World Philosophy Day 2019 Musings: Why Artificial Intelligence Needs Philosophy

November 19, 2019 | Angjelin


World Philosophy Day, introduced by UNESCO in 2002, is recognized annually on the third Thursday of November, which this year falls on November 21. World Philosophy Day celebrates the continued importance and role of philosophy in human endeavor. Philosophy is the wellspring of our intellectual culture and curiosity, and the source from which the special sciences emerged. More than ever, philosophy is needed to help us connect the accelerating pieces of our culture in the context of our shared humanity. So if you're feeling particularly philosophical this week, consider attending a philosophy meetup about town. Or, if you're more interested in listening, there's a slew of events on topics ranging from ethics to issues in machine learning happening across the city as well.


Because popular understanding of philosophy is often fraught with misconceptions, I'd like to take this opportunity to showcase why recent scientific endeavor in artificial intelligence (AI) requires the guiding hand of philosophy when it comes to understanding consciousness and the human mind. I'd also like to share some great philosophical resources for getting started on your own exploration of philosophy.

Artificial Intelligence and Philosophy

The prospect of genuine artificial intelligence has circulated in the popular consciousness since at least the advent of mainframe computers in the 1960s, but the feverish enthusiasm of the 1970s and 1980s soon petered out when the scientific community came face-to-face with a problem that has preoccupied philosophers for centuries: the problem of consciousness. 

That consciousness should fall within the purview of scientific inquiry at all had long been taboo. This was the result of the dominance of methodological behaviourism in psychology, which rejected the systematic study of mental phenomena because they proved so resistant to experimentation. This view reigned supreme until around the 1960s, when the limitations of modeling human behaviour on reinforcement and operant conditioning could no longer be ignored. Philosophers had gone through their own behaviourist phase and articulated positions about consciousness that tried to reconcile the vocabulary of mental phenomena, such as pains and feelings, with materialist science. Many of these views had a reductionist agenda in the sense that they argued for the identity of mental states, such as thoughts and desires, with brain states described in neuroscientific language, say, C-fibers firing.

The Concept of Mind by Gilbert Ryle

A seminal work in behaviourist philosophy, Gilbert Ryle's The Concept of Mind (1949) kick-started modern philosophy of mind by arguing that talk of mental phenomena amounts to category mistakes, a misuse of language. This text is only available at Toronto Reference Library.

 

One of the positive outcomes of the cognitive revolution was that it recognized the limitations of behaviourist methodology, namely the exclusive study of observable human behaviour, and launched a multidisciplinary study of the mind, known as cognitive science, that enlisted the insights of philosophers, psychologists, linguists and neuroscientists. What became clear from this multidisciplinary approach was that reductionist theories and models were sorely inadequate, in the sense that they tried to deny the reality of inner phenomena rather than explain them.

Parallel to the cognitive revolution, the field of artificial intelligence was born at Dartmouth College in the 1950s, spurred by Alan Turing's theory of computation and the rise of digital computers in the same decade. Since digital computers model mathematical computation and symbol manipulation, the idea arose that symbol manipulation was the essence of the mind. This can be expressed by the metaphor that the mind is to the brain as software is to hardware in a computer. Enthusiasm for this identification of computation with human thought was so great that Herbert Simon, one of the founders of AI, proclaimed that machines would be able to do whatever humans can within a matter of decades. This, alas, did not come to pass. What went wrong?
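To make the symbol-manipulation picture concrete, here is a minimal, illustrative sketch in Python of the kind of rule-based symbol processing early AI systems were built around. The facts and rules are invented for the example and are not drawn from any particular system:

```python
# A toy forward-chaining inference engine: symbols in, symbols out.
# The facts and rules below are made up purely for illustration.

facts = {"socrates_is_human"}

# Each rule licenses deriving the conclusion symbol from the premise symbol.
rules = [
    ("socrates_is_human", "socrates_is_mortal"),
    ("socrates_is_mortal", "socrates_will_die"),
]

# Keep applying rules until no new symbols can be derived.
derived_new = True
while derived_new:
    derived_new = False
    for premise, conclusion in rules:
        if premise in facts and conclusion not in facts:
            facts.add(conclusion)
            derived_new = True

print(sorted(facts))
# ['socrates_is_human', 'socrates_is_mortal', 'socrates_will_die']
```

On the classical view, thinking just is this kind of rule-governed shuffling of symbols, only on a vastly larger scale.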

Rethinking Cognitive Computation: Turing and the Science of Mind by Andrew Wells

For a defense of the computational theory of mind, read Andrew Wells' Rethinking Cognitive Computation (2006).

 

This is where philosophy comes in. One aspect that many philosophers recognized computers could not emulate was the subjective, first-person experience that characterizes human consciousness. Some philosophers and scientists find this problem so recalcitrant to explanation that they have dubbed it the hard problem of consciousness. The philosopher who coined the term, David Chalmers, developed his views in The Conscious Mind: In Search of a Fundamental Theory (1996, available at Toronto Reference Library), in which he argues that empirical science has made little to no progress in explaining how the brain gives rise to inner, subjective experience, in part because such experiences cannot be broken into components like other phenomena, but are somehow fundamental.

The Conscious Mind: In Search of a Fundamental Theory by David Chalmers

 

Other philosophers, such as Daniel Dennett, disagree. These philosophers think that much of what we experience as a unified, internal, subjective theatre with the self at the helm is rather an elaborate illusion generated by massive information-processing systems in the brain that take sensory input and yield complex behaviour as output. The parts that we experience, namely thoughts, desires, beliefs, pains and pleasures, are the tip of the iceberg of an ocean of unconscious processes. Dennett therefore denies the reality of first-person, subjective experience, also known as qualia, and instead claims that the stream of awareness is the result of a vast bundle of parallel and almost independent processes that create the illusion of a unified field. The secret to cracking the code of the mind is not in overcoming a hurdle that somehow qualitatively separates the mind from other phenomena, but rather lies in letting empirical science run its course. Dennett explained his views in his seminal book Consciousness Explained (1993, available at Toronto Reference Library), which was both lauded for its efforts to naturalize consciousness and criticized for evading the problem of first-person subjective experience altogether.

Consciousness Explained by Daniel Dennett

Philosophers, therefore, fall into two camps: those who think that subjective experience is reducible to, and therefore identical with, brain states, and those who think that subjective experience, while causally generated by the brain, cannot be reduced to it or explained away. The varieties of positions are, in reality, much too nuanced and complex to get into here, but in essence the distinction can be summarized as follows: while nearly all philosophers accept that there is only one physical reality, some deny property identity, namely the claim that mental properties, such as being in pain, are identical to physical properties, such as neuronal firings. A book that summarizes these views in plain language is John Searle's Mind: A Brief Introduction (2004).

 

Mind: A Brief Introduction by John Searle

Despite philosophical disagreement as to whether experience constitutes something ineffable that we cannot assimilate into materialist science, today philosophers and scientists are almost unanimous in their rejection of the computational theory of the mind. Minds are not computers: we are, on the whole, bad at math and reasoning; symbol manipulation captures only part of what the mind does; and the mind does not process information serially the way a computer does. Daniel Kahneman's acclaimed book Thinking, Fast and Slow (2011) popularizes a wealth of psychological evidence indicating that mental processes divide into two parallel systems that sit uneasily alongside each other: one fast, heuristic and domain-specific, managed by the evolutionarily older parts of the brain, such as the amygdala and brain stem, and the other slow, deliberate and domain-general, managed by the evolutionarily younger parts, such as the cerebral cortex. The sheer variety of cognitive biases covered in the book attests to the limitations of our mental capacities, and to how separate systems in the brain evolved to cope with environmental problems by simplifying informational input.

Thinking, Fast and Slow by Daniel Kahneman (2011)

 

Today all the rage in AI has shifted to the burgeoning field of artificial neural networks: not serial processing modelled on symbolic computation, but networks of simple units, implemented in computers, that better model the mind. Unlike serial processing, neural networks model information processing on biological nervous systems, in which neurons transmit electro-chemical signals through a network. The neuronal equivalents in the model are nodes connected by weighted links. Each node receives real-numbered inputs, computes a weighted sum of them, and passes the result through a non-linear activation function; the signal propagates onward if it meets a threshold, and learning adjusts the weights with each iteration of training. If you're interested in artificial neural networks (ANNs) and are new to the subject, check out Machine Learning for Absolute Beginners on Safari Tech and Business Books Online. Safari Tech has a number of great resources on the topic for both beginners and more advanced learners.
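As a rough illustration of the forward pass just described, here is a minimal sketch in Python (using NumPy); the layer size, weights and inputs are invented for the example rather than taken from any particular network:

```python
import numpy as np

# A single layer of artificial "neurons": each node takes real-valued
# inputs, weights them, sums them, and applies a non-linear activation.
rng = np.random.default_rng(0)

x = rng.normal(size=3)        # input signal: 3 real-valued features
W = rng.normal(size=(4, 3))   # connection weights for 4 nodes
b = np.zeros(4)               # per-node bias (threshold) terms

def sigmoid(z):
    """Non-linear activation squashing the weighted sum into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

activations = sigmoid(W @ x + b)   # non-linear function of the weighted sums

# A node "fires" (propagates its signal) only if its activation clears
# a threshold; during training (not shown), the weights are adjusted
# with each iteration so the network's outputs improve.
fires = activations > 0.5
print(activations, fires)
```

Stack enough of these layers and adjust the weights by gradient descent, and you get the networks behind today's machine learning systems.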

 

Artificial Intelligence: A Modern Approach

I recommend Artificial Intelligence: A Modern Approach (2010) by Stuart J. Russell and Peter Norvig.

Artificial Intelligence: A Very Short Introduction

If artificial intelligence and its attendant philosophical quandaries are entirely new to you, Artificial Intelligence: A Very Short Introduction (2018) by Margaret Boden is great at condensing and simplifying the issues.

 

Perhaps artificial neural networks are the answer. After all, they are the closest model we have to our understanding of how the brain works. The only problem is that our present understanding of biological neural networks is feeble at best. A biological signal is not a number, nor is it computed through a function. And above all, we do not know how connections among billions of neurons give rise to the cognitive systems we have identified, such as memory and attention (though we have very good guesses), let alone to the most elusive phenomena of all: subjective experiences like pains, thoughts, and the self. The question to be answered remains: how does a recurrent network architecture implement the mind? Until this question is answered, AI has no hope of rivaling human general intelligence. Sometimes referred to as AGI (artificial general intelligence) or strong AI, this is the type of intelligence that can emulate all human capabilities, including creativity and self-propagation. Perhaps it will turn out that consciousness is not the distinguishing feature of our intelligence, but some causally inert byproduct (an epiphenomenon) of our biological hardware. If that turns out to be the case, our fascination with consciousness will have been little more than an anthropomorphic obsession with no broader significance than quenching our native curiosities.

 

Further Reading

For more up-to-date views about the nature of consciousness and the progress of artificial intelligence, consider the books below. 

Structuring Mind: The Nature of Attention and How It Shapes Consciousness

Structuring Mind: The Nature of Attention and How It Shapes Consciousness (2017) by Sebastian Watzl

Life 3.0: Being Human in the Age of Artificial Intelligence by Max Tegmark

Life 3.0: Being Human in the Age of Artificial Intelligence (2017) by Max Tegmark

 
