In his 1950 article "Computing Machinery and Intelligence," Alan Turing considered the question "can machines think?" He proposed replacing it with a test, famously called the Turing test, or -- in Turing's words -- the imitation game, in which a subject poses written questions to two hidden respondents: a computer and a human being. The machine passes the test if its answers fool the subject into thinking it is the human being most of the time. In light of the quality of answers that we get from generative AI programs based on Large Language Models, such as ChatGPT, can we say that the Turing test has been passed?
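To make the protocol concrete, here is a minimal sketch in Python (the language used in the course) of one round of the imitation game. Everything in it is a hypothetical stand-in of my own devising: the "machine" and the "human" are canned responders, and the judge guesses at random, where a real judge would scrutinize the transcripts.

```python
import random

# Toy sketch of Turing's imitation game. A judge poses written questions to
# two hidden respondents, one human and one machine, and must guess which is
# which. The machine "passes" a round if the judge's guess is wrong.

def machine_answer(question: str) -> str:
    # A deliberately crude rule-based responder standing in for a real program.
    canned = {
        "what is 2 + 2?": "4, I believe.",
        "do you enjoy poetry?": "Very much, especially sonnets.",
    }
    return canned.get(question.lower(), "That's an interesting question.")

def human_answer(question: str) -> str:
    # Canned replies standing in for a live human participant.
    canned = {
        "what is 2 + 2?": "Four.",
        "do you enjoy poetry?": "Sometimes, when the mood strikes.",
    }
    return canned.get(question.lower(), "Hmm, let me think about that.")

def play_round(questions, judge) -> bool:
    # Randomly hide the two respondents behind the labels A and B.
    respondents = {"A": machine_answer, "B": human_answer}
    if random.random() < 0.5:
        respondents = {"A": human_answer, "B": machine_answer}
    # Collect each respondent's written answers to every question.
    transcript = {
        label: [(q, answer(q)) for q in questions]
        for label, answer in respondents.items()
    }
    guess = judge(transcript)  # The judge names the label they think is human.
    truly_human = "A" if respondents["A"] is human_answer else "B"
    return guess != truly_human  # True if the machine fooled the judge.

def naive_judge(transcript) -> str:
    # Placeholder judge that guesses at random; a real judge would read
    # the transcript and compare the two respondents' answers.
    return random.choice(["A", "B"])

if __name__ == "__main__":
    questions = ["What is 2 + 2?", "Do you enjoy poetry?"]
    fooled = sum(play_round(questions, naive_judge) for _ in range(1000))
    print(f"Machine fooled the judge in {fooled} of 1000 rounds.")
```

With a random judge the machine is, of course, taken for the human about half the time; the interesting question is what happens when the judge actually reads the answers, which is the question posed above.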
If one believes the answer is yes, or close enough to yes to warrant believing the moment is at hand, then Turing's reformulation of the "can machines think?" question is inadequate, because it's safe to say that few people believe ChatGPT can think. In fact, it isn't even trying. That is to say, its creators aren't. As AI pioneer Geoffrey Hinton and others have pointed out, such programs are not implementing human thinking at all. And there is certainly no hint of a claim that consciousness is close to being reproduced by such programs, only imitated. Is that enough? Many others and I are also interested in what I will call the rebound question: How does this sharpen or muddy our notion of what we humans are? (cf. B.F. Skinner's question "Can humans think?")
This brings to mind a trope that has had some vogue in the philosophy of mind, namely the figure of the zombie. Zombies are creatures "exactly like us in all physical respects but without conscious experiences" (Chalmers). The subsequent literature on zombies is vast. One of many objections is that zombies cannot exist (Dennett). But, in the admittedly restricted, though still vast, domain of expression in a natural language, aren't ChatGPT and its progeny zombies? They cannot think or feel, have no concept of meaning, and do not understand what they are doing, or anything else, yet to an extent not remotely approached just a few years ago, they speak like us, more or less. Zombies bring into sharper focus the question: what's missing? What exactly would have to be added to a zombie to make it conscious? This is the so-called hard problem of consciousness, transposed to AI: how to define and reproduce consciousness.
Well, maybe there's another question coming round the bend. Will the two groups merge at some point? Will ever more refined imitations be built, possibly embodied, incorporating simulations (just simulations) of feelings, morality, pain, and pleasure, and doing much more? Will it turn out, not just in theory but in practice, that this is all we are? Are we Zombies 5.0?
These and other topics, including an introduction to programming with Python, the social and human impact of AI, relevant issues in neuroscience, and the philosophy of mind, will be the subject of the course. Grading will be based on programs, essays, presentations, and three tests.