|Presentation Date||September 20, 2012|
|Topic(s)||Philosophy of Artificial Intelligence|
If you are an SE major, one of the support electives you may have the opportunity to take is a philosophy class called Philosophy of Mind. The class is run much like a seminar in which students debate topics related to the mind. We ask questions such as: Do we have free will? Is there a soul? What is consciousness? And what is intelligence?
Some of these questions, particularly "what is intelligence?", matter to us as engineers because we would like to design computers that can think the way we do. If we can understand what makes us intelligent, then perhaps we can build a computer that is intelligent in the same way.
To address whether designing such a system is possible, the modern-day philosopher John Searle proposed a thought experiment in 1980 called the Chinese Room. It works like this. Imagine a room with a desk that holds a book, a few pens, and some scratch paper. The book is a manual full of Chinese characters, with a key in English describing how to transform one set of characters into another. In the room is a man who understands only English. A note written in Chinese is slipped under the door on a sheet of paper. He follows the instructions in the book to convert one set of Chinese symbols into another. He has no idea what either note says; he just flips through the pages of the book to figure out which symbols he should write down, copies them onto a piece of paper, and slides the resulting note back under the door.
Outside the room, a woman writes a question in Chinese on a slip of paper and slides it under the door. A few minutes later, a note comes back from under the door. She reads it and sees that it is an answer to her question. The answer is not only correct, it's intelligent and insightful. The woman outside the room is sure that whoever, or whatever, answered the question understood it.
So where did the understanding occur? The book didn't understand the question; it's just a book. The man didn't understand the question; he doesn't know any Chinese. No understanding occurred, yet intelligent behavior was produced. Here's the kicker: this scenario is analogous to a computer. The man is like a processor, the scratch paper is like memory, and the book is a program. On this basis, Searle argues that computers, which merely manipulate symbols, are incapable of having cognitive states like understanding.
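The processor/memory/program analogy above can be made concrete with a minimal sketch. The rule book becomes a lookup table, and the man becomes a function that mechanically matches symbols and copies out a reply. The specific entries here are hypothetical, invented purely for illustration; the point is only that the program manipulates symbols it does not understand.

```python
# A sketch of the Chinese Room as a program. The "book" is a lookup table
# mapping input symbol strings to output symbol strings; the "man" is a
# function that matches and copies symbols without understanding them.
# The entries below are hypothetical examples, not part of Searle's original.

RULE_BOOK = {
    "你好吗?": "我很好, 谢谢.",        # "How are you?" -> "I'm fine, thanks."
    "天空是什么颜色?": "蓝色.",        # "What color is the sky?" -> "Blue."
}

def chinese_room(note: str) -> str:
    """Follow the book's instructions: look up the symbols, copy the reply."""
    return RULE_BOOK.get(note, "对不起, 我不明白.")  # fallback: "Sorry, I don't understand."

print(chinese_room("你好吗?"))
```

From the outside, the replies look like understanding; inside, there is only pattern matching, which is exactly the distinction the thought experiment is meant to highlight.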
Because of this debate, we distinguish two types of artificial intelligence: strong and weak. Weak AI is where the programmed computer behaves intelligently but has no cognitive states. This describes any of the simple algorithms we talked about on Tuesday, like the Roomba. Strong AI is where the programmed computer has cognitive states regardless of its behavior. Strong AI is a reflection of human intelligence, where we are intelligent regardless of our behavior: you're intelligent when you're writing software, and you're intelligent when you're lying awake in bed doing nothing.
Do you think strong AI is possible?