Sense of Humor? I’ve been working with AI for five years, but I have yet to see a system that can explain why a joke is funny. I’m not talking about pre-programmed answers like Siri’s: ask “What’s the meaning of life?” and you’ll get a funny answer like “A good life is about wearing clean and dry clothes,” which was put there by a human programmer; the system has no idea why it could be funny. Here, we ask the following three questions:
- Can a robot make up jokes?
- Is it possible for a robot to understand jokes?
- Can jokes amuse a robot?
Notice that the three questions are in order of increasing difficulty. That may seem strange to many of us, because we usually assume it’s harder to create something than to understand it. In this case, however, the opposite is true, as we’ll show.
1. Creating Jokes
Do you remember Milos Forman’s movie “Amadeus”? The envious Salieri knew that Mozart was far better than he was, and he hated God for it: he could not create music as great as Mozart’s, yet he was smart enough to understand it. Understanding came easier than creating.
With jokes, though, the bar is lower: if you have a large enough corpus of jokes to draw from, you can generate variations that aren’t really “creations” at all. Variant generation is also how the “deep fakes” popular right now are made.
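To make “variant generation” concrete, here is a toy sketch in Python: a corpus of jokes reduced to templates with slots, filled in at random. Every template and word list here is invented for this illustration; real generative systems are vastly more sophisticated, but the point stands either way.

```python
import random

# Toy "joke corpus" reduced to templates with slots.
# All templates and fillers are invented for this sketch.
TEMPLATES = [
    "Why did the {actor} cross the {place}? To get to the other {goal}.",
    "I told my {actor} a joke about {topic}. It flew right past them.",
]
FILLERS = {
    "actor": ["chicken", "robot", "programmer"],
    "place": ["road", "network", "office"],
    "goal": ["side", "server", "deadline"],
    "topic": ["airplanes", "recursion", "deadlines"],
}

def make_variant(rng: random.Random) -> str:
    """Pick a template and fill its slots at random.

    The output is a *variant* of existing material, not a creation:
    the system has no idea whether the result is funny.
    """
    template = rng.choice(TEMPLATES)
    return template.format(**{slot: rng.choice(words)
                              for slot, words in FILLERS.items()})

print(make_variant(random.Random(0)))
```

The machine shuffles surface forms; nothing in it models why any output would amuse anyone.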
Generative systems like GPT-3 can produce text, images, or videos that strike us humans as sensible or even mysterious, but trust me: the machine has no idea why the piece it produced could be funny. So let’s move up a level.
2. Understanding Jokes
A “sense of humor” isn’t just a mysterious spark in our minds; it also presupposes that everyone shares the background knowledge the joke rests on. For example, consider the joke: “What’s the difference between a vacuum cleaner and a lawyer on a motorcycle? The bag of dirt is on the inside of the vacuum cleaner.” Here we’re supposed to know the stereotype that lawyers are bad people who would betray their own mother to win a case (not all of them, of course, just the successful ones).
We also need general knowledge about the world: for example, that we ride a motorcycle from the outside but drive a car from the inside, et cetera.
So getting to the punchline, that the lawyer is the motorcycle’s dirt bag, takes a chain of assumptions and quick inference, which we humans run effortlessly. People can do this so fast because they have “common sense”: a notoriously hard-to-define concept, and one so difficult for AI that whole conferences and journal issues are devoted to it.
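As an illustration, the background assumptions behind the vacuum-cleaner joke can be written out as explicit facts and chained together. The facts and the chaining below are hand-written for this sketch; real common-sense reasoning is nothing like this simple, which is precisely the problem.

```python
# Each entry is one piece of background knowledge a listener must
# already hold for the punchline to land. All hand-written here.
FACTS = {
    "a vacuum cleaner": "keeps its dirt bag on the inside",
    "a motorcycle rider": "sits on the outside of the vehicle",
    "the lawyer stereotype": "successful lawyers are 'dirtbags'",
}

def explain_joke() -> list:
    """Chain the facts into the inference the joke asks us to make."""
    steps = [f"We know: {k} -> {v}" for k, v in FACTS.items()]
    steps.append("Therefore: the lawyer is the motorcycle's dirt bag, "
                 "and it sits on the outside.")
    return steps

for step in explain_joke():
    print(step)
```

A human performs this whole chain in a fraction of a second, without noticing; spelling it out is exactly what current systems cannot do on their own.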
In his book “Common Sense, the Turing Test, and the Quest for Real AI,” Hector Levesque discusses why the standard Turing test is no longer a good way to demonstrate that a machine is intelligent. In the Turing test, a hidden player (who could be a human or a machine) has a text conversation with a human; a panel of judges then evaluates the transcript. A machine “passes” the test when most judges believe it was a human player.
Levesque argues that the Turing test is not a good measure of machine intelligence, and I agree with him. You probably already know that in 2014 a machine reportedly passed it. That year, a chatbot persona named “Eugene Goostman” used many clever tricks to convince most judges that “he” was human: for example, “he” posed as a Ukrainian boy, which excused his poor English, and so on. But all those tricks were devised by the human programmers in advance, so the machine itself wasn’t “smart.”
Levesque suggests that a better way to test machine intelligence would be to ask it hard questions, some of which require common sense. Explaining jokes could be one such question. I emailed him to ask him to elaborate on his points.
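The kind of common-sense question Levesque has in mind is well illustrated by the famous trophy/suitcase puzzle from the Winograd Schema Challenge, which he helped propose. The sentence and answer below are that classic schema; the little grading harness around it is just my sketch.

```python
# A Winograd-schema-style question: answering it takes common sense
# about sizes and containment, not text statistics. This is the
# classic trophy/suitcase schema.
SCHEMA = {
    "sentence": "The trophy doesn't fit in the suitcase "
                "because it is too big.",
    "question": "What is too big?",
    "candidates": ["the trophy", "the suitcase"],
    # Swap "big" for "small" and the answer flips to "the suitcase":
    # no surface cue in the sentence gives the answer away.
    "answer": "the trophy",
}

def grade(candidate: str) -> bool:
    """Check a proposed answer against the schema's answer key."""
    return candidate == SCHEMA["answer"]

print(grade("the trophy"))
```

A system that merely matches word patterns does no better than chance here, which is exactly why Levesque prefers such questions to open-ended chat.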
I dare any smart deep-learning AI researcher to show me a robot that can explain jokes. I could sit here for years and never see one.
3. Enjoying Jokes
Even if computers can one day figure out what a joke is about, the process might remain a dry computation, nothing enjoyable (for the robot, I mean).
Do robots get to enjoy anything at all? We’re entering dangerous territory here. Some supposedly smart, well-educated people have lost their heads over it and started talking nonsense about consciousness, mind uploading, singularities, and stranger ideas still.
I don’t want to end up like them. Even after more than 10 years of working on AI research, I don’t know how machines could be made to “feel.” But maybe I’m just not creative enough.
Nothing is more complex than the things we don’t yet know. Would it mean anything if a robot laughed on hearing a joke? Would it really be amused, or merely simulating amusement? That question sounds more like the movie Blade Runner than actual AI research. We’d better stop here.
I hope this post was fun for you. You’re not a robot, are you?