Why are artificial intelligence researchers teaching computers to recognize irony?

What was your first reaction when you heard about Blake Lemoine, the Google engineer who announced last month that the AI program he was working on had developed consciousness?

If, like me, you’re instinctively suspicious, it might have been something like: “Is this guy serious? Does he sincerely believe what he says? Or is it an elaborate hoax?”

Put the answers to these questions aside. Instead, focus on the questions themselves. Isn’t it true that even to ask them is to presuppose something crucial about Blake Lemoine: namely, that he is conscious? In other words, we can all imagine that Blake Lemoine is being deceptive. And we can do so because we assume there is a difference between his inner convictions – what he sincerely believes – and his outward expressions: what he claims to believe. Isn’t that difference the mark of consciousness? Would we ever assume the same about a computer?

Consciousness: ‘the hard problem’

It is not for nothing that philosophers have taken to calling consciousness “the hard problem”. It is notoriously difficult to define. But for the moment, let’s say that a conscious being is one capable of having a thought and not disclosing it. This means that consciousness would be a prerequisite for irony, or for saying one thing while meaning the opposite. I know you’re being ironic when I realize your words don’t match your thoughts.

That most of us have this ability – and that most of us regularly convey our unspoken meanings in this way – is something that, I think, should surprise us more often than it does. It seems almost distinctively human. Animals can certainly be funny – but not on purpose. What about machines? Can they deceive? Can they keep secrets? Can they be ironic?

AI and irony

It’s a universally recognized truth (among academics at least) that any research question you can think of involving the letters “AI” is already being studied somewhere by an army of obscenely well-resourced computer scientists – often, if not always, funded by the US military.

This is certainly the case with the issue of artificial intelligence and irony, which has recently attracted a significant amount of research interest. Of course, since irony is about saying one thing while meaning the opposite, creating a machine that can detect it, let alone generate it, is no simple task. But if we could create such a machine, it would have a multitude of practical applications, some more sinister than others.

In the age of online reviews, for example, retailers have become very keen on so-called “opinion mining” and “sentiment analysis”, which use AI to map not just the content but also the mood of reviewers’ comments. Knowing whether your product is being praised or becoming the butt of a joke is valuable information. Or consider content moderation on social media. If we want to limit online abuse while protecting free speech, wouldn’t it be useful to know when someone is serious and when they are joking?

Or what if someone tweets that they’ve just joined their local terror cell, or that they’re packing a bomb in their suitcase and heading to the airport? (Never tweet that, by the way.) Imagine if we could instantly determine whether they are serious or just being ironic.

In fact, given the proximity of irony to falsehood, it is not hard to imagine how all the dark machinery of government and corporate surveillance that has grown up around new communication technologies would find the prospect of an irony detector extremely interesting. And that largely explains the growing literature on the subject.

AI, from Clippy to facial recognition

To understand the state of current research on AI and irony, it helps to know a bit more about the history of AI more generally. This history is generally divided into two periods. Until the 1990s, researchers sought to program computers with a set of hand-crafted formal rules for how to behave in predefined situations.

If you used Microsoft Word in the 1990s, you might remember the irritating desktop assistant Clippy, which kept popping up to offer unwanted advice. Since the turn of the century, this model has been replaced by data-driven machine learning and neural networks. Here, huge caches of examples of a given phenomenon are translated into numerical values, on which computers can perform complex mathematical operations to determine patterns that no human could ever discover.

Moreover, the computer does not simply apply a rule. Rather, it learns from experience and develops new operations independently of human intervention. The difference between the two approaches is the difference between Clippy and, say, facial recognition technology.
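To make the contrast concrete, here is a rough sketch in Python – with entirely invented examples, and scikit-learn standing in for the heavy machinery – of the two eras: in the first, a programmer writes the rule by hand; in the second, the behavior is inferred from labeled data.

```python
# A rough sketch of the two approaches, with invented examples.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# The rule-based era: a programmer spells out the behavior by hand.
def clippy_style_assistant(text: str) -> str:
    if text.startswith("Dear"):
        return "It looks like you're writing a letter. Would you like help?"
    return ""

# The machine-learning era: the behavior is inferred from labeled examples
# rather than written as explicit rules.
examples = [
    "Dear Sir or Madam",
    "Meeting agenda for Monday",
    "Dear Professor Smith",
    "Quarterly sales figures",
]
is_letter = [1, 0, 1, 0]  # 1 = looks like a letter, 0 = does not

learned_assistant = make_pipeline(CountVectorizer(), MultinomialNB())
learned_assistant.fit(examples, is_letter)

print(clippy_style_assistant("Dear Ms Jones"))          # follows the rule
print(learned_assistant.predict(["Dear Ms Jones"]))     # pattern inferred from data
```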

The search for sarcasm

To build a neural network with the ability to detect irony, researchers initially focus on what some would consider its simplest form: sarcasm. The researchers start with data extracted from social media. For example, they can collect all tweets tagged #sarcasm or Reddit posts tagged /s, a shortcut Reddit users use to indicate they’re not serious.

The goal is not to teach the computer to recognize the two distinct meanings of a given sarcastic message. In fact, the meaning is irrelevant.

Instead, the computer is tasked with looking for recurring patterns, or what one researcher calls “syntactic fingerprints” – words, phrases, emoticons, punctuation, errors, contexts, etc.

On top of that, the dataset is bolstered with further streams of samples – other posts in the same thread, for example, or other posts from the same account. Each new example is then run through a battery of calculations until we arrive at a single determination: sarcastic or non-sarcastic.

Finally, a bot can be programmed to reply to each original poster and ask whether they were being sarcastic. Any answer can be added to the computer’s growing mountain of experience. The success rate of the most recent sarcasm detectors is an astonishing 90% – higher, I suspect, than many humans could achieve.
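For readers who want a concrete picture, here is a minimal sketch of this kind of pipeline, assuming Python and scikit-learn, with invented posts standing in for a real corpus of #sarcasm-tagged tweets. It reduces each post to numerical features – word patterns rather than meanings – and returns a single binary determination.

```python
# A minimal sketch of a sarcasm classifier of the kind described above.
# The posts and labels are invented stand-ins for a scraped #sarcasm corpus.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

posts = [
    "Oh great, another Monday. Just what I needed #sarcasm",
    "Wow, I love being stuck in traffic for two hours #sarcasm",
    "Congratulations to the team on a well-deserved win!",
    "The new update finally fixed the crash on my phone, thanks!",
]
labels = [1, 1, 0, 0]  # 1 = sarcastic, 0 = non-sarcastic

# The "syntactic fingerprints": word and word-pair frequencies, punctuation
# and all, turned into numerical feature vectors. Meaning plays no role.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), lowercase=True),
    LogisticRegression(),
)
model.fit(posts, labels)

# A new, unlabeled post is reduced to the same features and the model
# returns one determination: sarcastic or not.
print(model.predict(["Fantastic, my flight is delayed again. Best day ever."]))
```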

So assuming AI will continue to advance at the rate that took us from Clippy to facial recognition technology in less than two decades, can tongue-in-cheek androids be far off?

What is irony?

But isn’t there a qualitative difference between sorting out the “syntactic fingerprints” of irony and actually understanding it?

Some would suggest not. If a computer can be taught to behave exactly like a human, it doesn’t matter whether a rich inner world of meaning lies beneath its behavior. But irony is surely a unique case: it rests precisely on the distinction between external behaviors and internal beliefs.

Here, it may be useful to recall that, while computer scientists have only recently become interested in irony, philosophers and literary critics have been thinking about it for a very long time. And perhaps exploring this tradition would shed some old light, so to speak, on a new problem.

Among the many names that could be invoked in this context, two are indispensable: the German Romantic philosopher Friedrich Schlegel and the post-structuralist literary theorist Paul de Man.

For Schlegel, irony does not simply involve a false outer meaning and a true inner meaning. Rather, in irony, two opposite meanings are presented as equally true. And the resulting indeterminacy has devastating implications for logic, including the law of non-contradiction, which holds that a statement cannot be simultaneously true and false.

De Man follows Schlegel on this point and, in a sense, universalizes his insight. He remarks that any effort to define a concept of irony is bound to be infected by the phenomenon it claims to explain.

Indeed, de Man believes that all language is infected with irony, and involves what he calls “permanent parabasis”. Because humans have the power to hide their thoughts from one another, it will always be possible – permanently possible – that they do not mean what they say.

Irony, in other words, is not one type of language among others. It structures – or better, haunts – every use of language and every interaction. And in this sense, it goes beyond the order of proof and calculation. The question is whether the same is true for human beings in general. DM/ML

This story was first published in The Conversation.

Charles Barbour is a senior lecturer in the School of Humanities and Communication Arts at Western Sydney University.
