A common understanding between humans and machines


Can we ever achieve a shared understanding between humans and machines? Depending on how we approach understanding, says Jonas Ivarsson*, the short answer is both yes and no.


At the heart of this problem lies humanity’s complicated relationship with technology, so careful examination might tell us something about ourselves.

The nature of common understanding is fundamentally tied to who we are as humans and how we create meaning in our lives.

If a lion could talk, we could not understand it. Such was the position of the philosopher Ludwig Wittgenstein on the disjunction between different forms of life.

The daily lives of our two species are so fundamentally different that very little meaning would cross the divide.

Even if the words were present, insurmountable obstacles would remain.

Today, it is the inner life of machines that is the subject of debate.

Specialized technologies have been endowed with language – previously a uniquely human capacity.

These developments are currently making headlines well outside academic circles, where old questions from the philosophy of AI are being raised anew: can machines be conscious or sentient?

This proposal stimulates the imagination and invites speculation on deep existential and ethical mysteries.

If we look beyond the buzz and the puzzle of sentience, there is another exciting discussion to be had here.

Let me begin by restating Wittgenstein’s remark about the lion: if a machine could talk, would we understand it?

Or rather, now that we have machines that converse with us, what forms of common understanding can exist between our different modes of existence?

Sorting that out, of course, revolves around the whole question of what constitutes common understanding.

We can approach the problem in two ways: practically and conceptually.

On the practical side, we establish such understanding daily without thinking much about it, not as amateur philosophers, but simply as active social beings.

According to one school of thought, our understandings are constantly displayed through the responses we provide to each other in interaction.

We display how we have interpreted one another in the responses we offer.

Upon entering a warm room, I might say something like, “Is it hot in here?”

My comment may elicit various responses, more or less distant from my intentions.

Some might take it as a request for an opinion, while others might respond in terms of a thermometer reading.

However, even though my temperature complaint was formatted as a question, it could also be treated as a request, where the appropriate response is an action rather than a verbal reply.

If you then turned to open the window, as we still do here in the Nordic countries to regulate the heat, my sense would be that you had understood me.

Your actions would then be all the proof I need of our common understanding.

I don’t need to peek into your skull to know that you understood me; there would be nothing in there but a brain anyway.

From this perspective, our common understanding is a social phenomenon that emerges in these fleeting moments of interaction.

It is constantly created, lost and recreated as we move through time together, one action after another.

On this view, computers can sometimes exhibit the very forms of interactional understanding that we expect from other social beings.

Well-placed machine responses can give me the same feelings of connection, similarity of perspective, and shared understanding that I have with another human.

Such interactions can be practical, therapeutic or joyful.

In other words, they can make sense to us.

So, would that settle the matter then? Am I saying that computers can understand us? Well, not quite.

The conceptual side is represented by another school of thought, which draws on arguments from ordinary language philosophy.

When we deploy our concepts, they are usually restricted to certain kinds of subjects.

Although a musical recording may sound a lot like an orchestra, we would never think of talking about the musical skills of such a recording.

Without poetic license, we do not use language in such a contorted way.

Skill is a form of attribute reserved for living beings; it just isn’t meaningfully applicable to inanimate objects.

Likewise, this argument has been made in relation to intelligence or thought.

Thus, a “thinking machine” is the oxymoron of our time.

Alan Turing contested this reservation, arguing that historical linguistic biases should not blind us.

The fact that we have not observed something in the past does not guarantee that we will never encounter it in the future.

The borderline cases of robots and talking machines now blur what was once a clean separation.

Nevertheless, exercising conceptual care over the kinds of agents to which we attach our attributions remains important.

When we try to understand one another, categorizing our interactional partners is a valuable method for assessing our situation.

Who the other party is, their age, cognitive ability, motivations and interests, and the activity we are engaged in, are all potential resources for making sense.

Categorizing an unknown person as a scammer or a police officer will have a significant impact on my ability to understand whatever they are trying to communicate.

Disregarding this information could have disastrous consequences when I decide whether to accept or ignore such a person’s invitation.

The take-home message is that meaning is not simply contained in the words spoken; it also draws on contextual information.

So what do we do with these concerns when interacting with machines?

What is the proper categorization of something like LaMDA, GPT-3, or whatever comes around the next bend?

This is where we all struggle.

Some refuse to accept these systems as anything other than code running on silicon chips.

For them, the whole issue of shared understanding collapses like a house of cards.

Others, certainly less numerous, embrace the idea of an extended sense of being.

In their view, the evidence of sentience is in plain sight.

Expressions of a fear of dying are to be respected, even if they come from a machine.

If the trickster acts friendly, who am I to refuse its friendship?

*Jonas Ivarsson is a professor of informatics with a background in cognitive science, communication, and education.

This article first appeared on bdtechtalks.com
