Mathematician on AI dystopia fears and human superiority over machines

As computer technology advances rapidly, there has been much discussion about the potential threats posed by artificial intelligence. But the author of a book exploring the nature of machine versus human intelligence says some of these debates about the future of AI have been overstated and may distract from more pressing issues.

Junaid Mubeen, a research mathematician turned educator and the author of Mathematical Intelligence: A Story of Human Superiority Over Machines, which will be released on November 1, told Newsweek that one of the reasons he wrote the book is that AI has recently received significant publicity.

“Some of it may be justified because there are exciting developments to come, but a lot of it, I think, is overstated,” Mubeen said. “And I think there’s a real risk that we’ll rush to judgement, exaggerate the capabilities of AI in the process and undermine our own human intelligence.

“It was Arthur C. Clarke who said, ‘Any sufficiently advanced technology is indistinguishable from magic,’ and we see that now,” he said.

Mubeen cited the example of the Google engineer who made headlines earlier this year after he said a chatbot developed by the company called LaMDA (Language Model for Dialogue Applications) had acquired sentience.

At the heart of this type of machine learning-based artificial intelligence, which is used in a wide variety of applications, from medicine and agriculture to astronomy and robotics, is pattern recognition. Machine learning is a subfield of artificial intelligence that gives computers the ability to learn without being explicitly programmed.
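As a rough illustration of that idea (a sketch of my own, not an example from Mubeen's book), "learning without explicit programming" can be as simple as a nearest-neighbor classifier: the program is never given rules for telling categories apart; it just memorizes labeled examples and predicts by pattern similarity.

```python
# Minimal sketch of pattern recognition: a 1-nearest-neighbor
# classifier memorizes labeled examples and predicts the label
# of the closest one, with no category rules ever written down.

def distance(a, b):
    # Euclidean distance between two feature tuples
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def nearest_neighbor(train, point):
    # train: list of (features, label) pairs
    return min(train, key=lambda ex: distance(ex[0], point))[1]

# Toy data: two clusters the program was never told how to separate.
train = [((1.0, 1.0), "A"), ((1.2, 0.8), "A"),
         ((5.0, 5.0), "B"), ((4.8, 5.2), "B")]

print(nearest_neighbor(train, (1.1, 0.9)))  # prints "A"
print(nearest_neighbor(train, (5.1, 4.9)))  # prints "B"
```

Real systems such as LaMDA are vastly larger, but the principle is the same: the behavior comes from patterns in the data, not from hand-written rules.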

“That’s just one aspect of intelligence,” Mubeen said. “They have the appearance of intelligence – when you engage with a chatbot, you can feel like you’re conversing with a human. But then we tend to go overboard and attribute these other human qualities to them, like sentience and consciousness. Or we say: Because they are so capable, it doesn’t matter if they lack emotion or have no consciousness.”

He continued, “I think because machine learning has been so useful over the last 10 years, we kind of took the leap and assumed we were already at the point of what people call artificial general intelligence, which is the ability to navigate the world and solve a whole host of problems. But right now all the examples of AI we have are very narrow in their focus, which is fine until we go too far and suggest that they are ready to replace humans.”

The machine learning approach is very data-intensive, requiring large amounts of data to be fed into a given system. This raises a number of questions about how that data is collected and how reliable it is.

“If you haven’t made the effort to separate truth from falsity, to filter that data, there’s a real risk that you’re releasing discriminatory technologies into the world, and we’ve seen many examples of that,” Mubeen said. “We see examples of technologies that spew a lot of hate, that spread misinformation. And so I think it’s good to master them and remember that pattern recognition is at the core.

“Some patterns are really meaningful, but some are just misleading. And humans are very easily fooled by false patterns,” he said.

Mubeen pointed to figures from Silicon Valley and beyond who are discussing potential threats that are perhaps years or decades away while not paying the same attention to the potential damage of artificial intelligence technologies today. Some examples of this potential harm include the use of AI in mass surveillance programs or how certain algorithms reinforce social biases based on the data they were trained on.

“My concern about these speculations about the future of AI and doomsday scenarios is that they distract from current issues,” Mubeen said. “Today’s issues may not be as glamorous, they may not lend themselves to Hollywood scripts – you can’t make a Terminator-type movie about these bias issues that exist in our data today, but they are issues that affect people now. And I think disproportionate attention is focused on future threats.”

He continued: “Now I’m not saying we should completely ignore them – I think we should spend some time thinking about the consequences of how AI might develop, but we have to start by grappling with the ethical problems of today.”

Intelligence is multifaceted, and depending on your definition, it’s possible to argue that in some respects some technologies have already achieved this. Computers can already beat the best chess players in the world, and they are increasingly capable of outperforming human experts on individual tasks. A properly trained AI system, for example, has been shown to perform better than humans at reading chest X-rays for signs of tuberculosis.

But according to Mubeen, the leap from there to general intelligence has yet to be realized, and as a result, humans still have the edge in many ways.

“There are aspects of intelligence that, for now at least, are uniquely human and that we would all do well to embrace as we now usher in this new era of intelligent machines,” Mubeen said. “We have these amazing systems of thought that we’ve developed over thousands of years, and one of them is math. That’s the lens through which I looked at this whole notion of intelligence.”

Many people might be surprised by the idea that humans have an advantage over computers in the area of mathematics, given the ability of machines to crunch numbers or perform complex calculations. But Mubeen said much of that perception boiled down to how math is taught in schools and often represented in society at large – an area supposedly dominated by memorizing formulas, performing calculations and executing algorithms.

“All things that computers do very well, and humans often struggle with those skills,” he said. “It turns out that calculation doesn’t come very naturally to us – some people have a knack for it, but a lot don’t.”

But in his book, Mubeen outlines several aspects of human mathematical intelligence that go beyond our very rigid views of the subject and show where humans have an advantage.

A stock illustration shows a robot arm holding a human skull. Is the hype around artificial intelligence overdone?

“For example, computers can be relied upon to crunch numbers, to perform calculations with speed and precision. But what they don’t have is knowledge of the world. They don’t know if those calculations are significant, whether an answer is plausible, or whether it even makes sense to perform that calculation in a particular context,” he said.

Another example is questioning. Computers increasingly have the ability to answer questions. But it’s humans who are driven by an innate curiosity, according to Mubeen.

“One of the claims I make in the book is that human curiosity and our ability to ask interesting questions will always exceed the ability of computers to answer them,” he said. “In many ways, this could become the defining trait of mathematicians – that mathematics will evolve into a subject made up of questions that cannot be reduced to calculation or the things that computers can do.

“We are already seeing examples of this,” he continued. “A lot of problems have arisen from simply questioning the limits of what computers can do. In fact, even the story of how the modern computer was invented has its roots in mathematical research – people about a hundred years ago thought about how to develop algorithms to solve abstract problems, and this led Alan Turing and others to formally define what we mean by computer and algorithm.”

Another area is imagination. Because computers have become very good at games like chess and Go, some have speculated that mathematics might be outsourced to computers because the field rests on rule-bound logic, similar to a game.

“But what I argue is that, contrary to popular opinion, [mathematics] is not about following rules; it’s more about tinkering with those rules and creating alternate realities,” he said.

A notable example that many children learn in school is that you cannot take the square root of a negative number. But Mubeen said that if you go deep enough into the subject, you discover that it is possible to do so, producing the so-called imaginary numbers.

“Until the 17th century it was forbidden, it was an accepted convention that you couldn’t take the square root of negative one,” he said. “Then a handful of mathematicians explored the idea and asked, what’s the worst that can happen? Sometimes you do this and you get ridiculous results. But other times you end up with really powerful constructs. Imaginary numbers now underpin our understanding of electronics and quantum mechanics, and are incredibly useful in a wide range of applications.”
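This rule-breaking is now so mainstream that it is baked into ordinary programming languages. As a small illustration (mine, not Mubeen's), Python's standard `cmath` module happily takes the square root of −1, yielding the imaginary unit i, written `1j`:

```python
import cmath

# In real arithmetic, sqrt(-1) is undefined and math.sqrt(-1) raises
# an error. Extending the rules, as 17th-century mathematicians did,
# gives the imaginary unit i, which Python writes as 1j.
i = cmath.sqrt(-1)
print(i)        # prints 1j
print(i ** 2)   # prints (-1+0j): squaring i recovers -1

# The same "ridiculous" construct underpins real physics and
# engineering, e.g. Euler's identity e^(i*pi) = -1:
print(cmath.exp(1j * cmath.pi))  # approximately -1 (tiny rounding error)
```

The floating-point result of the last line is not exactly −1, which is a rounding artifact, not a flaw in the mathematics.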

He continued: “I would certainly suggest that the way AI works today is very much about its programmatic instructions – it’s given rules and then it has to follow them. And it can do some very creative things within that set of rules – it produces works of art, it wins games of Go, etc. But what I’m talking about is another form of creativity. It’s not just about combining rules, it’s about breaking them. And that’s something we’ve always done; it’s really in our human nature.”

Despite the differences between human intelligence and artificial intelligence, mathematicians have always found ways to work with technology. There is a rich history, for example, of humans inventing tools to extend our math skills – everything from the abacus to the slide rule and now the modern computer.

As for the future, this spirit of human-machine collaboration should continue, and we could all benefit from it, according to Mubeen.

“As with any collaboration, it forces you to think about what you bring to the table, what skills you have that complement your partner, which in this case is the computer,” he said. “When we view computers as collaborators, we’re less likely to go overboard and take on too much.

“I fear that the adversarial framing of human versus machine leads to a very binary notion that one is better than the other, when we can think of them in more collaborative terms. I think that leaves more room to explore how the two can work more closely together.”

James G. Williams