Google’s real challengers could be independent AI researchers

Artificial intelligence (AI) is not always smart. It has amplified outrage on social media and struggled to flag hate speech. It has referred to engineers as male and nurses as female when translating between languages. It has failed to recognize people with darker skin. Machine-learning systems are accumulating ever greater influence over human life, and while they work well most of the time, developers are constantly fixing errors in a game of whack-a-mole. That makes AI’s future impact unpredictable. At best, it will likely continue to harm at least some people, since it is often not trained properly; at worst, it will cause society-wide damage because its intended use goes unvetted – think surveillance systems that use facial recognition and pattern matching.

Many say we need independent research into AI, including the systems built by Big Tech itself. Good news on that front comes from Timnit Gebru, a former ethical-AI researcher at Alphabet Inc’s Google. Gebru is launching Distributed AI Research (DAIR), which will study these systems “free from the pervasive influence of Big Tech” and explore ways to root out harms that often run deep.

Good luck to her, because it will be an uphill battle. Big Tech conducts its own AI research with far more money, effectively sucking the oxygen out of the room for everyone else. In 2019, for example, Microsoft invested $1 billion in OpenAI, a research company co-founded by Elon Musk, to fuel its development of a massive language-prediction system called GPT-3. A Harvard study of AI ethics published last week noted that the investment went to a project led by just 150 people, marking “one of the largest capital investments ever made exclusively by a small group”.

Independent research groups like DAIR will be lucky to get even a fraction of that kind of money. Gebru has secured funding from the Ford, MacArthur, Kapor Center, Rockefeller and Open Society foundations, enough to hire five researchers over the next year. But it’s telling that her first researcher is based in South Africa, not in Silicon Valley, where most of the top researchers work for tech companies.

Google’s DeepMind AI unit, for example, has cornered much of the world’s top AI research talent, with salaries in the range of $500,000 a year, according to one researcher. This person said they were offered three times their salary to work at DeepMind. They refused, but many others accept the higher pay. The promise of ample funding is too strong a lure, as many academics and independent researchers reach an age where they have families to support. In academia, the influence of Big Tech has become glaring. A recent study by scientists from multiple universities showed that Big Tech funding and affiliations in academic machine-learning research tripled to more than 70% in the decade to 2019. Its growing presence “closely resembles the strategies used by Big Tobacco”, the study’s authors noted.

Researchers who want to leave Big Tech also find it almost impossible to get out. The founders of Google’s DeepMind sought for years to negotiate greater independence from Alphabet to protect their AI research from corporate interests, but those plans were ultimately scrapped by Google in 2021. Several of OpenAI’s top safety researchers also left earlier this year to start their own San Francisco-based company, Anthropic Inc, but they turned to venture capitalists for funding. Among the backers: Facebook co-founder Dustin Moskovitz and former Google CEO Eric Schmidt. The firm has raised $124 million to date, according to PitchBook, which tracks venture capital investments. “[Venture capital investors] make their money from the tech hype,” says Meredith Whittaker, a former Google researcher who helped lead employee protests against Google’s work with the military before leaving the company in 2019. “Their interests are aligned with the technology.”

Whittaker, who says she wouldn’t be comfortable with venture capital funding, co-founded another independent AI research group at New York University called the AI Now Institute. Other similar groups that rely primarily on grants for funding include the Algorithmic Justice League, Data for Black Lives, and Data and Society.

At least Gebru is not alone. And these groups, though modestly resourced and vastly outgunned, have, through a steady stream of published studies, raised awareness of once-obscure issues like bias in algorithms. This has helped inform new legislation like the upcoming EU AI law, which will ban certain AI systems and require others to be more closely monitored. There’s no single hero in this, says Whittaker. But, she adds, “we changed the conversation.”

Parmy Olson is a Bloomberg Opinion columnist covering technology

