Beijing-funded AI researchers outperform Google and OpenAI with new language processing model Machine Learning Times

Originally published in South China Morning Post, June 2, 2021.
  • The WuDao 2.0 natural language processing model has 1.75 trillion parameters, surpassing the 1.6 trillion in a similar model Google unveiled in January.
  • China has poured money into AI to try to close the gap with the United States, which retains an advantage due to its dominance in semiconductors.

A government-funded artificial intelligence (AI) research institute in Beijing on Monday unveiled the world’s most sophisticated natural language processing (NLP) model, surpassing those of Google and OpenAI, as China seeks to increase its technological competitiveness on the world stage.

WuDao 2.0 is a pre-trained AI model that uses 1.75 trillion parameters to simulate conversational speech, write poems, understand images, and even generate recipes. The project was led by the non-profit Beijing Academy of Artificial Intelligence (BAAI) research institute and developed with more than 100 scientists from multiple organizations.

Parameters are variables learned by machine learning models. As the model trains, the parameters are refined so the algorithm produces better results over time. Once a model is trained on a specific data set, such as samples of human speech, the result can then be applied to solving similar problems.

In general, the more parameters a model contains, the more sophisticated it is. However, creating a more complex model requires time, money and breakthroughs in research.
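The idea of refining parameters during training can be illustrated with a toy sketch. This is not WuDao's method, just a minimal hypothetical example: a model with a single parameter `w`, fitted to data by gradient descent so that the parameter improves with each step.

```python
# Toy illustration of "refining a parameter": fit y = w * x to samples
# of the true relationship y = 2x using gradient descent.
# All names and values here are illustrative, not from the article.

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # samples of y = 2x

w = 0.0    # the single model parameter, initially untrained
lr = 0.05  # learning rate: how far each refinement step moves w

for step in range(200):
    # Gradient of mean squared error with respect to w
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad  # refine the parameter toward a better fit

print(round(w, 2))  # converges close to the true slope, 2.0
```

Models like WuDao 2.0 apply the same basic principle, but to trillions of parameters at once, which is why training them demands so much time, money, and research.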

To continue reading this article, click here.

James G. Williams