What Machines Can't Do (Yet) in Real Work Settings

Across a wide range of real-world use cases, intelligent machines augment human labor far more often than they fully automate it. That pattern is likely to continue for the foreseeable future.

Thomas H. Davenport and Steven M. Miller

Reading time: 14 mins


Nearly 30 years ago, Bob Thomas, then a professor at MIT, published a book entitled “What Machines Can’t Do.” He focused on manufacturing technology and argued that it was not yet ready to take over the factory from humans. Although recent advances in artificial intelligence have raised the bar considerably for what machines can do, there are still many things they cannot yet do, or at least cannot yet do reliably.

AI systems can work well in the research lab or in highly controlled application settings, but they still need human help in the kinds of real-world work settings we researched for our new book, Working with AI: Real Stories of Human-Machine Collaboration. Human workers figured prominently in all 30 of our case studies.

In this article, we use these examples to illustrate the AI-enabled activities that still require human assistance. These are activities in which organizations should continue to invest in human capital, and in which practitioners can expect continued employment for the near future.

Current limitations of AI in the workplace

AI continues to gain capabilities over time, so what machines can and cannot do in real-world work environments is a moving target. Perhaps a reader encountering this article in 2032 will find our list of limitations quaintly outdated. For now, though, it’s important not to expect more from AI than it can deliver. Some of the most important current limitations are described below.

Understanding context. AI does not yet understand the broader context in which a business and the task at hand operate. We saw this problem in several of our case studies. It arises, for example, in the “digital life underwriter” job, in which an AI system assesses underwriting risk from the many data elements in an applicant’s medical records but without understanding their situation-specific context. One commonly prescribed drug, for example, reduces nausea both in cancer patients undergoing chemotherapy and in pregnant women with morning sickness. At present, the machine does not distinguish between these two situations when assessing the life insurance risk associated with that prescription.
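
To make the limitation concrete, here is a minimal sketch of a context-blind risk scorer. It is our illustration, not the underwriting system described above; the drug name, risk weights, and record fields are all hypothetical. The point is structural: the scorer assigns risk from the prescription alone and never consults the diagnosis that explains it.

    # Minimal sketch of a context-blind risk scorer. All names, weights,
    # and record fields are hypothetical illustrations, not the actual
    # underwriting system discussed in the article.

    # Hypothetical lookup: drug -> risk points, with no notion of *why*
    # the drug was prescribed.
    DRUG_RISK_POINTS = {
        "ondansetron": 25,  # anti-nausea drug: chemotherapy? morning sickness?
        "metformin": 10,
    }

    def context_blind_score(medical_record: dict) -> int:
        """Sum risk points for each prescription, ignoring diagnosis context."""
        return sum(DRUG_RISK_POINTS.get(drug, 0)
                   for drug in medical_record["prescriptions"])

    # Two applicants in very different situations...
    chemo_patient = {"prescriptions": ["ondansetron"], "diagnoses": ["breast cancer"]}
    pregnant_applicant = {"prescriptions": ["ondansetron"], "diagnoses": ["pregnancy"]}

    # ...receive identical scores, because the scorer never looks at the
    # diagnoses that explain the prescription.
    assert context_blind_score(chemo_patient) == context_blind_score(pregnant_applicant)
    print(context_blind_score(chemo_patient))      # 25
    print(context_blind_score(pregnant_applicant)) # 25

A context-aware scorer would need, at a minimum, to join each prescription with the diagnosis that explains it, which is exactly the situational knowledge the current system lacks.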

We also saw cases in which AI systems could not grasp the context of relationships between people.
