Artificial Intelligence Researchers Design Fault Detection Method for Safety-Critical Machine Learning


Researchers from MIT, Stanford University, and the University of Pennsylvania have developed a method for estimating the failure rates of safety-critical machine learning systems. Safety-critical machine learning systems make decisions for automated technologies such as self-driving cars, robotic surgery, pacemakers, and autonomous flight systems for helicopters and airplanes. Unlike AI that helps you write an email or recommends a song, failures of safety-critical systems can lead to serious injury or death. Problems with such machine learning systems can also lead to financially costly events, like SpaceX missing its landing pad.

The researchers say their neural bridge sampling method offers regulators, academics, and industry experts a common reference for discussing the risks of deploying complex machine learning systems in safety-critical environments. In a paper entitled “Neural Bridge Sampling for Evaluating Safety-Critical Autonomous Systems,” recently published on arXiv, the authors argue that their approach can satisfy both the public’s right to know that a system has been rigorously tested and an organization’s desire to treat AI models as trade secrets. Indeed, some AI startups and big tech companies refuse to grant access to raw models for testing and verification for fear that such inspections will reveal confidential information.

“They don’t want to tell you what’s inside the black box, so we need to be able to look at these systems from afar without dissecting them,” co-lead author Matthew O’Kelly told VentureBeat in a telephone interview. “And so one of the benefits of the methods that we’re proposing is that basically somebody can send you a scrambled description of that generative model, give you a bunch of distributions, and then return the search space and the scores. They don’t tell you what actually happened during the deployment.”

Safety-critical systems have failure rates so low that they can be difficult to calculate, and the better the systems get, the harder those rates are to estimate, O’Kelly said. To arrive at a predicted failure rate, the researchers use a novel Markov chain Monte Carlo (MCMC) scheme to identify regions of a distribution that are assumed to be near a failure event.
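To see why low failure rates are hard to measure, consider plain Monte Carlo testing: if a system fails roughly once in a million runs, a tester who can only afford ten thousand simulated trials will almost never observe a failure at all. The sketch below illustrates this; the failure probability and trial count are hypothetical, not figures from the paper.

```python
import random

def naive_failure_rate(simulate_failure, n_trials):
    """Plain Monte Carlo: run the system n_trials times and count failures."""
    failures = sum(simulate_failure() for _ in range(n_trials))
    return failures / n_trials

# Hypothetical system that fails with probability 1e-6 per run.
random.seed(0)
estimate = naive_failure_rate(lambda: random.random() < 1e-6, 10_000)
# With only 10,000 trials, the estimate is almost certainly 0.0:
# the rare failure is simply never observed, so naive testing
# says nothing useful about how rare it actually is.
```

This is the gap that importance-sampling and MCMC schemes like the one described above are designed to close: instead of waiting for failures to happen by chance, they steer the sampling toward the failure regions.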

“Then you continue this process and you build what we call this ladder towards the failing regions. You keep going from bad to worse, playing against the Tesla Autopilot algorithm or the pacemaker algorithm to keep pushing it to failures that are getting worse and worse,” said co-lead author Aman Sinha.
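The “ladder” Sinha describes can be understood through a classic relative of the paper’s technique: multilevel splitting, where sampling is conditioned on progressively worse outcomes and the final failure probability is the product of each rung’s conditional probability. The sketch below is not the authors’ neural bridge sampler, just an illustration of the ladder idea on a toy Gaussian problem; all names and parameters are illustrative.

```python
import math
import random

def ladder_estimate(sample, score, logpdf, threshold, n=1000, keep_frac=0.1):
    """Estimate P(score(x) >= threshold) by climbing a ladder of
    intermediate severity levels (multilevel splitting)."""
    xs = [sample() for _ in range(n)]
    log_p = 0.0
    while True:
        xs.sort(key=score, reverse=True)
        k = max(1, int(keep_frac * n))
        level = min(score(xs[k - 1]), threshold)   # next rung of the ladder
        survivors = [x for x in xs if score(x) >= level]
        log_p += math.log(len(survivors) / n)      # conditional probability of this rung
        if level >= threshold:
            return math.exp(log_p)
        # Repopulate with Metropolis moves restricted above the current rung
        xs = []
        while len(xs) < n:
            x = random.choice(survivors)
            prop = x + random.gauss(0.0, 0.5)
            if score(prop) >= level and random.random() < math.exp(
                min(0.0, logpdf(prop) - logpdf(x))
            ):
                x = prop
            xs.append(x)

# Toy problem: P(X >= 4) for X ~ N(0, 1); the true value is about 3.2e-5.
random.seed(1)
est = ladder_estimate(
    sample=lambda: random.gauss(0.0, 1.0),
    score=lambda x: x,
    logpdf=lambda x: -0.5 * x * x,
    threshold=4.0,
)
```

Each pass keeps only the worst-scoring samples, so the sampler never waits for a rare event to occur on its own; it conditions its way down the ladder, rung by rung, toward the failure region.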

The neural bridge sampling method detailed in the paper relies on decades-old statistical techniques, as well as on earlier work published in part by O’Kelly and Sinha that uses a simulation test framework to evaluate a black-box autonomous vehicle system. Beyond the neural bridge contribution itself, the authors argue for continued progress in privacy-conscious technologies such as federated learning and differential privacy, and urge more researchers and people with technical knowledge to join regulatory conversations and help drive policy.

“We would like to see more initiatives based on statistics and science, in terms of regulation and policy around things like autonomous vehicles,” O’Kelly said. “We think this is such a new technology that information will have to flow fairly quickly from the academic community to the companies building these systems to the government bodies that will be responsible for regulating them.”

In other recent safety-critical systems news, autonomous navigation has grown during the COVID-19 pandemic, and last week a team of researchers detailed DuckieNet, a physical model for evaluating autonomous vehicles and robotic systems. Also last week, medical experts presented the first set of standards for reporting the use of artificial intelligence in medical clinical trials.


James G. Williams