23 May 2017

“We can build these models, but we don’t know how they work”

Fifteen years ago, a technology company was one that shipped PCs, operating systems, or word-processing software. Today, technology companies are at the core of most of the world’s major industries.

Look at Facebook, Uber or Airbnb, all of which leveraged technology to drive fundamental change in the way things are done.

Machine learning is at the start of the same journey. Today it is still an emerging technology, used by a small number of cutting-edge companies to improve their products with more accurate insights or better user experiences.

As an investor, though, my current focus is on finding companies that go beyond this: making machine learning core to the products they build, and using it to address significant problems in areas such as healthcare, agriculture or transportation.

Truly impactful adoption of machine learning will face many hurdles, and one of the biggest issues I see companies dealing with today is that of machine learning as a ‘black box’. “No one really knows how the most advanced algorithms do what they do. That could be a problem”, wrote Will Knight recently.

The crux of the problem is that, in many use cases, understanding how a machine reaches a decision is viewed as fundamental to that machine’s mass adoption.

Autonomous vehicles have been at the forefront of both machine learning innovation and this black box discussion. When an autonomous vehicle crashes, the operator and other parties, such as law enforcement, insurers and the media, work around the clock to understand why.

When I spoke recently with a company working on this, it was clear that explainability is viewed as a core product feature, even at the expense of the superior performance offered by more opaque methods.

Explaining safety-critical decisions is not the only relevant example; the same need applies to identifying biases in an AI tasked with making decisions that impact individuals, such as approving a loan application, or to assigning accountability to an AI making a medical diagnosis.

Because of this, I see the broader adoption of AI/machine learning coming in two waves.

In the first wave, machine learning systems will have to be adapted or limited to provide clearer explanations of their decisions, at the expense of performance. This will give stakeholders a chance to better understand the implications of the technology, and to develop and adapt their processes accordingly.

In the second wave, however far off, we reach a point where we place greater trust in these systems, and agree that the benefit of running them at full power outweighs the risk of not being able to explain why they take particular decisions.
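To make the first wave’s trade-off concrete, here is a minimal sketch in Python. It is not from the original piece; it assumes scikit-learn and a stock toy dataset, and the model choices are illustrative. It pits a transparent model, whose decisions can be read straight off its weights, against a more opaque ensemble that will often score higher.

```python
# Illustrative sketch only: a transparent model vs. an opaque one.
# Dataset and models are assumptions, not anything from the article.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0
)

# Interpretable: each prediction traces back to per-feature weights
# that a regulator, insurer or clinician can inspect.
simple = LogisticRegression(max_iter=5000).fit(X_train, y_train)

# Opaque: hundreds of stacked trees, typically more accurate, but far
# harder to explain to the person affected by a decision.
opaque = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

print("interpretable accuracy:", round(simple.score(X_test, y_test), 3))
print("opaque accuracy:       ", round(opaque.score(X_test, y_test), 3))

# The explanation the simpler model buys: its strongest decision drivers.
weights = simple.coef_[0]
for i in np.argsort(np.abs(weights))[::-1][:5]:
    print(f"{data.feature_names[i]}: {weights[i]:+.3f}")
```

The “first wave” choice is to accept the simpler model’s lower score in exchange for the inspectable weights printed at the end; the “second wave” is deciding the opaque model’s extra accuracy is worth giving that up.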

Joel Dudley at Mount Sinai used deep learning on the hospital’s patient-record database and found that its disease predictions were “just way better” than those of existing tools. As Will Knight reports, Dudley says ruefully: “We can build these models, but we don’t know how they work”. That kind of potential is hard to ignore.

This piece was first published in the Machine Learnings newsletter on May 21st, 2017.
