Finding the value in delivering AI services

At the AI Expo in London in April, one thing became apparent: the market for artificial intelligence (AI) is still poorly defined. Many businesses understand that advances in computing power, data analysis methods and machine learning are having a growing impact on many areas of commercial activity, but they are not sure whether – or more importantly how – this applies to what they do.

I had originally thought that AI solutions for business applications were likely to involve custom-built AI engines, making use of the data collection, management and analysis tools that the software industry already provides across many sectors. But I have changed my mind. What seems to be happening at this stage in the maturing of AI is rather the reverse. A number of relatively low-cost AI services exist that specialists use to build solutions: the specialists’ expertise lies not in AI so much as in understanding the data requirements of the application and working within the limitations of existing AI engines. Their real expertise is in the application domain, not AI theory.

Here’s an example: Parallel AI is a solution builder working in multiple verticals, including gaming/betting, healthcare, aerospace/defence and fashion retailing. It uses AI services from the likes of Microsoft Azure, Google and Amazon Web Services (AWS). Its view is that these services can deliver machine learning models that get a company to perhaps 90% prediction accuracy, but companies need to do better than that for almost any application where AI is being considered – particularly where real-time decision making is needed and traditional rules-based approaches are not applicable because the rules change too fast.

Parallel’s approach is to use multiple models, and to understand the constraints and limitations of the AI engines it uses. Those constraints include programming language and data size limits. Parallel points out that much in AI depends on how much resource the user is prepared to throw at a problem: there are many trade-offs.
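To make the multiple-models idea concrete, here is a minimal sketch of one common way such models can be combined – a weighted averaging ensemble. The model objects, weights and scikit-learn-style interface are illustrative assumptions; this is not Parallel’s actual pipeline, just the general principle of blending several models to push accuracy beyond what any single one delivers.

```python
# Illustrative sketch only: combine class-probability predictions from
# several already-fitted models by weighted averaging. The models and
# weights are hypothetical; Parallel's real approach is not published.
import numpy as np

def ensemble_predict(models, X, weights=None):
    """Return class predictions from a weighted average of model outputs.

    models  : list of fitted objects exposing predict_proba(X) (scikit-learn style)
    X       : 2-D feature array, shape (n_samples, n_features)
    weights : optional per-model weights, e.g. tuned on a validation set
    """
    probs = np.stack([m.predict_proba(X) for m in models])  # (n_models, n_samples, n_classes)
    avg = np.average(probs, axis=0, weights=weights)         # weighted mean over models
    return avg.argmax(axis=1)                                 # predicted class per sample
```

The weights are one place where the trade-offs Parallel mentions show up: tuning them, or adding more models, costs extra compute and validation data in exchange for the last few points of accuracy.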

Another company mentioning the 90% figure for off-the-shelf AI models is Trik, a visual inspection and building information modelling company. According to Trik, the AI engines from the big providers like Google, AWS and Azure are similar, and are becoming commodity services. Trik says that getting better prediction accuracy – and more value – from them requires better model training data, which means thinking carefully about the data labels used. As an example, labelling an image artefact as a crack is good, but combining this with precise location data that identifies it as a crack on a structural building component is far more useful to the model as it learns.
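To illustrate what such enriched labels might look like, here is a small sketch of an annotation record. The field names and values are assumptions made for illustration, not Trik’s actual schema; the point is simply how much more context an enriched label carries than a bare class tag.

```python
# Illustrative sketch of a richer training label for visual inspection.
# Field names and values are hypothetical, not Trik's real data model.
from dataclasses import dataclass, field

@dataclass
class DefectLabel:
    image_id: str
    defect_type: str                 # e.g. "crack"
    bbox: tuple                      # pixel bounding box (x_min, y_min, x_max, y_max)
    component: str = ""              # e.g. "structural beam" vs "decorative cladding"
    location: tuple = field(default_factory=tuple)  # e.g. (floor, grid reference)

# A bare label says only "this image contains a crack"...
basic = DefectLabel("img_0421.jpg", "crack", (120, 80, 310, 140))

# ...while an enriched label ties the crack to a structural component and a
# precise position on the building, giving the model far more to learn from.
enriched = DefectLabel(
    "img_0421.jpg", "crack", (120, 80, 310, 140),
    component="structural beam",
    location=("level 3", "grid C-7"),
)
```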

The scope for differentiation among most of the early commercial pioneers of AI solutions for business seems to have less to do with writing algorithms and AI theory, and more to do with systems for gathering and classifying training data. In my analysis of how value is created from AI, Parallel AI and Trik both fit Model 3 in Figure 1. The business need is for more, and better, data – not better AI theories and approaches, which remain the preserve of university departments and giant IT companies with huge resources.
