Machine learning has great capabilities. It can turn data into knowledge, find patterns and trends in data that humans cannot detect, improve itself over time, and save time in data processing and analysis. It also has great limitations.
Many machine learning models fail to be commercialized, for reasons that range from poor data quality and mismatched business objectives to overestimation of what a model can do. A widening gap between large-company 'haves' and smaller-company 'have nots' is also limiting the reach of machine learning (ML).
While most ML currently is supervised and based on pattern matching to support predictions, there is still plenty that can go wrong. Ramesh Raskar, an associate professor at MIT Media Lab, says that in the case of supervised ML, the elements that must come together to make a model successful include the right data, data quality, model selection, skillsets, and how the model is trained and used.
Raskar says, "Machine learning models typically fail because of a lack of resources and overestimation of what a model can do."
Unsupervised machine learning that figures things out for itself and learns from the real world can work better, Raskar says, although it is still in early development. He cites the case of training a self-driving car to avoid accidents. "This is an open problem, as there is a lack of data to train a model," he says. "Supervised learning based on video games could be used to see how a car would behave in rain or fog, but are we ready to put this in a real car on the street?"
Fiona Browne, head of software development and ML at Belfast, Northern Ireland-based software developer Datactics, and a lecturer at University of Ulster, Northern Ireland, concurs with Raskar on many of the limitations of ML models, but starts at the level of perception and objectives. According to Browne, "Companies are under pressure to show how they are using ML and building it into systems, but they are not necessarily understanding how it fits with business objectives. There's a lot of hype around AI, and as MIT puts it, it's math, not magic."
Browne also notes nervousness about deploying ML into production, because if it goes wrong, it could cause huge reputational damage, particularly in high-stakes applications in sectors such as health. "There needs to be training around perception, and business objectives must be at the start of project design and come before technology, along with measurable metrics of what completion and success of a project will look like," says Browne. "Be conservative about deploying the technology, and only use it where it works well." An example here is Datactics' use of ML to identify outliers and errors in datasets.
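To make that example concrete, the sketch below shows one generic way such outlier flagging can be done, using scikit-learn's IsolationForest on synthetic data. It illustrates the technique in general; it is not a description of Datactics' actual implementation.

```python
# Illustrative sketch only: flagging suspect rows in a tabular dataset with
# scikit-learn's IsolationForest. The data is synthetic; this is a generic
# technique, not Datactics' implementation.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
data = rng.normal(loc=50.0, scale=5.0, size=(1000, 3))  # mostly clean rows
data[::100] *= 10  # inject a few implausible rows to act as "errors"

detector = IsolationForest(contamination=0.01, random_state=0)
labels = detector.fit_predict(data)           # -1 = outlier, 1 = inlier
outlier_rows = np.where(labels == -1)[0]
print(f"Flagged {len(outlier_rows)} suspect rows for human review")
```

In practice the flagged rows would go to a data steward for review rather than being dropped automatically, which fits Browne's advice to deploy the technology only where it works well.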
Turning to the issues of using the right data and quality of data for ML models, Browne says, "Everyone wants to do the model work, but not the data work. Data is undervalued; there needs to be a change in mindset here." With an 80/20 rule in ML (80% of data scientists' time is spent on data preparation, and the remaining 20% on modelling and ML), Browne promotes an embedded, systematic approach to data and data quality. "Adding more data to a model improves performance, compared to tweaking the model."
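Browne's point about systematic data work can be illustrated with a short, hypothetical profiling pass; the pandas checks below stand in for the kind of embedded, repeatable data-quality auditing she describes, and the column names and values are made up for the example.

```python
# Hypothetical sketch of a systematic data-quality audit with pandas:
# the sort of "data work" that typically fills the 80% of project time.
import pandas as pd

def audit(df: pd.DataFrame) -> pd.DataFrame:
    """Return simple per-column quality metrics: missingness and cardinality."""
    return pd.DataFrame({
        "missing_pct": df.isna().mean() * 100,
        "n_unique": df.nunique(),
    })

df = pd.DataFrame({
    "age": [34, None, 29, 29, 140],             # a missing value and an implausible one
    "country": ["UK", "UK", "uk", "US", "US"],  # inconsistent casing
})
print(audit(df))
print("duplicate rows:", df.duplicated().sum())
```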
As in all areas of AI, risk assessment, accountability, explainability, and bias are challenges to successful ML. Browne refers to academic research showing that Google's speech recognition software is 70% more likely to accurately recognize male speech than the speech of women or children, which she says is the result of decisions taken on datasets used for the model.
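Gaps of this kind surface through disaggregated evaluation, in which a model's accuracy is computed separately for each demographic group rather than averaged away. The sketch below uses entirely made-up results purely to show the mechanics.

```python
# Hypothetical sketch of disaggregated evaluation: accuracy per group.
# The group labels and outcomes here are invented for illustration.
import pandas as pd

results = pd.DataFrame({
    "group":   ["male", "male", "female", "female", "child", "child"],
    "correct": [1, 1, 1, 0, 0, 0],   # 1 = utterance recognized correctly
})
per_group = results.groupby("group")["correct"].mean()
print(per_group)                                  # accuracy by group
print("max gap:", per_group.max() - per_group.min())
```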
Says Jon Campbell, director of library science at Moody's Analytics, "When management says it needs AI, ML, and data scientists, there will be a bias towards neural networks. When all you have is a hammer, everything looks like a nail." Similarly, Campbell adds, "ML needs context. If a data scientist views a problem in their own box without an understanding of the impact of inputs and outputs, there is a high chance of failure. Models need peer review and challenge, as everyone has biases." From Moody's perspective, he notes, "ML and AI need transparency and a human in the loop."
While these and other challenges to successful ML are on the table with a view to resolution, Raskar identifies a less-obvious limitation in the development of ML. "Big companies have plenty of data to learn from; they have an advantage over smaller enterprises that don't have the same volumes and quality of data and can't build high-quality models. There is a widening gap in AI between the 'haves' and 'have nots' right now."
A potential solution is split learning, an MIT Media Lab initiative that shares data resources to allow participating entities to train machine learning models without sharing any raw data. Says Raskar, "ML models are only as good as the training data they have; split learning plays a role here."
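In split learning, a data owner runs the first layers of a model locally and shares only the intermediate "smashed" activations, so raw data never leaves its source. The PyTorch sketch below is a minimal, single-client illustration of that core idea; real split-learning systems add networking, privacy safeguards, and multi-party coordination omitted here.

```python
# Minimal sketch of split learning's core idea, assuming PyTorch: the client
# runs the first layers locally and shares only cut-layer activations, never
# raw data. Single client, single training step, for illustration only.
import torch
import torch.nn as nn

client_net = nn.Sequential(nn.Linear(16, 8), nn.ReLU())  # stays with the data owner
server_net = nn.Sequential(nn.Linear(8, 1))              # stays with the server
opt = torch.optim.SGD(
    list(client_net.parameters()) + list(server_net.parameters()), lr=0.1
)

x, y = torch.randn(32, 16), torch.randn(32, 1)  # raw data: never leaves the client
smashed = client_net(x)            # "smashed data" is what crosses the boundary
pred = server_net(smashed)         # server completes the forward pass
loss = nn.functional.mse_loss(pred, y)

opt.zero_grad()
loss.backward()                    # gradients flow back through the cut layer
opt.step()
print(float(loss))
```

The design choice is the cut layer: only its activations and gradients cross the client-server boundary, which is what lets smaller 'have nots' pool training signal without pooling data.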
Even so, Raskar cautions that large volumes of data are not necessarily sufficient, and suggests the use of Maximum Relevance Minimum Redundancy (mRMR), a minimal-optimal feature selection algorithm designed to find the smallest relevant subset of features for a given ML task.
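mRMR greedily selects features that are highly relevant to the target while minimally redundant with features already chosen. The sketch below approximates this with scikit-learn's mutual-information estimators on synthetic data; it is a simplified illustration, not a reference implementation.

```python
# Simplified greedy mRMR sketch: at each step, pick the feature with the
# highest relevance to the target minus its mean redundancy with the
# features already selected. Mutual information is the usual measure.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif, mutual_info_regression

X, y = make_classification(n_samples=300, n_features=10,
                           n_informative=3, random_state=0)
relevance = mutual_info_classif(X, y, random_state=0)  # MI(feature; target)

selected, remaining = [], list(range(X.shape[1]))
for _ in range(3):                                     # pick a 3-feature subset
    def score(f):
        if not selected:
            return relevance[f]
        redundancy = np.mean([
            mutual_info_regression(X[:, [f]], X[:, s], random_state=0)[0]
            for s in selected                          # MI(feature; chosen feature)
        ])
        return relevance[f] - redundancy
    best = max(remaining, key=score)
    selected.append(best)
    remaining.remove(best)

print("selected features:", selected)
```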
MLOps practices also are being developed to stitch together the components of ML. "In ML, you have data scientists and engineers, who are quite experimental and not like traditional software developers," says Browne. "ML needs a systematic approach, transparency, auditability, and governance. Hopefully, we will see ML apps maturing to support these processes."
Considering the soft side of ML modelling, Campbell concludes, "We are close to a watershed moment when technology will be just part of ML. There will be additional oversight, a focus on results and explainability, although there is not yet a clear vision of how to show that."
Sarah Underwood is a technology writer based in Teddington, U.K.