Avoid model meltdowns as an ML Engineer with 3 monitoring strategies
Model monitoring is critical to the operational success of Machine Learning within businesses. A well-trained model can take advantage of large and complex datasets to significantly impact decision-making and profitability. The last place you want to be is in front of your boss explaining why the forecasting model has royally screwed the business over.
Monitoring models is not the same as monitoring an online service. Software monitoring is usually limited to how fast the service runs and whether it is encountering failures.
Models need all of that, plus monitoring for concept drift, where the underlying patterns in the data change and predictions become less accurate.
Another significant challenge arises when the outcome of a prediction cannot be confirmed for weeks or months. This leaves businesses uncertain about a model's efficacy until it is too late and they have already been negatively impacted.
3 ways to make model monitoring easier
While each model deployment comes with its own monitoring challenges, here are 3 strategies that can be used. They are particularly effective for models with a significant lag between prediction and confirmation.
1. Monitor Data Quality: Garbage In, Garbage Out
This familiar Data Science adage naturally holds for models in production as well.
If data quality degrades upstream of a model, the model's performance will suffer. This is particularly insidious when the data is present but incorrect. Missing data will usually cause the model to throw an error, an event that is much easier to detect. Data that is present but wrong, for example a categorical variable that has been incorrectly categorised, is a much more difficult problem to identify.
There are several ways to monitor data quality. Comparing the distributions of incoming data against those used to train and test the model can help flag potential anomalies. Another approach is to use a data validation package such as Great Expectations to confirm the quality of data before it is passed into the model.
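As a minimal sketch of the first approach, the snippet below uses scipy's two-sample Kolmogorov–Smirnov test to compare a live batch of features against the training data. The column names, p-value threshold, and alerting hook are illustrative assumptions, not a prescription for your pipeline.

```python
import pandas as pd
from scipy.stats import ks_2samp

def check_feature_drift(train_df: pd.DataFrame, live_df: pd.DataFrame,
                        columns: list, p_threshold: float = 0.01) -> dict:
    """Flag numeric features whose live distribution differs from training.

    Runs a two-sample Kolmogorov-Smirnov test per column; a small p-value
    suggests the live data no longer looks like the training data.
    """
    flagged = {}
    for col in columns:
        result = ks_2samp(train_df[col].dropna(), live_df[col].dropna())
        if result.pvalue < p_threshold:
            flagged[col] = {"ks_statistic": result.statistic, "p_value": result.pvalue}
    return flagged

# Example usage with hypothetical dataframes and columns:
# alerts = check_feature_drift(train_df, todays_batch, columns=["order_value", "basket_size"])
# if alerts:
#     notify_on_call_engineer(alerts)  # hypothetical alerting hook
```

A statistical test like this is deliberately cheap: it runs on every batch and only asks a human to look when something shifts.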
2. Monitor prediction distribution: Deviations indicate drift
Similar to monitoring the distribution of incoming data, monitoring the distribution of predictions is a good way to identify when a model is starting to drift. Producing a single value that compares the prediction distribution at training time with the distribution now is a great metric to add to a dashboard or early-warning system.
Changes in distribution do not necessarily indicate poor performance, but they can tell you when to investigate, making maintenance preventative rather than reactive.
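One common way to turn "prediction distribution then versus now" into a single dashboard number is the Population Stability Index (PSI). The sketch below is a minimal, illustrative implementation; the bin count and the interpretation thresholds are conventional rules of thumb rather than values from this article.

```python
import numpy as np

def population_stability_index(expected: np.ndarray, actual: np.ndarray,
                               bins: int = 10) -> float:
    """PSI between training-time predictions (expected) and recent predictions (actual).

    Bin edges come from the quantiles of the training-time predictions; larger
    values mean the recent predictions have drifted further from training time.
    """
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf  # catch values outside the training range

    expected_frac = np.histogram(expected, bins=edges)[0] / len(expected)
    actual_frac = np.histogram(actual, bins=edges)[0] / len(actual)

    # Avoid division by zero / log(0) for empty bins
    expected_frac = np.clip(expected_frac, 1e-6, None)
    actual_frac = np.clip(actual_frac, 1e-6, None)

    return float(np.sum((actual_frac - expected_frac) * np.log(actual_frac / expected_frac)))

# Rule-of-thumb interpretation (an assumption, tune for your own models):
# < 0.1 stable, 0.1-0.2 worth investigating, > 0.2 likely drift worth acting on
```

Because PSI only needs the predictions themselves, it works even when ground-truth labels will not arrive for weeks or months.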
3. Watch the news: Real-world events change data
Watching the news may not be what comes to mind when monitoring models, but it can provide an effective early warning.
The COVID-19 pandemic saw customer behaviour change overnight, sending many companies using ML reeling as their models could no longer make sense of their customers.
While there isn’t much that can be done about an instantaneous change in customer behaviour, it provides an important lesson in why you should stay informed about world events.
If you’re a FinTech and the economy begins to take a turn for the worse, your customers’ behaviour will inevitably change. That gives you an opportunity to start collecting data to retrain your model, so that when it drifts too far it can quickly be corrected to the new status quo.
Conclusion
Model monitoring is not an easy task, but it is a necessary one. Ensuring your models remain accurate and reliable over time helps ensure the sustainability of your business. Monitoring data quality, prediction distributions, and the news are good ways to make monitoring your models easier.
Want to become an MLOps master? Sign up to the MLOps Now newsletter to get weekly MLOps insights.