
How To Make Data-Driven Predictions with Predictive Modeling

David Weedmark | 2021-10-29 | 8 min read


When you hear words like machine learning (ML) or artificial intelligence (AI), one of the first things that comes to mind is correctly predicting future occurrences or answering difficult questions about the present based on past events. At its core, this is what predictive modeling is all about.

What Is Predictive Modeling?

Predictive modeling is a machine learning technique that predicts or forecasts likely future outcomes based on historical and current data. Predictive models can be applied to just about anything, from loan risks and weather forecasts to what your next favorite TV show will be. Often, the predictions answer yes-or-no questions, such as whether a credit card transaction is fraudulent or whether a patient has heart disease.

How Does Predictive Modeling Work?

When developing and deploying machine learning solutions, model-driven organizations follow the four-stage data science lifecycle: Manage, Develop, Deploy, and Monitor.

Manage

Predictive modeling usually begins with a question or problem statement that needs to be solved. These questions vary widely, and there is no shortage of them. For example:

  • How much inventory should we order?
  • When should we invest?
  • How many hospital beds will we need?

Once you have formulated your question, you then need to gather a data set related to that question. If you are building a sales forecasting model, for example, you need sales data. The project should be prioritized against other projects to focus data science resources where they can have the most value.

Develop

During the development stage, you begin by exploring which predictive algorithms will work best to answer your question with the data you have available.

At the same time, you need to clean and analyze the data to make sure your algorithms can use it. Cleaning the data includes removing duplicates, dropping extraneous variables, and ensuring that the data is standardized. Data from different sources will often be formatted differently; state abbreviations and product SKUs are common examples. You also need to check for null values and outliers, and ensure placeholders are present where needed.
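
As an illustration, here is a minimal pandas sketch of a few of these cleaning steps. The column names, the state mapping, and the fill values are assumptions made up for this example, not part of any particular project.

```python
import pandas as pd

# Hypothetical raw sales data with the issues described above:
# duplicate rows, inconsistent state and SKU formats, and missing values.
df = pd.DataFrame({
    "state": ["CA", "California", "NY", "NY", None],
    "sku": ["A-100", "a-100", "B-200", "B-200", "C-300"],
    "revenue": [1200.0, 1150.0, 950.0, 950.0, None],
})

# Remove exact duplicate rows.
df = df.drop_duplicates()

# Standardize state names so records from different sources match.
df["state"] = df["state"].replace({"California": "CA", "New York": "NY"})

# Normalize SKU formatting (uppercase, no hyphens) for the same reason.
df["sku"] = df["sku"].str.upper().str.replace("-", "", regex=False)

# Check for nulls, then fill in placeholders where a default makes sense.
print(df.isnull().sum())
df["state"] = df["state"].fillna("UNKNOWN")
df["revenue"] = df["revenue"].fillna(df["revenue"].median())
```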

Separating the data into a training set and a testing or validation set allows you to train the model on one set of data and then test it on another, guarding against underfitting or overfitting. Preprocessing the data after it has been split into the two sets helps you avoid data leakage.
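
Here is a minimal scikit-learn sketch of this idea, with synthetic data standing in for the cleaned dataset. The key point is that the scaler is fit on the training set only:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for the cleaned dataset from the previous step.
X, y = make_classification(n_samples=1000, n_features=10, random_state=42)

# Hold out 20% of the data for testing.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# Fit preprocessing on the training set only, then apply it to both sets.
# Fitting the scaler on all the data would leak test-set information
# into training.
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)
```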

Train each candidate model on the same training dataset, then evaluate each model on the testing set. With the results of these tests, you can determine which model performed best. This step is highly iterative, so you may cycle through finding data, preparing it, building models, and testing them many times before selecting the final model.
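
Continuing the sketch above, one way to compare candidates might look like this; the two classifiers and the accuracy metric are illustrative choices, not a recommendation:

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Continues from the split above: X_train, X_test, y_train, y_test.
candidates = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(n_estimators=100, random_state=42),
}

# Train each candidate on the same training data, then score it on the
# held-out test set so the comparison is apples to apples.
for name, model in candidates.items():
    model.fit(X_train, y_train)
    predictions = model.predict(X_test)
    print(f"{name}: accuracy = {accuracy_score(y_test, predictions):.3f}")
```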

Deploy

Once you have a model that provides satisfactory results, you can move to the deployment stage and migrate the model into a live environment. This can involve building an app, using an API, or deploying the model in a database. It's at this point that end users are introduced to the model and trained to use its results in whatever form they are surfaced.
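
As one illustration of the API route, here is a minimal Flask sketch that serves predictions over HTTP. The file name model.pkl and the request format are assumptions made up for the example:

```python
import joblib
from flask import Flask, jsonify, request

app = Flask(__name__)

# Load the trained model once at startup. "model.pkl" is a hypothetical
# artifact saved during the development stage.
model = joblib.load("model.pkl")

@app.route("/predict", methods=["POST"])
def predict():
    # Expects a JSON body like {"features": [[5.1, 3.5, 1.4, 0.2]]}.
    features = request.get_json()["features"]
    prediction = model.predict(features)
    return jsonify({"prediction": prediction.tolist()})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```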

Monitor

After the model has been deployed, it needs to be monitored to ensure that it continues to operate within expected parameters. If the model begins to drift, it may need to be retrained and redeployed. It's also important to monitor the data the model is being fed: if the data is of a different type or format than the data the model was trained on, the model will likely begin to generate unexpected results.
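
One simple way to monitor input data is to compare the live distribution of a feature against its training distribution. The sketch below uses a two-sample Kolmogorov-Smirnov test from SciPy; the threshold and the synthetic data are illustrative assumptions:

```python
import numpy as np
from scipy.stats import ks_2samp

def check_feature_drift(train_values, live_values, threshold=0.05):
    """Flag drift when a feature's live distribution differs significantly
    from the distribution the model was trained on, using a two-sample
    Kolmogorov-Smirnov test."""
    statistic, p_value = ks_2samp(train_values, live_values)
    return p_value < threshold, p_value

# Hypothetical example: compare a feature's training distribution
# against recent production inputs with a shifted mean.
rng = np.random.default_rng(0)
train_feature = rng.normal(loc=0.0, scale=1.0, size=5000)
live_feature = rng.normal(loc=0.5, scale=1.0, size=1000)

drifted, p = check_feature_drift(train_feature, live_feature)
print(f"drift detected: {drifted} (p = {p:.4f})")
```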

Once a model is deemed to be operating improperly, it's critical to act quickly to remediate the situation and get the model redeployed.


Inaccurate or Unacceptable Predictions

Regardless of the technique being used, a model's ability to predict or forecast is limited by the data it has to work with. This has often been the problem in forecasting epidemics, for example. When a new virus, or a variant of an existing one, emerges, predictive models are limited by the data on hand. If the existing data on similar viruses or variants does not reflect the realities of the new strain, then predictions about hospital bed requirements, for example, or the effects of shutdowns, will be imprecise.

In other cases, the models may be accurate but provide results that create more problems than they solve due to model bias. Predictive models used by the US criminal justice system have regularly provided law enforcement with racially biased results when predicting things like crime hotspots or future parole violations. Even when race is removed as a variable, related variables like geography or income can inadvertently reintroduce racial bias into the results.

Importance of Predictive Models

Thousands of organizations use Domino's Enterprise MLOps Platform to improve their approach to predictive modeling, driving advancements in fraud detection, identifying new predictors of early-stage cancer, and even helping farmers grow crops more efficiently with higher yields.

Imagine, for example, knowing with 99.9% accuracy the likelihood that your investment will be profitable, or that your car engine will fail tomorrow if it's not serviced today. Finding the right balance among accuracy, reliability, cost, and profitability is what characterizes a successful data science project. Being able to balance these variables at scale is what distinguishes a profitable, model-driven organization.

Predictive Modeling Use Cases

When predictive modeling is done right, there's no denying that it can yield huge rewards. A key ingredient of success is working on an MLOps platform that gives your data science team the tools and resources they need to do their work well in a collaborative, well-documented environment.

Predictive Modeling with Domino Data Lab

A new Forrester study has revealed that companies can realize a 542% return on investment by using Domino’s Data Science Platform. In February 2021, Lockheed Martin announced that it had realized over $20 million in annual value by scaling their AI/ML solutions using the Domino Data Science Platform.

Above all, it’s important to remember that predictive modeling is a science. Successful, model-driven organizations approach it as such, while those who do not will only achieve success on a hit-and-miss basis. To find out why Steve Wozniak says the “Domino Data Labs Machine Learning Ops is changing the world,” watch a demo of it in action. You can also start exploring its features for yourself with a free trial.

David Weedmark is a published author who has worked as a project manager, software developer, and network security consultant.
