Machine Learning

7 Important Machine Learning Techniques and Algorithms

David Weedmark | 2021-10-28 | 8 min read


At the core of every machine learning model are the algorithms that power them, consuming the data and providing you with the answers you need. In machine learning (ML) today, there are countless algorithms being used, each designed to find solutions to different problems. Before you can determine which algorithm, or algorithms, are going to do the job, it’s important to understand what your options are.

What Are Machine Learning Algorithms?

In general terms, machine learning algorithms are the sets of instructions or procedures a model follows to solve problems or perform tasks. Once the model is trained with a set of data, it is able to deliver an outcome or result for a given set of inputs.

This is similar to what the brain does. Once you teach a child to tie their shoes, they can use that training to tie them all on their own, as well as their boots and skates, and even to tie ribbon bows on packages. Tying a necktie, however, is a different model, requiring a different set of instructions and a brand new training session. ML models use different algorithms, depending on what the model needs to do and the data you have available.

7 Types of Machine Learning Algorithms

Machine learning algorithms can be classified by how they learn, the type of data they are suited for, and what they do with that data. This is why an ML model used for forecasting the weather will be completely different under the hood from one that filters spam email, but may actually be quite similar to one that tells you when to buy or sell stocks.

Many algorithms use a supervised learning technique during training, in which the output data they are trying to predict is known and available. As the algorithm processes the data, it creates rules that map inputs to outputs. This is usually a faster technique than unsupervised learning, in which the algorithm works with unlabeled data and must discover structure in it on its own.

There are many other ways to classify types of algorithms. Regression, for example, is a supervised learning technique commonly used to identify relationships between variables. Another common form is classification, in which algorithms group data into categories; common classification algorithms include linear classifiers, decision trees, and support vector machines.

While there are a multitude of different types of algorithms at your disposal, here are seven of the most common you should know about before embarking on any machine learning project:


Linear Regression

Linear regression is one of the easiest ML algorithms to work with. Given adequate data, it estimates a linear relationship between the dependent variable and the independent variables. Simple linear regression refers to problems with one independent variable, while multiple linear regression refers to problems with several. The key assumption for linear regression is that the dependent variable is continuous.
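To make this concrete, here is a minimal sketch of simple linear regression using scikit-learn (one library among many; the article doesn't prescribe a specific one). The data is synthetic and the numbers are made up purely for illustration:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Synthetic data: a continuous target with a roughly linear relationship
# to one independent variable (simple linear regression).
rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(100, 1))               # independent variable
y = 3.0 * X.ravel() + 2.0 + rng.normal(0, 1, 100)   # continuous dependent variable

model = LinearRegression()
model.fit(X, y)

print("slope:", model.coef_[0])        # should come out close to 3.0
print("intercept:", model.intercept_)  # should come out close to 2.0
print("prediction at x=5:", model.predict([[5.0]])[0])
```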


Logistic Regression

Logistic regression is used when the dependent variable is binary rather than continuous. The model expresses the relationship between the dependent variable and one or more independent variables as a probability. This technique is very useful when trying to understand how different factors affect the likelihood of something happening, for example: is a transaction fraudulent?
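Here is a hedged sketch of logistic regression with scikit-learn on synthetic data; the "transaction amount" feature and the fraud threshold are hypothetical, chosen only to mirror the fraud example above:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic example: predict a binary outcome (fraudulent vs. legitimate)
# from a single numeric feature. The data and threshold are invented.
rng = np.random.default_rng(0)
amounts = rng.uniform(0, 1000, size=(500, 1))                        # hypothetical transaction amounts
is_fraud = (amounts.ravel() + rng.normal(0, 100, 500) > 700).astype(int)

clf = LogisticRegression(max_iter=1000)
clf.fit(amounts, is_fraud)

# The model outputs a probability for each class.
print("probability [legitimate, fraud]:", clf.predict_proba([[850.0]])[0])
```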


Decision Trees

Decision trees are a type of supervised learning algorithm that uses a tree-like branching structure to generate predictions of the target variable. Each branch node in the tree tests a specific attribute, each branch corresponds to a value (or range of values) of that attribute, and each leaf node holds a prediction for the target variable. Thanks to this graphical structure, it is one of the easiest models to interpret. This method is very helpful in identifying the best action to take for a given decision.
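A brief sketch, again using scikit-learn, that fits a shallow decision tree and prints its branching structure; the iris dataset stands in for whatever data you might actually use:

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

# Fit a small decision tree and print its branching structure,
# which is what makes this model easy to interpret.
X, y = load_iris(return_X_y=True)
tree = DecisionTreeClassifier(max_depth=2, random_state=0)
tree.fit(X, y)

print(export_text(tree, feature_names=load_iris().feature_names))
```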


Clustering (K Means)

K-means clustering is a type of unsupervised learning used to categorize data into groups, with the variable K representing the number of groups. Each data point is assigned to the cluster whose members it most closely resembles.

These algorithms can be used by retailers to group customers based on purchase history or to group inventory by sales activity. They are also used to group products, separate audio tracks, or distinguish human activity from bots online.
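A minimal K-means sketch with scikit-learn, assuming made-up "customer" features (purchase frequency and order value) to echo the retail example above:

```python
import numpy as np
from sklearn.cluster import KMeans

# Synthetic "customer" data with two hypothetical features:
# purchase frequency and average order value. K is chosen up front; here K = 3.
rng = np.random.default_rng(0)
customers = np.vstack([
    rng.normal([2, 20], 1.0, size=(50, 2)),    # infrequent, low-value
    rng.normal([10, 50], 1.0, size=(50, 2)),   # frequent, mid-value
    rng.normal([5, 200], 1.0, size=(50, 2)),   # occasional, high-value
])

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0)
labels = kmeans.fit_predict(customers)

print("cluster centers:\n", kmeans.cluster_centers_)
print("first five assignments:", labels[:5])
```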


Naive Bayesian

Naive Bayes algorithms are probabilistic classifiers based on Bayes’ theorem, which calculates the probability of an event given prior knowledge of conditions related to it, such as estimating the risk of heart disease given a patient’s age. The “naive” part refers to the simplifying assumption that the input features are independent of one another.

A common use of Naive Bayes algorithms is in spam filters, which can analyze words and phrases in email messages to determine the probability of whether they are spam.
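A toy spam-filter sketch along these lines, using scikit-learn's MultinomialNB; the messages and labels are invented for illustration:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Word counts are the features; MultinomialNB estimates the probability
# that a message is spam. All messages and labels here are made up.
messages = [
    "win a free prize now",
    "limited offer click here",
    "meeting rescheduled to friday",
    "please review the attached report",
]
labels = [1, 1, 0, 0]  # 1 = spam, 0 = not spam (hypothetical labels)

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(messages)

clf = MultinomialNB()
clf.fit(X, labels)

new_message = vectorizer.transform(["free prize offer"])
print("probability [not spam, spam]:", clf.predict_proba(new_message)[0])
```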


Support Vector Machine (SVM)

Another example of a supervised learning method is the SVM, which classifies data by separating it into two categories. SVMs can be used for classification, regression, and outlier detection. An SVM draws a boundary between two classes, so that data on one side is placed in one category and data on the other side in a second category; for problems with more than two classes, additional boundaries are drawn.

The key to SVM, compared to other algorithms, is that it chooses the boundary that separates the classes with the widest possible margin, which helps keep data from being miscategorized. A simple linear SVM classifier draws a straight line, but other SVM methods aren’t limited to a straight line or a single plane.
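A short sketch with scikit-learn showing a linear-kernel SVM next to an RBF-kernel SVM on the same synthetic two-class data; the dataset parameters are arbitrary illustrative choices:

```python
from sklearn.datasets import make_classification
from sklearn.svm import SVC

# Two-class synthetic data. A linear kernel draws a straight separating
# boundary; switching to kernel="rbf" shows SVMs are not limited to one.
X, y = make_classification(n_samples=200, n_features=2, n_redundant=0,
                           n_informative=2, random_state=0)

linear_svm = SVC(kernel="linear")
linear_svm.fit(X, y)

rbf_svm = SVC(kernel="rbf")
rbf_svm.fit(X, y)

print("linear kernel accuracy:", linear_svm.score(X, y))
print("rbf kernel accuracy:   ", rbf_svm.score(X, y))
```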


Ensemble

As its name suggests, ensemble learning involves using more than one algorithm to create an ML model. It’s used primarily to solve complex problems. A common example is the random forest technique, used for classification and regression problems, which consists of many decision trees trained on random samples of the same data, with their individual predictions combined into a single result.
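As a sketch of the idea, the following compares a single decision tree to a random forest with scikit-learn; the iris dataset and the cross-validation setup are illustrative choices, not part of the original article:

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Compare a single decision tree to a random forest (an ensemble of trees)
# on the same data using 5-fold cross-validation.
X, y = load_iris(return_X_y=True)

tree = DecisionTreeClassifier(random_state=0)
forest = RandomForestClassifier(n_estimators=100, random_state=0)

print("single tree  :", cross_val_score(tree, X, y, cv=5).mean())
print("random forest:", cross_val_score(forest, X, y, cv=5).mean())
```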

How To Choose The Right Machine Learning Technique

Choosing the right machine learning algorithm is a skill set all by itself. Keep in mind that the more complex your model is, the harder it is to build, train, test, maintain, and deploy. A more complex model is also harder to document and explain when it is handed off to engineers or other data scientists for deployment, monitoring, and maintenance.

When you’re selecting algorithms, first define the problem that needs to be solved and explore the data you have available to solve it.

Always begin with the simplest solution. Build that model and test it before adding complexity. Then compare the results of the simpler models to the more complex ones until you find the sweet spot between the results you want and the complexity you can manage.

Conclusion

For a model-driven organization, having access to a full library of machine learning algorithms is the best place to start when exploring your options. The Domino Enterprise MLOps Platform gives you immediate access to all of the libraries you need. Built with collaboration in mind, it also provides you with the tools you need for monitoring, documentation, and governance.

To explore the resources available in the Domino Enterprise MLOps Platform, watch a demo of it in action.

David Weedmark is a published author who has worked as a project manager, a software developer, and a network security consultant.
