What is a Machine Learning Framework?
Machine learning (ML) frameworks are interfaces that allow data scientists and developers to build and deploy machine learning models faster and more easily. Machine learning is used in almost every industry, notably finance, insurance, healthcare, and marketing. Using these tools, businesses can scale their machine learning efforts while maintaining an efficient ML lifecycle.
Companies can choose to build their own custom machine learning framework, but most organizations select an existing framework that fits their needs. In this article, we'll discuss key considerations for selecting the right machine learning framework for your project and briefly review four popular ML frameworks.
How to choose the right Machine Learning Framework
Here are several key considerations you should take into account when selecting a machine learning framework for your project.
Evaluating Your Needs
When you start your search for a machine learning framework, ask these three questions:
- Will you use the framework for deep learning or classic machine learning algorithms?
- What is your preferred programming language for artificial intelligence (AI) model development?
- What hardware, software, and cloud services are used for scaling?
Python and R are the languages most widely used in machine learning, but other languages such as C, Java, and Scala are also available. Most machine learning applications today are written in Python, and many teams are transitioning away from R, which was designed by statisticians and can be awkward to work with. Python is a more modern programming language that offers a simple, concise syntax and is easier to use.
Machine learning algorithms use different methods to analyze training data and apply what they learn to new examples.
Algorithms have parameters, which you can think of as a dashboard of switches and dials that control how the algorithm operates. They adjust the weights of the variables to be considered, define how much weight to give outliers, and make other adjustments to the algorithm. When choosing a machine learning framework, it is important to consider whether this tuning should be automatic or manual.
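As a sketch of the manual-versus-automatic distinction, here is a hypothetical example using Scikit-Learn (one of the frameworks reviewed below): hyperparameters such as max_depth are set by hand first, then a grid search adjusts the same dial automatically.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

# Manual tuning: set the "dials" (hyperparameters) yourself.
clf = DecisionTreeClassifier(max_depth=3, min_samples_leaf=5)
clf.fit(X, y)

# Automatic tuning: let a grid search try several dial positions
# and keep the best-scoring combination.
search = GridSearchCV(DecisionTreeClassifier(), {"max_depth": [2, 3, 4]}, cv=3)
search.fit(X, y)
print(search.best_params_)
```

Some frameworks lean toward one style or the other, so it is worth checking how much tuning a framework automates before committing to it.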
Scaling Training and Deployment
In the training phase of AI algorithm development, scalability refers to the amount of data that can be analyzed and the speed at which it is analyzed. Performance can be improved through distributed algorithms and processing, and through hardware acceleration, primarily graphics processing units (GPUs).
In the deployment phase of an AI project, scalability refers to the number of users or applications that can access the model concurrently.
Because there are different requirements in the training and deployment phase, organizations tend to develop models in one type of environment (e.g. Python-based machine learning frameworks running in the cloud) and run them in a different environment with stringent requirements for performance and high availability—for example, in an on-premises data center.
When choosing a framework, it is important to consider whether it supports both types of scalability, and whether it supports your planned development and production environments.
Top Machine Learning Frameworks
Let’s take a look at some of the most popular machine learning frameworks in use today:
- TensorFlow
- PyTorch
- Scikit-Learn
- H2O
TensorFlow
TensorFlow was created by Google and released as an open-source project. It is a versatile and powerful machine learning tool with a comprehensive, flexible library of functions that allows you to build classification models, regression models, neural networks, and most other types of machine learning models, including the ability to customize algorithms to your specific requirements. TensorFlow runs on both CPUs and GPUs. The primary challenge with TensorFlow is that it is not easy for beginners to use.
Main features of TensorFlow:
- Visibility into the computational graph—TensorFlow makes it easy to visualize any part of an algorithm's computational process (represented as a graph), which is not supported by older frameworks like NumPy or Scikit-Learn.
- Modular—TensorFlow is highly modular and you can use its components standalone, without having to use the entire framework.
- Distributed training—TensorFlow provides strong support for distributed training on both CPUs and GPUs.
- Parallel neural network training—TensorFlow provides pipelines that let you train multiple neural networks on multiple GPUs in parallel, making it very efficient on large distributed systems.
With the release of TensorFlow 2.0, TensorFlow has added several important new features:
- Deploying on multiple platforms - improved compatibility for mobile devices, IoT, and other environments, using the SavedModel format, which allows you to export TensorFlow models to virtually any platform.
- Eager execution - in TensorFlow 1.x, users needed to build the entire compute graph and run it in order to test and debug their work. TensorFlow 2.0, like PyTorch, enables eager execution. This means that models can be modified and debugged while they are being built, without needing to run the entire model.
- Tighter integration of Keras - previously, Keras was supported by TensorFlow, but wasn’t integrated as part of the library. In TensorFlow 2.x, Keras is an official high-level API that ships with TensorFlow.
- Improved support for distributed computing - improved training performance using GPUs, up to three times faster than TensorFlow 1.x, as well as the ability to work with multiple GPUs and Google Tensor Processing Units (TPUs).
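To make the eager-execution change concrete, here is a minimal sketch (assuming TensorFlow 2.x is installed): operations return concrete values immediately, and tf.function traces the same Python code into a graph for performance.

```python
import tensorflow as tf

# Eager execution: ops run immediately and return concrete values,
# so intermediate results can be inspected without building a graph.
a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
b = tf.matmul(a, a)
print(b.numpy())  # inspect like a NumPy array

# tf.function traces the same Python code into a graph for speed,
# recovering the performance of the old graph-based workflow.
@tf.function
def double(x):
    return x * 2.0

print(double(tf.constant(5.0)).numpy())
```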
PyTorch
PyTorch is a machine learning framework based on Torch and Caffe2 that is well suited to neural network design. PyTorch is open-source and supports cloud-based software development. Unlike its predecessor Torch, which was built on Lua, PyTorch is integrated with Python and compatible with popular libraries like Numba and Cython. Compared to TensorFlow, PyTorch is more intuitive and quicker for beginners to pick up.
Main features of PyTorch:
- Supports eager execution and greater flexibility through use of native Python code for model development.
- Switches rapidly from development mode to graph mode, providing high performance and faster deployment in C++ runtime environments.
- Uses asynchronous execution and peer-to-peer communication to improve performance both in model training and in production environments.
- Provides an end-to-end workflow allowing you to develop models in Python and deploy on iOS and Android. Extensions of the PyTorch API handle common pre-processing and integration tasks required to embed machine learning models into mobile applications.
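The features above can be sketched with a minimal example (assuming PyTorch is installed): a tiny model defined in native Python code runs eagerly, and torch.jit.script converts it to graph mode for deployment. TinyNet is a hypothetical name used for illustration.

```python
import torch
import torch.nn as nn

# A tiny model defined in plain Python; it runs eagerly, so any
# intermediate tensor can be printed and debugged as it is built.
class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(4, 2)

    def forward(self, x):
        return torch.relu(self.fc(x))

model = TinyNet()
out = model(torch.randn(3, 4))  # executes immediately, no graph compile
print(out.shape)

# Switch the same model to graph mode (TorchScript) for deployment.
scripted = torch.jit.script(model)
```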
Scikit-Learn
Scikit-Learn is open-source, very user-friendly for those new to machine learning, and comes with detailed documentation. It allows developers to change an algorithm's preset parameters while models are in use or at runtime, making it easy to tune and troubleshoot models.
Scikit-Learn supports machine learning development with an extensive Python library. It is one of the best tools available for data mining and analysis. Scikit-Learn has extensive pre-processing capabilities and enables algorithm and model design for clustering, classification, regression, dimensionality reduction, and model selection.
Main features of Scikit-Learn:
- Supports most supervised learning algorithms—linear regression, support vector machines (SVM), decision trees, Bayesian, etc.
- Supports unsupervised learning algorithms—cluster analysis, factoring, principal component analysis (PCA), and unsupervised neural networks.
- Performs feature extraction and cross-validation—extracts features from text and images, and tests the accuracy of models on new, unseen data.
- Supports clustering and ensemble techniques—can combine predictions from multiple models, and can group unlabeled data.
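As a brief sketch of the capabilities listed above (assuming Scikit-Learn is installed), the following trains a supervised SVM scored with cross-validation, then clusters the same unlabeled feature matrix:

```python
from sklearn.cluster import KMeans
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

# Supervised learning: an SVM scored with 5-fold cross-validation,
# which tests accuracy on data held out from each training fold.
scores = cross_val_score(SVC(), X, y, cv=5)
print(scores.mean())

# Unsupervised learning: group the unlabeled features into 3 clusters.
labels = KMeans(n_clusters=3, n_init=10).fit_predict(X)
print(len(set(labels)))
```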
H2O
H2O is an open-source ML framework developed to support organizational decision-making processes. It integrates with other frameworks, including the ones reviewed above, to handle the actual model development and training. H2O is widely used for risk and fraud trend analysis, insurance customer analysis, patient analysis in healthcare, advertising cost and ROI analysis, and customer intelligence.
H2O components include:
- Deep Water—integrates H2O with other frameworks like TensorFlow and Caffe.
- Sparkling Water—integrates H2O with Spark, the big data processing platform.
- Steam—an enterprise edition that enables training and deploying machine learning models, making them available through APIs, and integrating them into applications.
- Driverless AI—enables non-technical personnel to prepare data, adjust parameters, and use ML to determine the best algorithm to solve a specific business problem.
Machine Learning Frameworks with Domino
Through Domino's environment management feature, it's easier than ever to choose the right ML framework for your use case. You can easily build environments and run them on the best compute resource, be it CPU, GPU, or APU.
Environments in Domino are easily configured and include these main features:
- Version Control - roll back to prior versions of an environment when upgrades break model code or drastically change results
- Choose your own IDE GUI - include any HTML browser-based GUI for use inside the Domino workbench solution
- Easily share your environments with fellow data scientists - get the same flexibility you have on your laptop in a server-managed instance, instantly sharing code and environments with your colleagues
- Different environments for different use cases - for best server utilization, install only the packages critical to a given workflow, maintaining multiple environments that make the best use of your server resources
For examples of how machine learning frameworks operate in Domino, check out some of our articles below showcasing PyTorch, TensorFlow, and Ludwig.