    What is Model Tuning?

    Model tuning is the experimental process of finding the hyperparameter values that maximize model performance. Hyperparameters are the configuration variables whose values cannot be estimated by the model from the training data; instead, they control the training process itself (for example, a learning rate or a tree depth). Model tuning is also known as hyperparameter optimization.

    Why is it Essential to Tune an ML Model?

    The purpose of tuning a model is to ensure that it performs at its best. With poorly chosen hyperparameters, a model may underfit or overfit the training data. By adjusting the hyperparameters and comparing the results, you can extract the best performance the model and the data allow.

    How do you Tune a Model?

    A robust evaluation criterion (for example, cross-validated accuracy or mean squared error) should be fixed before tuning begins, so that the hyperparameters are optimized towards a specific goal. Model tuning can be done manually or using automated methods.

    • Manual model tuning: In this method, hyperparameter values are set based on intuition or past experience. The model is then trained and evaluated to determine its performance with the chosen set of hyperparameters. Adjustments are made and the process is repeated until a satisfactory value for each hyperparameter is found.
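The manual loop above can be sketched in a few lines. This is a minimal illustration using scikit-learn, assuming `max_depth` of a decision tree is the hyperparameter being hand-tuned and mean cross-validated accuracy is the evaluation criterion; the candidate values are arbitrary picks standing in for "intuition or past experience".

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

# Manually try a handful of candidate values for one hyperparameter,
# keeping whichever scores best under 5-fold cross-validation.
best_depth, best_score = None, -1.0
for depth in [2, 3, 5, 10]:  # candidates chosen "by hand"
    model = DecisionTreeClassifier(max_depth=depth, random_state=0)
    score = cross_val_score(model, X, y, cv=5).mean()
    if score > best_score:
        best_depth, best_score = depth, score

print(f"best max_depth={best_depth}, mean CV accuracy={best_score:.3f}")
```

In practice you would look at the scores, adjust the candidate values, and repeat until the evaluation metric stops improving.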

    • Automated model tuning: In this method, optimal hyperparameter values are found using algorithms. Here, we define a hyperparameter search space from which the optimal set of hyperparameter values is selected. Some popular algorithms for automated hyperparameter tuning are:

      • Grid search: The user defines a set of values for each hyperparameter to form a grid. Every combination of these hyperparameter values is tried, and the combination that yields the best result is selected as the final set of optimal hyperparameters. The process is very resource-intensive because the algorithm trains one model for every possible hyperparameter combination.
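As a sketch of grid search with scikit-learn's `GridSearchCV` (the model and grid values here are illustrative, not a recommendation):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

# Every combination in the grid is trained and cross-validated, so the
# number of fits grows multiplicatively with each added hyperparameter.
param_grid = {"C": [0.1, 1, 10], "gamma": [0.01, 0.1, 1]}

search = GridSearchCV(SVC(), param_grid, cv=5)  # 3 x 3 = 9 combinations, 5 folds each
search.fit(X, y)

print(search.best_params_)  # the winning combination
print(search.best_score_)   # its mean cross-validated accuracy
```

The 45 model fits here are cheap on a toy dataset, but the same grid on a large model would multiply its full training cost by 45.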

      • Random search: The user again defines a set of values for each hyperparameter, but the algorithm tries only a fixed number of random combinations rather than every possible one. The combination that yields the best result among those tried is selected as the optimal set of hyperparameters.
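A comparable sketch with `RandomizedSearchCV`, again with illustrative values: the search space contains 48 combinations, but only `n_iter` of them are evaluated, which caps the cost regardless of how large the space grows.

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV

X, y = load_iris(return_X_y=True)

# 4 * 4 * 3 = 48 possible combinations in the search space.
param_distributions = {
    "n_estimators": [50, 100, 200, 400],
    "max_depth": [None, 3, 5, 10],
    "min_samples_leaf": [1, 2, 4],
}

search = RandomizedSearchCV(
    RandomForestClassifier(random_state=0),
    param_distributions,
    n_iter=8,        # only 8 random combinations are actually trained
    cv=5,
    random_state=0,  # fixed seed so the sampled combinations are reproducible
)
search.fit(X, y)

print(search.best_params_, round(search.best_score_, 3))
```

Because only a sample of the space is explored, the result is not guaranteed to be the global best combination, only the best among those tried.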

      • Bayesian search: Bayesian hyperparameter tuning, also known as Bayesian optimization, keeps track of past evaluation results and uses them to decide which hyperparameter values to try next. Because it chooses values in an informed manner rather than blindly, it typically needs far fewer evaluations than grid or random search to find a good configuration.
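To make the "informed manner" concrete, here is a hand-rolled sketch of Bayesian optimization over a single hyperparameter (the log of an SVC's `C`, an illustrative choice). A Gaussian-process surrogate is fit to the scores observed so far, and an expected-improvement criterion picks the next value to evaluate. Dedicated libraries such as scikit-optimize or Optuna do this far more robustly; this is only meant to show the mechanism.

```python
import numpy as np
from scipy.stats import norm
from sklearn.datasets import load_iris
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

def objective(log_c):
    # Score to maximize: mean CV accuracy of an SVC with C = 10**log_c.
    return cross_val_score(SVC(C=10 ** log_c), X, y, cv=5).mean()

candidates = np.linspace(-3, 3, 61).reshape(-1, 1)  # search space for log10(C)

# Seed the history with a few evaluations, then let the surrogate choose.
tried = [-2.0, 0.0, 2.0]
scores = [objective(c) for c in tried]

gp = GaussianProcessRegressor(normalize_y=True, alpha=1e-6)
for _ in range(5):
    # Fit the surrogate to everything evaluated so far ("past results").
    gp.fit(np.array(tried).reshape(-1, 1), scores)
    mu, sigma = gp.predict(candidates, return_std=True)

    # Expected improvement: favor points likely to beat the current best.
    best = max(scores)
    sigma = np.maximum(sigma, 1e-9)
    z = (mu - best) / sigma
    ei = (mu - best) * norm.cdf(z) + sigma * norm.pdf(z)

    next_c = float(candidates[np.argmax(ei)][0])
    tried.append(next_c)
    scores.append(objective(next_c))

print("best log10(C):", tried[int(np.argmax(scores))], "score:", max(scores))
```

Each new evaluation updates the surrogate, so later choices are increasingly well targeted; that is the source of the efficiency claimed above.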
