Overview
Optimizing hyperparameters is a crucial step in building machine learning models. Hyperparameters play a key role in determining model performance, and because they are set before training begins, choosing them well is essential for getting the best results. There are several ways to optimize hyperparameters; this blog covers the two most commonly used techniques, Grid Search and Random Search.
Introduction
Hyperparameters include the learning rate, number of iterations, number of hidden layers, number of hidden units, batch size, regularization type, choice of activation function, and the model optimizer. The choice of these parameters depends on the use case and the dataset. They are called hyperparameters because they control how the model parameters, such as weights and biases, are learned.
Model parameters, on the other hand, are adjusted by the model itself during training. Initially, we set these parameters randomly, to zero, or with a principled initialization method. Some examples of model parameters are weights, biases, and cluster centroids.
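As a minimal sketch of this distinction (using scikit-learn and a hypothetical toy dataset, not from the original post), the snippet below sets hyperparameters before training and then reads the model parameters learned from the data:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Hypothetical toy dataset for illustration
X, y = make_classification(n_samples=200, n_features=4, random_state=42)

# C and max_iter are hyperparameters: we choose them before training
model = LogisticRegression(C=1.0, max_iter=200)
model.fit(X, y)

# coef_ and intercept_ are model parameters: learned from the data
print("Learned weights:", model.coef_)
print("Learned bias:", model.intercept_)
```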
Grid Search
The Grid Search approach trains a model using every possible combination of a specified set of values for each hyperparameter. It works well for optimizing a small collection of hyperparameters, but because it computes all possibilities, it becomes computationally costly as the number of hyperparameters rises.
For example, if we are tuning the hyperparameters of a support vector machine (SVM) model with the following search space:
C: [0.1, 1]
kernel: ['linear', 'rbf']
gamma: [0.01, 0.1, 1]
all 12 possible combinations of hyperparameters (2 × 2 × 3) will be computed, and the combination giving the highest accuracy will be tracked.
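Here is a minimal sketch of this example using scikit-learn's GridSearchCV; the grid matches the values above, while the dataset is a hypothetical stand-in:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# Hypothetical toy dataset standing in for a real one
X, y = make_classification(n_samples=200, n_features=10, random_state=42)

# The search space from the example above: 2 x 2 x 3 = 12 combinations
param_grid = {
    "C": [0.1, 1],
    "kernel": ["linear", "rbf"],
    "gamma": [0.01, 0.1, 1],
}

# GridSearchCV fits the model on every combination, using cross-validation
grid_search = GridSearchCV(SVC(), param_grid, cv=5, scoring="accuracy")
grid_search.fit(X, y)

print("Best parameters:", grid_search.best_params_)
print("Best CV accuracy:", grid_search.best_score_)
```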
Advantages of Grid Search:
- It is quite simple to implement and parallelize.
- It searches the full search space and determines which set of specified hyperparameters works best together.
Random Search
Randomized search is an alternative technique in which hyperparameter values are randomly sampled from a predefined search space. This technique is often faster than grid search because it evaluates only a subset of the possible hyperparameter combinations. However, randomized search is not guaranteed to find the best hyperparameters and may require more iterations to converge than grid search.
For example, if we are tuning the hyperparameters of a support vector machine (SVM) model with the following search space:
C: [0.1, 1, 0.01, 0.12, 0.0015]
kernel: ['linear', 'rbf']
gamma: [0.01, 0.1, 1]
some combinations are randomly sampled from this space for the tuning process, and the best-performing combination is tracked.
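Here is a minimal sketch of this example using scikit-learn's RandomizedSearchCV; again the dataset is a hypothetical stand-in, and n_iter caps how many of the 30 possible combinations are actually evaluated:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import RandomizedSearchCV
from sklearn.svm import SVC

# Hypothetical toy dataset standing in for a real one
X, y = make_classification(n_samples=200, n_features=10, random_state=42)

# The search space from the example above: 5 x 2 x 3 = 30 combinations
param_distributions = {
    "C": [0.1, 1, 0.01, 0.12, 0.0015],
    "kernel": ["linear", "rbf"],
    "gamma": [0.01, 0.1, 1],
}

# n_iter controls how many random combinations are sampled,
# so only a subset of the 30 possibilities is evaluated
random_search = RandomizedSearchCV(
    SVC(), param_distributions, n_iter=10, cv=5,
    scoring="accuracy", random_state=42,
)
random_search.fit(X, y)

print("Best parameters:", random_search.best_params_)
print("Best CV accuracy:", random_search.best_score_)
```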
Advantages of Random Search:
- It only considers a portion of the combinations of hyperparameters, so it is faster than grid search.
- When the search space is vast and the hyperparameters are not highly correlated, it can be more productive than grid search.
When to use Grid Search vs. Randomized Search?
Grid search is a feasible option when there are few hyperparameters and the search space is manageably small. It is also helpful when the hyperparameters are strongly dependent on one another.
On the other hand, randomized search is a suitable option when the search space is broad and the hyperparameters are not substantially dependent on one another. It is also helpful when there is insufficient time or computational budget to evaluate every combination of hyperparameters.
Conclusion
Ultimately, the selection of hyperparameters depends on the nature of the problem and the dataset used for training. It is recommended to try both methods and compare their results to choose the one best suited to the given problem.
Drop a query if you have any questions regarding hyperparameter tuning, and we will get back to you quickly.
About CloudThat
CloudThat is an official AWS (Amazon Web Services) Advanced Consulting Partner and Training Partner, AWS Migration Partner, AWS Data and Analytics Partner, AWS DevOps Competency Partner, Amazon QuickSight Service Delivery Partner, AWS EKS Service Delivery Partner, and Microsoft Gold Partner, helping people develop knowledge of the cloud and helping their businesses aim for higher goals using best-in-industry cloud computing practices and expertise. We are on a mission to build a robust cloud computing ecosystem by disseminating knowledge on technological intricacies within the cloud space. Our blogs, webinars, case studies, and white papers enable all the stakeholders in the cloud computing sphere.
To get started, go through our Consultancy page and Managed Services Package to explore CloudThat's offerings.
FAQs
1. Why do we need optimization in model building?
ANS: – It helps the model generalize better and improves its accuracy.
2. What is a hyperparameter?
ANS: – A hyperparameter is a setting, such as the learning rate or batch size, that controls how model parameters like weights and biases are learned, thereby influencing a model's training and performance in machine learning. Hyperparameters are defined before model training and are crucial for optimizing model outcomes.
WRITTEN BY Ganesh Raj
Ganesh Raj V works as a Sr. Research Associate at CloudThat. He is a highly analytical, creative, and passionate individual experienced in Data Science, Machine Learning algorithms, and Cloud Computing. In a quest to learn and work with recent technologies, he strives to stay updated on advanced technologies while efficiently solving problems analytically.