Increasing dataset sizes and model complexity make it ever harder to find a reasonable configuration within a limited computational or time budget. Multi-fidelity techniques approximate the true value of an expensive black-box function with a cheap (and possibly noisy) proxy evaluation, substantially increasing the efficiency of HPO approaches. For example, we can train on a small subset of the dataset or train a DNN for only a few epochs.
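To make this concrete, here is a minimal, self-contained sketch of such a cheap proxy. The `evaluate` helper, the dataset, and the choice of the training-set fraction as the fidelity are all illustrative assumptions, not taken from any particular package:

```python
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)


def evaluate(C: float, fidelity: float = 1.0) -> float:
    """Hypothetical proxy: validation error of a model trained on only
    a `fidelity` fraction of the training data."""
    n = max(50, int(fidelity * len(X_train)))  # low fidelity = small subset
    model = LogisticRegression(C=C, max_iter=500)
    model.fit(X_train[:n], y_train[:n])
    return 1 - model.score(X_val, y_val)


cheap_estimate = evaluate(C=1.0, fidelity=0.1)  # fast, noisy approximation
true_value = evaluate(C=1.0, fidelity=1.0)      # slow, expensive evaluation
```

An HPO method can spend most of its budget on such cheap estimates and reserve full-fidelity evaluations for the most promising configurations.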
Our Packages
- Auto-sklearn increased its efficiency in version 2.0 by using multi-fidelity optimization.
- Auto-PyTorch was designed as a multi-fidelity approach from the outset and demonstrates how important multi-fidelity optimization is for AutoDL.
- SMAC implements the BOHB approach, combining Hyperband as the multi-fidelity strategy with Bayesian Optimization (see the usage sketch after this list).
- DEHB replaces the random search component of Hyperband with Differential Evolution, achieving strong performance for multi-fidelity HPO [blog]; the successive-halving loop at Hyperband's core is sketched below.
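Both BOHB and DEHB build on the successive-halving loop at the heart of Hyperband: evaluate many configurations at a low fidelity, keep only the best fraction, and re-evaluate the survivors at a higher fidelity. Here is a minimal sketch of one such bracket; the `sample_config` and `evaluate` callables and the toy objective are hypothetical:

```python
import random


def successive_halving(sample_config, evaluate, n=27, min_budget=1, eta=3):
    """One Hyperband bracket: rank configs by their low-fidelity score,
    keep the top 1/eta, and promote survivors to eta times the budget."""
    configs = [sample_config() for _ in range(n)]
    budget = min_budget
    while len(configs) > 1:
        ranked = sorted(configs, key=lambda c: evaluate(c, budget))
        configs = ranked[: max(1, len(configs) // eta)]  # keep the best 1/eta
        budget *= eta  # survivors are re-evaluated at a higher fidelity
    return configs[0]


# Toy usage: DEHB would generate candidates via differential evolution
# instead of the random sampling used here.
best = successive_halving(
    sample_config=lambda: {"lr": 10 ** random.uniform(-4, -1)},
    evaluate=lambda config, budget: abs(config["lr"] - 0.01) / budget,
)
```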
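For the SMAC bullet above, the following is a minimal sketch of how a multi-fidelity run might look with SMAC3's `MultiFidelityFacade`, assuming the SMAC3 v2.x API; the dataset, search space, and the interpretation of the budget as training epochs are our own illustrative choices:

```python
from ConfigSpace import Configuration, ConfigurationSpace
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

from smac import MultiFidelityFacade, Scenario

X, y = load_digits(return_X_y=True)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)


def train(config: Configuration, seed: int = 0, budget: float = 5) -> float:
    # We interpret the budget as the number of training epochs (an assumption).
    clf = MLPClassifier(
        hidden_layer_sizes=(config["n_neurons"],),
        learning_rate_init=config["learning_rate_init"],
        max_iter=int(budget),
        random_state=seed,
    )
    clf.fit(X_train, y_train)
    return 1 - clf.score(X_val, y_val)  # SMAC minimizes the returned cost


configspace = ConfigurationSpace(
    {"n_neurons": (8, 256), "learning_rate_init": (1e-4, 1e-1)}
)

# min_budget/max_budget span the fidelities Hyperband can allocate.
scenario = Scenario(configspace, n_trials=100, min_budget=5, max_budget=50)
smac = MultiFidelityFacade(scenario, train)
incumbent = smac.optimize()
```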