AutoML.org

Freiburg-Hannover-Tübingen

Construction of Hierarchical Neural Architecture Search Spaces based on Context-free Grammars

Authors: Simon Schrodi, Danny Stoll, Binxin Ru, Rhea Sanjay Sukthanker, Thomas Brox, and Frank Hutter

TL;DR: We take a functional view of neural architecture search that allows us to construct highly expressive search spaces based on context-free grammars, and show that we can efficiently find well-performing architectures.

NAS is great, but… The neural architecture plays […]
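To illustrate the core idea of a grammar-defined search space, here is a minimal sketch: a toy context-free grammar over network motifs and a recursive sampler that derives architecture strings from it. The grammar, its symbols, and the `sample` function are made up for illustration and are not the search spaces or code from the paper.

```python
# A minimal sketch (not the paper's grammar or API): sampling architectures
# from a toy context-free grammar over network motifs.
import random

# Hypothetical grammar: nonterminals map to lists of possible expansions.
GRAMMAR = {
    "ARCH": [["BLOCK"], ["BLOCK", " ", "ARCH"]],        # one or more blocks
    "BLOCK": [["seq(", "OP", ",", "OP", ")"],            # sequential composition
              ["res(", "OP", ")"]],                       # residual wrapper
    "OP": [["conv3x3"], ["conv1x1"], ["maxpool"], ["identity"]],
}

def sample(symbol="ARCH", max_depth=6, depth=0):
    """Recursively expand a nonterminal into a terminal architecture string."""
    if symbol not in GRAMMAR:            # terminal symbol: emit as-is
        return symbol
    rules = GRAMMAR[symbol]
    # Fall back to the first (shortest) rule once the derivation gets deep.
    rule = rules[0] if depth >= max_depth else random.choice(rules)
    return "".join(sample(s, max_depth, depth + 1) for s in rule)

if __name__ == "__main__":
    for _ in range(3):
        print(sample())   # e.g. "res(conv3x3) seq(maxpool,conv1x1)"
```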

Read More

Rethinking Bias Mitigation: Fairer Architectures Make for Fairer Face Recognition

Deep learning is applied to a wide variety of socially-consequential domains, e.g., credit scoring, fraud detection, hiring decisions, criminal recidivism, loan repayment, and face recognition, with many of these applications impacting the lives of people more than ever — often in biased ways. Dozens of formal definitions of fairness have been proposed, and many algorithmic […]

Read More

Symbolic Explanations for Hyperparameter Optimization

Authors: Sarah Segel, Helena Graf, Alexander Tornede, Bernd Bischl, and Marius Lindauer

TL;DR: We propose to apply symbolic regression in a hyperparameter optimization setting to obtain explicit formulas that provide simple and interpretable explanations of the effects of hyperparameters on model performance.

HPO is great, but… In the field of machine learning, hyperparameter optimization (HPO) […]
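As a rough sketch of the approach, the snippet below fits a symbolic-regression model to (hyperparameter, performance) pairs and prints the resulting formula. It uses the gplearn library on synthetic data; this is not the paper's code or experimental setup.

```python
# A minimal sketch (not the paper's code): fit symbolic regression to
# (hyperparameter, performance) pairs collected during HPO to obtain an
# explicit, human-readable formula. Uses gplearn; the data here is synthetic.
import numpy as np
from gplearn.genetic import SymbolicRegressor

rng = np.random.default_rng(0)

# Pretend these are sampled configs (log learning rate, log weight decay)
# and the validation error observed for each config during HPO.
X = rng.uniform(low=[-5.0, -6.0], high=[-1.0, -2.0], size=(200, 2))
y = (X[:, 0] + 3.0) ** 2 + 0.3 * X[:, 1] + rng.normal(0, 0.05, size=200)

sr = SymbolicRegressor(
    population_size=1000,
    generations=20,
    function_set=("add", "sub", "mul", "div"),
    parsimony_coefficient=0.01,   # prefer short, interpretable formulas
    random_state=0,
)
sr.fit(X, y)

# The evolved expression approximates how the hyperparameters drive performance.
print(sr._program)   # e.g. add(mul(add(X0, 3.0), add(X0, 3.0)), ...)
```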

Read More

Self-Adjusting Bayesian Optimization with SAWEI

By Carolin Benjamins, Elena Raponi, Anja Jankovic, Carola Doerr, and Marius Lindauer

TL;DR: In BO, we self-adjust the exploration-exploitation trade-off online in the acquisition function, adapting to any problem landscape.

Motivation: Bayesian optimization (BO) encompasses a class of surrogate-based, sample-efficient algorithms for optimizing black-box problems with small evaluation budgets. However, BO itself has numerous design […]
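For intuition, here is a minimal sketch of a weighted Expected Improvement acquisition function, where a weight w interpolates between exploitation (expected mean improvement) and exploration (predictive uncertainty). SAWEI's contribution is adjusting such a weight online during the run; the fixed weights below are placeholders, not the paper's adjustment rule.

```python
# A minimal sketch of weighted Expected Improvement for minimization:
#   WEI(x) = w * (f_best - mu) * Phi(z) + (1 - w) * sigma * phi(z),  z = (f_best - mu) / sigma
# SAWEI self-adjusts w online; the loop over fixed weights below is only a placeholder.
import numpy as np
from scipy.stats import norm

def weighted_ei(mu, sigma, f_best, w):
    sigma = np.maximum(sigma, 1e-12)
    z = (f_best - mu) / sigma
    exploit = (f_best - mu) * norm.cdf(z)   # expected improvement of the mean
    explore = sigma * norm.pdf(z)           # reward for remaining uncertainty
    return w * exploit + (1.0 - w) * explore

# Example: scores for three candidate points under some surrogate posterior.
mu = np.array([0.2, 0.5, 0.1])
sigma = np.array([0.05, 0.30, 0.01])
f_best = 0.15

for w in (0.2, 0.5, 0.8):   # placeholder weights; SAWEI would set w adaptively
    print(w, weighted_ei(mu, sigma, f_best, w))
```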

Read More

Experience-Driven Algorithm Selection: Making better and cheaper selection decisions

Authors: Tim Ruhkopf, Aditya Mohan, Difan Deng, Alexander Tornede, Frank Hutter, and Marius Lindauer

TL;DR: We augment classical algorithm selection with multi-fidelity information and make it non-myopic through meta-learning, enabling us for the first time to jointly interpret partial learning curves of varying lengths and make good algorithm recommendations at low cost.

Why should […]
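As a loose illustration of selecting among algorithms from partial learning curves, the sketch below fits a simple saturating curve to each partial curve and recommends the algorithm with the best extrapolated final value. This plain curve-fitting stand-in is not the meta-learned, non-myopic model from the paper; the curves and algorithm names are invented.

```python
# A minimal sketch (not the paper's method): score candidate algorithms from
# partial learning curves of different lengths by extrapolating each with a
# simple power-law fit and comparing the predicted final performance.
import numpy as np
from scipy.optimize import curve_fit

def pow_law(t, a, b, c):
    # Saturating learning-curve model: performance rises towards a.
    return a - b * np.power(t, -c)

def predicted_final(curve, horizon=100):
    t = np.arange(1, len(curve) + 1, dtype=float)
    params, _ = curve_fit(pow_law, t, curve, p0=[curve[-1], 1.0, 0.5], maxfev=10000)
    return pow_law(horizon, *params)

# Partial validation-accuracy curves of varying lengths for three algorithms.
curves = {
    "algo_A": np.array([0.55, 0.63, 0.68, 0.71]),
    "algo_B": np.array([0.60, 0.64, 0.66]),
    "algo_C": np.array([0.40, 0.58, 0.70, 0.76, 0.79]),
}

scores = {name: predicted_final(c) for name, c in curves.items()}
print(max(scores, key=scores.get), scores)   # recommend the best-extrapolating algorithm
```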

Read More

PFNs4BO: In-Context Learning for Bayesian Optimization

Can we replace the GP in BO with in-context learning? Absolutely. We achieve strong real-world performance on a variety of benchmarks with a PFN that uses only in-context learning to provide training values. This is what we found out in our ICML ’23 paper PFNs4BO: In-Context Learning for Bayesian Optimization. Our models are trained only […]
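A rough sketch of the surrogate-swap idea: the BO loop below uses a surrogate that makes predictions purely by conditioning on the observed history, with no refitting between iterations. The `in_context_surrogate` function is a hypothetical stand-in for a meta-trained PFN forward pass (not the pfns4bo code), and the objective and grid are toy choices.

```python
# A minimal sketch of BO with an "in-context" surrogate: predictions come only
# from conditioning on the observed (x, y) history. The stand-in below is NOT
# a PFN; a real PFN would be a meta-trained transformer fed the history as its
# input sequence, but the loop structure is the same.
import numpy as np
from scipy.stats import norm

def in_context_surrogate(history_x, history_y, query_x):
    """Hypothetical stand-in for a PFN forward pass: distance-weighted mean/std."""
    d = np.abs(query_x[:, None] - history_x[None, :]) + 1e-9
    w = 1.0 / d
    w /= w.sum(axis=1, keepdims=True)
    mu = w @ history_y
    var = (w * (history_y[None, :] - mu[:, None]) ** 2).sum(axis=1)
    return mu, np.sqrt(var) + 1e-6

def expected_improvement(mu, sigma, f_best):
    z = (f_best - mu) / sigma
    return (f_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

objective = lambda x: (x - 0.3) ** 2          # toy black-box function to minimize
grid = np.linspace(0.0, 1.0, 201)

rng = np.random.default_rng(0)
X = list(rng.uniform(0.0, 1.0, 3))            # a few random initial observations
Y = [objective(x) for x in X]

for _ in range(10):
    mu, sigma = in_context_surrogate(np.array(X), np.array(Y), grid)
    x_next = grid[int(np.argmax(expected_improvement(mu, sigma, min(Y))))]
    X.append(x_next)
    Y.append(objective(x_next))

print("best x:", X[int(np.argmin(Y))], "best y:", min(Y))
```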

Read More

Contextualize Me – The Case for Context in Reinforcement Learning

Carolin Benjamins, Theresa Eimer, Frederik Schubert, Aditya Mohan, Sebastian Döhler, André Biedenkapp, Bodo Rosenhahn, Frank Hutter, and Marius Lindauer

TL;DR: We can model and investigate generalization in RL with contextual RL and our benchmark library CARL. In theory, we cannot achieve optimal performance without taking context into account, and in our experiments we saw that using context […]
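As a rough illustration of contextual RL (not the CARL API), the sketch below defines a toy environment whose dynamics depend on a context vector and evaluates one fixed, context-oblivious policy across several contexts. The environment, dynamics, and policy are invented for illustration.

```python
# A minimal sketch of a contextual MDP: the same task with transition dynamics
# parameterized by a context. Generalization can then be studied as performance
# across a distribution of contexts.
import numpy as np

class ContextualPointMass:
    """Toy 1-D point mass pushed towards the origin; gravity comes from the context."""

    def __init__(self, context):
        self.gravity = context["gravity"]   # the context alters the dynamics
        self.reset()

    def reset(self):
        self.pos, self.vel = 1.0, 0.0
        return np.array([self.pos, self.vel])

    def step(self, action):
        self.vel += float(action) - self.gravity * 0.05
        self.pos += self.vel * 0.1
        reward = -abs(self.pos)              # stay close to the origin
        return np.array([self.pos, self.vel]), reward

def rollout(env, policy, steps=50):
    obs, total = env.reset(), 0.0
    for _ in range(steps):
        obs, r = env.step(policy(obs))
        total += r
    return total

# One fixed (context-oblivious) policy, evaluated under different contexts.
policy = lambda obs: -0.5 * obs[0] - 0.2 * obs[1]
for g in (0.0, 5.0, 15.0):
    print("gravity =", g, "return =", rollout(ContextualPointMass({"gravity": g}), policy))
```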

Read More

Hyperparameter Tuning in Reinforcement Learning is Easy, Actually

Hyperparameter optimization tools perform well on reinforcement learning, outperforming grid searches with less than 10% of the budget. If not reported correctly, however, any hyperparameter tuning can heavily skew future comparisons.

Read More

Learning Activation Functions for Sparse Neural Networks: Improving Accuracy in Sparse Models

Authors: Mohammad Loni, Aditya Mohan, Mehdi Asadi, and Marius Lindauer

TL;DR: Optimizing the activation functions and hyperparameters of sparse neural networks helps us squeeze more performance out of them, which in turn helps with deploying models in resource-constrained scenarios. We propose a two-stage optimization pipeline to achieve this.

Motivation: Sparse Neural Networks (SNNs) – the greener and […]
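A minimal sketch of the two-stage idea, with an entirely synthetic evaluation function standing in for actually pruning and training a network: stage one selects an activation function for the sparse model, stage two tunes hyperparameters with that activation fixed. The candidate sets, names, and scores below are made up for illustration, not the paper's pipeline.

```python
# A minimal sketch of a two-stage search: (1) pick an activation function for
# the sparse network, (2) tune training hyperparameters with it fixed. The
# evaluate() function is a synthetic stand-in for pruning + training + validation.
import random

ACTIVATIONS = ["relu", "swish", "tanh", "srelu"]
LEARNING_RATES = [1e-3, 3e-3, 1e-2]
SPARSITY = 0.9   # fraction of weights pruned away

def evaluate(activation, lr, sparsity):
    """Stand-in for: prune to `sparsity`, train with `activation`/`lr`, return val. accuracy."""
    base = {"relu": 0.70, "swish": 0.74, "tanh": 0.68, "srelu": 0.73}[activation]
    return base - 0.1 * abs(lr - 3e-3) / 3e-3 - 0.05 * sparsity + random.gauss(0, 0.005)

random.seed(0)

# Stage 1: search over activation functions with default hyperparameters.
best_act = max(ACTIVATIONS, key=lambda a: evaluate(a, lr=3e-3, sparsity=SPARSITY))

# Stage 2: tune hyperparameters with the chosen activation fixed.
best_lr = max(LEARNING_RATES, key=lambda lr: evaluate(best_act, lr, SPARSITY))

print("chosen activation:", best_act, "chosen learning rate:", best_lr)
```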

Read More

Understanding AutoRL Hyperparameter Landscapes

Authors: Aditya Mohan, Carolin Benjamins, Konrad Wienecke, Alexander Dockhorn, and Marius Lindauer

TL;DR: We investigate hyperparameters in RL by building landscapes of algorithm performance for different hyperparameter values at different stages of training. Using these landscapes, we empirically demonstrate that adjusting hyperparameters during training can improve performance, which opens up new avenues to build better […]
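To make the landscape idea concrete, the sketch below evaluates a grid of learning rates at several training stages and seeds, then reports the best value per stage. The evaluation function is a synthetic stand-in for training an RL agent up to that stage, not data or code from the paper.

```python
# A minimal sketch of a hyperparameter landscape: a (stage x hyperparameter)
# grid of mean returns over seeds. The stand_in_return function is synthetic;
# in practice it would be "train the agent to this stage and report the return".
import numpy as np

rng = np.random.default_rng(0)
learning_rates = np.logspace(-4, -2, 9)
stages = [0.25, 0.5, 1.0]            # fractions of the total training budget
n_seeds = 5

def stand_in_return(lr, stage, seed_noise):
    # Synthetic landscape whose optimal learning rate shrinks as training progresses.
    opt_lr = 1e-3 * (1.5 - stage)
    return -(np.log10(lr) - np.log10(opt_lr)) ** 2 + seed_noise

landscape = np.zeros((len(stages), len(learning_rates)))
for i, stage in enumerate(stages):
    for j, lr in enumerate(learning_rates):
        returns = [stand_in_return(lr, stage, rng.normal(0, 0.02)) for _ in range(n_seeds)]
        landscape[i, j] = np.mean(returns)

# Best learning rate per stage: if it moves, static tuning leaves performance on the table.
for stage, row in zip(stages, landscape):
    print("stage", stage, "-> best lr", learning_rates[int(np.argmax(row))])
```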

Read More