AutoML.org

Freiburg-Hannover-Tübingen

Call for Datasets: OpenML 2023 Benchmark Suites

Algorithm benchmarks serve as a beacon for machine learning research. They allow us, as a community, to track progress over time, identify challenging problems, raise the bar, and learn how to do better. The OpenML.org platform already serves thousands of datasets together with tasks (a combination of a dataset with a target attribute, a performance metric […]

Read More

Can Fairness be Automated?

At the risk of sounding cliché, “with great power comes great responsibility.” While we don’t want to suggest that machine learning (ML) practitioners are superheroes, what was true for Spider-Man is also true for those building predictive models – and even more so for those building AutoML tools. Only last year, the Netherlands Institute for […]

Read More

Zero-Shot Selection of Pretrained Models

Deep learning (DL) has celebrated many successes, but it’s still a challenge to find the right model for a given dataset — especially with a limited budget. Adapting DL models to new problems can be computationally intensive and requires comprehensive training data. On tabular data, AutoML solutions like Auto-sklearn and AutoGluon work very well. However, […]

Read More

Wrapping Up AutoML-Conf 2022 and Introducing the 2023 Edition

The inaugural AutoML Conference 2022 was an exciting adventure for us! With 170 attendees in its very first iteration, we consider the conference a big success, and it confirmed our belief that it was the right time to transition from a workshop series to a full-fledged conference. In this blog post, we will summarize […]

Read More

Review of the Year 2022 (Hannover)

by the AutoML Hannover Team. The year 2022 was an exciting year for us. So much happened: at the Leibniz University Hannover (LUH), we founded our new Institute of Artificial Intelligence, in short LUH|AI; Marius got tenure and was promoted to full professor; the group is growing further with our new team members Alexander […]

Read More

Learning Synthetic Environments and Reward Networks for Reinforcement Learning

In supervised learning, several works have investigated training networks on artificial data. For instance, in dataset distillation, the information of a larger dataset is distilled into a smaller synthetic dataset in order to reduce training time. Synthetic environments (SEs) apply a similar idea to reinforcement learning (RL): they are proxies for real environments […]

Read More

Rethinking AutoML: Advancing from a Machine-Centered to Human-Centered Paradigm

In this blog post, we argue why the development of the first generation of AutoML tools ended up being less fruitful than expected and how we envision a new paradigm of automated machine learning (AutoML) that is focused on the needs and workflows of ML practitioners and data scientists. The Vision of AutoML The last […]

Read More

TabPFN: A Transformer That Solves Small Tabular Classification Problems in a Second

A radically new approach to tabular classification: we introduce TabPFN, a new tabular data classification method that takes < 1 second and yields SOTA performance (competitive with the best AutoML pipelines given an hour). So far, it is limited in scale, though: it can only tackle problems with up to 1,000 training examples, 100 features and […]

Read More

DEHB

DEHB: Evolutionary Hyperband for Scalable, Robust and Efficient Hyperparameter Optimization, by Noor Awad. Modern machine learning algorithms crucially rely on several design decisions to achieve strong performance, making the problem of Hyperparameter Optimization (HPO) more important than ever. We believe that a practical, general HPO method must fulfill many desiderata, including: (1) strong anytime performance, […]
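The Hyperband component that DEHB builds on allocates compute via successive halving: each bracket starts many configurations on a small budget and repeatedly keeps only the best fraction on a larger budget. As a rough illustration of that budget schedule (a minimal sketch of plain Hyperband, not of the DEHB implementation, with function and parameter names of our own choosing), one can enumerate the brackets like this:

```python
import math

def hyperband_schedule(max_budget=81, eta=3):
    """Enumerate Hyperband brackets as lists of (n_configs, budget) rungs.

    Each bracket starts n configurations at a small budget and repeatedly
    keeps the best 1/eta of them while multiplying the budget by eta
    (successive halving). Assumes max_budget is a power of eta.
    """
    s_max = round(math.log(max_budget, eta))
    brackets = []
    for s in range(s_max, -1, -1):
        n = math.ceil((s_max + 1) * eta**s / (s + 1))  # configs in the first rung
        budget = max_budget / eta**s                   # budget in the first rung
        rungs = []
        for _ in range(s + 1):
            rungs.append((n, budget))
            n //= eta         # keep only the top 1/eta configurations
            budget *= eta     # give the survivors eta times more budget
        brackets.append(rungs)
    return brackets

# Most exploratory bracket: 81 configs at budget 1, then 27 at 3,
# 9 at 9, 3 at 27, and finally 1 config at the full budget 81.
print(hyperband_schedule(81, 3)[0])
```

Note the anytime flavor of this schedule: bad configurations are discarded after tiny budgets, so useful results are available long before any single configuration has been trained with the full budget. DEHB replaces Hyperband's random sampling of new configurations with differential evolution.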

Read More

Deep Learning 2.0: Extending the Power of Deep Learning to the Meta-Level

Deep Learning (DL) has been able to revolutionize learning from raw data (images, text, speech, etc.) by replacing domain-specific hand-crafted features with features that are jointly learned for the particular task at hand. In this blog post, I propose to take deep learning to the next level, by also jointly (meta-)learning other, currently hand-crafted, elements […]

Read More