Research
My research so far has focused on the pre-training and robustness of LLMs and VLMs. I am currently exploring techniques that "go beyond the data" to improve LLM reasoning. Selected papers are highlighted.
LLMs on the Line: Data Determines Loss-to-Loss Scaling Laws
Prasanna Mayilvahanan*,
Thaddäus Wiedemer*,
Sayak Mallick,
Matthias Bethge,
Wieland Brendel
ICML, 2025
arXiv / project page
We find that two substantially different training setups, differing in architecture, tokenizer, optimizer, and other design choices, consistently yield matching downstream performance across diverse tasks when they are trained on the same data and reach identical training losses.
In Search of Forgotten Domain Generalization
Prasanna Mayilvahanan*,
Roland S. Zimmermann*,
Thaddäus Wiedemer,
Evgenia Rusak,
Attila Juhos,
Matthias Bethge,
Wieland Brendel
ICLR, 2025 (Spotlight)
arXiv / project page
CLIP's strong performance on style-centric domain shifts is driven in large part by the presence of such images in its training set.
Does CLIP's Generalization Performance Mainly Stem from High Train-Test Similarity?
Prasanna Mayilvahanan*,
Thaddäus Wiedemer*,
Evgenia Rusak,
Matthias Bethge,
Wieland Brendel
ICLR, 2024
arXiv / project page
CLIP's ability to generalize to standard OOD benchmarks does not mainly stem from exact duplicates and near-duplicates in its training dataset.
Compositional Generalization from First Principles
Thaddäus Wiedemer*,
Prasanna Mayilvahanan*,
Matthias Bethge,
Wieland Brendel
NeurIPS, 2023
arXiv
We introduce a theoretical framework to analyze compositional generalization of neural networks within the regression setting.
Representation Learning for the Clustering of Multi-Omics Data
Gautier Viaud,
Prasanna Mayilvahanan,
Paul-Henry Cournède
IEEE/ACM TCBB, 2022
paper
We provide a neural network-based representation learning and clustering method for multi-omics data integration.
* denotes equal contribution
Template from Jon Barron's website.