since '21 Ram and Vijay Shriram Data Science Fellow @Stanford University
'15-'20 Doctoral Researcher in ML @Technical University of Berlin
'14-'15 Research Scientist in Neuroeconomics @Caltech
DSc CS/ML @TU Berlin
MSc Neurosci. @FU Berlin
BSc Psychology @FU Berlin
…developing self-supervised learning tasks to enable the training of deep learning models at scale on public brain data (e.g., [Paper])
…studying brain activity through explanations of deep learning models trained on brain data (e.g., [Paper], [Paper], [Paper])
…building AI systems that can learn at scale from unconventional data (e.g., brain data), using HPC clusters
…providing insight into deep learning model behavior with tools from explainable AI
…state-of-the-art computer vision, language, and sequence modeling with Transformers
…designing computational models to predict and understand human behavior
…making sense of complex, real-world data with tools from Bayesian/frequentist statistics and predictive modeling
[March, 2023] Grateful to be awarded a Google Cloud Computing Grant by Stanford HAI to develop next-level AI tools for neuroscience research
[February, 2023] Excited to share our new work, in which we show that one can match the performance of sophisticated state-of-the-art sequence models (such as S4) simply by modeling input sequences with long convolutions (plus some simple regularization). There is also a blog post providing a high-level view of this work.
[January, 2023] Very grateful for the recent extension of my Ram and Vijay Shriram data science fellowship by another year! Looking forward to continuing to push boundaries at the intersection of AI and neuroscience with Russ and Chris!
[January, 2023] Excited that our recent work on language modeling with state space models just got accepted for ICLR 2023 as notable top-25%!
[January, 2023] Our recent work on “differentiable programming for functional connectomics” won a best poster award at the ML4H workshop at NeurIPS!
[January, 2023] Our recent work on language modeling with state space models is now available on arXiv — We propose i) a new state space model layer, H3, that can recall and compare tokens across (long) sequences well and ii) a new algorithm, FlashConv, to speed up state space model training! There is also a blog post!
[November, 2022] Our recent work exploring differentiable programming as a paradigm to learn analysis pipelines for functional connectomics has been accepted at the upcoming Machine Learning for Health (ML4H) workshop at NeurIPS!
[October, 2022] Our latest review on “Interpreting mental state decoding with deep learning models” is now published in Trends in Cognitive Sciences!
[September, 2022] Looking forward to presenting our paper on self-supervised learning for neuroimaging data at NeurIPS 2022 in New Orleans!
[July, 2022] Our recent work on the role of gaze allocation in multi-alternative risky choice is now published in PLOS Computational Biology!
[June, 2022] Excited to share our new preprint in which we devise self-supervised learning frameworks for broad functional neuroimaging data by taking inspiration from recent advances in NLP! All code, models, and our dataset can be found on GitHub
[June, 2022] We just uploaded a new preprint comparing different interpretation methods in mental state decoding analyses with deep learning models; Find the GitHub repo here
[June, 2022] Excited to share our recent preprint in which we explore differentiable programming as a paradigm to learn analysis pipelines for functional connectomics; Find all code here
[March, 2022] Had a great time discussing recent advances in data science and machine learning at this year’s Future Leaders Summit hosted by the Michigan Institute for Data Science
[October, 2021] I gave a practical tutorial on reproducible modeling for the 2021 fall lecture series of Stanford's Center for Open and REproducible Science; A recording of the talk can be found here and the accompanying GitHub repository here
[September, 2021] The Massachusetts Society for Medical Research wrote a brief summary of our work on many-alternative choices.
[August, 2021] We uploaded a new preprint, together with Russ Poldrack and Chris Ré, in which we discuss challenges (and solutions) for deep learning methods in brain decoding
[August, 2021] Happy to have contributed to the report on foundation models (i.e., broadly pretrained models with wide adaptation; e.g., GPT-3, BERT, CLIP) by Stanford HAI with a section on the interpretability of foundation models
[June, 2021] I will be working alongside an amazing team as head technical mentor for Stanford's Data Science for Social Good (DSSG) summer program this year
[May, 2021] Honored to be awarded a Google Cloud Computing Grant by Stanford HAI
[April, 2021] Our paper on the computational mechanisms underlying many-alternative choices appeared in eLife!
[January, 2021] Thrilled to begin my work as a Ram and Vijay Shriram postdoctoral fellow with Stanford Data Science
- Fu, D., Epstein, E., Nguyen, E., Thomas, A. W., Zhang, M., Dao, T., Rudra, A., & Ré, C. (2023). Simple Hardware-Efficient Long Convolutions for Sequence Modeling. preprint: arXiv:2302.06646.
- Dao, T., Fu, D. Y., Saab, K. K., Thomas, A. W., Rudra, A., & Ré, C. (2023). Hungry Hungry Hippos: Towards Language Modeling with State Space Models. International Conference on Learning Representations (ICLR). preprint: arXiv:2212.14052.
- Thomas, A. W., Ré, C., & Poldrack, R. A. (2022). Interpreting mental state decoding with deep learning models. Trends in Cognitive Sciences, 26(11), 972-986. doi.org/10.1016/j.tics.2022.07.003.
- Thomas, A. W., Ré, C., & Poldrack, R. A. (2022). Self-supervised learning of brain dynamics from broad neuroimaging data. Advances in Neural Information Processing Systems (NeurIPS). preprint: arXiv:2206.11417.
- Ciric, R., Thomas, A. W., Esteban, O., & Poldrack, R. A. (2022). Differentiable programming for functional connectomics. In Machine Learning for Health. Proceedings of Machine Learning Research. preprint: arXiv:2206.00649.
- Thomas, A. W., Ré, C., & Poldrack, R. A. (2022). Comparing interpretation methods in mental state decoding analyses with deep learning models. preprint: arXiv:2205.15581.
- Molter, F., Thomas, A. W., Huettel, S. A., Heekeren, H. R., & Mohr, P. N. (2022). Gaze-dependent evidence accumulation predicts multi-alternative risky choice behaviour. PLoS Computational Biology, 18(7), e1010283. doi.org/10.1371/journal.pcbi.1010283
- Bommasani, R., …, Thomas, A. W., …, & Liang, P. (2021). On the opportunities and risks of foundation models. preprint: arXiv:2108.07258.
- Thomas, A. W., Molter, F., & Krajbich, I. (2021). Uncovering the computational mechanisms underlying many-alternative choice. eLife, 10, e57012. doi.org/10.7554/eLife.57012
- Thomas, A. W., Lindenberger, U., Samek, W., & Müller, K. R. (2021). Evaluating deep transfer learning for whole-brain cognitive decoding. preprint: arXiv:2111.01562.
- Thomas, A. W. (2020). Machine learning methods for modeling gaze allocation in simple choice behavior and functional neuroimaging data on the level of the individual. Technische Universität Berlin, Berlin. doi.org/10.14279/depositonce-10932
- Thomas, A. W., Molter, F., Krajbich, I., Heekeren, H. R., & Mohr, P. N. (2019). Gaze bias differences capture individual choice behaviour. Nature Human Behaviour, 3(6), 625. doi.org/10.1038/s41562-019-0584-8
- Thomas, A. W., Heekeren, H. R., Müller, K. R., & Samek, W. (2019). Analyzing Neuroimaging Data Through Recurrent Deep Learning Models. Frontiers in Neuroscience, 13, 1321. doi.org/10.3389/fnins.2019.01321
- Thomas, A. W., Molter, F., Heekeren, H. R., & Mohr, P. N. (2019). GLAMbox: A Python toolbox for investigating the association between gaze allocation and decision behaviour. PLoS ONE, 14(12). doi.org/10.1371/journal.pone.0226428
- Thomas, A. W., Müller, K. R., & Samek, W. (2019). Deep transfer learning for whole-brain fMRI analyses. Machine Learning in Clinical Neuroimaging Workshop at MICCAI 2019. doi.org/10.1007/978-3-030-32695-1_7
💿 Open Source Software
I believe in open science and therefore put a strong emphasis on open-sourcing all code and data used for my research and teaching. Find key examples of my open source work below:
- Self-supervised learning of brain dynamics
- Comparing interpretation methods in neuroimaging
- Evaluating transfer learning for brain decoding
- Gaze biases capture individual choice behaviour
- A Python toolbox for the gaze-weighted linear accumulating model
- Uncovering the computational mechanisms of many-alternative choice
Teaching & Tutorials: