Armin W. Thomas


📩 athms.research@gmail.com

👋🏻
Hi there! I am a Ram and Vijay Shriram Data Science Fellow (Postdoc) at Stanford University, where I work with Profs. Russ Poldrack and Christopher Ré on state-of-the-art artificial intelligence tools for studying and understanding the mind and brain. I am also affiliated with Stanford Data Science, Stanford's Center for Research on Foundation Models, and Stanford's Center for Open and Reproducible Science.

🧑🏻‍💻
I have a wide skill set connecting state-of-the-art machine learning, data science, psychology, and neuroscience, with a focus on …

…self-supervised training at scale on unconventional data (e.g., brain data)

…interpreting deep learning model behavior with tools from explainable AI

…state-of-the-art computer vision and language modeling with Transformers

…designing computational models to predict and study human (choice) behavior

…making sense of complex data with Bayesian/Frequentist statistical analysis

🔥
To date, my work has contributed to …

…improving our understanding of the computational mechanisms that support decision-making in everyday life (e.g., [Paper], [Paper])

…improving the analysis of brain data with small sample sizes through pre-training at scale (e.g., [Paper], [Paper])

…improving the interpretation of deep learning model predictions (e.g., [Paper], [Paper], [Paper])

…advancing sequence and language modeling with state-space models (e.g., [Paper])

📰 News

2023

[January, 2023] Excited that our recent work on language modeling with state space models was accepted to ICLR 2023 as a notable-top-25% paper!

[January, 2023] Our recent work on “differentiable programming for functional connectomics” won a best poster award at the ML4H workshop at NeurIPS!

[January, 2023] Our recent work on language modeling with state space models is now available on arXiv! We propose (i) a new state space model layer, H3, that can effectively recall and compare tokens across long sequences, and (ii) a new algorithm, FlashConv, that speeds up state space model training.
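For readers curious what a state space model layer actually computes, here is a minimal, illustrative sketch of the underlying linear recurrence in plain NumPy. It is a generic discretized linear SSM, not the H3 implementation itself (H3 additionally combines shift and diagonal SSMs with multiplicative gating, and FlashConv exploits an equivalent convolutional form for fast training):

```python
import numpy as np

def ssm_scan(u, A, B, C):
    """Linear state space recurrence: x_t = A x_{t-1} + B u_t, y_t = C x_t.

    u: input sequence of shape (seq_len,); A: (d, d); B, C: (d,).
    Illustrative sketch only -- not the H3 layer itself.
    """
    x = np.zeros(A.shape[0])
    ys = []
    for u_t in u:
        x = A @ x + B * u_t  # fold the current input into the hidden state
        ys.append(C @ x)     # read out this step's scalar output
    return np.array(ys)

# Example: a 4-dimensional SSM filtering a random length-16 input.
rng = np.random.default_rng(0)
A = 0.9 * np.eye(4)  # stable state transition
B, C = rng.normal(size=4), rng.normal(size=4)
y = ssm_scan(rng.normal(size=16), A, B, C)
```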

2022

[November, 2022] Our recent work exploring differentiable programming as a paradigm for learning analysis pipelines in functional connectomics has been accepted at the upcoming Machine Learning for Health (ML4H) workshop at NeurIPS!

[October, 2022] Our latest review, “Interpreting mental state decoding with deep learning models,” is now published in Trends in Cognitive Sciences!

[September, 2022] Looking forward to presenting our paper on self-supervised learning for neuroimaging data at NeurIPS 2022 in New Orleans!

[July, 2022] Our recent work on the role of gaze allocation in multi-alternative risky choice is now published in PLOS Computational Biology!

[June, 2022] Excited to share our new preprint, in which we devise self-supervised learning frameworks for broad functional neuroimaging data by taking inspiration from recent advances in NLP! All code, models, and our dataset can be found on GitHub.

[June, 2022] We just uploaded a new preprint comparing different interpretation methods in mental state decoding analyses with deep learning models; find the GitHub repo here.

[June, 2022] Excited to share our recent preprint, in which we explore differentiable programming as a paradigm for learning analysis pipelines in functional connectomics; find all code here.

[March, 2022] Had a great time discussing recent advances in data science and machine learning at this year's Future Leaders Summit, hosted by the Michigan Institute for Data Science.

2021

[October, 2021] I gave a practical tutorial on reproducible modeling for the 2021 fall lecture series of Stanford's Center for Open and Reproducible Science; a recording of the talk can be found here and the accompanying GitHub repository here.

[September, 2021] The Massachusetts Society for Medical Research wrote a brief summary of our work on many-alternative choices.

[August, 2021] We uploaded a new preprint, together with Russ Poldrack and Chris Ré, in which we discuss challenges (and solutions) for deep learning methods in brain decoding.

[August, 2021] Happy to have contributed a section on the interpretability of foundation models to the Stanford HAI report on foundation models (i.e., broadly pretrained models with wide adaptation; e.g., GPT-3, BERT, CLIP).

[June, 2021] I will be working alongside an amazing team as head technical mentor for Stanford's Data Science for Social Good (DSSG) summer program this year.

[May, 2021] Honored to be awarded a Google Cloud Computing Grant by Stanford HAI.

[April, 2021] Our paper on the computational mechanisms underlying many-alternative choices appeared in eLife!

[January, 2021] Thrilled to begin my work as a Ram and Vijay Shriram postdoctoral fellow with Stanford Data Science.

📚 Publications

[2023]

  • Dao, T., Fu, D. Y., Saab, K. K., Thomas, A. W., Rudra, A., & Ré, C. (2023). Hungry Hungry Hippos: Towards Language Modeling with State Space Models. arXiv preprint arXiv:2212.14052.

[2022]

  • Thomas, A. W., Ré, C., & Poldrack, R. A. (2022). Interpreting mental state decoding with deep learning models. Trends in Cognitive Sciences, 26(11), 972-986. doi.org/10.1016/j.tics.2022.07.003.
  • Thomas, A. W., Ré, C., & Poldrack, R. A. (2022). Self-supervised learning of brain dynamics from broad neuroimaging data. In Advances in Neural Information Processing Systems, 35. (preprint: arXiv:2206.11417).
  • Ciric, R., Thomas, A. W., Esteban, O., & Poldrack, R. A. (2022). Differentiable programming for functional connectomics. In Machine Learning for Health. Proceedings of Machine Learning Research. (preprint: arXiv:2206.00649).
  • Thomas, A. W., Ré, C., & Poldrack, R. A. (2022). Comparing interpretation methods in mental state decoding analyses with deep learning models. arXiv preprint arXiv:2205.15581.
  • Molter, F., Thomas, A. W., Huettel, S. A., Heekeren, H. R., & Mohr, P. N. (2022). Gaze-dependent evidence accumulation predicts multi-alternative risky choice behaviour. PLoS Computational Biology, 18(7), e1010283. doi.org/10.1371/journal.pcbi.1010283

[2021]

  • Bommasani, R., …, Thomas, A. W., …, & Liang, P. (2021). On the opportunities and risks of foundation models. arXiv preprint arXiv:2108.07258.
  • Thomas, A. W., Molter, F., & Krajbich, I. (2021). Uncovering the computational mechanisms underlying many-alternative choice. eLife, 10, e57012. doi.org/10.7554/eLife.57012
  • Thomas, A. W., Lindenberger, U., Samek, W., & Müller, K. R. (2021). Evaluating deep transfer learning for whole-brain cognitive decoding. arXiv preprint arXiv:2111.01562.

[2020]

  • Thomas, A. W. (2020). Machine learning methods for modeling gaze allocation in simple choice behavior and functional neuroimaging data on the level of the individual. Technische Universität Berlin, Berlin. doi.org/10.14279/depositonce-10932

[2019]

  • Thomas, A. W., Molter, F., Krajbich, I., Heekeren, H. R., & Mohr, P. N. (2019). Gaze bias differences capture individual choice behaviour. Nature Human Behaviour, 3(6), 625. doi.org/10.1038/s41562-019-0584-8
  • Thomas, A. W., Heekeren, H. R., Müller, K. R., & Samek, W. (2019). Analyzing Neuroimaging Data Through Recurrent Deep Learning Models. Frontiers in Neuroscience, 13, 1321. doi.org/10.3389/fnins.2019.01321
  • Thomas, A. W., Molter, F., Heekeren, H. R., & Mohr, P. N. (2019). GLAMbox: A Python toolbox for investigating the association between gaze allocation and decision behaviour. PLoS ONE, 14(12), e0226428. doi.org/10.1371/journal.pone.0226428
  • Thomas, A. W., Müller, K. R., & Samek, W. (2019). Deep transfer learning for whole-brain fMRI analyses. In OR 2.0 Context-Aware Operating Theaters and Machine Learning in Clinical Neuroimaging (pp. 59-67). Springer, Cham. doi.org/10.1007/978-3-030-32695-1_7

💿 Open Source Software

I believe in open science and therefore put a strong emphasis on open-sourcing all code and data used in my research and teaching. Key examples of my open-source work are listed below:

Research:

Teaching & Tutorials: