I am also affiliated with Stanford's Center for Open and Reproducible Science, Stanford's Center for Research on Foundation Models, and the Max Planck Institute for Human Development.
I am interested in:
- Deep & machine learning
- Decision neuroscience
- Open science & reproducibility
- Model interpretability & robustness
- [October, 2021] I had the opportunity to give a practical tutorial on reproducible modelling for the 2021 fall lecture series of Stanford's Center for Open and Reproducible Science. A recording of the talk can be found here and the accompanying GitHub repository here.
- [September, 2021] The Massachusetts Society for Medical Research wrote a brief summary of our work on many-alternative choices.
- [August, 2021] We uploaded a new preprint, together with Russ Poldrack and Chris Ré, in which we discuss challenges (and solutions) for deep learning methods in brain decoding.
- [August, 2021] Happy to have contributed a section on the interpretability of foundation models to Stanford HAI's report on foundation models (i.e., broadly pretrained models with wide adaptation, such as GPT-3, BERT, and CLIP).
- [June, 2021] I will be working alongside an amazing team as head technical mentor for Stanford's Data Science for Social Good (DSSG) summer program this year.
- [May, 2021] Honored to be awarded a Google Cloud Computing Grant by Stanford HAI
- [April, 2021] Our paper on the computational mechanisms underlying many-alternative choices appeared in eLife!
- [January, 2021] Thrilled to begin my work as a Ram and Vijay Shriram postdoctoral fellow with Stanford Data Science
- Thomas, A. W., Ré, C., & Poldrack, R. A. (2021). Challenges for cognitive decoding using deep learning methods. arXiv preprint arXiv:2108.06896.
- Thomas, A. W., Molter, F., & Krajbich, I. (2021). Uncovering the computational mechanisms underlying many-alternative choice. eLife, 10, e57012. https://doi.org/10.7554/eLife.57012
- Molter, F., Thomas, A. W., Huettel, S. A., Heekeren, H., & Mohr, P. N. C. (2021). Gaze-dependent evidence accumulation predicts multi-alternative risky choice behaviour. PsyArXiv. https://doi.org/10.31234/osf.io/x6nbf
- Thomas, A. W. (2020). Machine learning methods for modeling gaze allocation in simple choice behavior and functional neuroimaging data on the level of the individual. Technische Universität Berlin, Berlin. https://doi.org/10.14279/depositonce-10932
- Thomas, A. W., Heekeren, H. R., Müller, K. R., & Samek, W. (2019). Analyzing Neuroimaging Data Through Recurrent Deep Learning Models. Frontiers in Neuroscience, 13, 1321. https://doi.org/10.3389/fnins.2019.01321
- Thomas, A. W., Molter, F., Krajbich, I., Heekeren, H. R., & Mohr, P. N. (2019). Gaze bias differences capture individual choice behaviour. Nature Human Behaviour, 3(6), 625. https://doi.org/10.1038/s41562-019-0584-8
- Thomas, A. W., Molter, F., Heekeren, H. R., & Mohr, P. N. (2019). GLAMbox: A Python toolbox for investigating the association between gaze allocation and decision behaviour. PLoS ONE, 14(12). https://doi.org/10.1371/journal.pone.0226428
- Thomas, A. W., Müller, K. R., & Samek, W. (2019). Deep transfer learning for whole-brain fMRI analyses. In OR 2.0 Context-Aware Operating Theaters and Machine Learning in Clinical Neuroimaging (pp. 59-67). Springer, Cham. https://doi.org/10.1007/978-3-030-32695-1_7
Key research projects
On the computational mechanisms of simple choice:
On the analysis of fMRI data with deep learning models:
📩 reach out: firstname.lastname@example.org