The Brains and Machines Lab

Mission

Our brain continuously processes complex visual information, constructing a comprehensive understanding of the world around us from fleeting 2D retinal impressions. At the Brains and Machines Lab at BGU, we endeavor to understand human visual cognition by employing neural network modeling and model-driven experiments. Our research utilizes a range of techniques, including psychophysics, eye-tracking, and functional magnetic resonance imaging (fMRI), to probe the mechanisms underlying high-order human vision. By leveraging deep learning, we develop computational hypotheses that could explain the observed behavioral and neural data. We then design and conduct innovative model-driven experiments to empirically test these hypotheses, closing the loop between theory and experiment.

Team

Tal Golan

Principal investigator

Itamar Avitan

Doctoral student

Itay Inbar

Master's student

Omer Yanai

Master's student
Co-advised with Lior Rokach & Bracha Shapira

Gal Shubeli

Master's student
Co-advised with Dan Vilenchik

Ishai Bleiweiss

Research assistant

Avraham Lupovitch

Research assistant

Selected Publications/Preprints

Controversial stimuli: Pitting neural networks against each other as models of human cognition
Tal Golan, Prashant C. Raju, Nikolaus Kriegeskorte. "Controversial stimuli: Pitting neural networks against each other as models of human cognition." Proceedings of the National Academy of Sciences, vol. 117, no. 47, pp. 29330–29337, 2020. DOI: 10.1073/pnas.1912334117.
This work is foundational to our research program. It introduces the method of controversial stimuli: stimuli designed to elicit distinct predictions from alternative models. Using this method, we found that generative neural network classifiers are more human-like than discriminative neural network classifiers.
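The core optimization can be sketched in a few lines of PyTorch. The sketch below is illustrative only: it assumes two pretrained classifiers model_a and model_b that map images to class logits, and synthesize_controversial_stimulus is a hypothetical helper, not the paper's code.

```python
import torch

def synthesize_controversial_stimulus(model_a, model_b, class_a, class_b,
                                      shape=(1, 1, 28, 28), steps=500, lr=0.1):
    # Start from noise and ascend the "controversiality" of the image:
    # model_a should confidently report class_a while model_b reports class_b.
    x = torch.rand(shape, requires_grad=True)
    optimizer = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        log_p_a = torch.log_softmax(model_a(x), dim=-1)[0, class_a]
        log_p_b = torch.log_softmax(model_b(x), dim=-1)[0, class_b]
        # Maximizing the weaker of the two targets forces genuine disagreement:
        # the stimulus scores well only if *both* models commit to their
        # (mutually exclusive) classes.
        loss = -torch.minimum(log_p_a, log_p_b)
        loss.backward()
        optimizer.step()
        with torch.no_grad():
            x.clamp_(0.0, 1.0)  # keep pixels in a valid range
    return x.detach()
```

Human responses to the resulting stimulus then adjudicate between the models: the model whose prediction matches the human judgment gains support.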
Testing the limits of natural language models for predicting human language judgments
Illustration: Matteo Farinella, Columbia University's Zuckerman Institute
Tal Golan*, Matthew Siegelman*, Nikolaus Kriegeskorte, Christopher Baldassano. "Testing the limits of natural language models for predicting human language judgments." Nature Machine Intelligence, 1–13, 2023. DOI: 10.1038/s42256-023-00718-1.
In this work, we translated the controversial stimuli method to the domain of language models, creating pairs of sentences where one sentence was highly likely according to one language model and highly unlikely according to another model, and vice versa. This manipulation allowed us to effectively compare language models in terms of their validity as cognitive models of human relative acceptability judgments.
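For concreteness, here is a minimal sketch of how a sentence can be scored under a causal language model, using the Hugging Face transformers library (an assumption for illustration; the study compared a range of model classes):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# e.g., tokenizer = AutoTokenizer.from_pretrained("gpt2")
#       model = AutoModelForCausalLM.from_pretrained("gpt2")

def sentence_log_prob(model, tokenizer, sentence):
    # Summed log-probability of the sentence's tokens under a causal LM.
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        # The built-in loss is the mean negative log-likelihood over the
        # ids.shape[1] - 1 predicted tokens; undo the averaging and negate.
        mean_nll = model(input_ids=ids, labels=ids).loss
    return -mean_nll.item() * (ids.shape[1] - 1)

# A pair (s1, s2) is controversial when two models rank it in opposite ways:
# log p_A(s1) > log p_A(s2)  while  log p_B(s1) < log p_B(s2).
```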
Face dissimilarity judgments are predicted by representational distance in morphable and image-computable models
Kamila M. Jozwik*, Jonathan O'Keeffe*, Katherine R. Storrs*, Wenxuan Guo, Tal Golan, Nikolaus Kriegeskorte. "Face dissimilarity judgments are predicted by representational distance in morphable and image-computable models." Proceedings of the National Academy of Sciences, vol. 119, no. 27, 2022. DOI: 10.1073/pnas.2115047119.
Similarity and dissimilarity relations among stimuli define a geometry in a representational space. In this work, led by several graduates of Nikolaus Kriegeskorte's former Cambridge lab, the representational geometry of human face similarity judgments is compared to that of multiple computational models, including the Basel Face Model and convolutional neural networks trained on various tasks.
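In analyses of this kind, a model is typically scored by correlating its pairwise stimulus dissimilarities with the human judgments. A minimal sketch with hypothetical variable names (model_features holds one feature vector per stimulus; human_dissimilarities holds one judged dissimilarity per stimulus pair, in matching order):

```python
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def model_rdm(features):
    # Representational dissimilarity matrix in condensed form:
    # one correlation distance per pair of stimuli (rows of `features`).
    return pdist(features, metric="correlation")

def geometry_score(model_features, human_dissimilarities):
    # Rank-correlate the model's pairwise distances with the human
    # dissimilarity judgments for the same stimulus pairs.
    rho, _ = spearmanr(model_rdm(model_features), human_dissimilarities)
    return rho
```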
Distinguishing representational geometries with controversial stimuli: Bayesian experimental design and its application to face dissimilarity judgments
Tal Golan*, Wenxuan Guo*, Heiko H. Schütt, Nikolaus Kriegeskorte. "Distinguishing representational geometries with controversial stimuli: Bayesian experimental design and its application to face dissimilarity judgments." SVRHM 2022 Workshop @ NeurIPS, 2022. openreview.net/forum?id=a3YPu2-Mf2h.
In the previous study, we encountered a challenge: multiple representational models predicted the human similarity ratings equally well. In this work, we introduced a method for generating stimuli that elicit distinct representational geometries in different models. Using these synthetic controversial face stimuli in a human psychophysical experiment, we found that training on "inverse rendering" (estimating structure from pixels) yields a neural network that is more human-like than the same architecture trained on alternative tasks.
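The paper's actual objective is a Bayesian experimental-design criterion defined over full representational geometries, but its core intuition can be sketched with a much simpler loss, assuming model_a and model_b map image batches to feature vectors (hypothetical helper):

```python
import torch.nn.functional as F

def controversiality_loss(model_a, model_b, x1, x2):
    # Representational distance between the two face images under each model.
    d_a = 1 - F.cosine_similarity(model_a(x1).flatten(1), model_a(x2).flatten(1))
    d_b = 1 - F.cosine_similarity(model_b(x1).flatten(1), model_b(x2).flatten(1))
    # Minimizing this pushes model_a to see the pair as similar while
    # model_b sees it as dissimilar; human judgments then arbitrate.
    return (d_a - d_b).mean()
```

Optimizing x1 and x2 under this loss (and its sign-flipped counterpart) yields stimulus pairs about which the two models maximally disagree.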
Deep neural networks are not a single hypothesis but a language for expressing computational hypotheses
Tal Golan, JohnMark Taylor, Heiko Herbert Schütt, Benjamin Peters, Rowan Paolo Sommers, Katja Seeliger, Adrien Doerig, Paul Linton, Talia Konkle, Marcel van Gerven, Konrad Kording, Blake Richards, Tim Christian Kietzmann, Grace W. Lindsay, Nikolaus Kriegeskorte. "Deep neural networks are not a single hypothesis but a language for expressing computational hypotheses." Behavioral and Brain Sciences, vol. 46, 2023, e392. DOI: 10.1017/S0140525X23001553. Preprint: 10.31234/osf.io/tr7gx.
This is a concise statement by researchers from various cognitive computational neuroscience labs, arguing that challenges posed by findings from cognitive psychology should drive progress in neural network-based cognitive modeling rather than prompt a return to traditional box-and-arrow models.
Emergence of brain-like mirror-symmetric viewpoint tuning in convolutional neural networks
Amirhossein Farzmahdi, Wilbert Zarco, Winrich Freiwald, Nikolaus Kriegeskorte, Tal Golan. "Emergence of brain-like mirror-symmetric viewpoint tuning in convolutional neural networks." eLife, e90256, 2024. DOI: 10.7554/eLife.90256.
The AL face patch in macaque monkeys is sensitive to head orientation, but it does not differentiate between mirrored orientations (e.g., left and right profiles). In this study, we show that convolutional neural networks (CNNs) trained to discriminate between real-world objects (which are mostly bilaterally symmetric) provide a parsimonious explanation for this property.
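One way to quantify such tuning in a network, sketched here under assumed inputs: present the same face at several head orientations alongside the horizontally mirrored images, and correlate each unit's responses across the two sets (layer is any callable feature extractor; views is an (n_views, C, H, W) tensor; both are hypothetical names):

```python
import torch

def mirror_symmetry_index(layer, views):
    # Responses to the original views and to their mirror images.
    with torch.no_grad():
        r = layer(views).flatten(1)                           # (n_views, n_units)
        r_m = layer(torch.flip(views, dims=[-1])).flatten(1)  # horizontal flip
    # Per-unit Pearson correlation across viewpoints; values near 1 indicate
    # mirror-symmetric viewpoint tuning (e.g., left profile ≈ right profile).
    r, r_m = r - r.mean(0), r_m - r_m.mean(0)
    corr = (r * r_m).sum(0) / (r.norm(dim=0) * r_m.norm(dim=0) + 1e-8)
    return corr.mean()
```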
Self-supervised Natural Image Reconstruction and Large-scale Semantic Classification from Brain Activity
Guy Gaziv*, Roman Beliy*, Niv Granot*, Assaf Hoogi, Francesca Strappini, Tal Golan, Michal Irani. "Self-supervised Natural Image Reconstruction and Large-scale Semantic Classification from Brain Activity." NeuroImage, vol. 254, pp. 119121, 2022. DOI: 10.1016/j.neuroimage.2022.119121.
In this cool collaboration with Michal Irani's lab, self-supervised neural networks are trained to reconstruct seen images from fMRI signals. The self-supervision "trick" allows the networks to also learn from images for which no fMRI data are available.
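Schematically, the self-supervision composes an encoder (image → predicted fMRI) with the decoder (fMRI → reconstructed image), so that plain images with no recorded brain activity still provide a training signal. A conceptual sketch with hypothetical module names:

```python
import torch.nn.functional as F

def training_losses(encoder, decoder, paired_fmri, paired_images, unpaired_images):
    # Supervised terms: fit the scarce paired image/fMRI examples.
    loss_dec = F.mse_loss(decoder(paired_fmri), paired_images)
    loss_enc = F.mse_loss(encoder(paired_images), paired_fmri)
    # Self-supervised term: images without fMRI pass through encoder -> decoder
    # and must be reconstructed, so the decoder learns from abundant images.
    loss_cycle = F.mse_loss(decoder(encoder(unpaired_images)), unpaired_images)
    return loss_dec + loss_enc + loss_cycle
```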
Neural network models and deep learning
Nikolaus Kriegeskorte, Tal Golan. "Neural network models and deep learning." Current Biology, vol. 29, no. 7, pp. R231–R236, 2019. DOI: 10.1016/j.cub.2019.02.034.
This is an accessible tutorial on neural network models and deep learning.
For a complete publication record, see Google Scholar.
* Denotes equal contribution.

Resources

Controversial stimuli tutorial

Synthesizing Controversial Stimuli (a tutorial with PyTorch)

github.com/kriegeskorte-lab/controversial_stimuli_tutorial

A PyTorch tutorial on synthesizing controversial stimuli to disentangle the predictions of object recognition models, presented at the Cognitive Computational Neuroscience (CCN) 2021 conference.

Metroplot - a compact alternative to pairwise significance brackets

github.com/brainsandmachines/metroplot

No more ugly brackets! Matplotlib-based visualization of pairwise comparisons.

Openings

Contact

Email: golan.neuro@bgu.ac.il

Lab: Building 90, Room 4

PI Office: Building 93B, Room 4

Address: Department of Cognitive and Brain Sciences, Ben-Gurion University of the Negev, David Ben Gurion Blvd 1, Be'er Sheva, Israel