Humans are exceptionally quick to move their eyes towards faces. Previous studies have typically investigated this by showing isolated faces and control objects on a screen. Are faces prioritised in a similar way under more natural viewing conditions, e.g. when freely viewing a scene? This is hard to pin down because natural viewing is... well, natural. Eye movements towards faces in scenes cross a multitude of distances and angles, land on faces of different sizes, and start from objects with different low-level properties and at different times in a trial. Petra and Ben used a large free-viewing dataset to identify > 50,000 saccades towards faces and inanimate objects. This allowed them to statistically control for a plethora of confounding factors and test whether face-directed saccades are special under more natural viewing conditions. The answer was yes: face-directed saccades of all shapes and sizes have higher peak velocities than object-directed ones. Interestingly, face-directed saccades are also preceded by shorter fixations - we rush to make a saccade when it will land on a face. However, this latter effect is limited to short saccades that follow the trajectory of the previous one, i.e. when we only have to hop on to land on a face - which is strikingly similar to a recent model of perisaccadic attention by Lisa Schwetlick and colleagues. We speculate that faces may be special to the saccade system in at least two ways, resting on independent mechanisms with different visual field coverage. For more, please check out the paper, data & code!
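To give a flavour of why statistical control matters here: peak velocity grows with saccade amplitude (the "main sequence"), so if face-directed saccades happen to differ in amplitude, a raw comparison of velocities is confounded. The toy simulation below (hypothetical numbers, not the paper's data or analysis code) shows how including amplitude as a covariate in a regression isolates a category effect that a naive mean comparison overstates.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated saccades (hypothetical values): peak velocity follows the
# main sequence (it grows with amplitude), plus a bonus for face targets.
n = 5000
is_face = rng.integers(0, 2, n)                  # 1 = face target, 0 = object
# Confound: in this toy world, face-directed saccades tend to be longer.
amplitude = rng.uniform(1, 15, n) + 3 * is_face  # degrees of visual angle
true_face_bonus = 20.0                           # deg/s
peak_vel = 30 * amplitude + true_face_bonus * is_face + rng.normal(0, 25, n)

# Naive comparison: badly inflated, because amplitude differs by category.
naive = peak_vel[is_face == 1].mean() - peak_vel[is_face == 0].mean()

# Regression with amplitude as covariate recovers the true face bonus.
X = np.column_stack([np.ones(n), amplitude, is_face])
coef, *_ = np.linalg.lstsq(X, peak_vel, rcond=None)
print(f"naive difference: {naive:.1f} deg/s, adjusted: {coef[2]:.1f} deg/s")
```

The real analysis controls for many more factors than amplitude, but the logic is the same.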
Marcel and Ben collaborated with Özlem Sensoy and Gudrun Schwarzer to compare the scene-viewing behaviour of 5-year-old children to that of adults. They found that preschool children fixate text elements a lot less and instead are drawn towards hands and the things we do with them, which matches recent findings on the development of neural tuning in the visual brain. For more, please check out the SciRep paper, data & code
Marcel wins Dr. Herbert Stolzenberg Prize!
Marcel will be awarded the Dr. Herbert Stolzenberg Prize of the Giessen Graduate Center for Natural Sciences and Psychology (GGN) for his work on individual gaze. The ceremony will take place during the GGN's Opening Event on July 3, 3pm. Marcel will give a presentation of his work - so don't miss out!
Research Workshop in the Dolomites
The whole lab had a wonderful time at a research workshop close to Pale di San Martino. We got exciting input on individual perception and gaze behavior from our fantastic guest speakers Miriam Celli (Padova Neuroscience Center) and Nitzan Guy (Hebrew University Jerusalem); conceived and designed new experiments; spent time in the mountains; and enjoyed the remarkable talents of our chef Max!
Marcel and Ben will give another talk about our research on individual eye movements and perception at Mathematikum. The talk is aimed at a general audience, admission is free and everybody is welcome! If you're interested, come to Mathematikum, May 2nd, 7pm.
Yeliz Dinç is an Erasmus+ student visiting the lab for three months before completing her MSc in Cognitive Neuroscience and Clinical Neuropsychology at the University of Padova. She is very interested in the cognitive neuroscience of perception and would like to learn more about neuroimaging and programming. Welcome, Yeliz!
Which parts of the face inform us about the feelings and traits of others? A little while ago, Max and Ben prepared face stimuli for an (unrelated) experiment and made the fun observation that nose regions are surprisingly informative. Max and Ben could confirm this in an online sample and just published the quirky finding in iPerception. For more, check out the paper, data & code, or the university's press release (German).
Faces can evoke rapid saccades that are hard to suppress. Max, Theresa and Ben found that this effect generalizes to upper halves of faces. Interestingly, this seems specific: lower halves of faces do not evoke rapid saccades, nor do artificial, face-associated stimuli like glasses and masks. For details, check out their paper, data & code!
We just about recovered from Diana's Active Advent (and the ensuing chocolate overcompensation during the holidays). So now it's Marcel's turn to challenge us - and he hasn't held back!
Marek Pędziwiatr is a postdoc from Isabelle Mareschal's lab at Queen Mary University of London and visiting the Indivisual lab for a two-month research stay. Marek will test whether individual differences in gaze can be probed without eyetracking - which would make large-scale data collection a lot easier. Welcome, Marek!
'Tis the season... for planks. Most of what we do is desk work, but all members of the lab like to move. Diana had the fantastic idea of challenging us with a very active Advent calendar (see to the left).
Susanne Stoll teamed up with Ben, Elisa Infanti and Sam Schwarzkopf to elaborate on the type of pitfalls that have led to Ben's retraction of a population receptive field (pRF) paper. If you want to learn about regression to the mean, egression from the mean, the effects of cross-thresholding and all the other reasons you shouldn't use the same data for defining inclusion criteria and testing hypotheses - check out the paper, data & code!
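The core pitfall - using the same data for selection and testing - is easy to demonstrate in a toy simulation (this is illustrative only, not the paper's pRF analysis). If noisy units are selected because they exceed a threshold, summarizing the very same measurements yields an effect that exists purely by selection; an independent measurement regresses back toward the truth:

```python
import numpy as np

rng = np.random.default_rng(1)

# 1000 hypothetical "voxels" whose true effect is zero,
# measured in two sessions with independent noise.
true_effect = np.zeros(1000)
session_a = true_effect + rng.normal(0, 1, 1000)
session_b = true_effect + rng.normal(0, 1, 1000)

# Circular analysis: select voxels on session A, summarize session A.
selected = session_a > 1.0
biased = session_a[selected].mean()    # well above zero purely by selection

# Sound analysis: select on session A, test on independent session B.
unbiased = session_b[selected].mean()  # regresses back toward the truth (0)

print(f"same-data mean: {biased:.2f}, independent-data mean: {unbiased:.2f}")
```

The same logic applies whether the "units" are voxels, pRFs or participants: inclusion criteria and hypothesis tests need independent data.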
Marcel, Max, Tamara (former RA) and Ben teamed up with Meike Ramon to study the gaze behavior of so-called Super Recognizers - individuals who are extremely good at face recognition. Their study showed that Super Recognizers not only tend to look more at faces in scenes, they also do so in a specific way. Compared to controls, Super Recognizers fixate closer to a point just below the eyes, which has previously been identified as the ideal fixation location for extracting discriminable information from a face. If this sounds interesting to you, please check out the paper, code and data.
Marcel wins Young Talent Communicator Award
Tasfia Ahsan is a PhD student from York University (Toronto, Canada) visiting the Indivisual lab for a one-month research stay. In her work with supervisor Erez Freud, she found that illusory depth can modulate perceptual precision. During her time in Giessen, Tasfia would like to find out whether this effect is mediated by eye movements. Welcome, Tasfia!
The lab had a fantastic experience at their first in-person conference at VSS 2022. We saw dolphins, talks and posters and met vision scientists from all over the world. If you signed up for downloading one of our posters, you're in the right place. Here are
Diana's poster showing that Learned interpretations of ambiguous drawings affect response times in a familiar-size Stroop task,
Petra's poster asking Does individual gaze lead to individual visual representations?,
Diana just published her first paper on The influence of familiarity on memory for faces and mask wearing in Cognitive Research: Principles and Implications. Diana found that it's harder to remember the occurrence of faces if they wear a mask and if they are unfamiliar. It's also harder to remember whether an unfamiliar face wore a mask or not. These findings suggest a memory bottleneck for contact tracing, which is more severe when people don't know each other well. As usual, you can find the data and code online, including familiarity ratings and masked versions for the Celebrities in Frontal Profile (CFP) data set. Congratulations Diana!
Diana (Kollenda) won the 1st prize of this year's TeaP poster competition! Her poster on 'Seeing it differently: The AmbigObj stimulus-set depicting ambiguous drawings of small-large and animate-inanimate object pairs' presents a new stimulus set of ambiguous shapes, which Diana produced, validated and used for a first study with intriguing results. We're immensely proud of Diana's creativity, cleverness, dedication and success! Check out Diana's poster for more.
As part of his PhD, Marcel works towards the ambitious goal of collecting a massive benchmark sample of individual gaze behavior during free viewing. He and Ben teamed up with Mathematikum, a popular science museum in Giessen, to develop an interactive exhibit. This enables visitors to take part in a real experiment and learn about gaze in a unique way. The 'eyetracking booth' was officially opened today and already attracted over 500 participants during the preceding pilot stage (hooray!). You can read more about the project in the press release or this newspaper article (German).
It's grant reaping season! The HMWK decided to fund The Adaptive Mind. TAM brings together researchers from Experimental Psychology, Clinical Psychology and Artificial Intelligence from several universities. We aim to understand how the human mind successfully adapts to changing conditions, and what happens when these adaptive processes fail. Ben is co-PI on projects on sensory processing in ASD and overgeneralization in persisting fear, as well as the project-wide Data Hub.
One of the main strategies in the fight against CoViD-19 is Test and Trace. Data scientists have highlighted a main challenge to this approach: Classic contact tracing may be too slow to keep up with the infection dynamics of CoViD-19. However, there is an additional challenge, which is less well understood. For accurate reporting, interviewees have to remember all their contacts, sometimes going as far back as two weeks. What is the memory bottleneck for contact tracing? Ben and Max have developed a study idea to find out, which just received funding from the DFG.
Susanne Stoll tried to replicate methods from one of Ben's PhD papers and discovered a potential flaw in the analysis. Ben went back to the old data, confirmed the problem and retracted his original publication (retraction notice). He and some of his colleagues joined Susanne in an effort to explain the (surprisingly complicated) problem in a technical paper (preprint), so that others won't have to repeat his mistake. Sam Schwarzkopf wrote a blog post about the ordeal, Retraction Watch featured it as 'doing the right thing' and Ben shared his thoughts on curiosity and correcting our errors in an invited commentary in Nature.
Marcel gave a talk at Fribourg
Ben gave a talk at Berkeley
The gods of Zoom provide us with the opportunity to present our work from afar, despite the distance and the virus. Ben kicked off with a talk on 'Where' in the ventral stream at the Neuroimaging Seminar Series hosted by Sonia Bishop's lab at UC Berkeley.
Marcel used the OSIE dataset to develop a short test of gaze behaviour and found that some dimensions of individual gaze biases can be estimated from less than 5 minutes' worth of eyetracking data. This will help us and other researchers to probe individual gaze in a much more time-efficient manner, which is especially important for research involving children and other vulnerable individuals. Marcel published this as his very first paper, which just appeared in the Journal of Vision. Congrats, Marcel! Also, special thanks to Dr. Stefanie Mueller and the ZPID PsychLab offline for collecting the validation data. The paper, code and data are all open access, so go ahead and use it =)
The group homepage is live - welcome to individual-perception.com =)