Decoding stimulus identity in occipital, parietal and inferotemporal cortices during visual mental imagery
In the absence of input from the external world, humans are still able to generate vivid mental images. This cognitive process, known as visual mental imagery, involves a network of prefrontal, parietal, inferotemporal, and occipital regions. Using multivariate pattern analysis (MVPA), previous studies were able to distinguish, in early visual cortex (V1), between different orientations of imagined gratings, but not between more complex imagined stimuli such as common objects. Here, we asked whether letters, simple shapes, and objects can be decoded in early visual areas during visual mental imagery. In a delayed spatial judgment task, we asked participants to observe or imagine stimuli. To examine whether it is possible to discriminate between neural patterns during perception and visual mental imagery, we performed ROI-based and whole-brain searchlight-based MVPA. We were able to decode imagined stimuli in early visual (V1, V2), parietal (SPL, IPL, aIPS), inferotemporal (LOC), and prefrontal (PMd) areas. In a subset of these areas (V1, V2, LOC, SPL, IPL, and aIPS), we also obtained significant cross-decoding between visual imagery and perception. Moreover, we observed a linear relationship between behavioral accuracy and the amplitude of the BOLD signal in parietal and inferotemporal cortices, but not in early visual cortex, in line with the view that these areas contribute to the ability to perform visual imagery. Together, our results suggest that in the absence of bottom-up visual inputs, patterns of functional activation in early visual cortex allow discrimination between different imagined stimulus exemplars, most likely mediated by signals from parietal and inferotemporal areas.
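The cross-decoding analysis mentioned above (training a classifier on patterns from one condition and testing it on the other) can be illustrated with a minimal sketch. This is not the study's actual pipeline; all data here are synthetic, and the noise levels, classifier, and ROI size are illustrative assumptions only.

```python
# Hypothetical sketch of cross-decoding MVPA: fit a linear classifier on
# "perception" trial patterns, then test it on "imagery" trial patterns.
# All data are simulated; parameters are illustrative, not from the study.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n_trials, n_voxels, n_classes = 60, 100, 3

# Class-specific voxel patterns shared across conditions (the "stimulus code")
prototypes = rng.normal(size=(n_classes, n_voxels))

def simulate(noise):
    """Simulate trial-wise ROI patterns: class prototype plus Gaussian noise."""
    labels = np.repeat(np.arange(n_classes), n_trials // n_classes)
    data = prototypes[labels] + noise * rng.normal(size=(n_trials, n_voxels))
    return data, labels

X_percept, y_percept = simulate(noise=1.0)   # perception runs (stronger signal)
X_imagery, y_imagery = simulate(noise=2.0)   # imagery runs (weaker signal)

# Cross-decoding: train on perception, evaluate on imagery. Above-chance
# accuracy implies a stimulus code shared across the two conditions.
clf = LinearSVC(C=1.0, max_iter=10000).fit(X_percept, y_percept)
accuracy = clf.score(X_imagery, y_imagery)
print(f"cross-decoding accuracy: {accuracy:.2f} (chance = {1/n_classes:.2f})")
```

In a real analysis, the trial patterns would come from GLM beta estimates within each ROI or searchlight, and significance would be assessed against a permutation-based chance distribution rather than the nominal 1/3 level.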