My research focuses on how the brain generates and represents subjective experiences. Specifically, I investigate the neural mechanisms underlying conscious phenomena, such as mental imagery and dreaming, using brain decoding techniques and computational models. Through the development of advanced decoding methods, I aim to elucidate how the human brain integrates and transforms multifaceted information to construct conscious experiences.
fMRI, decoding, consciousness, imagery, deep neural networks, aphantasia
2025
We developed a generative decoding method that produces descriptive text mirroring the semantic information represented in the brain. The method generated meaningful linguistic descriptions of both seen and recalled content without relying on the brain's canonical language networks.
2024
We replicated the prevalence of aphantasia as identified by the VVIQ (3.7%) and by self-identification (12.1%) in a large-scale sample, confirming previous estimates and suggesting subtypes that differ in which sensory modalities of imagery are affected.
~2023 (work at Kamitani Lab)
Using an fMRI-based image reconstruction method, we found that when subjects viewed superimposed images, the reconstructed images more closely resembled the attended stimulus than the unattended one. These results demonstrate that top-down attention modulates visual representations, shaping them to reflect subjective perception.
Using fMRI responses to 2,185 emotion-evoking videos, we identified neural representations underlying a high-dimensional space of emotional experience. We demonstrated that dozens of emotion categories could be accurately predicted from distributed brain activity, that categorical emotion models outperformed affective-dimension models in explaining fMRI responses, and that these representations form a clustered neural organization structured by distinct emotion categories.
We developed a novel image reconstruction method that optimizes pixel values so that their deep neural network (DNN) features match those decoded from fMRI activity across multiple hierarchical layers. This approach successfully reconstructed viewed and imagined images, demonstrating that combining hierarchical neural representations enables visualization of perceptual and subjective contents in the human brain.
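The core of this optimization can be sketched in a few lines. The following is a minimal, self-contained illustration, not the actual pipeline: random linear maps stand in for hierarchical DNN layers, and the features of a hidden ground-truth image stand in for the features decoded from fMRI activity; gradient descent then adjusts "pixel" values until their features match the targets across all levels.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for three hierarchical DNN layers (random linear feature maps).
layers = [rng.standard_normal((6, 4)) for _ in range(3)]

def features(x):
    """Feature vectors of 'image' x at every level of the hierarchy."""
    return [W @ x for W in layers]

# Stand-in for the decoded target features: here they come from a hidden
# ground-truth image (in the real method, from decoders applied to fMRI data).
true_image = rng.standard_normal(4)
targets = features(true_image)

def loss_of(x):
    """Total squared mismatch between x's features and the targets."""
    return sum(np.sum((f - t) ** 2) for f, t in zip(features(x), targets))

# Optimize pixel values so their features match the targets at all levels.
x = np.zeros(4)
init_loss = loss_of(x)
lr = 0.005
for _ in range(2000):
    grad = sum(2 * W.T @ (W @ x - t) for W, t in zip(layers, targets))
    x -= lr * grad

final_loss = loss_of(x)
print(final_loss < init_loss)  # feature mismatch shrinks as pixels are optimized
```

In the actual method, `features` would be the layer activations of a pretrained DNN and `targets` would be predicted from measured brain activity; the same gradient-based feature matching applies.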
We developed a decoding approach that predicts arbitrary object categories from fMRI activity by leveraging hierarchical visual features derived from computational models, including deep convolutional neural networks. The decoded features revealed a hierarchical correspondence between the brain and machine and enabled the identification of both seen and imagined objects beyond the training examples.
We applied a brain decoding approach to fMRI activity during sleep to predict the content of visual imagery at sleep-onset periods. By linking brain activity patterns with verbal reports, assisted by lexical and image databases, decoders trained on stimulus-induced activity successfully classified, detected, and identified dreamed content. These results reveal that visual experiences during sleep share neural representations with those underlying perception.
If you are interested in potential collaborations or supervision opportunities, please feel free to get in touch.
Communication Science Laboratories, NTT, Inc.
3-1 Wakamiya, Morinosato, Atsugi, Kanagawa 243-0198, Japan
Email: horikawa.t [at] gmail [dot] com