About

My research focuses on how the brain generates and represents subjective experiences. Specifically, I investigate the neural mechanisms underlying conscious phenomena, such as mental imagery and dreaming, using brain decoding techniques and computational models. Through the development of advanced decoding methods, I aim to elucidate how the human brain integrates and transforms multifaceted information to construct conscious experiences.

Keywords

fMRI, decoding, consciousness, imagery, deep neural networks, aphantasia

Selected works

2025

Mind captioning
Mind Captioning: Evolving descriptive text of mental content from human brain activity. Science Advances (2025).
Horikawa, T.

We developed a generative decoding method that produces descriptive text mirroring semantic information represented in the brain. The method successfully generated meaningful linguistic descriptions of both seen and recalled content, even without relying on the brain's canonical language network.


2023

Aphantasia diversity
Diversity of aphantasia revealed by multiple assessments of visual imagery, multisensory imagery, and cognitive style. Front. Psychol. (2023).
Takahashi, J., Saito, G., Omura, K., Yasunaga, D., Sugimura, S., Sakamoto, S., Horikawa, T., & Gyoba, J.

In a large-scale sample, we replicated previously reported prevalence rates of aphantasia as identified by the VVIQ (3.7%) and by self-identification (12.1%), confirming earlier findings and suggesting the existence of subtypes that differ in which sensory modalities of imagery are affected.


~2023 (works in Kamitani Lab.)

Attention reconstruction
Attention modulates neural representation to render reconstructions according to subjective appearance. Commun. Biol. (2022).
Horikawa, T. & Kamitani, Y.

Using an fMRI-based image reconstruction method, we found that reconstructed images more closely resembled attended rather than unattended stimuli when subjects viewed superimposed images. These results demonstrate that top-down attention modulates visual representations, shaping them to reflect subjective perception.

Emotion
The neural representation of visually evoked emotion is high-dimensional, categorical, and distributed across transmodal brain regions. iScience (2020).
Horikawa, T., Cowen, A.S., Keltner, D., & Kamitani, Y.

Using fMRI responses to 2,185 emotion-evoking videos, we identified neural representations underlying a high-dimensional space of emotional experience. We demonstrated that dozens of emotion categories could be accurately predicted from distributed brain activity, that categorical emotion models outperformed affective-dimension models in explaining fMRI responses, and that these representations form a clustered neural organization structured by distinct emotion categories.

Deep image reconstruction
Deep image reconstruction from human brain activity. PLoS Comput. Biol. (2019).
Shen, G., Horikawa, T., Majima, K., & Kamitani, Y.

We developed a novel image reconstruction method that optimizes pixel values so that their deep neural network (DNN) features match those decoded from fMRI activity across multiple hierarchical layers. This approach successfully reconstructed viewed and imagined images, demonstrating that combining hierarchical neural representations enables visualization of perceptual and subjective contents in the human brain.

Generic decoding
Generic decoding of seen and imagined objects using hierarchical visual features. Nat. Commun. (2017).
Horikawa, T. & Kamitani, Y.

We developed a decoding approach that predicts arbitrary object categories from fMRI activity by leveraging hierarchical visual features derived from computational models, including deep convolutional neural networks. The decoded features revealed a hierarchical correspondence between the brain and machine and enabled the identification of both seen and imagined objects beyond the training examples.

Dream decoding
Neural decoding of visual imagery during sleep. Science (2013).
Horikawa, T., Tamaki, M., Miyawaki, Y., & Kamitani, Y.

We applied a brain decoding approach to fMRI activity during sleep to predict the content of visual imagery at sleep-onset periods. By linking brain activity patterns with verbal reports, assisted by lexical and image databases, decoders trained on stimulus-induced activity successfully classified, detected, and identified dreamed content. These results reveal that visual experiences during sleep share neural representations with those underlying perception.

Contact

If you are interested in potential collaborations or supervision opportunities, please feel free to get in touch.

Communication Science Laboratories, NTT, Inc.
3-1 Wakamiya, Morinosato, Atsugi, Kanagawa 243-0198, Japan

Email: horikawa.t [at] gmail [dot] com