Yuya Chiba, Ph.D.

Senior Research Scientist
Interaction Research Group
Innovative Communication Lab
NTT Communication Science Laboratories


Contact Information:
yuya.chiba (at) ntt.com
2-4 Hikaridai, Seika, Souraku,
Kyoto 619-0237, JAPAN

Research Interests

  • Spoken Dialog Systems
  • Multimodal Information Processing
  • Verbal/Non-verbal Communication Analysis

Publications

  • Journal Papers
  • Conference Papers
  • Software

Journal Papers

  1. S. Kawanishi, Y. Chiba, A. Ito, and T. Nose, We open our mouths when we are silent, Acoustical Science and Technology, 2024 (accepted).
  2. A. Guo, R. Hirai, A. Ohashi, Y. Chiba, Y. Tsunomori, and R. Higashinaka, Personality prediction from task-oriented and open-domain human-machine dialogues, Scientific Reports, Vol. 14, No. 68, pp. 1-13, 2024.
  3. Y. Chiba and R. Higashinaka, Dialogue situation recognition in everyday conversation from audio, visual, and linguistic information, IEEE Access, Vol. 11, pp. 70819-70832, 2023.
  4. Y. Chiba and R. Higashinaka, Analyzing variations of everyday Japanese conversations based on semantic labels of functional expressions, ACM Transactions on Asian and Low-Resource Language Information Processing, Vol. 22, No. 2, pp. 1-26, 2023.
  5. K. Nakamura, T. Nose, Y. Chiba, and A. Ito, A symbol-level melody completion based on a convolutional neural network with generative adversarial learning, Journal of Information Processing, Vol. 28, pp. 248-257, 2020.
  6. J. Fu, Y. Chiba, T. Nose, and A. Ito, Automatic assessment of English proficiency for Japanese learners without reference sentences based on deep neural acoustic models, Speech Communication, Vol. 116, pp. 86-97, 2020.
  7. Y. Chiba, T. Nose, and A. Ito, Multi-condition training for noise-robust speech emotion recognition, Acoustical Science and Technology, Vol. 40, No. 6, pp. 406-409, 2019.
  8. H. Prafiyanto, T. Nose, Y. Chiba, and A. Ito, Improving human scoring of prosody using parametric speech synthesis, Speech Communication, Vol. 111, pp. 14-21, 2019.
  9. H. Prafiyanto, T. Nose, Y. Chiba, and A. Ito, Analysis of preferred speaking rate and pause in spoken easy Japanese for non-native listeners, Acoustical Science and Technology, Vol. 39, No. 2, pp. 92-100, 2018.
  10. Y. Kageyama, Y. Chiba, T. Nose, and A. Ito, Analyses of example sentences collected by conversation for example-based non-task-oriented dialog system, IAENG International Journal of Computer Science, Vol. 45, No. 1, pp. 285-293, 2018.
  11. Y. Chiba, T. Nose, and A. Ito, Cluster-based approach to discriminate the user's state whether a user is embarrassed or thinking to an answer to a prompt, Journal on Multimodal User Interfaces, Vol. 11, No. 2, pp. 185-196, 2017.
  12. Y. Chiba, T. Nose, A. Ito, and M. Ito, Estimating the user's state before exchanging utterances using intermediate acoustic features for spoken dialog systems, IAENG International Journal of Computer Science, Vol. 43, No. 1, pp. 1-9, 2016.
  13. T. Kase, T. Nose, Y. Chiba, and A. Ito, Evaluation of spoken dialog system with cooperative emotional speech synthesis based on utterance state estimation, IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences, Vol. J99-A, No. 1, pp. 25-35, 2016 (in Japanese).
  14. N. Suzuki, Y. Hiroi, Y. Chiba, T. Nose, and A. Ito, A computer-assisted English conversation training system for response-timing-aware oral conversation exercise, IPSJ Journal, Vol. 56, No. 11, pp. 2177-2189, 2015 (in Japanese).
  15. Y. Chiba and A. Ito, Estimating a user's internal state before the first input utterance, Advances in Human-Computer Interaction, Vol. 2012, pp. 1-10, 2012.

Conference Papers

  1. Y. Sato, Y. Chiba, and R. Higashinaka, Effects of multiple Japanese datasets for training voice activity projection models, in Proc. O-COCOSDA, 2024 (accepted).
  2. Y. Chiba, K. Mitsuda, A. Lee, and R. Higashinaka, The Remdis toolkit: Building advanced real-time multimodal dialogue systems with incremental processing and large language models, in Proc. IWSDS, pp. 1-6, 2024.
  3. A. Guo, A. Ohashi, Y. Chiba, Y. Tsunomori, R. Higashinaka, and R. Hirai, Personality-aware natural language generation for task-oriented dialogue using reinforcement learning, in Proc. IEEE RO-MAN, pp. 1823-1828, 2023.
  4. H. Sugiyama, M. Mizukami, T. Arimoto, H. Narimatsu, Y. Chiba, H. Nakajima, and T. Meguro, Empirical analysis of training strategies of transformer-based Japanese chit-chat systems, in Proc. SLT, pp. 685-691, 2023.
  5. R. Yahagi, A. Ito, T. Nose, and Y. Chiba, Response sentence modification using a sentence vector for a flexible response generation of retrieval-based dialogue systems, in Proc. APSIPA-ASC, pp. 853-859, 2022.
  6. M. Inaba, Y. Chiba, R. Higashinaka, K. Komatani, Y. Miyao, and T. Nagai, Collection and analysis of travel agency task dialogues with age-diverse speakers, in Proc. LREC, pp. 5759-5767, 2022.
  7. R. Yahagi, Y. Chiba, T. Nose, and A. Ito, Multimodal dialogue response timing estimation using dialogue context encoder, in Proc. IWSDS, pp. 133-141, 2021.
  8. A. Guo, A. Ohashi, R. Hirai, Y. Chiba, Y. Tsunomori, and R. Higashinaka, Influence of user personality on dialogue task performance: A case study using a rule-based dialogue system, in Proc. 3rd Workshop on NLP for Conversational AI, pp. 263-270, 2021.
  9. Y. Chiba and R. Higashinaka, Variation across everyday conversations: Factor analysis of conversations using semantic categories of functional expressions, in Proc. PACLIC, pp. 160-169, 2021.
  10. Y. Chiba, Y. Yamazaki, and A. Ito, Speaker intimacy in chat-talks: Analysis and recognition based on verbal and non-verbal information, in Proc. SemDial, pp. 1-10, 2021.
  11. Y. Chiba and R. Higashinaka, Dialogue situation recognition for everyday conversation using multimodal information, in Proc. INTERSPEECH, pp. 241-245, 2021.
  12. Y. Yamazaki, Y. Chiba, T. Nose, and A. Ito, Neural spoken-response generation using prosodic and linguistic context for conversational systems, in Proc. INTERSPEECH, pp. 246-250, 2021.
  13. Y. Chiba, T. Nose, and A. Ito, Multi-stream attention-based BLSTM with feature segmentation for speech emotion recognition, in Proc. INTERSPEECH, pp. 3301-3305, 2020.
  14. Y. Yamazaki, Y. Chiba, T. Nose, and A. Ito, Construction and analysis of a multimodal chat-talk corpus for dialog systems considering interpersonal closeness, in Proc. LREC, pp. 436-441, 2020.
  15. S. Tada, Y. Chiba, T. Nose, and A. Ito, Effect of mutual self-disclosure in spoken dialog system on user impression, in Proc. APSIPA-ASC, pp. 806-810, 2018.
  16. H. Wu, Y. Chiba, T. Nose, and A. Ito, Analyzing effect of physical expression on English proficiency for multimodal computer-assisted language learning, in Proc. INTERSPEECH, pp. 1746-1750, 2018.
  17. Y. Chiba, T. Nose, T. Kase, M. Yamanaka, and A. Ito, An analysis of the effect of emotional speech synthesis on non-task-oriented dialogue system, in Proc. SIGDIAL, pp. 371-375, 2018.
  18. Y. Kageyama, Y. Chiba, T. Nose, and A. Ito, Improving user impression in spoken dialog system with gradual speech form control, in Proc. SIGDIAL, pp. 235-240, 2018.
  19. Y. Kageyama, Y. Chiba, T. Nose, and A. Ito, Collection of example sentences for non-task-oriented dialog using a spoken dialog system and comparison with hand-crafted DB, in Proc. HCI International 2017 (Posters' Extended Abstracts), pp. 458-463, 2017.
  20. Y. Chiba, T. Nose, and A. Ito, Analysis of efficient multimodal features for estimating user's willingness to talk: Comparison of human-machine and human-human dialog, in Proc. APSIPA-ASC, pp. 428-431, 2017.
  21. S. Tada, Y. Chiba, T. Nose, and A. Ito, Response classification of interview-based dialog system using user focus and semantic orientation, in Proc. IIH-MSP, pp. 84-90, 2017.
  22. Y. Chiba and A. Ito, Estimation of user's willingness to talk about the topic: Analysis of interviews between humans, in Proc. IWSDS, pp. 1-6, 2016.
  23. E. Takeishi, T. Nose, Y. Chiba, and A. Ito, Construction and analysis of phonetically and prosodically balanced emotional speech database, in Proc. O-COCOSDA, pp. 16-21, 2016.
  24. N. Totsuka, Y. Chiba, T. Nose, and A. Ito, Robot: Have I done something wrong? Analysis of prosodic features of speech commands under the robot's unintended behavior, in Proc. 4th International Conference on Audio, Language and Image Processing, pp. 887-890, 2014.
  25. Y. Chiba, T. Nose, A. Ito, and M. Ito, User modeling by using Bag-of-Behaviors for building a dialog system sensitive to the interlocutor's internal state, in Proc. SIGDIAL, pp. 74-78, 2014.
  26. Y. Chiba, M. Ito, and A. Ito, Modeling user's state during dialog turn using HMM for multi-modal spoken dialog system, in Proc. 7th International Conference on Advances in Computer-Human Interactions, pp. 343-346, 2014.
  27. Y. Chiba, M. Ito, and A. Ito, Effect of linguistic contents on human estimation of internal state of dialog system users, in Proc. Interdisciplinary Workshop on Feedback Behavior in Dialog, pp. 11-14, 2012.
  28. Y. Chiba, M. Ito, and A. Ito, Estimation of user's internal state before the user's first utterance using acoustic features and face orientation, in Proc. International Conference on Human System Interaction, pp. 23-28, 2012.
  29. Y. Chiba, S. Hahm, and A. Ito, Find out what a user is doing before the first utterance: Discrimination of user's internal state using non-verbal information, in Proc. APSIPA-ASC, 4 pages, 2011.