Yuya Chiba, Ph.D.

Research Scientist
Interaction Research Group
Innovative Communication Lab
NTT Communication Science Laboratories


Contact Information:
yuya.chiba.ax (at) hco.ntt.co.jp
2-4 Hikaridai, Seika, Souraku,
Kyoto 619-0237, JAPAN

Research Interests

  • Spoken Dialog Systems
  • Multimodal Information Processing
  • Verbal/Non-verbal Communication Analysis

Publications

  • Journal Papers
  • Conference Papers

Journal Papers

  1. K. Nakamura, T. Nose, Y. Chiba, and A. Ito, A symbol-level melody completion based on a convolutional neural network with generative adversarial learning, Journal of Information Processing, Vol. 28, pp. 248-257, 2020.
  2. J. Fu, Y. Chiba, T. Nose, and A. Ito, Automatic assessment of English proficiency for Japanese learners without reference sentences based on deep neural acoustic models, Speech Communication, Vol. 116, pp. 86-97, 2020.
  3. Y. Chiba, T. Nose, and A. Ito, Multi-condition training for noise-robust speech emotion recognition, Acoustical Science and Technology, Vol. 40, No. 6, pp. 406-409, 2019.
  4. H. Prafianto, T. Nose, Y. Chiba, and A. Ito, Improving human scoring of prosody using parametric speech synthesis, Speech Communication, Vol. 111, pp. 14-21, 2019.
  5. H. Prafianto, T. Nose, Y. Chiba, and A. Ito, Analysis of preferred speaking rate and pause in spoken easy Japanese for non-native listeners, Acoustical Science and Technology, Vol. 39, No. 2, pp. 92-100, 2018.
  6. Y. Kageyama, Y. Chiba, T. Nose, and A. Ito, Analyses of example sentences collected by conversation for example-based non-task-oriented dialog system, IAENG International Journal of Computer Science, Vol. 45, No. 1, pp. 285-293, 2018.
  7. Y. Chiba, T. Nose, and A. Ito, Cluster-based approach to discriminate the user's state whether a user is embarrassed or thinking to an answer to a prompt, Journal on Multimodal User Interfaces, Vol. 11, No. 2, pp. 185-196, 2017.
  8. Y. Chiba, T. Nose, A. Ito, and M. Ito, Estimating the user's state before exchanging utterances using intermediate acoustic features for spoken dialog systems, IAENG International Journal of Computer Science, Vol. 43, No. 1, pp. 1-9, 2016.
  9. T. Kase, T. Nose, Y. Chiba, and A. Ito, Evaluation of spoken dialog system with cooperative emotional speech synthesis based on utterance state estimation, IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences, Vol. J99-A, No. 1, pp. 25-35, 2016 (in Japanese).
  10. N. Suzuki, Y. Hiroi, Y. Chiba, T. Nose, and A. Ito, A computer-assisted English conversation training system for response-timing-aware oral conversation exercise, IPSJ Journal, Vol. 56, No. 11, pp. 2177-2189, 2015 (in Japanese).
  11. Y. Chiba and A. Ito, Estimating a user's internal state before the first input utterance, Advances in Human-Computer Interaction, Vol. 2012, pp. 1-10, 2012.

Conference Papers

  1. Y. Yamazaki, Y. Chiba, T. Nose, and A. Ito, Construction and analysis of a multimodal chat-talk corpus for dialog systems considering interpersonal closeness, in Proc. LREC, pp. 436-441, 2020.
  2. S. Tada, Y. Chiba, T. Nose, and A. Ito, Effect of mutual self-disclosure in spoken dialog system on user impression, in Proc. APSIPA-ASC, pp. 806-810, 2018.
  3. H. Wu, Y. Chiba, T. Nose, and A. Ito, Analyzing effect of physical expression on English proficiency for multimodal computer-assisted language learning, in Proc. INTERSPEECH, pp. 1746-1750, 2018.
  4. Y. Chiba, T. Nose, T. Kase, M. Yamanaka, and A. Ito, An analysis of the effect of emotional speech synthesis on non-task-oriented dialogue system, in Proc. SIGDIAL, pp. 371-375, 2018.
  5. Y. Kageyama, Y. Chiba, T. Nose, and A. Ito, Improving user impression in spoken dialog system with gradual speech form control, in Proc. SIGDIAL, pp. 235-240, 2018.
  6. Y. Kageyama, Y. Chiba, T. Nose, and A. Ito, Collection of example sentences for non-task-oriented dialog using a spoken dialog system and comparison with hand-crafted DB, in HCI International 2017 - Posters' Extended Abstracts, pp. 458-563, 2017.
  7. Y. Chiba, T. Nose, and A. Ito, Analysis of efficient multimodal features for estimating user's willingness to talk: Comparison of human-machine and human-human dialog, in Proc. APSIPA-ASC, pp. 428-431, 2017.
  8. S. Tada, Y. Chiba, T. Nose, and A. Ito, Response classification of interview-based dialog system using user focus and semantic orientation, in Proc. IIH-MSP, pp. 84-90, 2017.
  9. Y. Chiba and A. Ito, Estimation of user's willingness to talk about the topic: Analysis of interviews between humans, in Proc. IWSDS, pp. 1-6, 2016.
  10. E. Takeishi, T. Nose, Y. Chiba, and A. Ito, Construction and analysis of phonetically and prosodically balanced emotional speech database, in Proc. O-COCOSDA, pp. 16-21, 2016.
  11. N. Totsuka, Y. Chiba, T. Nose, and A. Ito, Robot: Have I done something wrong? Analysis of prosodic features of speech commands under the robot's unintended behavior, in Proc. of the 4th International Conference on Audio, Language and Image Processing (ICALIP), pp. 887-890, 2014.
  12. Y. Chiba, T. Nose, A. Ito, and M. Ito, User modeling by using Bag-of-Behaviors for building a dialog system sensitive to the interlocutor's internal state, in Proc. SIGDIAL, pp. 74-78, 2014.
  13. Y. Chiba, M. Ito, and A. Ito, Modeling user's state during dialog turn using HMM for multi-modal spoken dialog system, in Proc. of the Seventh International Conference on Advances in Computer-Human Interactions, pp. 343-346, 2014.
  14. Y. Chiba, M. Ito, and A. Ito, Effect of linguistic contents on human estimation of internal state of dialog system users, in Proc. Interdisciplinary Workshop on Feedback Behavior in Dialog, pp. 11-14, 2012.
  15. Y. Chiba, M. Ito, and A. Ito, Estimation of user's internal state before the user's first utterance using acoustic features and face orientation, in Proc. International Conference on Human System Interaction, pp. 23-28, 2012.
  16. Y. Chiba, S. Hahm, and A. Ito, Find out what a user is doing before the first utterance: Discrimination of user's internal state using non-verbal information, in Proc. APSIPA-ASC, 4 pages, 2011.