Yuya Chiba


Nippon Telegraph and Telephone Corporation
Communication Science Laboratories
Innovative Communication Laboratory
Interaction and Dialogue Research Group
Senior Research Scientist


Contact:
yuya.chiba (at) ntt.com
2-4 Hikaridai, Seika-cho, Soraku-gun, Kyoto 619-0237, Japan

Publications

  • Journal Papers
  • Conference Papers
  • Software

Journal Papers

  1. A. Guo, R. Hirai, A. Ohashi, Y. Chiba, Y. Tsunomori, and R. Higashinaka, Personality prediction from task-oriented and open-domain human-machine dialogues, Scientific Reports, Vol. 14, No. 68, pp. 1-13, 2024.
  2. Y. Chiba and R. Higashinaka, Dialogue situation recognition in everyday conversation from audio, visual, and linguistic information, IEEE Access, Vol. 11, pp. 70819-70832, 2023.
  3. Y. Chiba and R. Higashinaka, Analyzing variations of everyday Japanese conversations based on semantic labels of functional expressions, ACM Transactions on Asian and Low-Resource Language Information Processing, Vol. 22, No. 2, pp. 1-26, 2023.
  4. K. Nakamura, T. Nose, Y. Chiba, and A. Ito, A symbol-level melody completion based on a convolutional neural network with generative adversarial learning, Journal of Information Processing, Vol. 28, pp. 248-257, 2020.
  5. J. Fu, Y. Chiba, T. Nose, and A. Ito, Automatic assessment of English proficiency for Japanese learners without reference sentences based on deep neural acoustic models, Speech Communication, Vol. 116, pp. 86-97, 2020.
  6. Y. Chiba, T. Nose, and A. Ito, Multi-condition training for noise-robust speech emotion recognition, Acoustical Science and Technology, Vol. 40, No. 6, pp. 406-409, 2019.
  7. H. Prafiyanto, T. Nose, Y. Chiba, and A. Ito, Improving human scoring of prosody using parametric speech synthesis, Speech Communication, Vol. 111, pp. 14-21, 2019.
  8. H. Prafiyanto, T. Nose, Y. Chiba, and A. Ito, Analysis of preferred speaking rate and pause in spoken easy Japanese for non-native listeners, Acoustical Science and Technology, Vol. 39, No. 2, pp. 92-100, 2018.
  9. Y. Kageyama, Y. Chiba, T. Nose, and A. Ito, Analyses of example sentences collected by conversation for example-based non-task-oriented dialog system, IAENG International Journal of Computer Science, Vol. 45, No. 1, pp. 285-293, 2018.
  10. Y. Chiba, T. Nose, and A. Ito, Cluster-based approach to discriminate the user's state whether a user is embarrassed or thinking to an answer to a prompt, Journal on Multimodal User Interfaces, Vol. 11, No. 2, pp. 185-196, 2017.
  11. Y. Chiba, T. Nose, A. Ito, and M. Ito, Estimating the user's state before exchanging utterances using intermediate acoustic features for spoken dialog systems, IAENG International Journal of Computer Science, Vol. 43, No. 1, pp. 1-9, 2016.
  12. T. Kase, T. Nose, Y. Chiba, and A. Ito, Evaluation of a spoken dialogue system with cooperative emotional speech synthesis based on utterance-state estimation, IEICE Transactions on Fundamentals (Japanese Edition), Vol. J99-A, No. 1, pp. 25-35, 2016 (in Japanese).
  13. N. Suzuki, T. Hiroi, Y. Chiba, T. Nose, and A. Ito, A spoken dialogue-based English learning system for English conversation practice considering response timing, IPSJ Journal, Vol. 56, No. 11, pp. 2177-2189, 2015 (in Japanese).
  14. Y. Chiba and A. Ito, Estimating a user's internal state before the first input utterance, Advances in Human-Computer Interaction, Vol. 2012, pp. 1-10, 2012.

Conference Papers

  1. Y. Chiba, K. Mitsuda, A. Lee, and R. Higashinaka, The Remdis toolkit: Building advanced real-time multimodal dialogue systems with incremental processing and large language models, in Proc. IWSDS, pp. 1-6, 2024.
  2. A. Guo, A. Ohashi, Y. Chiba, Y. Tsunomori, R. Higashinaka, and R. Hirai, Personality-aware natural language generation for task-oriented dialogue using reinforcement learning, in Proc. IEEE RO-MAN, pp. 1823-1828, 2023.
  3. H. Sugiyama, M. Mizukami, T. Arimoto, H. Narimatsu, Y. Chiba, H. Nakajima, and T. Meguro, Empirical analysis of training strategies of transformer-based Japanese chit-chat systems, in Proc. SLT, pp. 685-691, 2023.
  4. R. Yahagi, A. Ito, T. Nose, and Y. Chiba, Response sentence modification using a sentence vector for a flexible response generation of retrieval-based dialogue systems, in Proc. APSIPA-ASC, pp. 853-859, 2022.
  5. M. Inaba, Y. Chiba, R. Higashinaka, K. Komatani, Y. Miyao, and T. Nagai, Collection and analysis of travel agency task dialogues with age-diverse speakers, in Proc. LREC, pp. 5759-5767, 2022.
  6. R. Yahagi, Y. Chiba, T. Nose, and A. Ito, Multimodal dialogue response timing estimation using dialogue context encoder, in Proc. IWSDS, pp. 133-141, 2021.
  7. A. Guo, A. Ohashi, R. Hirai, Y. Chiba, Y. Tsunomori, and R. Higashinaka, Influence of user personality on dialogue task performance: A case study using a rule-based dialogue system, in Proc. 3rd Workshop on NLP for Conversational AI, pp. 263-270, 2021.
  8. Y. Chiba and R. Higashinaka, Variation across everyday conversations: Factor analysis of conversations using semantic categories of functional expressions, in Proc. PACLIC, pp. 160-169, 2021.
  9. Y. Chiba, Y. Yamazaki, and A. Ito, Speaker intimacy in chat-talks: Analysis and recognition based on verbal and non-verbal information, in Proc. SemDial, pp. 1-10, 2021.
  10. Y. Chiba and R. Higashinaka, Dialogue situation recognition for everyday conversation using multimodal information, in Proc. INTERSPEECH, pp. 241-245, 2021.
  11. Y. Yamazaki, Y. Chiba, T. Nose, and A. Ito, Neural spoken-response generation using prosodic and linguistic context for conversational systems, in Proc. INTERSPEECH, pp. 246-250, 2021.
  12. Y. Chiba, T. Nose, and A. Ito, Multi-stream attention-based BLSTM with feature segmentation for speech emotion recognition, in Proc. INTERSPEECH, pp. 3301-3305, 2020.
  13. Y. Yamazaki, Y. Chiba, T. Nose, and A. Ito, Construction and analysis of a multimodal chat-talk corpus for dialog systems considering interpersonal closeness, in Proc. LREC, pp. 436-441, 2020.
  14. S. Tada, Y. Chiba, T. Nose, and A. Ito, Effect of mutual self-disclosure in spoken dialog system on user impression, in Proc. APSIPA-ASC, pp. 806-810, 2018.
  15. H. Wu, Y. Chiba, T. Nose, and A. Ito, Analyzing effect of physical expression on English proficiency for multimodal computer-assisted language learning, in Proc. INTERSPEECH, pp. 1746-1750, 2018.
  16. Y. Chiba, T. Nose, T. Kase, M. Yamanaka, and A. Ito, An analysis of the effect of emotional speech synthesis on non-task-oriented dialogue system, in Proc. SIGDIAL, pp. 371-375, 2018.
  17. Y. Kageyama, Y. Chiba, T. Nose, and A. Ito, Improving user impression in spoken dialog system with gradual speech form control, in Proc. SIGDIAL, pp. 235-240, 2018.
  18. Y. Kageyama, Y. Chiba, T. Nose, and A. Ito, Collection of example sentences for non-task-oriented dialog using a spoken dialog system and comparison with hand-crafted DB, in HCI International 2017 - Posters' Extended Abstracts, pp. 458-563, 2017.
  19. Y. Chiba, T. Nose, and A. Ito, Analysis of efficient multimodal features for estimating user's willingness to talk: Comparison of human-machine and human-human dialog, in Proc. APSIPA-ASC, pp. 428-431, 2017.
  20. S. Tada, Y. Chiba, T. Nose, and A. Ito, Response classification of interview-based dialog system using user focus and semantic orientation, in Proc. IIH-MSP, pp. 84-90, 2017.
  21. Y. Chiba and A. Ito, Estimation of user's willingness to talk about the topic: Analysis of interviews between humans, in Proc. IWSDS, pp. 1-6, 2016.
  22. E. Takeishi, T. Nose, Y. Chiba, and A. Ito, Construction and analysis of phonetically and prosodically balanced emotional speech database, in Proc. O-COCOSDA, pp. 16-21, 2016.
  23. N. Totsuka, Y. Chiba, T. Nose, and A. Ito, Robot: Have I done something wrong? Analysis of prosodic features of speech commands under the robot's unintended behavior, in Proc. 4th International Conference on Audio, Language and Image Processing (ICALIP), pp. 887-890, 2014.
  24. Y. Chiba, T. Nose, A. Ito, and M. Ito, User modeling by using Bag-of-Behaviors for building a dialog system sensitive to the interlocutor's internal state, in Proc. SIGDIAL, pp. 74-78, 2014.
  25. Y. Chiba, M. Ito, and A. Ito, Modeling user's state during dialog turn using HMM for multi-modal spoken dialog system, in Proc. 7th International Conference on Advances in Computer-Human Interactions (ACHI), pp. 343-346, 2014.
  26. Y. Chiba, M. Ito, and A. Ito, Effect of linguistic contents on human estimation of internal state of dialog system users, in Proc. Interdisciplinary Workshop on Feedback Behavior in Dialog, pp. 11-14, 2012.
  27. Y. Chiba, M. Ito, and A. Ito, Estimation of user's internal state before the user's first utterance using acoustic features and face orientation, in Proc. International Conference on Human System Interaction, pp. 23-28, 2012.
  28. Y. Chiba, S. Hahm, and A. Ito, Find out what a user is doing before the first utterance: Discrimination of user's internal state using non-verbal information, in Proc. APSIPA-ASC, 4 pages, 2011.

Press & Events

  • Press

Academic Activities

  • Academic Career
  • Competitive Funds
  • Committees
  • Prizes

Dissertation

  • Doctoral thesis
    Estimation of user's state for advanced communication in a multi-modal spoken dialog system
    Department of Communications Engineering, Graduate School of Engineering, Tohoku University
    Supervisor: Prof. Akinori Ito
    Doctor of Engineering, 2015.9

Academic Career

  • Assistant Professor, Graduate School of Engineering, Tohoku University (2016.4-2020.3)
  • JSPS Research Fellow (PD) (2015.10-2016.3)
  • JSPS Research Fellow (DC2) (2014.4-2015.9)
  • Research Assistant, Graduate School of Engineering, Tohoku University (2013.4-2014.3)
  • Research Assistant, Research Organization of Electrical Communication, Tohoku University (2012.10-2013.3)
  • Technical Assistant, Research Organization of Electrical Communication, Tohoku University (2012.4-2012.9)

Competitive Funds

  1. Realization of natural interaction based on simultaneous generation of speech and bodily expressions for conversational AI, JSPS KAKENHI, Principal Investigator (2020-present)
  2. Research on contactless and implicit health monitoring through dialogue with voice assistants, JSPS KAKENHI, Principal Investigator (2018-2020)
  3. Research and development of an expressive and engaging cooperative spoken dialogue system for next-generation PHR services, JST COI, Co-Investigator (2018-2020)
  4. Research and development of a multimodal interactive English conversation learning system based on deep learning, JSPS KAKENHI, Co-Investigator (2017-present)
  5. Research and development of a Japanese pronunciation learning system using average-voice morphing, JSPS KAKENHI, Co-Investigator (2016-2018)
  6. Research on a "human-friendly" dialogue system based on user-state estimation and recognition/synthesis of diverse speech, Co-Investigator (2015-2017)
  7. Research and development of a system for assisting the composition of "Easy Japanese" based on sentence difficulty estimation and speech synthesis, Co-Investigator (2014-2016)
  8. Research on multimodal dialogue systems for assisting inexperienced users, JSPS KAKENHI, Principal Investigator (2014-2016)

Committees

  1. Steering committee member, HCG Symposium (2020-present)
  2. Steering committee member, Special Interest Group on Verbal and Nonverbal Communication (2020-present)
  3. Treasurer, Tohoku Chapter, Acoustical Society of Japan (2018-2020)
  4. Editorial committee member, Acoustical Science and Technology, special issue for the Commemoration of Universal Acoustical Communication Month 2018 (UAC2018) (2018)
  5. Young researcher supporter, Special Interest Group on Verbal and Nonverbal Communication (2016-2020)

Prizes

  1. Student Excellent Presentation Award, 2014 Spring Meeting of the Acoustical Society of Japan
  2. Research Encouragement Award, Technical Committee on Speech (FY2013)