
Submit a Manuscript to the Journal
Advanced Robotics

For a Special Issue on
Multimodal Processing and Robotics for Dialogue Systems

Manuscript deadline
31 January 2023


Special Issue Editor(s)

Takayuki Nagai, Osaka University, Japan
[email protected]

David Traum, University of Southern California, USA

Gabriel Skantze, KTH Royal Institute of Technology, Sweden

Hiromitsu Nishizaki, University of Yamanashi, Japan

Ryuichiro Higashinaka, Nagoya University, Japan

Takashi Minato, RIKEN/ATR, Japan


Multimodal Processing and Robotics for Dialogue Systems

In recent years, as seen in smart speakers such as Google Home and Amazon Alexa, spoken dialogue technology has made remarkable progress, and systems can now converse with users in human-like utterances. In the future, such dialogue systems are expected to support our daily activities in many ways. However, dialogue in daily activities is more complex than exchanges with a smart speaker, and even with current spoken dialogue technology it remains difficult to sustain a successful dialogue across varied situations. In customer service, for example, an operator must respond appropriately to customers who speak and make requests in very different ways. Human operators manage this by adapting their manner of speaking to each customer and by carrying the dialogue not only with their voice but also with their gaze and facial expressions.

This kind of human-like interaction is far beyond what existing spoken dialogue systems can achieve. Humanoid robots offer a way to realize it: using various sensors, they can recognize not only the user's voice but also facial expressions and gestures, and using their bodies, they can express themselves through gestures and facial expressions of their own. These rich means of expression give them the potential to sustain a dialogue in ways that conventional dialogue systems cannot.

Combining such robots with dialogue systems can greatly expand what dialogue systems can do, while at the same time raising a variety of new challenges. Research and development efforts addressing these challenges are already underway, including the Dialogue Robot Competition at IROS 2022.

In this special issue, we invite a wide range of papers on multimodal dialogue systems and dialogue robots, covering their applications as well as fundamental research. Topics of interest include, but are not limited to, the following:

  • Spoken dialogue processing
  • Multimodal processing
  • Speech recognition
  • Text-to-speech
  • Emotion recognition
  • Motion generation
  • Facial expression generation
  • System architecture
  • Natural language processing
  • Knowledge representation
  • Benchmarking
  • Evaluation methods
  • Ethics
  • Dialogue systems and robots for competition

Submission Instructions

The full-length manuscript (either a PDF or an MS Word file) should be submitted to the office of Advanced Robotics, The Robotics Society of Japan, through the journal's online submission system (https://www.rsj.or.jp/AR/submission). Sample manuscript templates and detailed instructions for authors are available on the journal's website.

  • Select "Multimodal Processing and Robotics for Dialogue Systems" when submitting your paper to ScholarOne.
  • Expected publication date: November 2023 (Advanced Robotics vol. 37, issue 21).

