Submit a Manuscript to the Journal

Computer Assisted Language Learning

For a Special Issue on

Multimodal Generative Artificial Intelligence in Language Education

Special Issue Editor(s)

Bin Zou, Xi'an Jiaotong-Liverpool University, China
bin.zou@xjtlu.edu.cn

Di Zou, Lingnan University, Hong Kong, China
dizou@ln.edu.hk

Haoran Xie, Lingnan University, Hong Kong, China
hrxie@ln.edu.hk

Multimodal Generative Artificial Intelligence in Language Education

Since OpenAI launched ChatGPT, Generative Artificial Intelligence (GAI) has become widespread in language education. Many studies have found that GAI tools such as ChatGPT can enhance language teaching and learning (Kohnke, Moorhouse, & Zou, 2023; Liu & Ma, 2024; Mohamed, 2023). Researchers have found that ChatGPT can help English as a foreign language (EFL) learners improve their writing skills (Guo & Wang, 2023; Su, Lin, & Lai, 2023; Yan, 2023) and speaking skills (Wan & Moorhouse, 2024). GAI tools extend beyond text generation: they can produce multimodal content, which broadens their applicability in various educational contexts. For instance, Midjourney and DALL-E can generate realistic images, while Sora can create videos, making such tools valuable assets in multimedia learning environments. This multimodal capability means that GAI can synthesize and integrate multiple forms of media to provide richer and more engaging educational experiences (Collie & Martin, 2024).

Multimodal GAI refers to AI systems that can process and generate different types of data simultaneously. For example, GPT-4o can interpret a combination of text, image, and audio inputs and produce coherent outputs across these modalities. In language education, multimodal AI tools can be used to create interactive lessons that combine textual explanations, visual aids, and audio components, thereby catering to diverse learning needs and enhancing overall comprehension.

Multimodal GAI tools can also be used to simulate real-life conversations and scenarios, offering learners the opportunity to practice language skills in a more dynamic and realistic environment. For instance, Sora could generate a video depicting a conversation between native speakers, allowing learners to observe and practice pronunciation, intonation, and conversational cues in context. Such innovations in GAI hold significant promise for advancing language education and making learning more immersive and effective.

Some researchers have already investigated using multimodal sources, including texts and images, to foster EFL writing (Liu, Zhang, & Biebricher, 2024). However, very few studies have examined the use of AI-generated audio or video to develop language skills, and multimodal sources created by GAI tools warrant further research. This special issue aims to explore how multimodal GAI sources can help and motivate teachers to enhance language teaching and help learners improve their reading, writing, listening, and speaking skills in the age of GAI. It will provide exemplary cases and effective approaches for using multimodal GAI sources in language education.

The topics include but are not limited to:

Multimodal GAI for writing, speaking, reading or listening skills

Multimodal GAI for critical thinking skills

Multimodal GAI for language cognition

Multimodal GAI for engagement in language learning

Multimodal GAI for motivation, emotion, and enjoyment in language learning

Multimodal GAI for language lesson preparation

Submission Instructions

Abstracts should be 150-250 words, clearly and concisely written, and should generally include the following:

  • Proposed article title
  • Proposed authors' names, affiliations, and contact details
  • An introduction of one or two sentences stating the research aims and educational context
  • For empirical reports, a brief summary of the data collection methodology
  • A summary of the results
  • Conclusions and implications in two or three sentences, including new insights, significant contributions, and generalizability

Abstract submission: bin.zou@xjtlu.edu.cn

Abstract submission deadline: 01/05/2025

Notification of abstract acceptance: 30/05/2025

Full paper submission deadline: 30/09/2025
