Submit a Manuscript to the Journal
Behaviour & Information Technology
For a Special Issue on
Hybrid intelligence across contexts: From conceptualisation to empirical evidence
Abstract deadline: 15 September 2025
Manuscript deadline: 10 January 2026

Special Issue Editors
Andy Nguyen,
University of Oulu
andy.nguyen@oulu.fi
Michail Giannakos,
Norwegian University of Science and Technology (NTNU)
michailg@ntnu.no
Vassilis Kostakos,
University of Melbourne
vassilis.kostakos@unimelb.edu.au
Alexander Richter,
Victoria University of Wellington
alex.richter@vuw.ac.nz
Matthias Söllner,
University of Kassel
soellner@uni-kassel.de
Call for Papers:
Human-Centered AI (HCAI) focuses on creating AI technology that amplifies and augments human abilities rather than displacing them (Shneiderman, 2022). Central to HCAI is the commitment to preserving meaningful human control, ensuring that AI remains transparent, ethically aligned, and responsive to human needs and values. This orientation frames AI not as autonomous agents detached from human values and objectives, but as advanced tools designed to enhance human cognition, creativity, and problem-solving capacity. This conceptualization also resonates with the extended cognition literature, which posits that technology has historically functioned as an extension of the human mind, with AI marking the most recent advancement in this ongoing evolutionary relationship (Clark, 2003). Recent work (e.g., Giannakos et al., 2024) highlights the complementary strengths of humans and AI and emphasizes that humans must be able to maintain a degree of control over AI actions, so as to avoid risks associated with the loss of agency, intentionality, and human expertise.
At the same time, the emerging paradigm of hybrid intelligence (HI) presents a promising approach to help us understand and address these challenges (Dellermann et al., 2019). HI acknowledges that humans and machines possess distinct strengths, and that by integrating these capabilities they can achieve outcomes that would be unattainable individually (Dellermann et al., 2019). The overarching objective of HI is to bridge the gap between human intelligence and AI by creating sociotechnical ensembles that outperform humans or machines working independently. HI pursues this goal by combining the strengths of humans and machines through coevolutionary processes, enabling them to collaborate with, learn from, and reinforce one another (Järvelä et al., 2025).
Multiple research communities are converging on this theme, including Human-Computer Interaction (HCI) (e.g., Davis et al., 2024; Xu et al., 2023), Information Systems (IS) (e.g., Dellermann et al., 2019; Hemmer et al., 2021), and learning technologies (e.g., Giannakos et al., 2024; Järvelä et al., 2025), recognizing that future intelligent systems must be understood as socio-technical systems combining technical and human elements (Nguyen, 2025). This special issue aims to advance that conversation by providing a dedicated venue for high-quality research on hybrid intelligence. We encourage contributions that draw on diverse disciplines and theoretical lenses, including (but not limited to) HCI, IS, the learning sciences, psychology, cognitive science, AI and machine learning, organizational science, and ethics, to deepen our understanding of human–AI collaboration.
Key themes and questions of interest include:
- How can we design and evaluate systems in which humans and AI combine their complementary strengths and effectively learn from each other?
- How can we align contemporary AI systems with human and societal needs and values?
- What socio-technical factors (e.g., organizational structures, user interfaces, ethical constraints) determine the success of hybrid intelligence in practice?
- How do we balance human control and AI autonomy to support trustworthy, human-centered AI?
- How does human intuition blend with algorithmic precision, and how does this blending affect decision-making, learning gains, productivity, and well-being across contexts?
Prior work has largely emphasized technical innovations in hybrid AI systems, but there is a pressing need for research illuminating the human, organizational, and societal dimensions of hybrid intelligence (e.g., Hemmer et al., 2021; Li et al., 2024). For instance, understanding how design decisions and user behavior interact to shape outcomes in human–AI systems, or identifying the new sociotechnical ensembles emerging in HI contexts, offers promising avenues for new insights (Giannakos et al., 2024).
This special issue seeks to address such questions, highlighting the interplay between technological capabilities and human factors in creating effective hybrid intelligent ensembles. Topics of interest for this special issue include, but are not limited to:
- Concepts and Frameworks: Theoretical models defining hybrid intelligence and frameworks for human–AI collaboration (e.g., identifying new human-technology dynamics and relationships between humans and AI; devising novel methodologies for human–machine teaming).
- Design of Systems: Understanding novel hybrid intelligent ensembles and proposing approaches to designing sociotechnical systems that facilitate seamless interaction between human intelligence and machine intelligence. This includes user interface design for hybrid decision support, human-in-the-loop machine learning techniques, and incentive or workflow designs that integrate human contributions (e.g., crowd+AI systems).
- Empirical Evaluations: User studies, field deployments, or experimental evaluations that assess the effectiveness of hybrid intelligent systems in real-world tasks (e.g., understanding different hybrid intelligence constellations); Case studies highlighting successes or failures of hybrid intelligence in practice and the lessons learned.
- Human–AI Co-Learning: Human–AI collaboration in learning for hybrid intelligence. Learning technologies, including AI tutors and learning analytics, have traditionally been designed to support instructors and students. We are looking for studies that generate insights into how humans and AI complement and support each other and engage in processes of co-learning, co-creation, and co-evolution.
- Human–AI Decision-Making: Research on decision processes involving hybrid intelligence for individuals or groups. This includes systems for group decision support, AI facilitation in meetings, hybrid recommender systems, and investigations of trust, accountability, and transparency in human-AI joint decisions.
- Ethical, Social, and Organizational Implications: Analyses of the broader implications of hybrid intelligence, including ethical frameworks for human-AI collaboration, policy or governance of hybrid systems in sensitive contexts, user acceptance of AI-augmented processes, and the impact of hybrid intelligence on work culture, learning ecosystems, or decision accountability. We also welcome work on how to ensure fairness, transparency, trustworthiness, sustainability, and human control in hybrid intelligent systems.
References
Clark, A. (2003). Natural-Born Cyborgs: Minds, Technologies, and the Future of Human Intelligence. Oxford University Press.
Dellermann, D., Ebel, P., Söllner, M., & Leimeister, J. M. (2019). Hybrid Intelligence. Business & Information Systems Engineering, 61(5), 637–643. https://doi.org/10.1007/s12599-019-00595-2
Davis, R. L., Wambsganss, T., Jiang, W., Kim, K. G., Käser, T., & Dillenbourg, P. (2024). Fashioning creative expertise with generative AI: Graphical interfaces for GAN-based design space exploration better support ideation than text prompts for diffusion models. In Proceedings of the CHI Conference on Human Factors in Computing Systems (CHI '24). Association for Computing Machinery.
Giannakos, M., Azevedo, R., Brusilovsky, P., Cukurova, M., Dimitriadis, Y., Hernandez-Leo, D., Järvelä, S., Mavrikis, M., & Rienties, B. (2024). The promise and challenges of generative AI in education. Behaviour & Information Technology. Advance online publication. https://doi.org/10.1080/0144929X.2024.2394886
Hemmer, P., Schemmer, M., Vössing, M., & Kühl, N. (2021). Human-AI Complementarity in Hybrid Intelligence Systems: A Structured Literature Review. In Proceedings of the Pacific Asia Conference on Information Systems (PACIS 2021).
Järvelä, S., Zhao, G., Nguyen, A., & Chen, H. (2025). Hybrid intelligence: Human–AI coevolution and learning. British Journal of Educational Technology, 56(2), 455–468. https://doi.org/10.1111/bjet.13560
Li, M. M., Reinhard, P., Peters, C., Oeste-Reiss, S., & Leimeister, J. M. (2024). A Value Co-Creation Perspective on Data Labeling in Hybrid Intelligence Systems: A Design Study. Information Systems, 120, 102311. https://doi.org/10.1016/j.is.2023.102311
Nguyen, A. (2025). Human-AI Shared Regulation for Hybrid Intelligence in Learning and Teaching: Conceptual Domain, Ontological Foundations, Propositions, and Implications for Research. Hawaii International Conference on System Sciences (HICSS).
Shneiderman, B. (2022). Human-centered AI. Oxford University Press.
Xu, W., Dainoff, M. J., Ge, L., & Gao, Z. (2023). Transitioning to Human Interaction with AI Systems: New Challenges and Opportunities for HCI Professionals to Enable Human-Centered AI. International Journal of Human–Computer Interaction, 39(3), 494–518. https://doi.org/10.1080/10447318.2022.2041900
Submission Instructions
We call for strong research with multidisciplinary methodological and theoretical perspectives, ranging from quantitative to qualitative and mixed-methods studies, and from controlled experimentation to field observation. Advancing hybrid intelligence requires combining insights from multiple fields, including AI/ML, HCI, IS, cognitive science, social science, ethics, and infrastructures.
Timeline
Abstract Submissions to Editors (recommended; email to andy.nguyen@oulu.fi): 15 September 2025
Full Paper Submissions (the submission system will open approximately 1.5 months before the deadline): 10 January 2026
First-round Review: 31 March 2026
Revision Submission: 30 June 2026
Final Paper Acceptance: 31 August 2026