Submit a Manuscript to the Journal

Applied Artificial Intelligence

For an Article Collection on

Multiagent Systems in the Era of Trustworthy Artificial Intelligence

Manuscript deadline
23 August 2023

Article collection guest advisor(s)

Prof. Viviana Mascardi, University of Genova, Italy
[email protected]

Prof. Michael Wooldridge, University of Oxford, UK

Multiagent Systems in the Era of Trustworthy Artificial Intelligence

Background

Robust, transparent, fair, and accountable - in a word, trustworthy - Artificial Intelligence (AI) can make citizens comfortable with the use of new technologies that improve their wellbeing and quality of life, and can increase the safety of increasingly widespread (semi-)autonomous robotic applications. It can also provide the means to understand and support the ethical behavior of complex hardware and software systems, including conversational agents, virtual agents, and companion robots.

For this reason, pursuing trustworthy AI is the objective of many governmental initiatives around the world. In April 2019 the European Commission published the Ethics Guidelines for Trustworthy AI, and in February 2022 it released the Data Act proposal for a fair and innovative data economy. In October 2022, the Blueprint for an AI Bill of Rights was released in the US, addressing data privacy as well as protection against algorithmic discrimination.

The AI scientific community in general, and the Multiagent System (MAS) community in particular, anticipated this trend by decades: many relevant works on the trustworthiness of MAS date back to the beginning of the millennium and earlier. This is not surprising. Agents are meant to interact with humans, and the technical and ethical issues raised by human-agent interaction were recognized as soon as agents were conceived. To address them, agent-based technologies often provide methods to simulate, validate, verify, and explain the behavior of individual agents, of agent organizations, and of interactions among agents and humans, hence making MAS more trustworthy.

The Field

Trust and reputation models in open MAS have been studied for a long time, and the results achieved there have helped the broader AI field adopt well-founded mechanisms for selecting reliable sources of information and service providers. Normative MAS also have a long history behind them: by requiring that agents comply with norms, they boosted research on how such compliance can be verified, and hence on how and why both the agents and the system can be trusted.
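To give a concrete flavor of the kind of mechanism this line of work has produced, the minimal sketch below ranks service providers with a simple beta-reputation score derived from past interaction outcomes; the provider names and interaction counts are invented for illustration.

    # Minimal beta-reputation sketch: rank providers by expected reliability.
    def reputation(positive: int, negative: int) -> float:
        """Expected reliability under a Beta(positive + 1, negative + 1) prior."""
        return (positive + 1) / (positive + negative + 2)

    # Invented record of past interactions with each provider: (successful, failed).
    history = {
        "provider_a": (18, 2),
        "provider_b": (3, 0),
        "provider_c": (40, 25),
    }

    # Select the provider with the highest expected reliability.
    best = max(history, key=lambda p: reputation(*history[p]))
    print(best, round(reputation(*history[best]), 3))  # provider_a 0.864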

Cooperative and non-cooperative game theory is influenced by the trust that agents have - or do not have - in other agents: the performance of a coalition may depend heavily on the trust relations among its members, and the allocation of resources must be fair.
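As one classical notion of fair allocation, the sketch below computes Shapley values, i.e. each agent's average marginal contribution to the coalition's payoff, for an invented three-agent game.

    # Shapley-value sketch: split a coalition's payoff by average marginal contribution.
    from itertools import permutations

    def shapley(agents, value):
        shares = {a: 0.0 for a in agents}
        orders = list(permutations(agents))
        for order in orders:
            coalition = set()
            for a in order:
                before = value(frozenset(coalition))
                coalition.add(a)
                shares[a] += value(frozenset(coalition)) - before
        return {a: s / len(orders) for a, s in shares.items()}

    # Invented characteristic function: any two agents earn 60 together, all three earn 100.
    def v(coalition):
        return {0: 0, 1: 0, 2: 60, 3: 100}[len(coalition)]

    print(shapley(["a1", "a2", "a3"], v))  # symmetric game: each receives 100/3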

Finally, agents may learn new patterns from the data they can access and new behavioral strategies from the feedback they get on their previous actions: whatever learning paradigm they adopt, be it reinforcement, neural, or neuro-symbolic, they should be able to understand and explain what they learned and why they learned it, and to make sure that what they learned is fair, unbiased, and not harmful.

With these premises, all the activities that most characterize agents, and all the stages of agent and MAS engineering, may become a testbed for showing how trustworthiness can be injected into a socio-technical distributed AI system as complex as a MAS, and - on the other hand - may highlight what is still missing to make a MAS trustworthy.

Submissions

This collection is open to submissions that address MAS in the era of trustworthy AI, where trustworthiness is meant in a very broad sense. Topics of interest include, but are not limited to:

  • Engineering and simulating trustworthy MAS
  • Trust, fairness and reputation in agent organizations, normative systems, and game-theoretical approaches
  • Explainable and unbiased learning agents
  • Ethical issues in human-agent and human-robot interaction
  • Trustworthy MAS applications

Submissions are expected to target a wide audience of readers; hence, they should give room to an introduction to the proposed techniques and their motivations, in addition to explaining the technical solutions that characterize the proposal.

Applied Artificial Intelligence accepts original research articles. When submitting your article, please select the Article Collection "Multiagent Systems" from the drop-down menu in the submission system.

Benefits of publishing open access within Taylor & Francis

Global marketing and publicity, ensuring your research reaches the people you want it to.

Article Collections bring together the latest research on hot topics from influential researchers across the globe.

Rigorous peer review for every open access article.

Rapid online publication allowing you to share your work quickly.

All manuscripts submitted to this Article Collection will undergo desk assessment and peer-review as part of our standard editorial process. Guest Advisors for this collection will not be involved in peer-reviewing manuscripts unless they are an existing member of the Editorial Board. Please review the journal Aims and Scope and author submission instructions prior to submitting a manuscript.
