
Submit a Manuscript to the Journal
The International Journal of Human Resource Management

For a Special Issue on
The impact of algorithmic decision-making on workplace inequality and diversity

Manuscript deadline
09 May 2023


Special Issue Editor(s)

Olivia Kyriakidou, The American College of Greece, Greece

Dimitria Groutsis, Sydney Business School, Australia

Joana Vassilopoulou, Brunel Business School, UK

Mustafa Özbilgin, Brunel Business School, UK


The impact of algorithmic decision-making on workplace inequality and diversity

It is perhaps unsurprising that the use of algorithms in people management (PM) operations, processes, and practices has been the subject of a growing body of research within work and organization studies (Cheng & Hackett, 2021). Whether articulated through arguments surrounding a lack of regulatory measures (Ajunwa & Greene, 2019) or of “good” employment data (Citron & Pasquale, 2014), a seemingly ubiquitous and unquestioning commitment to the use of algorithms in PM represents not just a problematic PM discourse, but also a powerful one. Critical research has considered the implications of algorithmic decision-making for employee control, surveillance, ethics, and discrimination (Ajunwa & Greene, 2019; Beer, 2017; Parry, Cohen, & Bhattacharya, 2016; Mittelstadt, Allo, Taddeo, Wachter, & Floridi, 2016). Moreover, it has highlighted how human biases can be inscribed into the code of PM algorithms, sustaining inequalities while assuming a veneer of objectivity (Raghavan, Barocas, Kleinberg, & Levy, 2020; Vassilopoulou et al., 2021). The case of algorithmic pre-employment assessment vividly illustrates such inscription. Algorithms built on historical employment data in which management and leadership positions are held mostly by men, for example, may lead recruiters to conclude that women do not seek such positions, building a bias in favour of men into the way recruitment posts for managerial roles are developed and targeted through social media. In such circumstances, the original gender bias is reified by the biased data that ‘trained’ the recruitment algorithm (Devlin, 2017).
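To make this mechanism concrete, the short sketch below shows, under purely illustrative assumptions, how a screening model trained on synthetically generated “historical” data, in which men were promoted to management more often than equally qualified women, goes on to score an otherwise identical female candidate lower than a male one. The data, variable names (gender, experience, skill), and model choice are hypothetical and are not drawn from any of the studies cited above.

```python
# Hypothetical illustration only: a screening model trained on biased
# historical data reproduces that bias for otherwise identical candidates.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Synthetic "historical" records: gender (1 = male, 0 = female),
# years of experience, and a skill score.
gender = rng.integers(0, 2, n)
experience = rng.normal(10, 3, n)
skill = rng.normal(0, 1, n)

# Past promotion decisions depended on merit *and* on gender,
# i.e. the historical labels themselves are biased.
logit = 0.8 * skill + 0.2 * (experience - 10) + 1.5 * gender - 1.0
promoted = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([gender, experience, skill])
model = LogisticRegression().fit(X, promoted)

# Two candidates identical in every respect except gender.
woman = np.array([[0, 12, 1.0]])
man = np.array([[1, 12, 1.0]])
print("P(shortlist | woman):", model.predict_proba(woman)[0, 1])
print("P(shortlist | man):  ", model.predict_proba(man)[0, 1])
# The model assigns the man a markedly higher score purely because the
# historical labels encoded a gender bias -- the bias is 'reified'.
```

Note that simply deleting the gender column would not necessarily remove the learned disadvantage, since other variables can act as proxies for gender; this is one reason why questions of transparency, accountability, and regulation, discussed below, are so pressing.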

Evolving from this body of work, critical scholars have called for additional theorizing of the ramifications of algorithmic decision-making in organizations (Lindebaum, Vesa, & den Hond, 2020), exploring in particular the processes through which PM algorithms may mask inequality and discrimination, replicating social and organizational inequalities and, in some instances, even amplifying human bias (Hmoud & Laszlo, 2019). Such ramifications may intensify further with the rise of the gig economy and platform companies (Vallas & Schor, 2020), as well as of new management models in which algorithms perform a wide range of managerial tasks, often without workers being aware of the extent to which they are managed by algorithms. It is such processes, systems, and tasks that have provoked controversy over the appropriate regulation of algorithmic decision-making, coordination, and control (Healy & Pekarek, 2020). However, critical challenges remain concerning how regulation is best achieved, raising questions about the appropriate design, scope, content, enforcement, and architecture of regulatory frameworks (Ulbricht & Yeung, 2022).

Even less is known about the impact of algorithmic decision-making and algorithmic biases on employees’ and relevant stakeholders’ cognitions, emotions, and behaviors. An urgent question is whether and how discrimination, inequality, and disadvantage prompt employee action, mobilization, and resistance, especially in the new business models of the gig economy and in algorithm-based management systems (Healy, Nicholson, & Pekarek, 2017). The developing research field of dignity at work has started to examine how employees’ identities (e.g., Lamont, Beljean, & Clair, 2014) may be shaped by experiences of inequality and exclusion at work, but also how employees’ cognitions, attitudes, emotions, and resilience towards inequality, discrimination, and exclusion materialise and differ. However, further exploration is needed of the (intended and unintended) consequences of algorithmic decision-making for these sentiments, and of whether and how these sentiments affect the ways in which inequality is challenged.

The overall purpose of this special issue is therefore to advance knowledge of the impact of algorithmic decision-making on inequality and diversity in organizations. In particular, its primary goal is to invite research that develops new theoretical and empirical insights into how algorithmic decision-making affects fundamental employment outcomes, such as employment opportunities, job assignments, career development, pathways and promotions, wages, training opportunities, and redundancies, to name but a few. Moreover, it seeks to understand the complex challenge of organizing the international regulation of algorithmic decision-making as a means of creating algorithmic transparency and accountability. Finally, it attempts to explore the impact of algorithmic decision-making on employees themselves and to understand whether and how it is challenged. Such an exploration not only advances human resource management and organizational theory and research, but also has practical implications for employees, managers, organizations, human resource managers, diversity and inclusion practitioners, communities, and society as a whole.

An indicative but not exhaustive list of questions that we are interested in addressing includes:

  • What are the (un)intended consequences of algorithmic decision-making, particularly as it advantages certain individuals or social identity groups while restricting opportunities for, and excluding, others inside and outside organizations?
  • How does algorithmic decision-making in recruitment and selection, career development, performance management and reward systems, and training and development, among other organizational processes, affect employees’ careers and lived experiences in the workplace?
  • How do emergent technologies, such as machine learning, predictive and prescriptive algorithms, and online platforms, influence candidate screening, hiring and rejection decisions, and the allocation of tasks and jobs to employees, producing different outcomes for different groups of workers and negatively affecting those who are already marginalized?
  • How do new organizational forms built on emergent AI technologies, such as platform companies and network-based firms, affect the distribution of power in organizations and labor markets, thereby shaping inequality in the workplace?
  • How do organizations and decision-makers address and adjust to algorithmic decision-making and the future of work? How do algorithms affect decision-makers and their behavior in organizations, especially in the context of PM decision-making?
  • How do the myths associated with great equalizers such as globalization, flexibility, meritocracy, and/or efficiency interact with algorithmic decision-making, define working practices, and deepen systemic inequalities in organizations?
  • How does algorithmic decision-making change or reshape existing norms and institutions? How does this affect individual and organizational behavior?
  • How is the emergence of algorithmic decision-making experienced by employees? How does algorithmic bias manifest itself in day-to-day, mundane employee experiences at work, and how does it affect individuals’ lives outside work? How do employees navigate algorithmic decision-making?
  • What strategies have proved successful in disrupting algorithmic biases in organizations? How could algorithmic decision-making help to achieve a more inclusive society? How can algorithmic decision-making be regulated?

A broad range of methodologies is welcome, from qualitative or quantitative analysis to simulation and experimental approaches. We are also interested in studies across industries and markets, as long as they share a concern for the role of algorithmic decision-making in understanding workplace inequality and exclusion in an international context.

Submission Instructions

Provisional Timeline:

March 15, 2023: Submission due date

July 31, 2023: Deadline for revised submissions

December 31, 2023: Deadline for final revisions

April 30, 2024: Final decision about papers to be included in the special issue

Authors of prospective papers are welcome to discuss their ideas with any of the guest editors in advance.

