AI and the future of employment

We chatted with Johnny Långstedt, PhD, about how intelligent technologies affect the workplace

With recent developments in technology, and growing interest in new artificial intelligence (AI) tools like ChatGPT, we chatted with researcher Johnny Långstedt to get his insights on these latest developments in relation to his research.

Johnny published his article, How will our Values Fit Future Work? An Empirical Exploration of Basic Values and Susceptibility to Automation, in our Labour and Industry journal. He published his work open access (OA), making it free to read for all, as part of the FinELib agreement between Finnish institutions and Taylor & Francis. You can read more about this agreement here.


Johnny Långstedt, PhD. Åbo Akademi University, Finland. Author of ‘How will our values fit future work? An Empirical Exploration of Basic Values and Susceptibility to Automation’.

Please introduce yourself and your research

• I have a PhD in the study of religions, and my research revolves around the cultural consequences of Industry 4.0. I’m looking at how changes in work can affect societal cultures: I explore the values that people cherish in different occupations and how these could be affected by digital technologies transforming or replacing work.

  There’s a big debate on whether work is actually becoming more innovation-driven, or more algorithmic and routine. This will change the kinds of emotional and psychological needs that work fulfils, which in turn will affect how meaningful work is for people, and that can ultimately affect their wellbeing in the long term.

What got you interested in this aspect of society and technology in general?

• My interest in this topic is really a sum of coincidences. I was doing a survey on the values of employees in the Turku region (south-west Finland) and interviewed the managers of the surveyed teams about their challenges in leading them. At the same time, I happened to read a report commissioned by the government about the “age of artificial intelligence”. When I was reading through the interview data and cross-referenced it with the survey results, it dawned on me that there seemed to be a connection between values and resistance to changes in the nature of work. For example, managers of teams that reported autonomy values as important also reported more challenges in implementing structure and standardizing processes. This was reported in the precursor to the paper published in Labour and Industry.

  These changes didn’t seem major to me, and yet there seemed to be a genuine struggle in adopting new ways of working. At this stage, I had read the commissioned report, and I thought: if organizations struggle with these rather minor changes, what happens when the major changes presented by the report occur? Whether work becomes more routine or more creative, a major shift could lead to a significant challenge, first in adopting the new ways of working with intelligent technologies and second in finding the work meaningful and satisfactory. The latter could become challenging because we tend to find acting in line with our values satisfying, in contrast to behaving in ways that oppose our values. Ergo, we are more satisfied when our work doesn’t conflict with our values.

  A significant shift in the nature of work could create a divergence between the new nature of work and the values of those whose work is being automated.

Was the choice to publish OA immediate?

• Our institution has an agreement with Taylor & Francis, and all Finnish institutions are covered. So, we have a quota of articles that we can publish open access.

  I didn’t know that when I published my first article with Taylor & Francis; that’s when I discovered it was an option. For this article, I knew that I could publish OA.

In your opinion, what are the benefits of publishing research OA?

• Everyone can access it. There’s always a gap between research and practice, and making it open access makes that gap at least slightly smaller. And then, of course, there are many institutions that can’t afford access to paywalled articles, so in that sense it’s good to have it open access – which is the point of research. It’s a common good.

Increasing the availability of higher education

What conclusions did you make in your research? What do you want readers to take away from it?

• What I wanted to bring forth in this study was that adaptation to the new working life is not only a matter of skills, the labour market, and the distribution of wealth; it has a clear cultural dimension as well.

  In the paper, I show that the values of occupations at risk of being automated (according to Frey and Osborne’s framework) are a poorer fit for the AI era. This is not a reflection of abilities or skills, but rather of priorities, and much of it is related to socio-economic status and reflected in the level of education one attains. To foster a culture whose values support the adoption of these new intelligent technologies, which supposedly reduce routine work, we should invest heavily in decreasing existential uncertainty and increasing the availability of higher education.

With the recent launch of ChatGPT, we’d love to know your thoughts on this. What do you think it means for the future of employment?

• This is a tricky question! I have not tested the commercial version of ChatGPT, and I did not find the free version particularly impressive. It works really well for creating abstracts – so it will at least save me some hours of work!

  I’m less concerned about its impact on future employment and more concerned about how it will affect our ability to express ourselves in written language. There are already signs of social media having a negative impact on our writing. How bad will it get if we rely on a machine to generate our text? A slight de-skilling seems plausible, if not imminent. Generative language models can be a valuable tool in teaching, research, and many other fields – I’m certain of it. But I believe it’s an augmenting rather than a replacing technology, as it still often requires a human finish.

Do you think creativity, personality, intuition, and other such human traits will ever be coded/replicated by artificial intelligence?

• Well, never say never. We might suffer from hubris as workers in creative fields. I tested the picture-generating DALL-E service, and it created quite nice pictures. Some were, of course, completely off point, but if the prompt was quite clear it would do a pretty good job. “Rainbow squirrel” worked quite well! There are also services that design company logos and entire visual identities for a few euros. Considering the debacle over the BBC’s new logo and its cost, it seems likely that at least smaller companies could rely on some form of generative AI instead of investing thousands in their visual designs. This is just an example of how creative work might also be “threatened” by these technologies. There are other studies showing technologies reducing opportunities to be creative and social.

  Most importantly: no one knows the future; we’re limited to imagining what alternatives we could have.

When publishing your research, what’s most important to you and why?

• The readers. I try to publish in journals that I think have a readership that would appreciate the topic. So, I don’t necessarily aim for high-impact-factor journals or anything like that, because those journals might publish articles for a more general audience. I try to find journals where researchers who are interested in similar things as I am publish their work. Those are probably the people who would cite my work anyway.

What would you say to researchers who might be nervous about publishing their work OA?

• You want your paper to be read by as many people as possible. So why would you publish it at all if you didn’t want that? I still choose exactly which journal suits my work. Basically, it’s the same as publishing behind a paywall, but you get more readers.