Emotion detection in political advertising

We chat with Michael Bossetta about how intelligent technologies can aid emotion detection

With recent developments in technology and growing concern about the threat of artificial intelligence (AI), we chatted with researcher Michael Bossetta to get his insights on these developments in relation to his research on emotion detection.

Michael published his article, FBAdLibrarian and Pykognition: open science tools for the collection and emotion detection of images in Facebook political ads with computer vision, in the Journal of Information Technology & Politics. He published his work open access (OA), making it free to read for all, as part of the Bibsam agreement between Swedish institutions and Taylor & Francis. You can read more about this agreement here.

Michael Bossetta, Lund University, Sweden. Author of ‘FBAdLibrarian and Pykognition: open science tools for the collection and emotion detection of images in Facebook political ads with computer vision’.

Please introduce yourself and your research

• I’m an assistant professor in the Department of Communication and Media at Lund University.

  My research deals mostly with the intersection of social media and politics: what citizens discuss during elections, how politicians campaign, and how this plays out across different social media platforms.

What conclusions did you draw from your research? What do you want readers to take away from it?

• Although the paper is primarily methodological, there are two takeaways from the analysis. The first is that only a small number of unique images and videos are used to create thousands of digital political ads. If we see this for highly resourced American campaigns, then the number of unique images and videos in non-US contexts is likely even lower.
             
  Second, the social media ads from candidate accounts do not exhibit the negativity, polarization, or fake news often associated with social media. Our analysis shows these ads mostly depict smiling politicians, and very few were attack ads. Divisive messages may be coming from other types of accounts, but the political ads from candidates’ official Facebook accounts are largely positive and promote the candidate.

Why do you believe there is an increase in concern towards emotion-detection AI, particularly from legislators? Do you agree with this concern?

• Legislators are concerned that facial recognition technology can lead to increased surveillance or social sorting, such as China’s Social Credit System. This is definitely a valid concern, especially since new forms of sensitive biodata can be collected from fitness trackers, VR headsets, or haptic suits. However, it is important to keep in mind what ‘emotion detection’ tools actually do. They classify facial patterns, but they are not collecting any physiological data such as brain activity. We have found in other research that these tools are most reliably used as ‘happy detectors,’ and they are quite unreliable at classifying any emotion other than happiness (for example: anger, sadness, or fear). Therefore, we cannot foresee any significant harms arising from the use of emotion classifications, but facial recognition more broadly is certainly a legitimate privacy concern.
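To make the ‘happy detector’ point concrete, here is a minimal hypothetical sketch (the function name and the 0.9 threshold are our own illustrative assumptions, not from the paper) that treats a classifier’s output as trustworthy only when the top label is happiness with high confidence:

```python
# Hypothetical sketch: use an emotion classifier as a "happy detector".
# The threshold and labels are illustrative assumptions, not from the paper.

def happy_detector(label, confidence, threshold=0.9):
    """Trust only high-confidence HAPPY labels; flag everything else as uncertain."""
    if label == "HAPPY" and confidence >= threshold:
        return "happy"
    # Other emotions (anger, sadness, fear, ...) are classified too
    # unreliably to act on, per the researchers' experience.
    return "uncertain"

print(happy_detector("HAPPY", 0.97))  # happy
print(happy_detector("ANGRY", 0.95))  # uncertain
```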

What makes Pykognition different to existing facial emotion detection tools?

• Pykognition helps researchers use and interpret the emotion classifications from Amazon Web Services (AWS) Rekognition; we do not develop the emotion detection technology ourselves. Rekognition is primarily built for use in software, so its raw output is not very conducive to academic analysis. Our tool, Pykognition, simplifies the process of classifying images with Rekognition and delivers the output in a form that researchers can easily interpret.
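As an illustration of the kind of post-processing involved (a hypothetical sketch, not Pykognition’s actual code), the snippet below flattens a Rekognition-style DetectFaces response into one top-emotion label per detected face; the sample response values are invented:

```python
# Hypothetical sketch: flatten an AWS Rekognition DetectFaces-style
# response into a simple per-face list of top emotions.
# This is NOT Pykognition's actual code, just an illustration of the idea.

def top_emotions(response):
    """Return (face_index, emotion, confidence) for each detected face."""
    rows = []
    for i, face in enumerate(response.get("FaceDetails", [])):
        emotions = face.get("Emotions", [])
        if not emotions:
            continue
        # Keep only the highest-confidence emotion label for this face
        best = max(emotions, key=lambda e: e["Confidence"])
        rows.append((i, best["Type"], round(best["Confidence"], 1)))
    return rows

# Example response shaped like Rekognition's output (values invented)
sample = {
    "FaceDetails": [
        {"Emotions": [
            {"Type": "HAPPY", "Confidence": 96.7},
            {"Type": "CALM", "Confidence": 2.1},
        ]},
        {"Emotions": [
            {"Type": "CALM", "Confidence": 58.3},
            {"Type": "SAD", "Confidence": 31.0},
        ]},
    ]
}

print(top_emotions(sample))  # [(0, 'HAPPY', 96.7), (1, 'CALM', 58.3)]
```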

Facial recognition of five colleagues in an office

What kind of data do you collect to measure the impact of your research?

• This is a complex one for me as someone who studies technology. It’s easy, for example, to inflate an article’s download count by mentioning it to a class of people and having them all download it repeatedly.

  Citations can also be misleading. Google Scholar’s crawlers pick up a lot of master’s thesis citations. At Lund, for example, we publish master’s theses in a university repository, and these student papers can inflate citations. They are valuable, but not as valuable as a citation in a high-impact journal.

  So, I think the honest answer to your question is Google Scholar alerts, because then I can see which paper is citing mine and read exactly what it says. For me, that’s actually the most valuable. It’s not so much about numbers: impact is when readers spend a decent amount of time on the article, fighting it, criticizing it, or building on it. This is why conceptual arguments and theories are important, because you want to put something out into the field that people can discuss.

  When I look at impact, I want to see the qualitative discussion. Is my paper generating thoughts? Is it building knowledge? Are readers engaging with the conceptual argument, or putting my findings into their own analysis? Perhaps a reader takes on board the conceptual part of my article and uses it to frame their research design. That’s much higher impact than just citing me to justify their own analysis.

Did you know before you published that you could do so at no cost to yourself?

• Yes, that’s very well communicated. A few years ago it could be unclear whether the funding applied only to a quota of articles and whether the pot had been used up, but that’s mostly gone now, and all articles are automatically covered.

What do you think are the main benefits of library-funded OA research?

• Universities are measured and evaluated on the world stage, and published research makes them more competitive, so it’s money well spent. It would now be a big decision for me to move to a university without access to funded OA publishing.

In your opinion, what are the benefits of publishing research OA?

• Open access lowers the friction between landing on an article and reading it, and any web developer knows the power of that. We know from social media research that introducing even a small friction can have massive consequences.

  We saw this in the 2020 election, when Twitter introduced a feature that asked users to read an article before retweeting it. That small step reduced retweets and the spread of disinformation.

What would you say to researchers who might be nervous about publishing their work OA?

• Nervous about what?

  All researchers want to increase the visibility of their published articles, and with OA you benefit from a higher level of views early on, which keeps you visible in search rankings. If you’re not publishing open access, there’s a friction that lowers downloads and may ultimately filter you out of results in databases like Google Scholar. I know I’ve made a specialist sort of argument there, but all academics know that some papers fly and some don’t. Open access helps generate the immediate response that can lead to better traction in the long run.