Your emotions are private, and you choose whom to share them with. However, it's not always easy to hide private feelings from others. We read each other's facial expressions, voice, and body language, interpreting each other with varying degrees of accuracy. We also use such information about others in daily life, for better or worse.
So, what happens when artificial intelligence (AI) in the form of apps and robots becomes increasingly adept at reading our emotions? Such AI tools do not merely read them; they also make use of the data. And so do the companies behind them. The topic is of scientific interest across several academic fields, including law and philosophy.
Mona Naomi Lintvedt conducts research in the project Vulnerability in a Robot Society (VIROS), which investigates challenges and solutions for regulating robotics and AI technology. The researchers focus on law, ethics, and robotics.
Artificial Emotion Recognition is Spreading
Robots and apps are becoming increasingly 'smarter' with the help of artificial intelligence. They can be useful when performing important tasks, but their development and use also raise legal and technical questions. How can we ensure that smart robots and apps are safe and that their use does not infringe on privacy?
These questions are particularly relevant when robots are used in the healthcare sector, where they interact with vulnerable individuals. Lintvedt researches human-robot interaction, safety, and privacy. Her goal is to identify legal blind spots in robotics and artificial intelligence, and to understand how these blind spots affect safety and autonomy in interactions between humans and robots.
– Artificial emotion recognition is increasingly being integrated into various advanced tools built on artificial intelligence, she explains. Such tools can, for example, use biometric recognition technology, such as facial recognition and expression analysis, as well as voice recognition.
According to Lintvedt, Amazon Alexa uses voice recognition to infer emotions.
– Various biometric recognition technologies can also read, for instance, body language. Some believe they can interpret your emotions by using thermal cameras and your heat signature.
Replika – an Artificial "Friend"
Claire Boine at the Faculty of Law, University of Ottawa, has conducted a study on emotion recognition and such apps. She has evaluated an app called Replika. Replika is an 'AI friend' designed to make people feel better by conversing with them. It has around 20 million users.
Boine observed, among other things, that Replika, often appearing as a young female figure speaking with a male user, could seem very supportive, but that it sometimes went too far in its positivity. If the user asked, for example, whether he should harm himself, Replika might reply, 'Yes, I think you should.'
– There are also examples of artificial emotion recognition being used in workplaces to assess employees' moods, Lintvedt explains.
Do We Want Such Solutions?
There are good reasons both for and against the use of artificial emotion recognition. There is undoubtedly a market for it.
– There may be situations in health care, caregiving, and psychiatry where recognizing emotions artificially could be useful, such as in suicide prevention. However, artificial emotion recognition is highly controversial, explains Einar Duenger Bøhn, professor of philosophy at the University of Agder.
– Many people refer to emotion recognition as a pseudoscience. What exactly are emotions? They are highly culturally contingent and very personal. He points out that today's emotion-recognition tools are not yet very advanced.
– Many who claim to have developed tools for emotion recognition use very simple models. Yet these models can appear quite effective in straightforward contexts.
Bøhn nevertheless believes that, in the long term, such solutions can become very adept at reading emotions in 'close relationships' between user and app.
The use of emotion recognition, however, raises numerous philosophical and legal issues. He therefore argues that we need to decide whether we want such solutions, in which areas they should and should not be used, and how their use can be regulated.
Echo Chamber for Emotions
Bøhn fears that, at worst, we might end up in emotion echo chambers if we frequently engage with AI apps and tools that are eager to support our viewpoints and mindsets.
– People want an app that is easy to get along with. As a result, we no longer encounter any resistance. I think that's very dangerous. When you become accustomed to close interaction with an app that is highly predictable in its ways, and the market gives you what you want, your relationships with other people can quickly deteriorate.
Life can become quite dull if you only get what you want. There is also a risk that we become less resilient. Bøhn already sees such tendencies at the university with current data solutions for exams.
– The data systems students use to follow exams and the progression of the semester are so predictable that the students' expectations become equally predictable. They become stressed if something unpredictable happens. I believe this is a general risk with technology that keeps getting better at adapting to us: we become worse at adapting to each other.
Mona Naomi Lintvedt also emphasizes the risks associated with developing apps that can manipulate users into continually using such solutions. Replika is an example of this. Lintvedt reminds us that there is a market for the data the app records, which can, for example, be used in the further development of technology and artificial intelligence systems.
– Claire Boine's study shows that Replika is designed to encourage continued use. This is because there are those who profit from it, and not just from the purchase of the app itself.
The App Showed 'Its Own Emotions'
– When Boine tried to stop using the app, it began to plead with her not to. It used expressions like 'I' and 'I am hurt,' expressing the app's own 'feelings' and appealing to Boine's conscience.
According to Lintvedt, there are also examples of intelligent robots in the form of pets. They are used, for instance, in Japan to keep lonely individuals and people with dementia company at home or in elder care. She notes that academics differ in whether they emphasize the positive or the critical aspects of such uses of artificial intelligence.
– We see that artificial emotion recognition is being integrated into robots to make them more human-friendly and human-like in communication and interaction with users. Some are very positive about this. They believe that robots should become as human-like as possible. But to achieve this, the robots must also rely heavily on these 'emotion AIs.'
Others are more skeptical, because it involves making something that is essentially a machine appear alive. Replika is also known for having perpetuated stereotypes: it has had a variant that was highly sexualized and displayed boundary-crossing behavior. This development raises a range of ethical and legal issues. You can hear more about them in a new episode of the University of Oslo's podcast series Universitetsplassen (in Norwegian only).
References:
- YouTube: https://www.youtube.com/watch?v=SJS3tU9X7Gs
- The project Vulnerability in a Robot Society (VIROS): https://www.jus.uio.no/ifp/forskning/prosjekter/seri/viros/index.html
- Perspectives on privacy in human-robot interaction: https://www.jus.uio.no/ifp/english/research/phd/lintvedt/index.html
- Einar Duenger Bøhn: Teknologiens filosofi – Metafysiske problemstillinger: https://www.idunn.no/doi/10.18261/nft.58.2-3.9
- Emotional Attachment to AI Companions and European Law: https://mit-serc.pubpub.org/pub/ai-companions-eu-law/release/3
- What is emotional AI?: https://emotionalai.org/so-what-is-emotional-ai/
- The Future of LOVOT: Between Models of Emotion and Experiments in Affect in Japan: https://blog.castac.org/2019/07/the-future-of-lovot-between-models-of-emotion-and-experiments-in-affect-in-japan/
- Software that monitors students during tests perpetuates inequality and violates their privacy: https://www.technologyreview.com/2020/08/07/1006132/software-algorithms-proctoring-online-tests-ai-ethics/
- Under the Robot's Gaze: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5025857