RISE TAG MRG awarded at the 8th AAAI Conference on Human Computation and Crowdsourcing for their application “OpenTag”

The RISE Transparency in Algorithms Multidisciplinary Research Group (TAG MRG) has received the Best Demo Award at the 8th AAAI Conference on Human Computation and Crowdsourcing for its application "OpenTag".

The team behind OpenTag, a platform for understanding human perceptions of image tagging algorithms, consists of Kyriakos Kyriakou (TAG MRG research associate), Pınar Barlas (TAG MRG research associate), Dr. Styliani Kleanthous (TAG MRG research associate, CyCAT/OUC), and Assistant Professor Jahna Otterbacher (RISE TAG Research Group Leader, Faculty of Pure and Applied Sciences at OUC, Coordinator of CyCAT).

Image Tagging Algorithms (ITAs) are used extensively in our information ecosystem, from facilitating the retrieval of images on social platforms to learning about users and their preferences. However, audits of ITAs have demonstrated that they often exhibit socially biased behaviors, especially when analyzing images depicting people.
 
The RISE TAG MRG wanted to investigate people's perceptions of this issue. So, the team developed OpenTag, an online research platform for understanding human perceptions of image tagging algorithms.

RISE Interview: Questions on OpenTag
Answered by Kyriakos Kyriakou, TAG MRG



1. How big is the risk of bias and errors in Artificial Intelligence with regard to image tagging?

Image Tagging Algorithms exhibit a variety of socially biased behaviors, especially when analyzing images depicting people. For example, in previous research we conducted ("Fairness in Proprietary Image Tagging Algorithms: A Cross-Platform Audit on People Images", ICWSM 2019), we saw Clarifai, one of the ITAs most popular with both developers and researchers, assign attractiveness tags (e.g., "cute," "pretty," "sexy," "attractive," and "fine-looking") to images of people. In addition, we found that race is highly correlated with the degree to which a person's image receives attractiveness tags: in Clarifai's output, images of Black people were described with significantly fewer such tags than images of other social groups. Of course, we discovered much more about the behavior of ITAs, focusing especially on the major players: Google Vision, Amazon Rekognition, IBM Watson Visual Recognition, Microsoft Computer Vision, Clarifai, and Imagga. This is just one brief and simple example of a form of bias.
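
As a rough illustration of how such an audit can be run, the Python sketch below counts how often a fixed set of attractiveness-related tags appears in one ITA's output per demographic group. The `tag_image` wrapper and the labeled image set are hypothetical stand-ins, not the actual code or tag list from the ICWSM 2019 study.

```python
from collections import Counter

# Attractiveness-related tags quoted above; the full list used in the
# published audit may differ.
ATTRACTIVENESS_TAGS = {"cute", "pretty", "sexy", "attractive", "fine-looking"}

def attractiveness_rate(images, tag_image):
    """Share of images per group receiving at least one attractiveness tag.

    `images` is a list of (image_path, group_label) pairs; `tag_image` is
    a hypothetical wrapper around a single ITA (e.g., Clarifai) returning
    a list of lowercase tags for an image.
    """
    hits, totals = Counter(), Counter()
    for path, group in images:
        totals[group] += 1
        if set(tag_image(path)) & ATTRACTIVENESS_TAGS:
            hits[group] += 1
    return {group: hits[group] / totals[group] for group in totals}

# Systematic gaps between groups in these rates are the kind of
# disparity the cross-platform audit reported.
```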

If we consider that these services are often integrated into other applications and systems that we use in our daily lives, we can see how these biases may be perpetuated through their functionality. It gets even worse when these systems and applications afford opportunities to specific groups of people over others. This can also be described as unfair treatment of individual users of the system.

Using or integrating these services is often a quick, easy, and convenient task for developers. It saves precious time and money and speeds up the implementation of their system or app, since they avoid building an ITA from scratch. Unfortunately, developers don't necessarily know about these issues up front; they are not aware of the possible biases or stereotypes perpetuated by each ITA. Over the past few years we have seen a notable rise of these services, but in parallel, many biased system behaviors affecting people's lives have been reported in the media as scandals. Today, many systems and applications are still using these services as part of their processes without being able to address these issues, and that is quite concerning.
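
To make concrete how little code this integration takes, here is a minimal sketch calling one such service, Amazon Rekognition's detect_labels, via the official boto3 client. It assumes AWS credentials and region are already configured; the few lines shown are typically all a developer writes before the returned tags flow onward into their application, with nothing in the response flagging potential social biases.

```python
import boto3

# One client call is enough to get tags back from a commercial ITA.
client = boto3.client("rekognition")  # assumes AWS credentials/region are set up

with open("selfie.jpg", "rb") as f:  # any local image of a person
    response = client.detect_labels(
        Image={"Bytes": f.read()},
        MaxLabels=10,
        MinConfidence=70,
    )

for label in response["Labels"]:
    print(f'{label["Name"]}: {label["Confidence"]:.1f}%')
```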

2. How can diversity issues be addressed or overcome?

Addressing and even overcoming these issues is definitely a huge challenge. I believe addressing the issues first will let us see the general picture of what we are facing today and the overall magnitude of the problem. We have to understand that these issues exist and should, at the very least, be monitored and reported. ITAs should be audited frequently, both by the research community and by the developers who are about to use them, in order to avoid perpetuating possible biases. When auditing these services, we have to consider a human-in-the-loop approach to address diversity issues. In addition, we need to understand the user perspective because, as we found in previous work ("What Makes an Image Tagger Fair?", UMAP 2019), people evaluate the fairness of tagger behaviors in different ways.
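
One simple statistical building block for such recurring audits is a test of whether a given tag is distributed evenly across groups. The sketch below applies SciPy's chi-square test of independence to hypothetical, illustrative counts; a flagged disparity would then be escalated to human reviewers, the human in the loop.

```python
from scipy.stats import chi2_contingency

# Hypothetical audit counts: per group, how many images did / did not
# receive a given tag (e.g., "attractive") from one ITA.
#          tagged  not tagged
counts = [[34, 166],   # group A
          [12, 188],   # group B
          [29, 171]]   # group C

chi2, p_value, dof, _ = chi2_contingency(counts)
print(f"chi2={chi2:.2f}, p={p_value:.4f}")
# A small p-value suggests the tag rate differs across groups, so this
# tag/service pair should be queued for human review.
```
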

3. What does OpenTag hope to achieve? Why was it created?

We created OpenTag as an online research platform in an effort to study human perceptions of Image Tagging Algorithms. Basically, we want to investigate what people think and how they feel when their images are analyzed by these services. OpenTag lets people upload an image (e.g., a selfie depicting them), which is then analyzed by three popular ITAs: Amazon Rekognition, Google Vision, and Clarifai. At the end of the study, people can see how these services described their images and answer a quick survey.
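
In outline, a session on the platform can be pictured as follows. This is a hypothetical sketch of the flow just described, not OpenTag's actual code; the tagger wrappers and the survey callback are placeholders for the three providers' SDK calls and the questionnaire.

```python
def run_opentag_session(image_bytes, taggers, collect_survey):
    """Hypothetical outline of an OpenTag session: one uploaded image is
    sent to several ITAs, the outputs are shown side by side, and the
    participant answers a short survey about them.

    `taggers` maps a service name (e.g., "Amazon Rekognition",
    "Google Vision", "Clarifai") to a function returning a tag list;
    `collect_survey` gathers the participant's perception responses.
    """
    results = {name: tag(image_bytes) for name, tag in taggers.items()}
    for name, tags in results.items():
        print(f"{name}: {', '.join(tags)}")  # shown to the participant
    responses = collect_survey(results)       # perception questions
    return results, responses
```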

To conclude, the purpose of OpenTag is twofold:

  1. it serves as a research tool for understanding people's perceptions of ITA outputs, and
  2. it serves as an awareness tool that helps the general public understand the risks of using applications based on ITAs, by considering the results returned for their own images.