
Classifying Humans – A Data Annotation Workshop by KairUs

Workshop

Advances in computer vision have enabled the automation of classification, and our everyday lives are increasingly impacted by machines that classify humans. Automated classification, however, is neither neutral nor objective. AI-powered perception is trained on visual data, and those datasets always embed worldviews. Until recently, the curation and annotation of visual datasets for machine learning was treated as a minor step in model building, yet a body of artistic research has been influential in communicating how biased training sets lead to discriminatory classification. Biased machine vision is particularly harmful for those already marginalized by society. Classifying, however, is also a very human activity: data must be classified in some way to become useful. In machine vision this typically happens through image annotation. In this workshop we therefore explore dataset bias from the perspective of image annotation: how does a label get attached to an image?
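
As a minimal sketch of what "attaching a label to an image" can mean in practice, the Python snippet below builds a COCO-style annotation record, in which the image and its label are separate entries linked only by IDs that annotators fill in. The file name, the category "loitering", and the "annotator" field are illustrative assumptions, not details of any real dataset.

import json

# Sketch of a COCO-style record: the judgement call ("loitering") is
# stored as a category, and an annotation ties it to one image region.
dataset = {
    "categories": [{"id": 1, "name": "loitering"}],  # the worldview lives here
    "images": [{"id": 101, "file_name": "cam_07_frame_0421.jpg"}],
    "annotations": [
        {
            "id": 1,
            "image_id": 101,              # link to the image record above
            "category_id": 1,             # link to the category "loitering"
            "bbox": [312, 140, 64, 180],  # x, y, width, height in pixels
            "annotator": "worker_0042",   # hypothetical field: who made the call
        }
    ],
}

# Serializing the records yields the file a model is trained on; the
# annotator's judgement is now baked in as "ground truth".
print(json.dumps(dataset, indent=2))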

As a starting point for our explorations we take the artwork "Suspicious Behavior" – a speculative annotation tutorial – together with real-world visual datasets used to classify anomalous behavior in surveillance footage. We explore how these datasets are classified by embodying the rhythm of crowdsourced on-demand annotation, and we reverse engineer dataset bias by prompting AI image generators with the datasets' annotation instructions.

The hands-on exercises are intended to prompt critical discussion around dataset bias and AI ethics. Together we develop algorithmic literacy about how dataset classification shapes AI-powered perception. The workshop requires no prior technical expertise and is intended for anyone critically curious about advances in artificial intelligence.

Registration is recommended: discourse@elevate.at
