Don’t forget the social in sociotechnical
Venue
Newhaven Lecture Theatre, 13-16 South College Street
AND
Online
Description
The arrival of experimental Large Language Models (LLMs) in 2022 and their explosion in use over the last year might look like a technological wonder, but under the skin lie a huge number of issues likely to cause harm in society, all of which have been well known to researchers for decades. These problems have surfaced publicly in other computer systems over the years, and have been the focus of activism and regulation aimed at controlling them. In this session Eddie and Nadin explore these problems, why they exist, and what we might be able to do about them.
Friday 22 Mar 2024, 15:00 - 17:30 GMT, Newhaven Lecture Theatre, 13-16 South College Street.
Get the Zoom link from Eventbrite.
Dr Nadin KOKCIYAN, Lecturer in Artificial Intelligence, School of Informatics
Enabling Responsible AI with Humans
Outline: It is often the case that we focus on developing tools to automate decisions. This approach is not the best to take when we are talking about sociotechnical systems. Humans would like to be part of the solutions; they just don't have the means to achieve this goal. In my talk, I will introduce some concepts around Responsible AI for developing trustworthy autonomous systems. Drawing on our research, I will give some examples of how we include humans in the decision-making process.
Nadin is currently a Lecturer in Artificial Intelligence at the School of Informatics, University of Edinburgh, and a Senior Research Affiliate at the Centre for Technomoral Futures, Edinburgh Futures Institute.
She is the director of the Human-Centered AI Lab (CHAI Lab), a member of the Artificial Intelligence and its Applications Institute (AIAI), a member of the Security and Privacy group, and affiliated with the Technology Usability Lab In Privacy and Security (TULiPS). Nadin works on Multiagent Systems, Agreement Technologies (Argumentation and Negotiation), Privacy in Social Software, AI Ethics, Explainable AI, and Responsible AI.
The CHAI Lab applies state-of-the-art tools from artificial intelligence to improve the lives of users, taking a human-first approach and focusing on building usable technologies that solve real issues faced by users.
Website: https://homepages.inf.ed.ac.uk/nkokciya/
Eddie Ungless, PhD candidate in the School of Informatics
Measuring Bias is Pointless.
Outline: It has been consistently shown that existing bias measurement methods for natural language processing (NLP) technologies, like language models, have poor validity and reliability. That is to say, they don't measure what they claim to measure in a consistent way. Coupled with the fact that these models exist as part of sociotechnical systems in which stakeholders can introduce their own biases, this means measuring bias upstream and in the abstract seems a fruitless exercise. It is only within specific use contexts that we can understand the negative impact of these models and collaborate with those impacted to develop meaningful solutions. By the end of the talk I hope to have convinced you that measuring bias in the abstract is pointless, and that we should refocus our efforts on measuring harms in context.
Bio: Eddie L. Ungless is a final year PhD student in the Centre for Doctoral Training (CDT) in NLP, funded by the UKRI. He has an interdisciplinary background spanning linguistics, psychology, digital media strategy and computer science. His work addresses social bias in NLP technologies, wherein he champions an approach that builds on social science research to centre human experiences in our understanding of AI harms. You can find out more about his interests, along with links to his published work, on his blog: https://mxeddie.github.io/
Eddie is a member of the SMASH (Social Media Analysis and Support for Humanity) group in the School of Informatics. His focus is on how to detect predictive biases in NLP tools, how the public are impacted by and respond to these biases, and what we as a research community can do about them. His research focuses on intersectional identity theory, predictive bias, and algorithmic justice.
SMASH (Social Media Analysis and Support for Humanity) is a research group that brings together a range of researchers from the University of Edinburgh in order to build on our existing strengths in social media research. This research group focuses on mining structures and behaviours in social networks.