Master in Design for Responsible Artificial Intelligence
The Master in Design for Responsible Artificial Intelligence is a part-time, low-residency programme for people like you who are interested in developing skills in creative research, context analysis, critical thinking and storytelling, and strategic decision-making. You will investigate the multiple ways in which AI systems are impacting our daily lives, and engage with complex questions emerging from wider issue areas related to digital technologies in sustainability, ethics, media and social justice, from intersectional and transdisciplinary perspectives.
*Image by Alan Warburton / © BBC / Better Images of AI / Plant / CC-BY 4.0
- Andres Colmenares
- 60 ECTS credits (400 hours)
- September – July
- Part-time / low-residency mode
This master is offered in low-residency mode and runs for 40 weeks over 10 months. You will be expected to commit 10–12 hours per week to study, which includes teaching time and independent study. The programme is delivered as a blended part-time/remote learning journey, online and through three in-person sessions in Barcelona: September (kick-off week), February–March (10 weeks) and June (final project week).
The Master in Design for Responsible AI is backed by Elisava, Barcelona School of Design and Engineering, an institution with 60 years of experience sharing knowledge to design and transform the world. The school develops projects to generate and transfer knowledge, address present and future challenges and foster change.
Elisava is a space to become a professional with the skills needed to design products, services, and environments to create a more environmentally responsible, inclusive, and community-focused society.
Co-founder of IAM
Andres Colmenares is a strategist, curatorial designer and creative foresight consultant who has led and developed partnerships with organizations such as NESTA, Tate, Red Bull, the Centre for Investigative Journalism, WeTransfer, the BBC, SPACE10 (IKEA’s research and design lab) and the University of the Arts London. He is also co-director of The Billion Seconds Institute, a lifelong learning initiative to reimagine the digital economy, and since 2015 he has organized IAM Weekend, the annual conference in Barcelona for creative professionals looking to collectively rethink the futures of the internet.
Lead facilitator of Logic School
Xiaowei R. Wang is an artist, writer, organizer and coder. Their collaborative project FLOAT Beijing created air quality-sensing kites to challenge censorship and was an Index Design Awards finalist. Other projects have been featured by the New York Times, BBC, CNN, VICE and elsewhere. Their most recent project, The Future of Memory, was a recipient of the Mozilla Creative Media Award.
Founder of Convocation Design + Research
Caroline Sinders is a machine-learning-design researcher and artist. For the past few years, she has been examining the intersections of technology’s impact on society, interface design, artificial intelligence, abuse, and politics in digital, conversational spaces. Sinders holds a Master’s from New York University’s Interactive Telecommunications Program.
Sustainable Internet Lead at the Mozilla Foundation
Michelle Thorne is interested in climate justice and a fossil-free internet. As Sustainable Internet Lead at the Mozilla Foundation, Michelle directs research initiatives in Mozilla’s Sustainability Program and a PhD program on the Open Design of Trusted Things (OpenDoTT) with Northumbria University. She is a senior advisor to the Green Web Foundation and its Green Web Fellowship program.
Raziye Buse Çetin
Co-founder of the AI research, advocacy and art platform Dreaming Beyond AI
Buse Çetin is an AI researcher, consultant and creative. Her work revolves around ethics, impact, and governance of AI systems and it is grounded in intersectional feminism. Buse is the policy and advocacy lead for Icarus Salon and an advisor to the Better Images of AI project.
Co-founding member of art group IOCOSE
Filippo is a designer and artist whose work sits between critical speculation and responsible innovation. His practice focuses on the ethical, political, environmental and economic implications of emerging technologies, and has been featured in Wired, Vice, The Guardian, Designboom, Neural and El País. Previously he worked at the BBC as Creative Director and UX Principal, growing practices in ethical design and influencing strategies for the responsible adoption of new technologies such as machine learning.
Martín Pérez Comisso
Researcher in Socio-Technical Systems
Martín Pérez Comisso is a socio-technical scholar specializing in Science and Technology Studies. In parallel, he has also been a chemist, a civic organizer, and a teacher. He is a passionate thinker dedicated to empowering people (such as teachers, citizens, and decision-makers) to navigate concepts and methods related to technology, increasing awareness and understanding of their technological realities with a transdisciplinary toolset.
Director of the Master’s Degree in Design and Management of User Experience and Digital Services
Dr. Guersenzvaig has been the executive director of the Design Observatory of the FAD (Spanish initials for Promotion of Arts and Design) and a board member of the ADG-FAD. Currently, he combines his work as a professor with professional activities as an independent consultant in the field of user experience design and service design.
Co-founder of oio
Simone is a product and interaction designer. He collaborates with companies such as Google, IKEA and Dubai’s Museum of the Future. His work explores the implications of living and collaborating with other, not-so-human, intelligences. He has been recognized by the Red Dot Design Award, Core77 and the Interaction Awards, and his work has been exhibited in galleries and museums such as the Vitra Design Museum, the Triennale Museum in Milan and MAK Vienna.
Founder of AlxDesign
Nadia is a designer & researcher with a focus on AI/ML, data, tech, (digital) culture and creativity. She is currently working as Head of Creative Technology at DEPT, holds an MA in Data-Driven Design, and continues to work on freelance and self-initiated projects to challenge how we design, relate to, and interact with technology. Over the past 10 years, she’s worked freelance across a wide range of roles and industries with organizations such as Hyper Island, Pi Campus, Forbes, UN, AWWWARDS, Bit, DECODED, MOBGEN | Accenture Interactive, ICO, and more.
Digital Ethics Specialist at IKEA
Abdo is a data science practitioner, activist and poet. His practice is multifaceted and revolves around decolonial computing and bridging critical theory with the critical practice of data. He is a co-founder of the Landing Space Project and the Atlas of Algorithmic (In)equality. He engages with questions related to internet geographies and socio-technical utopias/dystopias, as well as exploring play-as-resistance.
Artificial Intelligence (AI) is considered one of the most important technical developments of our times. Since its origins in the late 1950s as a scientific discipline that aimed to simulate different forms of intelligence using machines, the term AI has been used equally by computer scientists, new media artists, tech journalists, investors, politicians and science fiction writers to refer in multiple ways to a fascinating speculation: that cognitive functions such as learning, reasoning, perception or even creativity can be described and modelled with such accuracy that it would be possible to reproduce them using computers.
During the last decade, the exponential growth of computational power, planetary-scale data collection technologies and a data-driven media culture has enabled a wide range of successful applications of AI systems in areas such as natural language comprehension and image and speech recognition. This has accelerated the adoption of these systems across industries for process, task and decision automation at unprecedented scales, giving rise to crucial interrelated ethical, societal and environmental challenges in the public and private sectors, which we will explore in depth across the programme.
In recent years, different research institutions, governments and private companies around the globe have developed robust research projects to design principles, guidelines, methodologies and tools for ethical, accountable and trustworthy AI systems and practices, often referred to as the emerging field of ‘Responsible AI’.
Responsible AI practices aim to understand AI systems as socio-technical systems, studying the different impacts they have on society and the planet, while designing theories, frameworks, methods and other tools for the ethical, legal and sustainable development, deployment, governance and usage of AI systems.
There is a growing need in the public and private sectors to provide tech workers from technical and non-technical backgrounds with a transversal set of capabilities and mindsets. Collectively, we need to learn how to transform principles (ways of thinking) such as transparency, justice and fairness, non-maleficence, responsibility and privacy into action (ways of doing): assessing and mitigating risks, increasing digital media literacy in the wider public, and learning the best ways humanity can use these powerful technologies to address the socio-ecological implications of the environmental emergency and make the UN’s Sustainable Development Goals for 2030 possible.
The programme is designed for students to work alongside faculty and other guest collaborators. Together you will be organised as a creative research collective and a critical design media lab around a self-assigned theme related to the Master’s core topics, which will run across the three terms of the master as a way to practise and reflect on the power of collective intelligence.
Through group discussions, creative foresight experiments, collective decision-making and reflection exercises, you will develop rigorous methods of collecting and analysing information, and alternative ways of understanding, situating and exchanging knowledge in collaboration with your peers.
Through lectures, tutorials, debates and workshops, you will explore and investigate the social, ethical, cultural and environmental impacts of AI systems, guided by specialists and researchers working in different sectors and regions.
We will learn how to develop a critical understanding of the socio-economic, socio-technical, and socio-ecological aspects of AI systems, alongside discussing the history, philosophy and ethics of AI, providing students with a broad set of perspectives on the key challenges and opportunities raised by AI systems in relation to the environmental emergency.
We will explore and interrogate, alongside specialists working with AI systems in diverse fields, the different dimensions and narratives of Intelligence, Artificial Intelligence, Automation and Automated Decision-Making Systems (ADMS), how these systems and technologies are being used by organisations across the public and private sector, and how new media artists, journalists and creative technologists are experimenting with these technologies.
We will help you develop an advanced and multidimensional understanding of AI, from its origins as a scientific discipline and its philosophical connotations to how AI systems work and are being used today in a wide range of contexts. Through interactive sessions you will explore the state of the art of AI systems to understand their true capabilities and limitations, enabling you to communicate knowledge in advanced and explainable terms across technical and non-technical disciplines.
The programme aims to help you develop an advanced and critical understanding of the legal, ethical and social dimensions of the theories shaping the emerging field of Responsible AI, and to engage in a critical analysis of the current landscape of AI policy initiatives around the world, with a strong focus on the European context.
We will engage in a critical study, analysis and discussion of the foundations of Responsible AI, the evolution of its legal dimension, the different principles and guidelines for AI ethics in the public and private sectors, and the requirements for Trustworthy AI according to the High-Level Expert Group on AI set up by the European Commission.
We will investigate and analyse the practical dimensions of Responsible AI, using different design mindsets to learn how to implement the ethical guidelines and principles, discussing and testing different impact assessment methods, technical and non-technical approaches, and tools to make AI explainable to the general public.
You will become competent in identifying and critically analysing the impacts and risks of AI systems in a wide scope of contexts, and will develop a broad toolkit, from analytical and creative skills to design methodologies and strategies, for the implementation of Responsible AI guidelines.
→ Ethics Guidelines for Trustworthy AI by High-Level Expert Group on AI (EU Commission 2019).
→ Dreaming Beyond AI (2021) by Nushin Yazdani and Buse Çetin.
→ Nooscope.ai (2020) by Vladan Joler and Matteo Pasquinelli.
→ Anatomy of AI (2018) by Dr. Kate Crawford and Vladan Joler.
→ Monologue of the Algorithm | How Facebook turns users data into its profit (2017) by SHARE Lab and Panoptykon Foundation.
→ Branch Magazine (2019-) by Climateaction.tech.
→ The Roadmap to Sustainable Digital Infrastructure by 2030 by Sustainable Digital Infrastructure Alliance.
→ Computer Scientist Explains Machine Learning in 5 Levels of Difficulty | WIRED (2021).
→ Tools for a Trustworthy AI by OECD (2021).
→ Creative AI Lab by Serpentine Galleries (2020).
→ AI, Ain’t I A Woman? by Joy Buolamwini (2018).
→ Are We Automating Racism? by Vox (2021).
→ Is AI Biased? by BBC (2021).
→ Gender Bias in AI and Machine Learning Systems by Data Demystified (2021).