Welcome to another chapter of coexist, a place of reflections and interpretation of the coexistence of design, technology, humans, and ethics.
If you’re reading this and haven’t yet subscribed, join down below :)
Warning: this one is going to be a long one :)
In the previous chapter, I shared some definitions of data and the existing data types. Given how data is used and relied upon today, it has become the -fuel- of all experiences. The intangible becomes tangible, reflected in the importance, complexity, and value of data. It also showed that technology should be the broadest framework coexisting in the customer journeys we deliver today. (read the previous chapter here)
Data is valuable and powerful but also very complex. The complexity of data stems from:
The considerable amount of existing data
The process of managing and engineering data
The reliability and origin of data
The accurate and useful outcome
The hybridization of experiences (more sources of data, processes, machinery)
The impact on the environment
The power, ownership, and privacy of data
The unintended and intended bias of AI and ML
The unexpected behaviors of society being driven or influenced by data
The success, failure, or harm you create with data involves many layers: the process of gathering data, the design of algorithms, who owns and manipulates the algorithm, people's behaviors, the language and interaction between interfaces and people, and also the ethical principles that intrinsically coexist in the creation and evolution of the process, and in the outcome.
Welcome to coexist | chapter 2:
The vast word Ethics
In this journey of trying to understand and gather technology knowledge, it is inevitable to question, to dive deep, to be curious, to understand some dynamics, and to confront how technology is materializing and spreading unethical solutions, experiences, and products.
Before going to the subject, I want to re-share with you all:
Why am I talking about technology?
Because the 4th industrial revolution is here to disrupt experiences and create new opportunities; most importantly, mainstream technologies are changing how we behave, how we interact, and what we expect.
Why am I talking about data and algorithms?
Because all the opportunities and harm technology brings to our context rely on these two elements.
Why am I going to talk about ethics and justice?
Because we shouldn't allow technology to become one of the most significant threats to humanity and the planet.
Why is it all connected to our -being- designers?
Because we are advocates for humans and orchestrators of experiences, we must have a relevant role in this new paradigm.
Let's start by understanding how AI and ML work:
Artificial Intelligence (AI): AI refers to intelligent behavior by machines. The source of that intelligence is data sets.
Machine Learning (ML): ML refers to mathematical models based on data, known as "training data," in order to make predictions or decisions.
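To make the "training data, then predictions" loop concrete, here is a minimal toy sketch in Python. The data and labels are invented: a 1-nearest-neighbor model that memorizes labeled examples and predicts the label of whichever example is closest to a new input.

```python
# Toy illustration of the ML definition above: the model "learns" from
# labeled training data, then predicts labels for inputs it has not seen.
# This is a 1-nearest-neighbor sketch, not a production model.

def fit(training_data):
    """Training here is trivial: memorize the labeled examples."""
    return list(training_data)

def predict(model, x):
    """Return the label of the training example closest to x."""
    closest = min(model, key=lambda example: abs(example[0] - x))
    return closest[1]

# Invented training data: (hours of daylight, season label)
model = fit([(8, "winter"), (9, "winter"), (14, "summer"), (15, "summer")])

print(predict(model, 8.5))   # near the winter examples
print(predict(model, 14.5))  # near the summer examples
```

The same pattern scales up: real systems replace the distance function with statistical models trained on millions of examples, but the dependency on the training data remains, and that dependency is exactly where bias enters.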
The outcomes of AI & ML models could be:
Funnels of UI (registration and login of dedicated digital environments)
Clusters for personalized content (ads, social feeds, offers)
Automation of physical/digital experiences (robots, autonomous cars, facial recognition access, virtual assistants, etc.)
Content creation from AI and ML (editorial content, images)
So yes, most of our interactions are mediated by AI and ML. How can we compose a model that helps us understand the stages of data usage and the AI and ML process? And in which stage could AI & ML cause harm?
Let's try to visualize it in these cases.
Note: the cases are not that new, but they are the ones that triggered my curiosity on this subject and helped me define the chapter's outline and content.
The big software, the significant failures - Facial Recognition Discrimination
Dr. Joy Buolamwini, Founder of the Algorithmic Justice League, came face to face with discrimination. From a machine. While working on a graduate school project, facial analysis software struggled to detect her face. She suspected this was more than a technical blunder, but rather than surrender, she responded with curiosity. Her MIT peers with lighter skin color didn’t have the same issues, so Joy tried drawing a face on the palm of her hand. The machine recognized it immediately. Still, no luck with her real face, so she had to finish her project coding with a white mask over her face in order to be detected. (Buolamwini, Algorithmic Justice League)
What happens here?
*The software was probably biased from its creation, and the algorithm was never evolved with new input data so the software could respond to a broader population.
*The projects and results from Joy Buolamwini opened a considerable discussion on how the available and commercial software services are designed, maintained, and evolved. Interestingly, many of these software services are available on the market and used by many.
*AI systems can actually perpetuate racism and other forms of discrimination. When and how can we curate the process and output?
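One practical response to cases like this is to audit models per demographic group instead of with a single aggregate metric, in the spirit of Buolamwini's Gender Shades work. The sketch below is hypothetical: the records are invented, and detection is simplified to a boolean.

```python
# Hypothetical bias audit: compute a model's error rate separately per
# demographic group instead of reporting one aggregate accuracy.
# The (group, detected) records below are invented for illustration.
from collections import defaultdict

def error_rates_by_group(results):
    """results: iterable of (group, detected) pairs -> {group: error rate}."""
    totals, errors = defaultdict(int), defaultdict(int)
    for group, detected in results:
        totals[group] += 1
        if not detected:
            errors[group] += 1
    return {group: errors[group] / totals[group] for group in totals}

results = [
    ("lighter-skinned", True), ("lighter-skinned", True),
    ("lighter-skinned", True), ("lighter-skinned", False),
    ("darker-skinned", False), ("darker-skinned", False),
    ("darker-skinned", True), ("darker-skinned", False),
]
rates = error_rates_by_group(results)
print(rates)  # {'lighter-skinned': 0.25, 'darker-skinned': 0.75}
```

Note that the aggregate accuracy here is 50%, a single number that completely hides the disparity the per-group breakdown makes visible.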
Influencing results and beliefs of a society - Cambridge Analytica
The data analytics firm Cambridge Analytica, which worked with Donald Trump's election team and the winning Brexit campaign, harvested millions of Facebook profiles of US voters in one of the tech giant's biggest-ever data breaches, and used them to build a powerful software program to predict and influence choices at the ballot box.
The data was collected through an app called thisisyourdigitallife, built by academic Aleksandr Kogan, separately from his work at Cambridge University. Through his company Global Science Research (GSR), in collaboration with Cambridge Analytica, hundreds of thousands of users were paid to take a personality test and agreed to have their data collected for academic use. (The Guardian, 2018)
What happens here?
*AI was the tool to influence the decision-making of a society. Powerful, right?
*From the different clusters of proto-personas, they targeted each group with tailored content, creating bubbles of disinformation.
*This case demonstrated the absence of a concrete technological code of ethics.
Intentional discrimination - Deliveroo
An algorithm used by the popular European food delivery app Deliveroo to rank and offer shifts to riders is discriminatory. The particular algorithm examined by the court was allegedly used to determine the "reliability" of a rider. According to the ordinance, if a rider failed to cancel a shift pre-booked through the app at least 24 hours before its start, their "reliability index" would be negatively affected. Since riders deemed more reliable by the algorithm were the first to be offered shifts in busier time blocks, this effectively meant that riders who can't make their shifts—even if it's because of a serious emergency or illness—would have fewer job opportunities in the future. (Vice, 2021)
What happens here?
*Overriding human rights and needs to achieve business goals.
*The power over who gets to decide the modality and the principles behind our daily digital services.
*These kinds of codes are called "black boxes"; only a few people have access to understand and modify the code.
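The rule described in the ordinance can be sketched in a few lines. This is a hypothetical reconstruction, not Deliveroo's actual code: the rider names, scores, and penalty value are invented; only the logic (late cancellations lower the index, and shifts are offered in score order) follows the description above.

```python
# Hypothetical reconstruction of the "reliability index" logic described
# in the ordinance. The penalty value and rider data are invented.

LATE_CANCEL_PENALTY = 10

def update_reliability(score, hours_before_shift):
    """Penalize cancellations made less than 24 hours before the shift.
    Note what the rule ignores: an emergency counts the same as a no-show."""
    if hours_before_shift < 24:
        return max(0, score - LATE_CANCEL_PENALTY)
    return score

def offer_order(riders):
    """Riders with a higher reliability index are offered busy shifts first."""
    return sorted(riders, key=lambda rider: rider[1], reverse=True)

# Ana cancels 30 hours ahead; Ben cancels 2 hours ahead due to illness.
riders = [("ana", update_reliability(90, 30)),
          ("ben", update_reliability(90, 2))]
print(offer_order(riders))  # Ana now outranks Ben for future shifts
```

The discrimination is not in any single line; it is in what the rule refuses to model, namely the reason for the cancellation.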
The little detail - Amazon
Amazon worked on building an artificial-intelligence tool to help with hiring, but the plans backfired when the company discovered the system discriminated against women. The company created 500 computer models to trawl through past candidates' résumés and pick up on about 50,000 key terms. The system would crawl the web to recommend candidates.
A year later, however, the engineers reportedly noticed something troubling about their engine — it didn't like women. This was apparently because the AI combed through predominantly male résumés submitted to Amazon over a 10-year period to accrue data about whom to hire.
Consequently, the AI concluded that men were preferable. It reportedly downgraded résumés containing the words "women's" and filtered out candidates who had attended two women-only colleges. (Business Insider, 2018)
What happens here?
*Again, a system built with bias scaling discrimination.
*The key actors: only engineers? Who should be involved in such a process?
*After the flop of the system, Amazon canceled this recruiting tool. There are many other companies using something similar.
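The mechanism the reporting describes, learning term weights from a skewed hiring history, can be illustrated with a toy scorer. This is a hypothetical sketch, not Amazon's system: the résumé terms and history below are invented. The point is that with no explicit rule about gender anywhere in the code, the skew in the training data alone teaches the model to penalize the word "women's".

```python
# Hypothetical sketch: a scorer that learns term weights from historical
# hiring decisions. The history below is invented and deliberately skewed.
from collections import defaultdict

def learn_term_scores(history):
    """history: list of (terms, hired) pairs from past decisions.
    Score each term by the hire rate of resumes that contained it."""
    seen, hired = defaultdict(int), defaultdict(int)
    for terms, was_hired in history:
        for term in terms:
            seen[term] += 1
            hired[term] += was_hired
    return {term: hired[term] / seen[term] for term in seen}

# Skewed history: the past hires were overwhelmingly male resumes.
history = [
    ({"engineering", "robotics"}, 1),
    ({"engineering", "robotics"}, 1),
    ({"engineering", "robotics", "women's"}, 0),
    ({"engineering", "women's"}, 0),
]
scores = learn_term_scores(history)
print(scores["women's"])      # 0.0 -- the model "learned" to reject it
print(scores["engineering"])  # 0.5
```

Nobody coded "downgrade women"; the bias was absorbed from the history the model was trained on, which is why auditing the training data matters as much as auditing the code.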
When I discovered those cases, it was hard to understand where to begin. So, I went to understand the definition, objectives, frameworks, and theories of ethics. Yes, I got lost and went in circles for a bit. Reading and matching information, I also got into an interdisciplinary research field called Technoethics; researchers in this field are trying to create definitions and lenses explicitly related to ethics and new technologies. In 2021, I discovered a very inspiring book called Design Justice by Sasha Costanza-Chock.
So yes, you guessed right. I'm going to take you with me into these three concepts. I will share the main definitions, objectives, key concepts, and what I've learned.
Ethics
Ethics or moral philosophy is a branch of philosophy that "involves systematizing, defending, and recommending concepts of right and wrong behavior.”
Ethics is a system of moral principles. Ethics is concerned with what is good for individuals and society and is also described as moral philosophy.
The term ethics can also refer to rules and guidelines that establish what conduct is right and wrong for individuals and groups. (Exploring Ethics, Cahn)
Lenses
Virtue Ethics: “I have these traits, so I act in alignment with them” #Actions #Behaviour
Deontology: “I have these duties so I act according to them” #Guidelines #Actions
Consequentialism: “I analysed the potential results, so here’s what I’ll do” #Consequences #Actions
Insights, questions, learnings.
*The term ethics often describes the investigations and analysis of moral principles.
*Ethics can refer to our personal beliefs, the study of moral philosophy, or rules of conduct. It goes through different dimensions and levels of perception and application.
*Ethics is a vast world, most of the time not easy to understand, interpret, and apply. We need to partner with experts to redirect those concepts into strategic and pragmatic design applications.
Technoethics
Technoethics is an interdisciplinary research area that draws on theories and methods from multiple knowledge domains to provide insights on ethical dimensions of technological systems and practices for advancing a technological society.
Technoethics views technology and ethics as socially embedded enterprises and focuses on discovering the ethical uses for technology, protecting against the misuse of technology, and devising common principles to guide new advances in technological development and application to benefit society. (Wikipedia)
Types of Technoethics
Access: enable access to empowering technology as a right
Privacy: protection of privacy rights
Transparency: informing how technology works and what its intentions are
Biases: curate the information and algorithms that could create any inequality, disinformation, etc.
Behavior: curate the harm technology could produce within the society
Environment: curate the creation of technology that does not harm the environment
Insights, questions, learnings.
*The ethics of technology spans how it is produced and coded, through to the impact, influence, and rights it has on individuals, communities, and society.
*The ethics of technology should respond to and anticipate subjects that evolve along with mainstream technologies.
*The ethics of technology is developed by philosophers and theorists. How can we expand that knowledge and those frameworks to the people producing, coding, and designing technologies in agile ways?
Design Justice
Design justice rethinks design processes, centers people who are normally marginalized by design, and uses collaborative, creative practices to address the deepest challenges our communities face.
Design justice is a framework for analysis of how design distributes benefits and burdens between various groups of people. Design justice focuses explicitly on the ways that design reproduces and/or challenges the matrix of domination (white supremacy, heteropatriarchy, capitalism, ableism, settler colonialism, and other forms of structural inequality). Design justice is also a growing community of practice that aims to ensure a more equitable distribution of design's benefits and burdens; meaningful participation in design decisions; and recognition of community-based, indigenous, and diasporic design traditions, knowledge, and practices. (Design Justice, Costanza-Chock)
Key concepts
Bias is a disproportionate weight in favor of or against an idea or thing, usually in a way that is closed-minded, prejudicial, or unfair. Biases can be innate or learned. People may develop biases for or against an individual, a group, or a belief.
Algorithmic Bias is a phenomenon that occurs when an algorithm produces results that are systemically prejudiced due to erroneous assumptions in the machine learning process. The types of algorithmic bias are:
Preexisting Bias, bias that exists in broader society, culture, and/or institutions and is reproduced in the computer system, either intentionally or unintentionally, by systems developers.
Technical Bias, bias that emerges through limitations of a program, computational power, its design, or other constraint on the system
Emergent Bias, a system that may not have been biased given its original context of use or original user base comes to exhibit bias when the context shifts or when new users arrive. (Value Sensitive Design, Friedman)
Matrix of domination is a sociological paradigm that explains issues of oppression that deal with race, class, and gender, which, though recognized as different social classifications, are all interconnected. Characteristics such as race, age, and sex may intersectionally affect an individual in extremely different ways, in such simple cases as varying geography, socioeconomic status, or simply throughout time. (Wikipedia and Design Justice, Costanza-Chock)
Intersectionality is an analytical framework for understanding how aspects of a person's social and political identities combine to create different modes of discrimination and privilege. Intersectionality identifies multiple factors of advantage and disadvantage. Examples of these factors include gender, caste, sex, race, ethnicity, class, sexuality, religion, disability, weight, physical appearance, and height. These intersecting and overlapping social identities may be both empowering and oppressing. (Wikipedia and Design Justice, Costanza-Chock)
Insights, questions, learnings.
*As designers, we must deeply unpack subjects, concepts, and definitions of social justice. We need to gather literacy on these subjects to comprehend better how we can bring value and justice.
*Intersectionality is a systemic framework. Design justice focuses on thinking, assessing, and testing with a systemic approach that considers the different kinds of intersections between people's identities and characteristics. This is another layer added to the systemic design practice and approach.
*The levels of collaboration in design need to evolve. The level of empathy required is not the same as we have known and applied until today. We need to understand how design becomes helpful in new contexts and communities. We need to create new collaboration mechanisms that empower the people of these communities to become designers together with us. This principle goes along with the approach described by Ezio Manzini in Design, When Everybody Designs. We will go deeper in the following chapters.
How can our design process evolve?
Designers advocate for human values, principles, and rights. To do so, we use empathy as the primary approach. We apply empathy through human/user-centered design: when we assess people's behaviors, expectations, and needs, when we co-design with the stakeholders involved, when we test our prototypes, interfaces, etc.
In our process, we travel between strategy, design, and output. Strategy is usually the first part of the project, where we understand the context, business goals, trends, and users' needs and behaviors. In this step, we set the direction either for a specific product or for more complex ecosystems such as customer experiences, where a CX could be the design and development of interconnected touchpoints or services. Design is about bringing that vision into something concrete and usable; it could be a service, experience, or product. In this phase, we co-design with the main stakeholders and perform the first tests of our idea prototype. Output is about development and continuous testing: testing to understand whether our product idea is functional, but also testing in the market to gather insights on the main KPIs we framed in the strategy and design phases.
What if we break down these three macro phases and understand how we can make design more ethical and just when it involves mainstream technologies?
Strategy
Link users' expectations and behaviors with the technological value you aim to bring. For example, if we are talking about hyper-personalization, understand what it means for both the organization and the final user. Try to identify a win-win for both actors, but also identify the threats and constraints of the system. Map the types of data involved. This will give you a bigger picture of the product strategy you are developing. You will need to collaborate with other actors (data experts, IT, privacy experts).
Add clear objectives of the technological experience you want to achieve to your design strategy. Try to understand the mechanisms and the data sets needed and where to gather the data.
Map all the users that need to be included in your proposition and understand why. We have always included a broad portfolio of users due to business goals. We must open the spectrum to add justice to our designs in this case.
Assess bias that might impact your design. Involve people from different communities and sectors. Understand what specific dimensions you want to tackle with your design, but also have a broader view of how you can guarantee a product or experience that includes as many people as possible.
Design
Co-design with the communities that you include in your proposition. Understand how you can have a more active role in their context and how the design process can be co-guided equally between you and them. This will allow deeper empathy and more effective results. We will have a chapter dedicated to new ways of collaborating to tackle justice in the design process.
Declare the data set needed in your design, and share the mechanisms of interaction and the data that you would like to create. This will allow you to influence the creation and design of the algorithm.
Output
Co-assess algorithmic bias and test the interaction of your product or service, taking into account the principles of the matrix of domination and intersectionality.
Curate your product, experience, or service continuously to intercept bias and improve your design iteratively from a justice and ethics point of view.
These are my first reflections on how we can combine design, technology, and ethics more practically. The discussion remains open; I will continuously research and write about this, and I would appreciate feedback :)
See you in our next chapter, and thank you for reading.
Cheers,
Marihum
PS: If you want to be part of the discussion, sign up to coexist newsletter. If you want to debate about those topics, drop me a line.