Taking a broader view on ethics and risks in the age of generative AI
What AI leaders can do today to assess and mitigate risks effectively
Despite guidelines, frameworks, and policies, implementing AI risk management and ethics remains highly individualized and subjective. Even though both have become more relevant over the past five to six years, the definition and adoption of related policies have been driven by individual companies, at their own scale and based on their own interpretations of ethics and risk. As AI leaders in business field more and more questions from their senior leadership about where and when to integrate generative AI, AI risks and ethics are elevated to a new level.
» Watch the latest episodes of “What’s the BUZZ?” on YouTube or listen to it wherever you get your podcasts. «
From inferring to generating information
Over the last few years, companies have defined guidelines for AI ethics and risks and incorporated them into their development processes. But both building AI models and building them ethically have been scoped to a single organization. AI teams have largely worked with the data they know and with algorithms that infer information, for example: “Is this an invoice? Is this a contract?” They work with their company’s data and are involved across a range of data science tasks, from data preparation to model building, bias detection, and bias removal. Generative AI ushers in a new era in which generating information and transforming it between different types of media (text-to-text, text-to-image, audio-to-text, etc.) are the dominant value proposition. As generative AI becomes more popular, data and transparency are going to change.
Especially if the underlying models are based on proprietary technology, AI leaders are reintroducing a black box into their products: they cannot reproduce how an AI model creates an output. Foundation models, a category of large AI models that power the latest AI-enabled applications such as ChatGPT, are a prime example. These models have been trained on vast amounts of data, for example by scraping the internet. They require minimal fine-tuning by experts and can be used for many different purposes. Hence, they represent a broad foundation upon which AI teams can build additional scenarios. However, exactly what data has been used to train these foundation models, how that data has been sourced and prepared, and to what extent it contains bias remains hidden from the customers looking to incorporate them into their AI products.
The risks of foundation models
One of the most popular kinds of foundation models is the so-called large language model (LLM). These models generate text based on written user input: blog posts, web copy, task lists, meeting agendas, interview questions, summaries, code, text in the style of well-known writers or artists; the list goes on. In essence, these models predict the word that is most likely to come next in a sentence. Although the underlying technology itself is proven, its use at a broader scale and its ease of use for anybody are new. Hence, the following risks have surfaced in just the last few weeks:
Breaking the model (Names of Reddit users counting to infinity in a subreddit)
Circumventing safety precautions (Jailbreaking ethics and safety standards)
Reconstructing training data (Extracting training data from Diffusion models)
Violating intellectual property rights (Creators need to pressure the courts, the markets and regulators)
Obtaining (biased) training data from indiscriminate sources (The radicalization risks of GPT-3 and advanced neural language models)
Obtaining training data through legally non-compliant methods (Getty alleging that Stability AI misused copyrighted images)
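The next-word prediction described above can be sketched with a toy example. This is a hypothetical illustration, not how production LLMs are built: real models use neural networks with billions of parameters, while this sketch simply counts which word follows which in a tiny corpus.

```python
from collections import Counter, defaultdict

# Tiny example corpus; a real LLM is trained on vast amounts of scraped text.
corpus = (
    "the model predicts the next word "
    "the model generates the next sentence"
).split()

# Count how often each word follows each other word (a toy bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the word most often seen after `word` in the corpus."""
    return following[word].most_common(1)[0][0]

print(predict_next("next"))  # the most frequent successor of "next"
```

Even this crude version shows the core idea: generation is repeated prediction of a likely continuation, which is also why the output reflects whatever patterns, and biases, exist in the training data.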
Despite the current hype in the market and the pressure to innovate, AI leaders should weigh the benefits and risks of using such foundation models in business and identify the types of scenarios in which the current risks can be mitigated most easily.
What AI leaders can do now
AI risks have historically had a local focus: One company creating an AI model based on data they have access to — typically their own data. As companies consider introducing generative AI into their products, AI leaders need to consider risks more broadly. Typical questions that AI leaders ask during the development lifecycle of an AI product include:
Is the purpose aligned with existing regulations?
Is this scenario aligned with our values and principles?
What is the risk of negative outcomes for the company and individuals?
At a minimum, AI leaders should extend the third question and clarify:
What’s the nature of the AI scenario?
What’s the impact on the end users?
What are the legal and reputational risks for the company?
While organizations are exploring the use of generative AI in their products, having humans fact-check, review, and revise AI-generated output before it is published can be an effective risk mitigation strategy. Whether a scenario is employee- or customer-facing will influence these decisions, as will the potential for intellectual property infringement and biased results. This approach allows organizations to balance speed of innovation and productivity gains with the risks of generative AI.
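A human-in-the-loop gate of the kind described above can be sketched in a few lines. All names here (`Draft`, `human_review`, `publish`) are hypothetical, a minimal sketch of the idea rather than a production workflow:

```python
from dataclasses import dataclass, field

@dataclass
class Draft:
    """AI-generated content awaiting human review before publication."""
    text: str
    approved: bool = False
    notes: list = field(default_factory=list)

def human_review(draft: Draft, reviewer_ok: bool, note: str = "") -> Draft:
    """Record a reviewer's decision; only approved drafts may be published."""
    draft.approved = reviewer_ok
    if note:
        draft.notes.append(note)
    return draft

def publish(draft: Draft) -> str:
    """Refuse to publish anything that has not passed human review."""
    if not draft.approved:
        raise ValueError("Draft must pass human review before publishing.")
    return draft.text

draft = human_review(Draft("AI-generated blog post"),
                     reviewer_ok=True, note="Fact-checked claims.")
print(publish(draft))  # publishes only after explicit approval
```

The design choice that matters is that publication fails closed: unreviewed output cannot reach end users by default, which is where the legal and reputational risks mentioned above would materialize.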
How does generative AI change your own approach to risk management?
What’s next?
Appearances
February 22 - Join me today on Daniel Goodstein’s Digital Leaders Show of The Digital Enterprise Institute as we discuss how leaders can use generative AI in business.
Join us for the upcoming episodes of “What’s the BUZZ?”:
February 28 - Mary Purk, Managing Director AI & Analytics Center, Wharton School of Business at the University of Pennsylvania, will provide insights into Accelerating Your AI Adoption Across The Business.
March 14 - Tom H. Davenport, Professor and Author, and I will talk about Being All-in On Generative AI — and what that entails.
Follow me on LinkedIn for daily posts about how you can set up & scale your AI program in the enterprise. Activate notifications (🔔) and never miss an update.
Together, let’s turn hype into outcome. 👍🏻
—Andreas