Responsible use of GenAI
GenAI ethics, accountability, and trust
Generative AI is dominating public interest in Artificial Intelligence
By some estimations, generative AI spells the end of Internet search and is the tool that will revolutionize many aspects of how we work and live. We have heard such claims in AI before; the newest applications often stir public excitement.
Yet, generative AI is different from most other kinds of AI in use today. Large language models, for example, can respond to user prompts with natural language outputs that convincingly mimic coherent human language. What is more, there is effectively no barrier to using some of these models: they require no knowledge of AI, much less an understanding of the underlying math and technologies.
In the business realm, there is growing intrigue around how generative AI can be used in the enterprise. As with all cognitive tools, the outcomes depend on how they are used, and that includes managing the risks, which for generative AI have not been explored as deeply as the capabilities. The primary questions are: can business users trust the outputs of this kind of AI application, and if not, how can that trust be established?
New bots on the block
To this point, AI has broadly been used to automate tasks, uncover patterns and correlations, and make accurate predictions about the future based on current and historical data. Generative AI is designed to create data that looks like real data. Put another way, generative AI produces digital artifacts that appear to have the same fidelity as human-created artifacts. Natural language prompts, for example, can lead the neural network to generate images that are in some cases indistinguishable from authentic images. For large language models that create text, the AI sometimes supplies source information, signaling to the user that its outputs are factually true as well as persuasively phrased. “Trust me,” it seems to say.
CIOs and technologists may already know that generative AI is not “thinking” or being creative in a human way, and they also likely know that the outputs are not necessarily as accurate as they might appear. Non-technical business users, however, may not know how generative AI functions or how much confidence to place in its outputs. The business challenge is magnified by the fact that this area of AI is evolving at a rapid pace. If organizations and end users are challenged just to keep up with generative AI’s evolving capabilities, how much more difficult might it be to anticipate the risks and enjoy real trust in these tools?
1 | Managing hallucinations and misinformation
A generative model draws on its training data to produce coherent language or images, which is part of what has startled and enticed early users. With natural language applications, the phrasing and grammar may be convincing while the substance is partially or entirely inaccurate, and statements presented as fact may simply be false. One of the risks with this kind of natural language application is that it can “hallucinate” an inaccurate output in complete confidence. It can even invent references and sources that do not exist; one simple screening step for invented sources is sketched below. The model can hardly be blamed, as its function is to generate digital artifacts that look like human artifacts. Yet coherent data and valid data are not necessarily the same, leaving end users of large language models to contend with whether an eloquent output has any factual value at all.
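As an illustration of how an organization might screen model outputs for invented sources, the sketch below extracts URL-style citations from a generated answer and checks whether each one actually resolves, flagging the rest for human review. The function names and the example text are hypothetical; this is a minimal sketch of one possible safeguard, not a production control.

```python
import re
import urllib.request
from urllib.error import URLError

URL_PATTERN = re.compile(r"https?://[^\s)\]>,\"']+")

def extract_urls(text: str) -> list[str]:
    """Pull URL-style citations out of a model-generated answer."""
    return URL_PATTERN.findall(text)

def url_resolves(url: str, timeout: float = 5.0) -> bool:
    """Return True if the cited URL responds; False if it appears invented or dead."""
    try:
        request = urllib.request.Request(url, method="HEAD")
        with urllib.request.urlopen(request, timeout=timeout) as response:
            return response.status < 400
    except (URLError, ValueError):
        return False

def flag_unverifiable_sources(model_output: str) -> list[str]:
    """List cited URLs that a human reviewer should verify, replace, or remove."""
    return [url for url in extract_urls(model_output) if not url_resolves(url)]

if __name__ == "__main__":
    # Hypothetical model output containing one plausible and one possibly invented source.
    answer = (
        "Quarterly revenue grew 12% (source: https://example.com/made-up-report) "
        "per the filing at https://www.sec.gov."
    )
    for url in flag_unverifiable_sources(answer):
        print(f"Needs human review, source could not be verified: {url}")
```

A check like this only confirms that a cited link exists, not that it supports the claim; it narrows, rather than replaces, the human fact-checking this section describes.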
There is also the risk of inherent bias within the models, owing to the data on which they are trained. No single company can create and curate all of the training data needed for a generative AI model; the necessary data is simply too expansive, measured in tens of terabytes. The practical alternative is to train the model on publicly available data, which injects the risk of latent bias and therefore the potential for bias in the AI outputs.
2 | The matter of attribution
Generative AI outputs align with the original training data, and that information came from the real world, where attribution and copyright are important and legally upheld. Data sets can include information from online encyclopedias, digitized books, and customer reviews, as well as curated data sets. Even if a model does cite accurate source information, it may still present outputs that obscure attribution or cross into plagiarism and copyright or trademark violations.
How do we contend with attribution when a tool is designed to mimic human creativity by parroting back something drawn from the data on which it was trained? If a large language model outputs plagiarized content and the enterprise uses that content in its operations, a human is accountable when the plagiarism is discovered, not the generative AI model. Recognizing the potential for harm, organizations may implement checks and assessments to help ensure attribution is appropriately given; one simple form such a check could take is sketched below. Yet, if human fact-checking of AI attribution becomes a laborious process, how much productivity can the enterprise actually gain by using generative AI?
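As a rough illustration of the kind of attribution check described above, the sketch below compares a model's output against a small set of passages the organization knows it must attribute, and flags any output that reproduces one nearly verbatim so a reviewer can add a citation or rewrite it. The corpus, the threshold, and the function names are all assumptions made for the example.

```python
from difflib import SequenceMatcher

# Hypothetical corpus of passages that require attribution if reused.
KNOWN_SOURCES = {
    "Annual Report 2023": "Revenue grew steadily across all regions during the year.",
    "Product Review Corpus": "The battery life is exceptional and the screen is bright.",
}

def overlap_ratio(candidate: str, source: str) -> float:
    """Similarity between two passages, from 0.0 (no overlap) to 1.0 (identical)."""
    return SequenceMatcher(None, candidate.lower(), source.lower()).ratio()

def passages_needing_attribution(model_output: str, threshold: float = 0.8) -> list[tuple[str, float]]:
    """Return (source name, similarity) pairs where the output closely mirrors a known source."""
    flagged = []
    for name, passage in KNOWN_SOURCES.items():
        score = overlap_ratio(model_output, passage)
        if score >= threshold:
            flagged.append((name, score))
    return flagged

if __name__ == "__main__":
    draft = "Revenue grew steadily across all regions during the year."
    for name, score in passages_needing_attribution(draft):
        print(f"Attribute or rewrite: {score:.0%} overlap with '{name}'")
```

A check like this only catches near-verbatim reuse of sources the organization already holds; paraphrased or externally sourced material still requires the human review described above.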
3 | Real transparency and broad user explainability
End users can include people who have limited understanding of AI generally, much less of the complicated workings of large language models. The lack of a technical understanding of generative AI does not absolve the organization of the need to focus on transparency and explainability. If anything, it makes that focus all the more important.
Today’s generative AI models often come with a disclaimer that the outputs may be inaccurate. That may look like transparency, but in reality many end users do not read the terms and conditions and do not understand how the technology works, and as a result the large language model’s explainability suffers. To participate in risk management and ethical decision making, users need accessible, non-technical explanations of generative AI, its limits and capabilities, and the risks it creates.
Business users should have a real understanding of generative AI because it is the end user (and not necessarily the AI engineers and data scientists) who contends with the risks and the consequences of trusting a tool, regardless of whether they should.
Accountability on the road ahead
Even as generative AI becomes better able to mimic human creativity, we should remember and carefully consider the human side of this equation. Everyone will be affected by generative AI in one way or another, from outsourced labor to layoffs, changing professional roles, and potentially even legal issues. Generative AI will have real impact, and because an AI model has no autonomy or intent, it cannot be held accountable in any meaningful sense.
At scale, transparency with generative AI becomes elusive, and “keeping the human in the loop” becomes a growing problem. It is also unclear at this point what consequences may follow from mass adoption of generative AI, such as the proliferation of fabricated facts to the detriment of objective and complete truth. These challenges are unlikely to hinder generative AI’s adoption.
No matter how powerful it becomes, we still need the analysis, scrutiny, context awareness, and humanity of people at the center of our AI endeavors. This AI era is the Age of With™, where humans work with machines to achieve something neither could do independently. Now is the time to develop viable methods of accountability, trust, and ethics, linking the generative AI product and its outcomes with its creator, the enterprise.