Artificial intelligence (AI) is widely used in software, and is likely used to some degree in software that you use every day. Some use is innocuous: a form of AI is used to respond to voice commands you give to your phone. Other types of AI have the potential to mislead people or create real-world harm.
We recognize our responsibility as software developers and designers to inform end users when there is the potential for AI to do as much harm as good. To that end, we are introducing a simple evaluation framework for deciding when and how to disclose the use of AI.
The AI Disclosure Decision Matrix (AIDDM) guides you in choosing whether to disclose the use of AI in your product.
If we feel so strongly about disclosing certain uses of AI, why not recommend disclosing every use of AI? Some use cases are simply not interesting or impactful enough to end users to justify cluttering your product’s interface with irrelevant information. We want to reserve interface space for contextually relevant information, especially since a majority of software end users are on mobile devices with small screens.
Ultimately, deciding if or how to disclose your use of AI is a user experience decision. We hope that this framework helps you make informed decisions that empower end users to make the best use of your technology.
Decision Criteria
The AIDDM aims to educate your end users without overwhelming them with endless disclosures. We have identified six criteria to consider when making a disclosure decision on your use of AI:
1. Agency
Agency refers to the degree of control or influence that the AI system has over user actions, decisions, or outcomes. This criterion considers how much autonomy the AI has in performing tasks or making choices that affect the user.
Actions have consequences. Agentic AI (AI systems which perform actions on your behalf) may introduce digital- or real-world harm. Data can be compromised or lost. Incorrectly executed financial transactions may be irreversible. Incorrectly assembled parts may damage inventory and drive up costs.
2. Creativity
Creativity assesses the extent to which the AI system generates, modifies, or manipulates content. This includes text, images, audio, or any other form of output that the user interacts with or consumes.
Generated or modified content can be misleading. AI-generated or AI-enhanced content may contain misinformation that influences its audience. Content may reinforce biases of the AI model’s creators or training data. Generated works may alter the perspectives or behaviors of people, intentionally or unintentionally. Further, generated works may be misrepresented as human-authored works, inappropriately enhancing or degrading an individual’s reputation.
3. Privacy
Privacy evaluates how much personal data the AI system collects, uses, or shares with others. This criterion includes consideration for the volume of data, its sensitivity, and how it is processed by the system.
Many organizations have regulatory and ethical requirements around collecting or using end user data that may make end users unwilling or unable to adopt your AI system. For example, many artists and entertainers do not consent to their works, name, image, or likeness being included in AI training datasets. Other individuals wish to control the spread and use of their personal data.
4. Impact
Impact measures the potential influence or harm that the AI system could have on end users or society as a whole. This criterion should include your assessment of both positive and negative effects of the system, such as shaping opinions, affecting decision-making, or introducing unintended consequences.
Systems with real-world consequences should be adopted only after an adequate risk assessment of their impact. The influence, outputs, and outcomes of an AI system must be understood for informed, responsible adoption.
5. Opacity
Opacity refers to how hidden or unapparent the AI system’s presence and role are to end users. This criterion considers whether users are aware of the fact that they are interacting with an AI, and whether they understand the AI system’s capabilities and limitations.
Non-obvious use of AI within a system may lead to incorrect assumptions about the reliability or factuality of its outputs. Systems that simulate human conversation or behavior without disclosing their use of AI may violate end user trust.
6. Integrity
Integrity encompasses both the legal and ethical considerations surrounding the use of an AI system. This includes compliance with regulations, adherence to your organization’s ethical principles, and potential societal implications of the AI’s deployment.
AI models face no consequences for mistakes or violations of legal or ethical requirements, so AI applications in regulated or sensitive environments must be built with integrity guardrails. Without these guardrails, for example, an AI system that writes or fulfills contracts for encryption products could violate your country’s export laws.
Severity Levels
For each of the six criteria, we outline three levels of severity:
- Low: disclosure is likely unnecessary.
- Medium: strongly consider disclosing your use of AI.
- High: use of AI and its limitations must be disclosed.
Below, you will find an AIDDM worksheet and interactive webpage that will help you determine the appropriate severity level for each criterion. Once you assign the severity level for all six criteria, these tools will offer an overall recommendation for disclosure.
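If you would rather encode the matrix in your own tooling, the evaluation above can be sketched in a few lines of Python. Note that the article does not specify how the six severity ratings combine into an overall recommendation, so the aggregation rule here — the recommendation follows the most severe criterion — is an assumption, and the criterion and function names are illustrative:

```python
from enum import IntEnum

class Severity(IntEnum):
    LOW = 0      # disclosure is likely unnecessary
    MEDIUM = 1   # strongly consider disclosing your use of AI
    HIGH = 2     # use of AI and its limitations must be disclosed

# The six AIDDM criteria.
CRITERIA = ("agency", "creativity", "privacy", "impact", "opacity", "integrity")

def recommend_disclosure(ratings: dict[str, Severity]) -> str:
    """Combine per-criterion severity ratings into an overall recommendation.

    Assumed aggregation rule (not stated in the article): the overall
    recommendation follows the single most severe criterion.
    """
    missing = set(CRITERIA) - set(ratings)
    if missing:
        raise ValueError(f"missing ratings for: {sorted(missing)}")
    worst = max(ratings[c] for c in CRITERIA)  # IntEnum members compare as ints
    return {
        Severity.LOW: "Disclosure is likely unnecessary.",
        Severity.MEDIUM: "Strongly consider disclosing your use of AI.",
        Severity.HIGH: "You must disclose your use of AI and its limitations.",
    }[worst]

# Example: a content generator with high creative output but low agency.
ratings = {c: Severity.LOW for c in CRITERIA}
ratings["creativity"] = Severity.HIGH
print(recommend_disclosure(ratings))
# → You must disclose your use of AI and its limitations.
```

A single high-severity criterion dominating the result is the conservative choice; if your risk assessment supports it, you could instead weight criteria or require two or more medium ratings before recommending disclosure.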
Try the Interactive AIDDM
Try out our interactive AIDDM webpage to evaluate your AI application: AI Disclosure Decision Matrix
Download the AIDDM Worksheet
Do you prefer working in spreadsheets? Here is a Google Sheets version of the AI Disclosure Decision Matrix. You can make a copy of this worksheet to evaluate your AI application.
Have Twin Sun Evaluate Your AI Application
Contact us to discuss what you are building. We have experience creating and scaling a broad range of enterprise and consumer-facing AI systems, and can help you identify when AI use and disclosure are most appropriate for reaching your organization’s goals.