AI Code of Ethics
Artificial Intelligence (AI) must put people and planet first.
When considering potential applications for AI, we commit to designing, developing, and using AI in accordance with the following ethical principles.
Respect the Law and Act with Integrity
We will employ AI in a manner that respects human dignity, rights, and freedoms. Our use of AI will fully comply with applicable legal authorities and with policies and procedures that protect privacy, civil rights, and civil liberties.
Trustworthy, Not Invasive
In addition to complying with data privacy and security regulations, we will build systems that are trustworthy. We respect users' privacy, are explicit about whether and how their data will be used, and seek user consent before using their data for any purpose outside of their own direct use of the system. When information is needed, it will be explicitly requested from the user. When data is collected, it will be explicitly authorized by the user.
Consensual, Not Dismissive
We will honor our users by requesting consent prior to sharing data with third-party AI services and by outlining in plain terms whether and how their data will be used.
AI Must Serve People and Planet
AI should be socially beneficial and should remain compatible with, and strengthen, the principles of human dignity, integrity, freedom, privacy, and cultural and gender diversity, as well as fundamental human rights. We will constrain the use of AI capabilities to avoid harm and aim to provide benefits to those who use or indirectly engage with them. AI applications should empower users to accomplish their goals, but not at the expense of diminishing or harming others.
Transparent, Not Ambiguous
We will provide appropriate transparency to the public and our customers regarding our AI methods and applications, including when and how AI is being used. We will outline the scope and limitations of AI capabilities to end users. Customers will be consulted on the development, implementation, and deployment of AI systems.
Accountable, Not Irresponsible
We will develop and employ mechanisms to identify responsibilities and provide accountability for the use of AI and its outcomes. AI systems should provide ample opportunities for feedback, explanation, and appeal. Users will maintain control of the system, and we will listen to them when considering how AI capabilities should be managed or improved.
Objective and Equitable
We will take affirmative steps to identify and mitigate bias. In the design and maintenance of AI and artificial systems (AS), it is vital that the system be controlled for negative or harmful human bias, and that any bias, whether related to gender, race, sexual orientation, or age, be identified and not propagated by the system. We take care not to reinforce biases, discount human-centered experiences, or minimize reported problems with our AI capabilities.
Human-Centered Development and Use
We will develop and use AI to augment our applications and enhance our trusted partnerships by tempering technological guidance with the application of human judgment. The development of AI must be responsible, safe, and useful: machines maintain the legal status of tools, and legal persons retain control over, and responsibility for, these machines at all times. AI tools are not a replacement for human beings. As they exist today, AI models cannot reason like a person or fully understand and appreciate the ethical concerns involved in decision-making. AI capabilities are best used in an assistive context to reduce the workload on human operators. Humans will remain explicitly responsible for decision-making and empowered to operate without being required to use AI.
Secure and Resilient
We will develop and employ best practices for maximizing the reliability, security, and accuracy of AI design, development, and use. We will employ security best practices to build resilience and minimize the potential for adversarial influence. We will develop AI systems equipped with an “ethical black box”: a record that not only contains relevant data to ensure system transparency and accountability, but also includes clear information on the ethical considerations built into the system.
Informed by Science and Technology
We will apply rigor in our development and use of AI by actively engaging with the broader scientific and technology communities to utilize advances in research and best practices from the public and private sectors.
Additional Guidance
Even where a capability is not explicitly covered by the guidelines above, we will not design or deploy AI that is deemed to be more harmful than beneficial, is likely to injure people, will be used to surveil people, will violate human rights or international law, or is otherwise determined to be out of alignment with our company’s principles.
Looking for help with your AI product?
Let's discuss what you're building and how we can help.