I was enamored with the capabilities of large language models (LLMs) the moment I first tried OpenAI’s ChatGPT. The potential for generative AI felt momentous. Twin Sun’s business is building software, so I was excited to have a new set of tools for us to lean on for solving real-world problems for our clients and end users. Summarizing and drafting documents, translating text from technical jargon to something more accessible, personalizing marketing messaging on an individual level, and so much more suddenly became as simple to build as a login screen.

AI Capabilities

That momentous potential has become reality in a lot of our day-to-day work. The things LLMs excel at today felt unattainable five years ago. GitHub Copilot has greatly reduced my cognitive load when writing code. I use an LLM to remix our blog post content into social media posts. ChatGPT generates lists of ideas for everything from blog post topics to new products.

Outside of work, I ask an LLM to write personalized bedtime stories for my kids with little more than a sentence describing my desired plot. Personally tailored meal plans, vacation ideas, and family activity recommendations appear on my screen in seconds. I used to spend hours coming up with these things on my own.

Our clients have benefited from generative AI as well. BiteSlice’s fully automated content syndication program is powered by several AI tools working in tandem. The program gives new life to creator content, sharing human-authored media in new places to new audiences. Objective Zero supplements pathfinder training with role-playing chatbots in a sandbox environment. The sandbox offers zero-risk scenarios in which pathfinders can practice and refine their crisis intervention skills, preparing them to support veterans during critical moments.

Then there are products we’ve made whose core functionality would have been completely infeasible just a few years ago. Podcraftr, for instance, repackages newsletter content into a fully produced podcast, complete with a cloned voice for the narrator. Email your newsletter to Podcraftr and it will generate your next podcast episode in about 90 seconds. I helped build the thing and it still feels like magic to me.

Plenty of people around the world are building a brighter future with generative AI. One man helped a dyslexic contractor compose professional emails with AI. A Barcelona studio brings memories to life with AI-generated photos. In England, a chatbot helped thousands of people gain access to mental health services. Even more capabilities look like they will soon be within our reach: from brewing better beer to saving more lives during surgery.

AI Limitations

Despite all of my adoration for these new capabilities, I understand that generative AI has its limits. I’ve tried to fully automate writing articles like this one with no human intervention. There’s just no soul in it; it’s easier to write everything myself than it is to coerce a large language model into writing something insightful and decent. The common theme among all of the generative AI success stories so far is that AI works best when it assists people, not when it attempts to replace them outright.

A lot of research leans this way, too. Retrieval-augmented generation (RAG) improves LLM performance by giving models supplemental (human-authored) data from existing documents and data stores focused on specific areas of knowledge. Fine-tuning techniques like low-rank adaptation (LoRA) adapt a general-purpose model to specific use cases. Reinforcement learning from human feedback (RLHF) teaches LLMs to generate output that closely aligns with explicit human preferences.
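To make the RAG idea concrete, here is a minimal sketch of the pattern in Python. The `embed` and `complete` callables are hypothetical stand-ins for whatever embedding model and LLM completion API you happen to use; everything else is plain Python.

```python
import math
from typing import Callable, List

def cosine_similarity(a: List[float], b: List[float]) -> float:
    """Measure how similar two embedding vectors are."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def answer_with_rag(
    question: str,
    documents: List[str],
    embed: Callable[[str], List[float]],     # hypothetical embedding model
    complete: Callable[[str], str],          # hypothetical LLM completion call
    top_k: int = 3,
) -> str:
    # Embed the human-authored documents and the user's question.
    doc_vectors = [(doc, embed(doc)) for doc in documents]
    query_vector = embed(question)

    # Retrieve the documents most relevant to the question.
    ranked = sorted(
        doc_vectors,
        key=lambda pair: cosine_similarity(query_vector, pair[1]),
        reverse=True,
    )
    context = "\n\n".join(doc for doc, _ in ranked[:top_k])

    # Supplement the model's prompt with the retrieved context,
    # grounding its answer in human-authored material.
    prompt = (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}"
    )
    return complete(prompt)
```

In production you would likely swap the brute-force similarity search for a vector database, but the shape of the technique is the same: retrieve relevant human-authored context, then ask the model to answer within it.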

The driving force behind these types of innovations is the fact that foundation models, as impressive as they are, are at best echoes of human intelligence. LLM output plays to the averages, in a way, which becomes apparent the moment you ask most popular LLMs to form an opinion on anything. “Hallucinations” (fabricated but authoritative-sounding replies to your requests) are simply a part of using LLMs that you kind of have to accept. So far, even our best efforts do not fully suppress undesirable outputs.

Unfortunately for end users, these limitations are not always clear. Even in tasks where generative AI usually excels, users may not grasp the uncertainty and risk it carries until they witness a spectacular failure.

AI Risks

Why does it matter if the limitations are not clear? LLM output can drive dangerous human behavior. Google’s AI model recently told users to put glue on their pizza and gave at least one person dangerous advice about preparing other food. Glue on pizza is an obvious enough error that most people won’t trust it, but how many people might fall victim to subtly lethal recipes?

Those examples arise despite AI and its engineers having no intent to do harm. There is another class of risk, though: malicious operators utilizing AI to intentionally mislead or harm people.

Why is the risk of intentionally misusing AI any greater than misusing some other tool? Generative AI enables small groups of people with limited resources to broadcast near-limitless quantities of disinformation. Bot accounts can persuade people (and outperform people at it, by the way) on important societal or civic issues. Perhaps malicious bots reduce voter turnout in swing states during a federal election, or amplify division within communities. Some state actors are already using LLMs to influence the politics of people on social media. A lone programmer with a bit of time and money on their hands can make any of these threats materialize to some degree.

LLMs can be used to build hacking tools. They can even assist with discovering new security vulnerabilities in software. An automated pipeline of zero-day exploit discovery could power quickly evolving ransomware that cripples corporations, hospitals, transit systems, and utilities to an even greater degree than ransomware does today. AI capabilities that already exist could very well upend infrastructure for entire communities, should a motivated group of bad actors truly desire to do so.

So far I have mostly discussed one type of generative AI: text produced by LLMs. Text can take many forms: chats, documents, code, etc. But there are other types of generative models, similarly capable of great benefits and great harms. Voice cloning can give those who have lost their voice a close replica, but has almost cost at least one person their job. A few schools around the world have already faced deepfake crises, where students have generated and distributed sexualized images of other children. Celebrities have similarly had their likenesses exploited. The threats of targeted harassment, destruction of personal reputations, and real-world harm powered by generative AI are already playing out.

The Age of AI

In 2008, I began working for an agency that mostly built BlackBerry applications. I thought these devices were incredible. I became addicted to my company-issued BlackBerry Curve within weeks, browsing the web at all hours wherever I went (something that used to require sitting at a desk). The first iPhone had launched the year prior, and the first Android devices hit the U.S. market around the same time that I joined the agency.

I felt then about smartphones the same way that I now feel about generative AI. Like it or not, AI is beginning to change how we interact with computers and media, and soon enough it will change how we interact with one another. In many places you will see AI plainly; in many others, you may not even realize it’s there. AI has touched all of our lives and will continue to do so.

But just like the mobile era, the Age of AI comes with choices we each can make.

Our Place in the World

In my nearly 20 years of professional software development experience, most of which has been in consulting and agency work, I have faced plenty of ethical challenges. There have been things I have refused to do for various reasons. There have also been things I have decided that I must do, consequences be damned. Sometimes I’ve failed to live up to my own standard of ethics. But even in those moments, I don’t discard my own personal code due to a failure. I continue striving to live up to my ideals.

I feel like recent innovations in AI are too big for us to ignore as technology professionals. We have and will continue to work on systems that utilize generative AI and other AI technologies. When I think about the potential of generative AI and weigh it against the dangers, I reflect on our place in the world as software developers who use these terrific (in every sense of the word) tools. What will we build? What will we refuse to build?

Software developers tend to underestimate their role in society. Most people in America spend hours each day staring at screens that follow them everywhere. The average smartphone user likely interacts with hundreds of services each day, all working together to present an interesting news feed or recommend a new album or remind you to leave on time for your son’s soccer practice. Even before recent generative AI advancements, I felt strongly that software developers shape more of our modern world than they realize, often in ways they themselves do not recognize as particularly consequential. But their work has very real consequences. Developers persuade, enable, and amplify the best and worst of us in countless ways with the technology they choose to build.

During my time serving on the Vanderbilt University Electrical Engineering and Computer Science (EECS) Industry Advisory Board, I made my case for including ethics in computing as part of the department’s mission. I had no illusions about that contribution. Tweaking a mission statement doesn’t change the world. But I chose to use the influence I had to make a positive change where I could. (By the way, the university now guides EECS students in shaping their own personal ethical framework during their time at Vanderbilt. I like to pretend that my minor contribution encouraged this development.)

Twin Sun is a small company in a big industry. I know we cannot control everything that people will do with AI. We can, however, hold ourselves to a high standard in how we work, what we choose to build, and how we advise our clients in their use of generative AI. We have always been guided by two core principles at Twin Sun: we will do the right thing, and we will do what we say we will do. Our code of ethics aligns with these principles. We will do our best to live up to our ideals.

What follows is a copy of our AI Code of Ethics as of the publication of this article. Changes to the code may not always be reflected in this article, but the most up-to-date copy of our AI Code of Ethics will always be available on our site.

Twin Sun’s AI Code of Ethics

Artificial Intelligence (AI) must put people and planet first.

When considering potential applications for AI, we commit to designing, developing, and using AI in accordance with the following ethical principles.

Respect the Law and Act with Integrity

We will employ AI in a manner that respects human dignity, rights, and freedoms. Our use of AI will fully comply with applicable legal authorities and with policies and procedures that protect privacy, civil rights, and civil liberties.

Trustworthy, Not Invasive

In addition to complying with data privacy and security regulations, we will build systems that are trustworthy. We respect users’ privacy, are explicit about whether and how their data will be used, and seek user consent before using their data for any purpose outside of their own direct use of the system. When information is needed, it will be explicitly requested from the user. When data is collected, it will be explicitly authorized by the user.

Consensual, Not Dismissive

We will honor our users by requesting consent prior to sharing data with third-party AI services and by outlining in plain terms whether and how their data will be used.

AI Must Serve People and Planet

AI should be socially beneficial; it should remain compatible with and strengthen human dignity, integrity, freedom, privacy, cultural and gender diversity, and fundamental human rights. We will constrain the use of AI capabilities to avoid harm and aim to provide benefits to those who use or indirectly engage with AI capabilities. AI applications should empower users to accomplish their goals, but not at the expense of diminishing or harming others.

Transparent, Not Ambiguous

We will provide appropriate transparency to the public and our customers regarding our AI methods and applications, including when and how AI is being leveraged. We will outline the scope and limitations of AI capabilities to end users. Customers will be consulted on AI systems’ implementation, development, and deployment.

Accountable, Not Irresponsible

We will develop and employ mechanisms to identify responsibilities and provide accountability for the use of AI and its outcomes. AI systems should provide ample opportunities for feedback, explanation, and appeal. Users will maintain control of the system, and we will listen to them when considering how AI capabilities should be managed or improved.

Objective and Equitable

We will take affirmative steps to identify and mitigate bias. In the design and maintenance of AI systems, it is vital that the system is controlled for negative or harmful human bias, and that any bias (be it related to gender, race, sexual orientation, or age) is identified and not propagated by the system. We take care not to reinforce biases, discount human-centered experiences, or minimize reported problems with our AI capabilities.

Human-Centered Development and Use

We will develop and use AI to augment our applications and enhance our trusted partnerships by tempering technological guidance with the application of human judgment. The development of AI must be responsible, safe, and useful; machines maintain the legal status of tools, and legal persons retain control over, and responsibility for, these machines at all times. AI tools are not a replacement for human beings. As they exist today, AI models cannot reason like a person or fully understand and appreciate ethical concerns in decision-making. AI capabilities are best used in an assistive context to reduce workloads on human operators. Humans will ultimately be responsible for decision-making and will be empowered to operate without being required to utilize AI.

Secure and Resilient

We will develop and employ best practices for maximizing the reliability, security, and accuracy of AI design, development, and use. We will employ security best practices to build resilience and minimize the potential for adversarial influence. We will develop AI systems that are equipped with an “ethical black box”: a record that contains not only relevant data to ensure system transparency and accountability, but also clear information on the ethical considerations built into the system.

Informed by Science and Technology

We will apply rigor in our development and use of AI by actively engaging with the broader scientific and technology communities to utilize advances in research and best practices from the public and private sector.

Additional Guidance

Even where an AI capability is not explicitly covered by the above guidelines, we will not design or deploy it if it is deemed to be more harmful than beneficial, is likely to injure people, will be used to surveil people, will violate human rights or international law, or is otherwise determined to be out of alignment with our company’s principles.

Acknowledgements

Thank you to Sarah Maginnis for her recommendation that we formalize an AI code of ethics. She heard my unbridled excitement about the capabilities of generative AI and rightly tempered it with a sense of responsibility for what we do with these capabilities. Sarah guided development of our code of ethics and proofread the final copy.

My three business partners (Caleb Hamilton, Chris Wraley, and Jami Couch) reviewed, discussed, and agreed with this code of ethics. I am grateful for our alignment and shared commitment to doing the right thing.