
We Need a Code of Ethics for AI Before It's Too Late

April 28, 2025 · 3 min read

Recently, I’ve been working with my parents to uncover our family history. My great-grandfather could never have imagined that I’d be using AI to help decipher and piece together hundreds of pages of his diary from the Spanish-American War. And if you told me that even five years ago, I couldn’t have imagined it either. This creative use of AI is a powerful reminder of what the tool can do when used thoughtfully and ethically.

But thoughtfully is doing a lot of the heavy lifting in that sentence.

AI is no longer a futuristic concept—it’s a daily tool many use. At Leadli, we rely on AI to automate workflows, organize client pipelines, and accelerate internal processes. It’s efficient, helpful, and, for many businesses, practically essential. But as companies rush to adopt these tools in the name of productivity, we keep overlooking a glaring question: Just because we can use AI, does that mean we should?

AI learns from us. It is trained on massive amounts of human-created content from across the internet. It doesn’t understand credibility, ethics, or context. It mirrors what it finds, including biased opinions, misinformation, and, in some cases, outright falsehoods. Yet, we trust it to write marketing copy, engage with customers, and make decisions with real-world consequences.

We’ve already normalized using AI in subtle ways—spellcheck, search engines, self-checkout. But as tools become more advanced, so do the risks. Chatbots now deliver incorrect information in convincing tones. AI-generated content is flooding the internet, often with no human review. In healthcare, AI is even used to approve or deny insurance claims; without human oversight, those stakes become life-or-death.

Even more troubling are the less obvious implications. AI is also being used to create video and audio designed to mislead and misinform consumers. Is that efficiency, or is it deception? What happens to consumer trust when audiences discover they were persuaded by an illusion? Marketing relies on credibility. When AI crosses the line from helpful to manipulative, that trust erodes.

Then, there’s the emotional toll. AI companions, designed to validate users and mimic intimacy, are causing some individuals to retreat from real relationships. In extreme cases, these interactions have led to devastating outcomes.

Yet, most businesses are moving forward without clear ethical guidelines for using AI. That’s a mistake. How we use this technology impacts not just our bottom lines but also our employees, customers, and the social fabric around us.

It also impacts creators. Large language models are trained on content that, in many cases, was scraped without consent from writers, researchers, and artists. Businesses benefit from this intellectual labor without compensating or crediting the people behind it. Whether or not it’s legal, we should ask: Is it right?

We need a new approach. One that acknowledges AI’s power while setting limits on its reach. At Leadli, we’ve begun drafting a simple framework for ethical AI use. It’s not perfect, but it’s a start.


A Code of Ethics for Business AI Use

Transparency – Be honest about where and how AI is being used.

Consent – Don’t use AI tools that scrape from creators without permission.

Human Oversight – Always involve a person in high-impact decisions.

Empathy – Use AI to support people without replacing connection or compassion.

Context Awareness – AI outputs should always be reviewed for nuance and accuracy.

Accountability – Have transparent processes for when AI gets it wrong.

Sustainability – Choose tools with a lower environmental impact whenever possible.

Inclusion – Actively check for and correct biased outputs.

Purpose Alignment – AI should reflect your company’s core values, not just efficiency goals.

Limits – Just because AI can do something doesn’t mean it should. Set clear boundaries: use AI to make human jobs easier, not to replace them.

This isn’t a checklist. It’s a mindset. A way to ensure we use AI to move forward without leaving people behind.

The future of AI isn’t just about how fast we can go. It’s about how responsibly we can lead. The companies that thrive long-term won’t be the ones that adopt AI the fastest. They’ll be the ones that use it the most ethically.

— Adam


Adam is a systems thinker with a knack for creatively solving operational challenges. He has a Master’s in Social Innovation from the University of San Diego and a soft spot for anything that mixes strategy with impact. Originally from Michigan, he now calls San Diego home. Adam spends his free time chasing bears with his camera, carving up the water on a slalom ski, exploring National Parks (almost 30 so far), or catching live concerts (he’s racked up nearly 300).

Adam Campbell
