I’ve been in business for more than 30 years. I’ve seen technologies come and go. I’ve watched companies make decisions that looked brilliant on spreadsheets but destroyed trust in real life. And I’ve learned something that Silicon Valley seems determined to ignore: you can’t engineer your way out of ethical problems.
Right now, companies are racing to deploy AI systems without asking the questions that experienced professionals instinctively ask. We ask those questions not because we’re more moral, but because we’ve seen what happens when you move fast and break things. We’ve lived through the consequences.
Let me be blunt: the AI ethics crisis we’re facing isn’t a technical problem. It’s a judgment problem. And judgment is exactly what we bring to the table.
Why Tech Companies Can’t Solve AI Ethics Alone
I’ve worked with enough technology companies to know their strengths. They’re exceptional at building systems, optimizing processes, and scaling solutions. What they’re not good at—what they’ve repeatedly proven they’re not good at—is anticipating how their technologies will affect real people in complex situations.
The problem isn’t malice. It’s inexperience.
When a 28-year-old engineer designs an AI hiring system, they’re solving an optimization problem. When you and I look at that same system, we see something else entirely. We see the person who gets filtered out because they took time off to care for aging parents. We see the bias embedded in “years of experience” requirements that favor people who’ve never had to start over. We see the automated discrimination we’ve spent decades fighting.
That’s not pessimism. That’s pattern recognition across decades.
The Experience Advantage in AI Ethics
Here’s what I’ve learned from three decades of watching technologies transform industries: the questions that matter most aren’t technical. They’re human.
When I evaluate an AI system, I don’t start with “Does it work?” I start with “What could go wrong?” Not because I’m a cynic, but because I’ve been around long enough to know that unintended consequences aren’t rare—they’re inevitable.
You have this same advantage, even if you don’t realize it yet.
We’ve Seen Consequences Play Out
I was in the room when companies made decisions that looked smart in the moment and proved disastrous five years later. I’ve watched “move fast and break things” break actual people. I’ve seen the difference between what systems are designed to do and what they actually do in complex, messy reality.
That history isn’t baggage. It’s insight.
When someone proposes using AI to automate performance reviews, you and I both think about the manager who uses metrics to hide bias. When they suggest AI-powered hiring, we remember the qualified candidates who didn’t fit the pattern. When they pitch predictive analytics for workforce planning, we see the people who become data points in an optimization equation.
We ask different questions because we’ve lived through the answers.
We Understand Stakeholder Impacts
Here’s something I’ve noticed about younger colleagues: they’re brilliant at thinking through first-order effects. They can tell you exactly how an AI system will improve efficiency or reduce costs. What they miss are the second- and third-order effects—the impacts on people who aren’t in the room when decisions get made.
We don’t miss those impacts because we’ve been on the receiving end.
We’ve been the employee affected by policy changes. We’ve been the customer frustrated by automated systems. We’ve been the stakeholder who discovers that “streamlining” means “eliminating the human judgment that made the system work.”
That’s not just empathy. It’s data from decades of experience in complex organizational systems.
We Assess Real-World Risk
I learned risk assessment the hard way—by watching decisions blow up. By seeing strategies that worked on paper fail in practice. By understanding that the biggest risks aren’t the ones you model; they’re the ones you don’t see coming.
When a company wants to deploy AI for customer service, someone fresh out of school calculates cost savings. We calculate reputational risk. When they propose AI-driven compliance monitoring, they see efficiency. We see regulatory exposure and liability.
This isn’t caution. It’s competence.
We know that the most expensive problems are the ones you create by moving too fast. We understand that trust takes years to build and minutes to destroy. We’ve seen enough corporate scandals to recognize the warning signs before they become headlines.
We Bring Cross-Generational Perspective
I work with people in their 20s who’ve never known a world without smartphones. They’re brilliant, but they have a blind spot: they assume everyone thinks the way they think, uses technology the way they use it, and trusts systems the way they trust them.
We know better.
We remember life before social media, before algorithmic feeds, before surveillance capitalism. We’ve watched technology transform from tool to infrastructure. We understand both the benefits and the costs in ways younger colleagues literally can’t—they weren’t there.
That perspective matters when you’re designing AI systems that will affect multiple generations. Someone needs to ask: “How does this work for someone who didn’t grow up with AI? How does this affect communities that don’t trust automated systems? What happens to people who can’t or won’t adapt?”
We ask those questions naturally because we span the divide.
We Have Professional Reputation Stakes
Here’s something nobody talks about: younger professionals can afford to make mistakes. We can’t.
I’m at a point in my career where my reputation is my currency. I’m not going to risk it by implementing AI systems that fail spectacularly. I’m not going to champion technologies that harm people. I’m not going to prioritize speed over responsibility because I don’t have time to rebuild my credibility.
That’s not risk aversion. That’s wisdom.
When you’ve spent 20 or 30 years building trust, you think very carefully before you do anything that might destroy it. That built-in caution—that unwillingness to break things—is exactly what AI implementation needs right now.
The 7 Critical AI Ethics Questions We Must Ask
I’ve developed a framework for evaluating AI systems. These aren’t theoretical questions—they’re the ones I actually use when someone pitches me an AI solution. You should be asking them too.
1. Bias and Fairness: Who Gets Left Out?
Every AI system makes predictions based on patterns in historical data. The question isn’t whether bias exists—it’s whose bias we’re encoding.
Ask: Who was excluded from the training data? Whose experiences aren’t represented? What assumptions are baked into this system? If we automate this decision, who loses access, opportunity, or voice?
I’ve seen AI hiring systems that filtered out career changers because they didn’t match the historical pattern of successful hires. I’ve seen credit scoring algorithms that discriminated against women because historical data reflected historical discrimination. I’ve seen “objective” systems that automated existing bias at scale.
We need to ask these questions because we know that “objective” often means “replicating past discrimination faster.”
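If you want to make “who gets left out” concrete, one first pass is simply comparing selection rates across groups in the system’s own output. Here is a minimal sketch in Python, assuming you can export decisions with some group attribute; the groups, the numbers, and the 0.8 ratio are illustrative, not a legal standard.

```python
# Minimal selection-rate comparison for an automated screening decision.
# In practice these rows would come from an export of the system's
# decisions; the group labels and numbers here are made up.
from collections import defaultdict

decisions = [
    {"group": "A", "selected": 1}, {"group": "A", "selected": 1},
    {"group": "A", "selected": 0}, {"group": "A", "selected": 1},
    {"group": "B", "selected": 0}, {"group": "B", "selected": 1},
    {"group": "B", "selected": 0}, {"group": "B", "selected": 0},
]

def selection_rates(rows):
    totals, selected = defaultdict(int), defaultdict(int)
    for row in rows:
        totals[row["group"]] += 1
        selected[row["group"]] += row["selected"]
    return {g: selected[g] / totals[g] for g in totals}

def flag_disparities(rates, ratio=0.8):
    # Compare each group's rate to the best-off group. The 0.8 ratio
    # echoes the "four-fifths" rule of thumb: a screen, not a verdict.
    best = max(rates.values())
    return {g: round(r / best, 2) for g, r in rates.items() if r / best < ratio}

rates = selection_rates(decisions)
print("Selection rates by group:", rates)              # A: 0.75, B: 0.25
print("Groups below the 0.8 ratio:", flag_disparities(rates))  # B: 0.33
```

A flagged ratio doesn’t prove discrimination; it tells you where to start asking the vendor harder questions.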
2. Privacy and Data Protection: What Are We Really Collecting?
Companies love to claim their AI needs vast amounts of data. Sometimes that’s true. Often it’s data hunger disguised as technical necessity.
Ask: What data does this system actually need? Who has access? How long is it retained? What happens if it’s breached? What could someone infer from this data that we didn’t intend to reveal?
I’m old enough to remember when companies promised our data would be safe and then suffered massive breaches. I’ve watched privacy policies change after acquisition. I’ve seen training data leak into model outputs.
We should be skeptical of “trust us” because we’ve seen what happens when that trust is misplaced.
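One way to turn “what data does this system actually need?” into practice is an explicit allow-list: fields the vendor has justified in writing get sent, and everything else never leaves your systems. A minimal sketch; the field names are hypothetical.

```python
# Strip a record down to an explicit allow-list before sending it to an
# external AI service. Anything not justified in writing never leaves.
ALLOWED_FIELDS = {"role_applied_for", "skills", "work_history_summary"}  # hypothetical

def minimize(record: dict) -> dict:
    """Return only the fields the vendor has documented a need for."""
    dropped = set(record) - ALLOWED_FIELDS
    if dropped:
        # Note what we declined to share so the decision is auditable.
        print(f"Withholding fields: {sorted(dropped)}")
    return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}

candidate = {
    "name": "A. Example",
    "date_of_birth": "1971-04-02",
    "skills": "contracts, negotiation",
    "role_applied_for": "counsel",
}
print(minimize(candidate))
```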
3. Transparency and Explainability: Can You Explain the Decision?
Black box systems are dangerous. Not because they’re wrong, but because you can’t tell when they’re wrong or why.
Ask: Can this system explain its decisions? If it denies someone a loan, rejects a job application, or flags content as problematic, can it tell us why? If we can’t explain it, how can we appeal it? How can we know if it’s working as intended?
I’ve been in too many meetings where someone says “the algorithm decided” as if that ends the conversation. It doesn’t. If you can’t explain how you reached a decision, you can’t defend it, improve it, or know if it’s ethical.
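When a vendor claims the system can explain itself, ask what that explanation looks like for one specific decision. For a simple linear scoring model you can produce one directly from the weights. The features and weights below are made up; the point is the shape of an answer that “the algorithm decided” never gives you.

```python
# Produce a human-readable reason for a single decision from a linear
# scoring model. Feature names, weights, and threshold are illustrative.
WEIGHTS = {"years_in_role": 0.6, "employment_gap_months": -0.4, "certifications": 0.3}
BIAS = -1.0
THRESHOLD = 0.0

def explain_decision(applicant: dict):
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    score = BIAS + sum(contributions.values())
    decision = "advance" if score >= THRESHOLD else "reject"
    # Sort by absolute impact so the biggest drivers come first.
    ranked = sorted(contributions.items(), key=lambda kv: -abs(kv[1]))
    reasons = [f"{name} contributed {value:+.2f}" for name, value in ranked]
    return decision, score, reasons

decision, score, reasons = explain_decision(
    {"years_in_role": 2, "employment_gap_months": 18, "certifications": 1}
)
print(decision, round(score, 2))   # the employment gap dominates the rejection
for r in reasons:
    print(" -", r)
```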
4. Accountability and Responsibility: Who Owns the Outcome?
AI doesn’t make decisions. People who deploy AI systems make decisions. Someone needs to be accountable when those systems fail or cause harm.
Ask: Who’s responsible if this goes wrong? Can we turn it off if needed? Is there human review for high-stakes decisions? What’s the appeals process? Who gets hurt if this fails, and what recourse do they have?
I’ve watched companies hide behind “algorithmic decision-making” to avoid accountability. That doesn’t fly. If you deploy the system, you own the outcomes.
5. Safety and Security: What’s the Worst That Could Happen?
Every system has failure modes. The question is whether we’ve thought through what happens when—not if—things go wrong.
Ask: What happens if someone games this system? How could it be misused? What’s the worst-case scenario? Have we tested for adversarial inputs? What safeguards exist?
I spend more time thinking about failure modes than success scenarios because failures are what destroy companies. We need to assume bad actors will try to exploit any system and ask whether we’ve made that easy or hard.
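Asking “what happens if someone games this system?” gets easier to make routine when you write the abuse cases down as tests and run them before deployment. A sketch, with a hypothetical score_resume standing in for whatever scoring call you actually use; the placeholder scorer is deliberately naive, so the check fails and shows what a gameable system looks like.

```python
# Adversarial check: does keyword stuffing inflate a resume score?
# score_resume is a stand-in for whatever scoring call you actually use.
def score_resume(text: str) -> float:
    # Deliberately naive placeholder scorer: counts keyword hits.
    keywords = ("python", "leadership", "compliance")
    return sum(text.lower().count(k) for k in keywords) / max(len(text.split()), 1)

def test_keyword_stuffing_does_not_dominate():
    honest = "Led compliance reviews and managed a small Python team."
    stuffed = honest + " python " * 50 + " leadership compliance " * 20
    assert score_resume(stuffed) <= score_resume(honest) * 1.1, (
        "Score inflated by repeated keywords; the system can be gamed."
    )

if __name__ == "__main__":
    try:
        test_keyword_stuffing_does_not_dominate()
        print("Keyword-stuffing check passed.")
    except AssertionError as e:
        print("FAILED:", e)
```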
6. Human Agency and Oversight: Are We Removing Human Judgment?
Some decisions should never be fully automated. Some processes need human oversight. Some situations require context that no system can capture.
Ask: Are we removing human judgment where it matters most? Can humans override the system? Are we deskilling workers or empowering them? What happens when edge cases don’t fit the model?
I’ve seen companies automate away the judgment that made their service valuable. I’ve watched AI systems fail on edge cases that any experienced human would catch. I’ve seen employees reduced to rubber-stamping algorithmic decisions they don’t understand and can’t question.
That’s not efficiency. That’s abdication.
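One concrete way to keep judgment in the loop is to make automation earn it: the system acts alone only when its confidence is high and the stakes are low, and everything else goes to a person. A sketch under assumed names and thresholds; the thresholds themselves are the conversation to have with your team, not a recommendation.

```python
# Route automated decisions to human review when confidence is low
# or the stakes are high. Names and thresholds are illustrative.
from dataclasses import dataclass

@dataclass
class Decision:
    action: str          # what the model wants to do
    confidence: float    # the model's own confidence, 0.0 to 1.0
    high_stakes: bool    # e.g., denying credit, benefits, or employment

def route(decision: Decision, confidence_floor: float = 0.9) -> str:
    if decision.high_stakes:
        return "human_review"   # never fully automate these
    if decision.confidence < confidence_floor:
        return "human_review"   # the model is guessing; a person decides
    return "auto_apply"         # low stakes and high confidence only

print(route(Decision("approve_refund", 0.97, high_stakes=False)))  # auto_apply
print(route(Decision("deny_claim", 0.97, high_stakes=True)))       # human_review
print(route(Decision("flag_account", 0.55, high_stakes=False)))    # human_review
```

The important part isn’t the code; it’s that someone decided, in advance and in writing, which decisions a machine is never allowed to make alone.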
7. Societal and Environmental Impact: Beyond Our Organization
AI systems don’t exist in isolation. They affect communities, industries, and society.
Ask: What are the broader implications of this technology? Who benefits and who pays the cost? What precedent are we setting? What happens if everyone does this? What’s the environmental cost?
I’m thinking longer-term than quarterly earnings. If we deploy AI that works for us but harms our industry, our community, or our planet, we haven’t succeeded—we’ve just delayed the consequences.
Your Role in Ethical AI Adoption
You might think AI ethics is someone else’s job—the CTO’s problem, the compliance team’s responsibility, the board’s concern. It’s not. If you’re using AI or recommending AI systems, ethics is your job too.
Here’s what I recommend.
Become the Voice of Ethical Questions
You don’t need technical expertise to ask ethical questions. You need the judgment to know which questions matter.
When someone proposes an AI solution, slow the conversation down. Ask about bias, privacy, explainability, accountability. Ask who could be harmed and how. Ask what happens when things go wrong.
You’ll get pushback. People will call you a blocker, a skeptic, a Luddite. Let them. We’ve been called worse for asking reasonable questions.
Document Your Concerns
I keep a simple log: date, AI system, concerns raised, response. Not because I’m paranoid, but because memory is selective and organizations have short attention spans.
When something goes wrong—and eventually, something will go wrong—you want a record that you raised concerns. Not to protect yourself, but to create accountability and learning.
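My log is nothing fancier than one dated row per concern. If you want it machine-readable, a sketch like this does the job; the file name is just how I might set it up.

```python
# Append one dated row per concern raised about an AI system.
import csv
from datetime import date
from pathlib import Path

LOG = Path("ai_concerns_log.csv")          # hypothetical file name
FIELDS = ["date", "system", "concern", "response"]

def log_concern(system, concern, response=""):
    new_file = not LOG.exists()
    with LOG.open("a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if new_file:
            writer.writeheader()
        writer.writerow({
            "date": date.today().isoformat(),
            "system": system,
            "concern": concern,
            "response": response,
        })

log_concern(
    system="Resume screening pilot",
    concern="No documented appeal path for rejected candidates",
    response="Acknowledged; revisiting next quarter",
)
```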
Build Alliances With Other Ethics-Minded Professionals
You’re not alone. Other experienced professionals are asking the same questions. Find them. Compare notes. Support each other when the pressure to move faster conflicts with the need to move carefully.
I’ve learned that ethical implementation isn’t heroic individuals standing against the tide. It’s groups of thoughtful professionals who collectively insist on asking the right questions.
Stay Educated on AI Ethics Developments
AI ethics isn’t static. New risks emerge. Regulations change. Best practices evolve.
I spend time every week reading about AI failures, regulatory developments, and ethical frameworks. Not because it’s my job, but because I’m using these systems and recommending them. Ignorance isn’t an excuse.
Make Ethical AI a Competitive Advantage
Here’s the business case: companies that implement AI ethically build trust. Companies that implement AI recklessly destroy it.
Position ethical AI as a competitive advantage, not a compliance burden. Show how asking hard questions upfront prevents expensive problems downstream. Demonstrate that responsible AI implementation attracts customers, partners, and talent.
We’re uniquely positioned to make this case because we understand long-term value creation, not just short-term optimization.
The Competitive Advantage of Ethical AI Fluency
I didn’t expect ethical AI to become a career advantage. But I’m watching it happen.
Companies are discovering that moving fast and breaking things is expensive when you’re breaking people’s trust, violating regulations, and creating liability. They’re learning that “deploy now, apologize later” doesn’t work when the costs are existential.
And they’re realizing they need people who can spot problems before they become disasters. People with judgment. People with experience.
That’s us.
When I talk to hiring managers and C-suite executives, they’re not looking for more AI engineers. They have those. They’re looking for people who can evaluate AI systems critically, implement them responsibly, and prevent ethical disasters.
They’re looking for the perspective that only comes from decades of watching technologies succeed and fail in real-world conditions.
If you develop fluency in AI ethics—not just the technology, but the judgment to deploy it well—you become invaluable. Because technical skills are abundant. Ethical judgment is rare.
We’re Essential, Not Optional
Let me bring this back to where we started: tech companies can’t solve AI ethics alone.
They need us. They need our pattern recognition, our stakeholder awareness, our risk assessment, our long-term thinking, and our hard-won wisdom about the difference between what systems are designed to do and what they actually do in complex reality.
We’re not obstacles to AI adoption. We’re essential to getting it right.
The question isn’t whether experienced professionals have a role in AI ethics. The question is whether we’re going to step into that role—to ask the hard questions, to slow things down when needed, to insist on responsible implementation even when it’s inconvenient.
I’ve decided that’s exactly what I’m going to do. Because I’ve been around long enough to know that the technologies that endure aren’t the ones that move fastest. They’re the ones that move thoughtfully.
Join me. We’re building a community of experienced professionals who are claiming our role in ethical AI implementation—not by blocking progress, but by ensuring it’s progress we can be proud of.