What it looks like when an engineer's AI objection is principled, and why most leaders aren't ready for that conversation.
A few years ago, I volunteered as a mock interviewer for a Girls Who Code cohort. I was running a practice session (behavioral questions, technical prompts, the usual) when one of the students asked something unexpected: Does your company have a Climate Pledge? Reducing environmental impact matters to me, and I want to make sure the organization shares my values.
I remember thinking: what an excellent question. This student understood that an interview is as much your opportunity to evaluate the organization as it is theirs to evaluate you.
I gave her the company line on the climate pledge I'd heard in all-hands meetings. I had no idea I was practicing for a much harder version of the same conversation.
The 1:1 You're Not Prepared For
Fast forward to now. AI adoption is no longer optional in most tech organizations. It's a strategic imperative, a board-level priority, and increasingly, a performance expectation. If you're an engineering leader driving AI enablement, you're probably spending a lot of time thinking about tools, rollout strategies, measurement frameworks, and how to move your team up the adoption curve.
What you're probably not spending enough time on, and what almost no one is talking about publicly, is what happens when a member of your team shows up to a 1:1 and says: I have a real moral objection to the AI tools we're adopting, and it's about the environment.
That's a different kind of pushback from skepticism or fear, and the concern isn't unfounded.
The data centers powering large language models consume enormous amounts of energy and water. Published research has estimated training runs for frontier models in the range of hundreds to thousands of megawatt-hours, though the companies running these systems rarely disclose precise figures. Water usage for cooling is substantial and largely self-reported. Hyperscalers have made renewable energy commitments, but "carbon neutral" and "carbon free" are very different claims, and the accounting is murky even in the best-case scenarios. Microsoft and Google have both publicly acknowledged that their AI ambitions are creating tension with their sustainability goals. This isn't conspiracy theory; it's a cost that most organizations promoting AI adoption simply aren't reconciling with their ESG commitments.
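If you want to make "hundreds to thousands of megawatt-hours" concrete before that 1:1, a back-of-envelope conversion helps. This is a rough sketch, not a citation: the household figure is an approximate US average I'm assuming for illustration, and the three training-run numbers simply span the published range mentioned above.

```python
# Back-of-envelope scale check: convert training-run energy estimates into
# equivalent household-years of electricity.
# Assumption (mine, not a disclosed figure): a US household uses roughly
# 10.5 MWh of electricity per year. The estimates below just span the
# "hundreds to thousands of megawatt-hours" range cited above.

HOUSEHOLD_MWH_PER_YEAR = 10.5  # approximate US average, assumed for illustration

def household_years(training_run_mwh: float) -> float:
    """Energy of one training run expressed as years of household electricity."""
    return training_run_mwh / HOUSEHOLD_MWH_PER_YEAR

for estimate_mwh in (300, 1_300, 3_000):  # low / mid / high end of the cited range
    print(f"{estimate_mwh:>5} MWh ≈ {household_years(estimate_mwh):,.0f} household-years")
```

The point isn't precision. It's that even the conservative end of the range is large enough that "we'll sort out the accounting later" won't survive a thoughtful engineer's scrutiny.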
So when an engineer on your team raises this concern, they're not being obstructionist. They're being perceptive. You can acknowledge the legitimacy of their objection and still need them to ship. In most leadership situations, those two things coexist just fine. In an AI-first world, they might not, and engineers can tell when you're pretending otherwise.
Two Kinds of Objection, and Why Conflating Them Is a Leadership Failure
Here's where I want to be precise, because the distinction matters.
Some objections to AI are about a story someone is telling themselves, and that story deserves curious examination, not dismissal, but also not unquestioned acceptance. Brené Brown's framework is useful here: the story I'm telling myself is... When an engineer expresses fear that AI is making them less valuable, less employable, or less skilled, that's often a narrative worth getting curious about. What does the research actually say? What's the evidence? What would it mean for them if that story were true, and what would it mean if it weren't? A good leader creates space to examine that narrative together, without either dismissing the fear or uncritically reinforcing it.
But an environmental objection is categorically different. You cannot Brené Brown your way through this one. The energy consumption is real. The water usage is real. The tension between your organization's climate pledges and its AI roadmap is real. A leader who responds to that concern with "let's examine the story you're telling yourself" isn't practicing psychological safety. They're weaponizing a coaching framework to manufacture consent, and that's a betrayal of trust.
My job as a leader is not to convince someone that the environmental cost of AI is acceptable or worth it. That's their value judgment to make, not mine. What I can do is listen, acknowledge the legitimacy of the concern, be honest about what I don't know, and think carefully together about what it means for how we work.
When It's Both Things at Once
People are not always experiencing one thing at a time. An engineer might have a real and legitimate environmental concern and be simultaneously navigating fear about their own relevance in an AI-accelerated industry. Those two things can coexist. The environmental objection might be the one that's easiest to articulate, while the identity threat underneath it is harder to name.
A leader who dismisses the environmental concern as cover for something else is being arrogant. A leader who never creates space for the identity dimension is leaving the harder conversation on the table. The harder and more valuable skill is holding both: taking the stated concern seriously on its own terms while also staying attuned to what else might be present, and letting the person lead you to it when they're ready.
This is not a technique you can script. It requires actually caring about the person in front of you, and you won't find a framework for it in any AI adoption playbook.
We Are Failing at the Human Side of This Transition
The volume of content about AI adoption is staggering. Frameworks, benchmarks, vendor case studies, productivity metrics, prompt engineering guides, maturity models. The feeds are full of it.
What's nearly absent is honest conversation about what it feels like to be an engineer living through this transition: the disorientation, the professional identity questions, the moral weight of working with tools that have real-world costs, the exhaustion of continuous tooling change, the quiet anxiety about whether the skills you've spent years building still matter.
These aren't soft concerns or obstacles to be managed. They're the human experience of technological transformation, and leaders who ignore them aren't accelerating adoption. They're eroding the trust they'll need when things get hard.
I'm convinced that taking people seriously is what makes AI adoption actually work, not something that slows it down. The teams willing to raise hard questions are also the teams capable of doing the careful, rigorous work that AI adoption requires. You don't get good judgment, honest feedback loops, or careful verification from people who've learned to keep their concerns to themselves.
The best leaders I know can hold the strategic imperative and the human cost together without pretending either one doesn't exist. That's a harder balance than it sounds, and it's exactly what we need right now.
What I Don't Have Figured Out
I don't want to pretend that I have a clean answer to the environmental objection question. I know what a good response looks like in that conversation: listen, don't dismiss, be honest about the tradeoffs, respect the values at stake. But the organizational question is harder.
What do you do when someone's principled objection is in genuine tension with the direction the team (and the industry) is moving? How do you honor someone's values while also being honest with them about what non-adoption might mean for their career trajectory in a field that is accelerating away from them? How do you reconcile a sustainability commitment with an AI mandate without producing something that's just corporate doublespeak?
I don't think there are clean answers yet. I think we're early, the industry hasn't worked this out, and most organizations are quietly hoping the question doesn't come up.
It will come up. The engineers who are going to raise it are often the most thoughtful ones on your team, precisely because they care enough about the work to want to do it with integrity.
The only question is whether you're prepared for that conversation.