The Impossible AI Strategy
- Brian Fleming, Ed.D.

- Jul 17, 2025
- 7 min read
Why Godlike Technology is Really Hard to Talk About

The conference room went quiet as seven college leaders sat around the table, laptops open, all looking at the same presentation about TechFlow, a new "AI-powered campus efficiency platform."
They'd gathered because they needed to make a decision: move forward with this obviously powerful new tool, or wait until the college had a firmer grip on its "AI Strategy."
Everyone brought a unique perspective to the table:
The computer science professor had been studying machine learning for years. She understood how the algorithms worked and thought TechFlow could genuinely help students succeed. She was impressed.
The philosophy professor had different concerns—he kept asking questions about consent and whether students would know when they were interacting with AI. What happened to their essays and personal information once they went into the system?
The award-winning teacher everyone consulted about pedagogy was cautiously optimistic. She could see how TechFlow might help her give better feedback to more students, but worried about losing the human connection that made teaching meaningful.
The administrator who'd streamlined half the campus's processes was completely sold. He saw efficiency gains everywhere and couldn't understand why anyone would hesitate.
The librarian was intrigued by TechFlow's research capabilities but felt overwhelmed by the decision—this seemed bigger than just choosing a new research tool.
The student affairs director kept thinking about campus culture. Would students feel like they were being watched and analyzed? Would this make the college feel more corporate and less personal?
Legal counsel had a stack of questions about data privacy, liability if the AI made mistakes, and compliance with federal regulations that she wasn't even sure applied yet.
Why TechFlow? Why Now?
Three months earlier, TechFlow had landed on the Provost’s desk with what sounded like a simple value proposition. They were working with universities across the region, and…oh yeah…their new AI features could do anything a human writer could do, like:
Write emails to students
Summarize meetings
Draft policy documents
The productivity gains were enormous. The cost savings were compelling.
The Provost could think of a thousand different applications, so she asked this group to look into it.
Even the philosophy professor had to admit he was impressed.
TechFlow's AI could generate a personalized outreach email to struggling students in thirty seconds—something the college desperately needed since they couldn't afford enough advisors and retention was declining.
It could turn a rambling two-hour committee meeting into a crisp action-item summary. That was promising given the sheer volume of committees and task forces that gathered across campus every day.
It could draft an entire faculty handbook revision overnight. What normally took three years for even a simple update could now be done in a fraction of the time.
But Something Still Felt Off
Now, sitting in this meeting, something wasn't adding up.
"So," one of them said, breaking the silence, "what do we think about this proposal?"
The pause that followed was telling. You could see it in their eyes. They knew they should have an opinion. They knew this mattered. But somehow, the words wouldn't come.
Finally, someone around the table raised their hand.
"Look," someone said, "I'm just going to be honest here. I don't even really know how to make sense of AI in general, let alone whatever TechFlow is doing."
Someone else chimed in: "What about our AI strategy? Should we just wait for that?"
Then someone else: "What is our AI strategy going to be?"
When someone finally asked the honest question—"What exactly is an AI strategy anyway?"—the relief in the room was palpable.
Heads nodded. Someone closed their laptop. Everyone had been thinking the same thing.
The Missing Piece
I've watched scenes like this play out with increasing frequency for the past two years, and I admit I have the same questions. On one hand, yes, every college should have an AI strategy. On the other hand, what does that even mean? What should it include? What should it not include?
But on a much deeper level: How do we even begin to come together and talk about building an AI strategy?
Recently, I've turned to an unexpected source for guidance: Edward O. Wilson, the legendary Harvard sociobiologist. A few years ago, The New York Times asked Wilson whether humans would be able to solve the major crises facing us over the next century.
"Yes," Wilson replied, if we recognize one thing...
“We have Paleolithic emotions, medieval institutions and godlike technology.”
In other words, we come to any discussion about our “AI strategy” with three things battling us every step of the way:

Paleolithic emotions that treat anything new as a threat
Medieval institutions built to move slowly and deliberately
Godlike technology that none of us fully understands

The more I’ve thought about Wilson's framework, the more I’ve realized it explains everything about why those AI conversations feel so strange. And I think it really could have helped that group around the table.
The Problem Is Our Emotional Mismatch
Back in that TechFlow meeting, Wilson's framework explained exactly what was happening.
The administrator shifts in his chair. TechFlow's AI could automate budget reports, they'd said. That should be exciting. Instead, he's wondering: if a computer can do this analysis, what does that say about human expertise? Ancient warning systems are firing.
The professor is having her own struggle. TechFlow showed how its AI could generate course materials in minutes. Amazing, right? Except she spent years learning to craft the perfect lesson. Now she's supposed to be enthusiastic about a machine doing it instantly? Every instinct says this threatens something fundamental about her work.
The legal counsel is quietly panicking. TechFlow's system would integrate with student records, predicting which students need intervention. But what if it's wrong? What if there's a data breach? Loss-aversion circuits are screaming warnings about all the things that could go catastrophically wrong.
These aren't character flaws. These are features of human psychology that kept our species alive. But they make it really hard to think clearly about artificial intelligence.
Committees Alone Can't Keep Up
Then there are our institutions. And here's where things get really interesting.
Someone suggests they form a task force to study TechFlow's proposal. Everyone nods. It's what colleges do. When faced with something new and complex, we “committee” it to death.
But think about what this actually means. They're trying to evaluate technology that evolves every few months (sometimes weeks) using a decision-making process designed for a world where the biggest change might be a new bell tower.
TechFlow probably releases updates faster than this task force can schedule meetings. By the time they write their report, TechFlow 2.0 will be out. By the time they implement their recommendations, TechFlow 3.0 will have features they never imagined.
It's like trying to analyze a moving train from a horse and buggy.
I once watched a university spend eighteen months forming committees to study whether to adopt an AI writing assistant. Eighteen months. The technology had fundamentally changed three times during their deliberations. Their final recommendation was to study it for another six months.
Meanwhile, their students were already using more advanced AI tools in every class.
The God Problem
And then there's the technology itself.
TechFlow's AI can predict which students will drop out before those students even know they're struggling. It can generate course materials that adapt to how each student learns. It can spot patterns in campus data that no human would ever notice.
This is impressive—and genuinely unsettling.
When someone asks TechFlow's rep how their algorithm makes predictions, the answer is essentially: "It processes millions of data points in ways we can't fully explain."
The system works, but nobody around that table really understands how.
How do you have a rational institutional conversation about adopting something you can't understand?
It's like being asked to vote on whether to harness lightning when you're not entirely sure what lightning is.
Back to the Conference Room
So there they sit. Seven intelligent, experienced leaders trying to make a decision about TechFlow.
Their emotions are telling them this is either a threat or a miracle—and they're not sure which. Their institutional processes are telling them to study it until they understand it completely. The technology is telling them understanding isn't really possible, but they need to decide anyway.
No wonder the conversation feels impossible.
The silence stretches. Someone suggests forming a subcommittee.
And TechFlow? They're probably meeting with twelve other colleges tomorrow, at least one of which has figured out how to have this conversation. Or at least figured out how to make decisions without having it.
What to Do Next Time
Here's what I've learned from watching dozens of these meetings: The impossible feeling isn't going away. But you can work with it instead of against it.
Next time an AI solution lands on your desk—and there will be a next time—try this simple framework. Think of it as your survival guide for the impossible conversation.
Name what's happening. Start the meeting by acknowledging that everyone probably feels a little lost. Say it out loud: "This is one of those AI conversations where none of us really knows what we're talking about." You'll be amazed how much tension that releases.
Separate the emotions from the decision. Those ancient warning signals going off in your brain? They're normal. Your fight-or-flight response to change? Totally expected. Don't try to logic your way past them. Just acknowledge they're there and keep going.
Time-box the institutional response. Give yourself a deadline that's shorter than the technology's evolution cycle. If the AI tool updates every three months, make your decision in six weeks. You can always adjust later, but you can't “un-miss” this opportunity.
Ask the human question. Instead of "How does this AI work?" ask "What does this mean for the people who work here?" Instead of "Is this secure?" ask "How will this change what we do all day?" The technology will always be a black box. The human impact is something you can actually understand.
Make a small bet. You don't have to transform your entire institution. Pick one department, one process, one problem. Try it for a semester. See what happens.
The goal isn't to solve the AI puzzle. It's to make progress while the puzzle keeps changing.
Because here's the thing: While you're forming committees to study artificial intelligence, artificial intelligence isn't waiting for your committee to wrap up and deliver its recommendations.
It's already here. It's already changing things. The only question is whether you're going to be part of that change or surprised by it.
And honestly? After watching all those impossible conversations, I think being part of it—even if you don't fully understand it—beats being surprised by it.