
AI Slop Strategy

  • Writer: Brian Fleming Ed.D
  • Nov 3, 2025
  • 6 min read

Updated: Dec 30, 2025


The video lasted no more than thirty seconds.


A group of digital media students had used Sora, one of the latest AI video generators, to create a short clip of their college president strolling across campus, smiling as he waved to students, and saying “Well, hey there” over and over (it was the greeting he was most known for).


The video itself, dubbed "AI President Strolls Around Campus," was pretty harmless. The president found it amusing, actually.


Then the comments started rolling in.


“Can we get AI to fire Professor Hargrove?”

“Bro finally looks approachable.”

“Wild that the fake one is more likable than the real one.”

“Low-key more convincing than the real guy.”

“Wait…this isn’t real?”


Within an hour, the video spread far beyond campus. People with no connection to the college shared and remixed it, adding soundtracks and captions. Someone stitched it with footage from the president’s last address. Another turned it into a meme about rising tuition.


Pretty soon, the president was roasting marshmallows with Mr. Rogers.


It still wasn’t malicious. It was just the internet doing what it always does. But by the time the college’s leadership team saw it, the clip had joined an endless churn of what commentators now call AI slop: content without context, realism without responsibility.


Think about what’s been clogging your TikTok or YouTube feeds lately:


The AI news anchor who never blinks


The travel vlogger with Tom Cruise’s voice describing an Eastern European country that doesn’t exist


The influencer whose face keeps changing mid-sentence


The Pope in an oversized white puffer jacket reviewing electric toothbrushes


These videos are at once believable enough to share, meaningless enough to ignore, and just polished enough to keep you watching (and guessing).


But this video wasn’t just another piece of internet noise. It featured the college’s own president, someone everyone on campus instantly recognized.


Suddenly, the absurd felt personal. No one knew what to do about the video, if anything.


“Is that real?”

“They made this with AI? That fast?”

“Can we take it down?”

“Should we do something before donors start to see it?”


Questions arrived faster than answers. It wasn’t a scandal, but it felt like a PR crisis waiting to happen.


Then came the line that quieted the room. “Do we even have a strategy for these kinds of things?” someone asked.


Heads nodded. They didn’t. But the word “strategy” sounded responsible enough to warrant a follow-up meeting. It felt like the only way to feel in control again.


But in an age of AI deepfakes, is anyone really “in control”? Or is that kind of the point?


What “Strategy” Actually Means

In moments like these, strategy easily becomes an elastic word designed to mean anything that might bring clarity and restore order.


  1. For one person, strategy means gaining a competitive advantage—getting ahead, looking innovative, reassuring everyone that indeed, you’re “in control.”


  2. For another, it means managing risk: staying out of trouble and avoiding the next headline.


  3. For others, it’s about alignment and getting everyone moving in the same direction.


  4. Some think of storytelling. Strategy is the narrative your institution tells about its future.


  5. And for a few, strategy means learning: a living process of awareness, experimentation, and adjustment.


That last definition holds up when the ground shifts. Because when the world moves this fast, strategy isn’t a plan. It’s perception.


Every type of strategy—advantage, risk, alignment, story—depends on the same starting point: knowing what’s really happening before rushing to solutions.


You have to see clearly what problem you're trying to solve before deciding what to do next.


That’s where many leaders falter. They try to move faster than their understanding can, and eventually confuse motion with meaning.


Every strategy, before it becomes a document or a decision, must begin with that kind of situational awareness: the discipline to notice what’s happening, make sense of it, and respond in proportion to what’s real, evident, and actionable.


Strategy = Situational Awareness


Situational awareness, a term best defined by researcher Mica Endsley, describes the ability to perceive, comprehend, and project what’s happening around you.


For this college’s leadership team, situational awareness might have meant asking:


  • How are students using AI on campus right now—not just to “cheat,” but to create videos like these? What are they doing? When are they doing it? And why?


  • What does this video reveal about our culture, about trust, and about our readiness to adapt not just to AI technology, but to its ever-expanding use cases?


  • If this use case (AI videos) continues, which it surely will, what will have to change about how we teach, govern, or communicate?


In a world flooded with AI slop, your AI strategy should start with this kind of basic awareness. Think of it as the ability to separate “signal from noise,” deep meaning from deepfake mimicry.


That’s where I always turn to a decision-making framework known as the OODA Loop.


What’s the OODA Loop?


The OODA Loop, developed by military strategist John Boyd, came from his study of why some fighter pilots survived dogfights while others didn’t. The best pilots, Boyd found, weren’t the fastest to react. They were the ones who stayed oriented to the situation at hand while everything around them changed.


He called his model the OODA Loop, which stands for Observe, Orient, Decide, Act.

Boyd’s diagram looks complex, but its message is simple: every decision happens inside a loop of observation, orientation, choice, and action, each step feeding back into the next.


  • “Observe” pulls in new data.


  • “Orient” filters that data through culture, experience, and assumptions.


  • “Decide” produces a hypothesis about what to do next.


  • “Act” tests that hypothesis in the real world, generating new observations that restart the cycle.


The faster and more accurately you can move through that loop without skipping orientation, the better you stay aligned with reality as it shifts.


It’s less a checklist than a rhythm, a way to keep learning in motion.
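

To make that rhythm concrete, here is a minimal sketch of the loop in Python. It is only an illustration of the cycle described above: the four function names mirror Boyd’s steps, while the “signals” list, the assumptions set, and the sample input are hypothetical stand-ins for whatever your team is actually watching, not part of Boyd’s model.

```python
# Illustrative sketch only: Boyd's OODA Loop modeled as a repeating feedback cycle.
# The step names mirror the article; the "signals" list, assumptions set, and
# sample input are hypothetical stand-ins, not part of Boyd's model.

def observe(environment: dict) -> list[str]:
    # "Observe" pulls in new data from the situation.
    return list(environment["signals"])

def orient(signals: list[str], assumptions: set[str]) -> list[str]:
    # "Orient" filters that data through experience and assumptions,
    # keeping only what the team has not already accounted for.
    return [s for s in signals if s not in assumptions]

def decide(relevant: list[str]) -> str:
    # "Decide" produces a hypothesis about what to do next.
    return f"pilot a small response to: {relevant[0]}" if relevant else "keep watching"

def act(hypothesis: str, environment: dict, assumptions: set[str]) -> None:
    # "Act" tests the hypothesis in the real world; its outcome becomes
    # a new observation, which restarts the cycle.
    assumptions.update(environment["signals"])  # what we just acted on is now "known"
    environment["signals"].append(f"outcome of: {hypothesis}")

environment = {"signals": ["students posting AI videos of the president"]}
assumptions: set[str] = set()

for cycle in range(3):
    relevant = orient(observe(environment), assumptions)
    hypothesis = decide(relevant)
    act(hypothesis, environment, assumptions)
    print(f"cycle {cycle + 1}: {hypothesis}")
```

The detail that matters is the feedback: each pass through “act” produces a new observation, so the loop never really ends.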


Putting Situational Awareness into Practice


So what does that look like in action? Start by cultivating habits that keep your institution awake to what’s changing.


1. Slow the Observation Loop


Spend time seeing what’s actually happening.


  • Where are AI tools appearing in teaching, marketing, or operations?

  • Who’s experimenting, and what are they discovering?


Observation keeps you grounded when everything feels in motion.


2. Surface Assumptions Before You Decide


In every meeting, pause and ask:


  • “What do we think we know about this technology?”

  • “What beliefs are shaping our assumptions?”


This is the orient step, the one most likely to prevent the next unnecessary task force.


3. Treat Every Action as a Test


Make each move a hypothesis.


  • “We’ll pilot this for one semester and see what we learn.”

  • “We’ll start small, then adjust.”


This approach keeps decisions reversible and learning continuous.


What the Leadership Team Could Have Done Next


Here’s how that framework might have changed the meeting that started all this.


Instead of drafting a definitive AI strategy (whatever that might have been), let’s say they started simply by observing. Campus communications gathered examples of how students were already using AI, from short films to class projects. IT showed the tools appearing on campus networks. The provost mentioned a faculty member experimenting with ChatGPT to design assignments.


For the first time since the video broke, the room grew quiet enough to see the pattern instead of the panic.


Next came orientation. They asked:


  • Are we afraid of this technology or curious about it?

  • What values do we want to protect as it spreads?

  • What might this teach us about our culture, not just our policies?


That conversation shifted the mood. The problem stopped being “What do we do about this video?” and became “What are we learning from it?”


They decided to form a small working group, not to write a policy, but to map what they did not yet understand.


Then they acted, hosting a campus forum led by faculty from the digital media program to listen before declaring anything. Students, other faculty, and staff showed up with questions, examples, and a surprising amount of humor.


Each loop through the process produced new insight. Each round built rhythm.


The point was not to rush toward some ill-defined strategy disguised as control; it was to stay focused while the world accelerated. That rhythm—observe, orient, decide, act—would eventually turn awareness into better decision making.


AI Slop Will Start to Look Different; The Test Will Be the Same


Every institution will face a moment like this. The details will change, but the rhythm will not: something unexpected happens, the pace quickens, and everyone feels the pressure to act before understanding catches up.


That’s when your leadership either narrows or expands. You can react to protect control, or you can pause to build awareness.


One response closes the loop. The other opens it.


Leaders who thrive in the age of AI won’t be those who move fastest or speak loudest. They’ll be the ones who stay oriented—seeing clearly, deciding deliberately, and remembering that awareness is the real measure of strategy.
