
The New Fog Of Footage

AI videos, and public trust

By Dr. Mozelle Martin · Published about 3 hours ago · 5 min read

Modern life now includes a daily guessing game nobody asked for. Every time a video appears on X, Facebook, TikTok, or any other platform, you have to pause and ask a question that should never have been necessary.

  • Is this real or AI?

That pause says more about society’s current condition than the clip itself.

Before going any further, the language deserves clarity. When I use the word synthetic, I am talking about footage that did not come from a real event in the real world. Synthetic simply means constructed. Most synthetic clips today are generated by artificial intelligence (AI), and the ones that are not still function the same way. They imitate reality without being a part of it.

  • AI-generated is the method.
  • Synthetic is the classification.

For the purpose of this article, they point to the same problem.

The scale of that problem keeps getting underestimated. Too many people still treat synthetic video as a toy or a novelty. That belief expired years ago. What used to require software, training, and time can now be fabricated in minutes by someone with nothing more than an app.

The barrier between truth and imitation is so thin you could slide a hair through it. That erosion did not happen by chance. It happened because confusion creates engagement, and engagement is currency.

You do not need conspiracy theories to understand why authenticity has become optional. The incentive structure is loud enough.

  • Misinformed viewers are more reactive.
  • Reactive viewers stay online longer.
  • Longer engagement means higher revenue.

The more disoriented the public becomes, the easier it is for platforms, opportunists, and professional agitators to steer attention. Confusion is not a glitch. It is an environment created deliberately.

AI video is not simply altering what people see. It is altering how people interpret what they see.

Real footage carries the irregular fingerprints of real environments:

  • Light behaves with stubborn imperfections.
  • Shadow angles shift.
  • Human reactions include hesitation and micro-tension the body cannot hide.

Synthetic footage tries to mimic all of that but often smooths the world to the point of sterility. Even the advanced systems still have trouble with:

  • mouth-to-audio synchronization,
  • how fingers interact with objects,
  • weight distribution during motion,
  • reflections on glass or water, and
  • the way environmental sound settles into space.

They can fabricate emotion on demand, but they cannot fully replicate the biology behind it. Yet these imperfections are easy to miss once the emotional circuitry of the viewer has been triggered. Most people do not see first. They feel first, then assume the feeling came from an authentic source.

The human nervous system is not engineered for this. When exposed to visual information it cannot trust, the brain responds with patterns already familiar in early relational trauma. Doubt. Irritation. Mental fatigue. A quiet strain that feels like something is slipping out of alignment.

  • Decisions become more difficult.
  • Confidence in one’s own judgment starts to thin out.

In forensic and therapeutic settings, this looks like cognitive overload and emotional wear, even when the cause is digital rather than interpersonal.

And this is where synthetic media becomes more than a technological issue. It becomes a behavioral one.

There is an expanding economy of hater-creators who operate on YouTube, Reddit, Discord, and similar platforms. These are individuals with zero training or education in the fields they attack, yet they market themselves as authorities while dismissing the actual experts. The pattern is predictable.

  • Projection,
  • distortion, and
  • outright dishonesty.

They generate content designed to look investigative while knowing nothing about the discipline they claim to expose. Their audiences grow because they present themselves as truth-tellers, not because they possess any real insight. The so-called proof they circulate is usually edited, reframed, or fully manufactured. Some of it is synthetic from the start. Some of it is AI-generated. Some of it is real footage twisted into false narratives.

The public rarely sees the difference. I have lived that contradiction directly. People have attacked my work online while privately admitting they respect it, but that is a topic for a future article. The important thing here is that the contradiction is not personal. It is a symptom of a culture that values performance over accuracy.

This is why viewers must stop treating every viral video as fact.

  • Outrage videos spread quickly because they bypass logic. They rely on the viewer’s instinctive reaction, not on the presence of real evidence.
  • Fabricated proof can look convincing. So can manipulated recordings.

Without training or context, even intelligent viewers can get pulled into someone else’s agenda.

The deeper risk is not just that people will believe lies. The deeper risk is that people will stop believing anything.

  • If every clip can be faked and every voice cloned, the public begins to rely on emotional truth instead of factual truth.
  • People start to trust whatever aligns with their preexisting worldview.

In that environment, courts struggle, institutions weaken, families fracture, and community trust collapses. Society needs a shared baseline of reality to function.

Synthetic media is eroding that baseline with quiet efficiency.

Disclosure is an ethical line that should neither blur nor be optional. When a video is synthetic or AI-generated, people have the right to know. Not because entertainment is harmful, but because consent matters. People deserve clarity about the conditions under which they form judgments. If synthetic media is released with no labeling, it becomes psychological interference disguised as content.

It alters perception without permission.

The way forward is not paranoia. The way forward is literacy. Viewers need to understand how real footage behaves.

  • They need to study light, sound, motion, and human behavior instead of relying on the narrative attached to the clip.
  • They need to recognize that physics does not lie and that the body reveals truths even when a synthetic face tries to hide them.
  • They need to understand that hater-creators, attention-chasers, and drama merchants often rely on manufactured credibility. Their proof is only proof until examined.

Reality has not disappeared. It has just become harder to identify at a glance. The real world still leaves signatures. Human behavior still leaves tells. The brain is still a capable analyst when taught what to look for.

Facts still matter, even when synthetic noise tries to drown them out.

This is why, if I could issue two mandatory survival tools to the entire population, they would be emotional intelligence and critical thinking. Both are disappearing faster than anyone wants to admit. During my lifetime, both used to be common. Now they feel endangered.

Sources That Don’t Suck:

Harvard Kennedy School Misinformation Review

The Royal Society – Online Information Environment Reports

National Institute of Mental Health

MIT Media Lab – Misinformation and Deepfake Research

Pew Research Center – News and Information Online


About the Creator

Dr. Mozelle Martin

Behavioral analyst and investigative writer examining how people, institutions, and narratives behave under pressure—and what remains when systems fail.
