I’ve been making videos of myself doing absurd things I’ve never done. Competing in a Japanese game show, running away from chickens. Pitching a product on Shark Tank. Speaking in cadences that are mine but perfected. In some cases they are smoothed of the verbal tics that make me human. In others, the result is gibberish at absurd levels.
I’m using Sora. The prompts are mine. The face is mine. The ideas are mine. But I didn’t animate a single frame.
Am I creating? Or am I commissioning?
This isn’t an academic question anymore. It’s happening in real time, in our hands, reshaping what we mean when we say someone “made” something. And we need to talk about it before the answer gets decided for us by people who profit from our confusion.
The Architect Who Cannot Build
I’m also building (or “building”) an app using Lovable. I describe what I want. The AI generates the code. I refine my descriptions. It iterates. Eventually, something functional emerges that matches my vision.
I cannot code. Not really, anyway. I can read it enough to understand what’s happening, but I couldn’t write it from scratch. Does that disqualify me from calling myself the creator?
We’ve answered this question before, just not in this context. Architects have been considered creators for millennia despite not physically constructing buildings. The vision, the decisions about space and form and purpose: that’s the art. The construction workers executing blueprints, no matter how skilled, aren’t considered the authors of the building.
Film directors don’t operate cameras, edit footage, or compose scores. Kubrick didn’t paint each frame of 2001. Hitchcock famously said actors should be treated like cattle. He cared about his vision, not their method. We don’t question their authorship.
But there’s a difference, and here’s where it matters: those directors worked with other humans. Humans who brought their own interpretation, skill, and creative problem-solving to the collaboration. The cinematographer makes choices. The editor finds rhythms the director didn’t know were there (and sometimes the director lets someone else’s perspective influence the final product). The actor discovers something in a line reading that transforms the scene in a way the screenwriter may not have anticipated.
When I prompt Sora or Lovable, what’s the nature of that collaboration? The AI isn’t bringing creative interpretation. It’s predicting patterns based on training data. It’s sophisticated pattern matching, not artistic partnership.
Or is it? And does that distinction actually matter?

The Michelangelo Problem
Let’s do the thought experiment: If Michelangelo had conceived David in perfect detail (the contrapposto, the expression of concentrated determination, the anatomical precision) but lacked the physical ability to carve marble, would he be any less of an artist?
My first instinct is to say yes, of course, because part of David’s genius is in the execution. The way Michelangelo understood how marble catches light. The technical mastery that let him render veins beneath stone skin. The way the proportions still read as natural even at larger-than-life scale. The decades of practice that made his hands extensions of his imagination.

But then I think about Rubens, who ran what was essentially an art factory. Many paintings sold as “Rubens” were largely executed by apprentices and assistants. He would sketch the composition, maybe paint the faces, and his workshop would complete the rest. We still call them Rubens. His authorship isn’t questioned.
The difference is that we knew the arrangement. The market understood what it was buying. The value was in Rubens’s compositional genius and his supervision, not necessarily in every brushstroke being his. This is part of the debate and subsequent controversy around Dale Chihuly’s work. If an apprentice is doing the glassblowing, who is the artist?
What bothers people about AI isn’t really about the tool. It’s about the opacity and the speed. When I generate a video with Sora, I’m collapsing what would have traditionally required a team of animators, modelers, and technical directors into a prompt. The craft that would have been visible in the credits, that would have taken months, happens in minutes behind an inscrutable black box.
We’re uncomfortable because we can’t see the work. And we’ve been taught that creation “is” work, that value comes from visible effort.
The Death of Craft, or Its Liberation?
If intention alone can produce output indistinguishable from skilled execution, what happens to craft?
Photography faced this question. Painters initially dismissed it as mechanical reproduction, not art. It didn’t require the years of training to mix pigments and render light. Point and click. Anyone could do it.
But photography didn’t kill painting. Instead, it freed painting from the obligation to represent reality. Without photography, we probably don’t get impressionism, cubism, or abstract expressionism. We might still be churning out realistic portraits and landscapes because that’s what the market demanded and what demonstrated mastery.
The technology liberated artists to ask different questions. Not “can I represent this accurately?” but “what can I express that a camera cannot?”
Synthesizers sparked similar panic in music. “Real” musicians said they weren’t instruments because they didn’t require the physical discipline of strings or breath control. Now they’re just… instruments. Tools that expanded what was musically possible.
But here’s where AI feels different: those tools still required significant skill to use well. A great photographer isn’t just someone who can press a button. They understand composition, light, timing, editing. Synthesizers require musical knowledge to create something compelling.
The skill didn’t disappear. Instead, it shifted. You needed different knowledge, but you still needed knowledge.
With AI, the skill floor has dropped dramatically. It didn’t drop to zero. Good prompting is genuinely harder than it looks. But it is low enough that the gap between novice and expert output is narrower than it’s ever been.
Is that democratization? Or is it the devaluation of expertise?
Both, probably. And we don’t know yet which effect will dominate.
The Authenticity Trap
When I create videos using my own face, doing things I’ve never done, there’s a strange question embedded in the output: Is that me?
It’s my likeness. My facial features, my proportions, my expressions translated into movement. But I never made those movements. The video is simultaneously completely me and not me at all.
This feels newer than it is. Actors have always embodied characters they aren’t. Special effects have long shown people doing impossible things. Stunt doubles have made actors look capable of physical feats they cannot perform.
The difference is disclosure and intention. We know movies are constructed realities. The question with AI-generated content is: when does the synthesis become deceptive?
If I post these videos on social media without disclosing that they’re AI-generated, am I lying? If I do disclose it, does that diminish their value or increase it?
I think the answer depends on what I’m claiming. If I present them as documentation of things I did, that’s deception. If I present them as creative expression (visual ideas that happen to use my face as a canvas) that feels authentic.
The authenticity isn’t in the method. It’s in the transparency about what’s being claimed. For the record, all of my videos, created by AI, are disclosed as AI when I post them.
Who Owns the Vision?
Here’s where it gets legally and ethically thorny: ownership.
When I create something with AI trained on millions of copyrighted works, what am I actually creating? The AI didn’t spontaneously develop the ability to generate coherent images or functional code. It learned from human-created examples, most of which were used without explicit permission or compensation.
Every Sora video I generate contains echoes of other work. Every line of code Lovable writes reflects patterns from the developers whose work it trained on. I didn’t steal their work directly, but I’m benefiting from a system that did.
Is my authorship legitimate if it’s built on uncompensated and uncredited efforts of those before me?
The traditional answer would be: yes, because all creation is derivative. Every artist learns by studying others. Every writer is influenced by what they’ve read. We build on what came before. That’s how culture works.
But there’s a difference between influence and ingestion. When I learn to paint by studying Caravaggio, I’m not extracting and recombining his brushstrokes. I’m developing my own hand, informed by his principles.
AI training is more like creating a painter who has Caravaggio’s muscle memory without ever having seen his paintings consciously. It’s a strange form of knowledge transfer that sidesteps understanding.
I don’t have a clean answer here. I know I’m using these tools. I know they’re built on ethically complicated foundations. I know my discomfort doesn’t stop me from using them, mainly because I’m trying to understand where MY place is in a world where these tools are available for self-expression.
The Question We’re Really Asking
Underneath all of this is a deeper anxiety: What is human creativity for?
If machines can generate images, write code, compose music, and edit video, what’s left that’s distinctly ours? What’s the point of developing skills that can be automated?
This is the wrong question, but it’s the one we’re asking.
The right question might be: What do we want to do ourselves, regardless of whether machines can do it?
People still bake bread despite industrial bakeries. Still knit scarves despite cheap manufacturing. Still play guitar despite perfect digital synthesis. Not because it’s more efficient, but because the act of making things with our hands connects us to our humanity in ways that consumption cannot.
The issue isn’t whether AI can replace human creativity. It’s whether we’ll let the existence of AI convince us that human creativity without technical mastery is worthless.
I think that would be a profound loss.
New Language for New Creation
We need better vocabulary for what’s happening. “AI-generated” lumps together wildly different levels of human involvement. The person who types “cool sunset” into an image generator and the person who iterates through hundreds of prompts, adjusting parameters and curating outputs, are doing different things.
Similarly, “creator” no longer cleanly distinguishes between people who execute and people who envision.
Maybe we need terms like:
“Conceptual author”: Someone who provides the vision and intention
“Technical executor”: Someone or something that realizes the vision
“Curatorial creator”: Someone who generates multiple outputs and selects what matters
These aren’t perfect, but they’re more precise than collapsing everything into “the AI made it” or “I made it.”
The goal isn’t to create a rigid hierarchy (with some forms of creation deemed more legitimate than others) but to develop a shared understanding of what happened when something was made.
Why This Conversation Matters Now
We’re establishing norms in real time. The frameworks we develop now about attribution, credit, and value will shape creative work for decades.
In five years, these tools will be ubiquitous. The question won’t be whether people use AI in creative work because they will. The question is whether we’ll have developed thoughtful, nuanced ways to think about different types of creative contribution, or whether we’ll have defaulted to binary thinking: either you did everything by hand or you’re a fraud.
I fear we’re heading toward the latter. I see it in comment sections, in creative communities, in the reflexive dismissal of anything AI-touched as “not real art.”
That instinct is understandable. People are afraid. They are afraid their skills will become obsolete, afraid the market will be flooded with cheap content, afraid that years of dedicated practice will be devalued.
Those fears are legitimate. We should take them seriously.
But we also can’t let fear become dogma. The relationship between tools and creativity has always been dynamic. The printing press, the camera, sampling in music, digital audio workstations: each sparked similar anxieties. Each ultimately expanded what was possible rather than destroying what came before.
My Tentative Conclusion
I don’t know if what I’m doing with Sora and Lovable is “real” creation in the way that term has traditionally been understood.
But I know I’m making choices that matter. After spending some time playing around, the videos I now create pay off a concept that I devised. The app I’m building solves a problem I’ve identified and reflects my understanding of how users should interact with information.
The technical execution isn’t mine. The vision is.
That matters. How much it matters (compared to manual execution) is something we’re still figuring out.
What I’m certain of is this: dismissing all AI-assisted creation as illegitimate is as wrong as claiming that using AI makes you automatically an artist. The truth is more complex and more interesting.
We’re in a transition period where the old rules don’t quite apply and the new rules haven’t been written. That’s uncomfortable. It’s supposed to be. But isn’t that also fucking exciting?!
The worst thing we could do is stop talking about it and let these questions be answered by default, through market forces and platform policies, rather than through collective deliberation about what we value and why.
So yes, I’m going to keep making weird videos of myself. I’m going to keep building things I don’t technically know how to build. And I’m going to keep thinking about what that means.
Because figuring out what’s authentic in an age of synthesis isn’t just an interesting intellectual exercise. It’s how we’ll understand what it means to be creative in the decades ahead.
And that conversation is too important to leave to the machines.
Podcast theme song by The Never Project