“What you really need is another you,” I told my client, only half-joking.
“A clone? I’m already working on it. A digital one, at least.”
A few months ago, the same client (who will remain anonymous) laid out a plan that would become, in many ways, my reason to step back from the service space as a writer. Not because I was being replaced, but because I was being dragged in deeper than I ever intended.
He had been venting about his workload, which, admittedly, was insane. Having worked with a string of C-suite-level execs, I can say that this client operated at a level that made most of them look like hobbyists. He is beyond doubt a mastermind within his field. But brilliance often blinds itself to boundaries.
“I’m all ears.”
He then proceeded to tell me about how he had hired a company to help him create a deepfake of himself to support his increasingly automated enterprise. This was fundamentally different from platforms creating general AI avatars from a set list of templates. It was a custom replication of him, powered by every piece of written text he’d ever put online or into his pedagogy. (*All personal deepfakes are synthetic avatars, but not all AI avatars are personal deepfakes.)
Given that he’s one of the most accomplished and boundary-pushing individuals I’ve ever met, it wasn’t exactly a shock that this would catch his attention. What did surprise me was how much of me was likely in that system, given that I have been the scribe of his company since the pre-ChatGPT era.
It wasn’t that AI would replace me. If anything, demand grew. What struck me was that a third party, operating behind the scenes, could repackage my work and sell it back to my client as a sufficiently convincing replica, presumably on a subscription basis. My role then would be to nurture the nuance and tonality of this projection.
Neither of us owned it. And neither of us fully understood the terms. Lastly, I was consulted neither about my contributions to the model nor about the derivative use of my work, because neither the technology nor the intent existed when I began the contract. In this scenario, it appeared I would be expected to remain on board to tailor the voice as the deepfake ‘matured’ and to feed fresh intellectual material into the dataset when it eventually began to go stale.
Otherwise, it would be pulling from a finite pool of writing samples as data, whereas what I can produce was considered more ‘infinite,’ or at least, unpredictable. I can emotionally connect with interviewees, conduct fieldwork on the ground, network at a conference, or break linguistic conventions to the point of imperfection that an AI would be coded to avoid. My focus can change depending on an emotional epiphany, personal interest, or a jarring political event that alters what’s relevant. I can drop the word fuck into any sentence if I feel the impetus to do so, or uphold professional doublespeak where it’s appropriate. I can also leave in a tangential paragraph like the one above, not because it’s the smoothest edit, but because I can.
Being human is best characterized not by our ‘irreplaceable’ creativity but often by the permission we give ourselves to be almost entirely irrational at times on the basis of sheer intuition. This sporadic unpredictability is difficult to code en masse, and it shouldn’t necessarily be attempted. But it’s neither illegal nor impossible to do with the identity of a single deepfake, and that’s deeply problematic.
I spoke with a friend about it and concluded that it’s better to keep Genius at arm’s length once it becomes so enamored with progress that it forgets ethics. What’s more, this stroke of Genius has touched several of my former top clients… The demand for intellectual cloning, particularly of experienced writers’ voices, is certainly there.
It’s simply not worth supplying.
At least, not like this.
Already, he had pumped thousands of documents, many of them copyrighted, into building a digital “him.” This database was replete with encyclopedias and textbooks relevant to his industry, as well as hundreds of pages of research and writing I had completed, among piles of other intellectual material. It’s quite dystopian to realize that, even if these materials were withdrawn or compensated for, this particular deepfake would still wear someone else’s face and speak with a fragment of my mind.
But even setting aside the hazy legality of his enthusiasm, something else concerned me: it was quite clear he had virtually no idea how the company he hired operated or who they were working with. Nor did it seem apparent to him that a third party was involved at all. According to my client’s understanding, the deepfake was “his” because it wore his face and spoke in his voice. Further, by his logic, because he paid for it, he owned it. But this assumption, one many people share, is dangerously wrong. The tool he licensed doesn’t belong to him. It belongs to the platform. And what it reproduces may resemble his identity, but it carries other people’s labor, too. (Mine included.)
Whose voice is it anyway?
He was so thrilled about the prospect of how much time this would save him, and the profits it promised, that he dismissed the nuances between what, on the surface, may seem synonymous: generative AI (Gen-AI), agentic AI, LLMs, and API infrastructures. He, like many forward-thinking entrepreneurs, was so ready to put it into practice that he was unaware that even he did not seem to own his own deepfake.
And while I can grind my teeth as one copyright after another is violated, I still would not want my client’s or anyone else’s identity to be stolen in such a profound way. Alongside the obvious moral issues, it sets an extremely unhealthy societal precedent.
It also raises a broader question: Do private sector users truly grasp the systems they’re building their businesses on, and what’s buried inside them? By that, I mean the architecture behind the product: who built it, who could access it, what data it relies on, and what this means in terms of ownership in the short and long run. That blind spot is shared by many. So let’s look more closely at the terms being thrown around and what they imply.
Two of the biggest culprits driving this confusion? “Generative AI” and “LLM” (Large Language Model). My anecdote above is one of many reasons why this distinction matters for users, but I’m sure you can imagine others from the developer’s side as you read through as well.
Generative AI (Gen-AI) vs. Agentic AI
Generative AI, the headline act in the AI revolution, refers to systems that can create new content (like text, images, and video) based on patterns in training data. Unlike traditional AI, which classifies or predicts, Gen-AI generates. Think: DALL·E for images, ChatGPT for text, deepfake tools for hyperreal video, or Fliki for a text-to-voice hybrid.
As the name suggests, generative AI generates something from almost nothing. With the nudge of a prompt, it creates.
Yet, the line between “generative” and “creative” gets blurry here.
If something that generates is generative, can we also say that something that creates is creative?
Although we tend to distinguish creating from creativity, even artists often describe themselves as being uniquely creative because of their innate ability to identify common patterns and then gracefully break them. There is a sense of consciousness, intention, and purpose to this. Generative AI, in a sense, bestows machines with the tools to mimic human ingenuity through pattern recognition and extrapolation. Will Gen-AI become synonymous with creativity?
Not quite. Generative AI produces when asked, but it doesn’t initiate. That next leap into action, autonomy, and goal-setting belongs to a different class: agentic AI. It’s assigned a purpose, not a prompt. Agentic AI can, in pursuit of its programmed purpose, leverage Gen-AI to get things done. So while one responds, the other runs the show.
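To make that distinction concrete, here’s a minimal sketch in Python. Nothing in it is a real vendor’s API; `generate` and the `publish` tool are hypothetical stand-ins for whatever model and actions an agent might be wired to.

```python
# A toy contrast between the two modes. Nothing here is a real vendor's API;
# generate() and the "publish" tool are hypothetical stand-ins.

def generate(prompt: str) -> str:
    """Generative: responds once to a prompt, then stops."""
    return f"[draft text responding to: {prompt}]"

def run_agent(goal: str, tools: dict, max_steps: int = 3) -> list[str]:
    """Agentic: given a standing goal, it plans, calls tools (including the
    generator), and keeps acting without a fresh human prompt each time."""
    log = []
    for _ in range(max_steps):
        plan = generate(f"Next action toward the goal: {goal}")  # leverages Gen-AI internally
        log.append(tools["publish"](plan))                       # acts on the result
    return log

# Generative use: one prompt, one output.
print(generate("Write a LinkedIn post in my voice"))

# Agentic use: a purpose, many self-initiated actions.
print(run_agent("Keep my feed active this week",
                tools={"publish": lambda text: f"posted: {text}"}))
```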
Both Gen-AI and agentic systems rely on LLMs to function by using them as the linguistic core that enables communication, task execution, and interface with humans.
LLMs and the Language of AI
LLM, short for Large Language Model, is a specific flavor of AI focused on understanding and producing natural language. These specialized models are trained on massive datasets of text, letting them predict the next word in a sentence and craft responses that feel contextually relevant and, often, remarkably human. Take ChatGPT, for example. It runs on an LLM: OpenAI’s GPT (Generative Pre-trained Transformer) model. So, depending on the system (Gen-AI or agentic), the LLM serves either as the core engine producing language on demand or as one tool among many within a broader decision-making loop.
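To put “predict the next word” in concrete terms, here’s a toy sketch. The candidate words and their probabilities are invented for illustration; a real LLM derives them from billions of learned parameters, but the step-by-step generation loop is conceptually the same.

```python
import random

def next_word(candidates: dict[str, float]) -> str:
    """Sample the next word, weighted by (made-up) model probabilities."""
    words, weights = zip(*candidates.items())
    return random.choices(words, weights=weights, k=1)[0]

prompt = ["The", "client", "wanted", "a", "digital"]

# Hypothetical probabilities a model might assign given the context above.
candidates = {"clone": 0.55, "twin": 0.25, "assistant": 0.15, "banana": 0.05}

prompt.append(next_word(candidates))  # one step of "predict the next word"
print(" ".join(prompt))
```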
Here’s where it gets murky: Most LLMs are proprietary. That means if you build a product (say, a deepfake generator) on top of an LLM using an API, you may not own the outputs in a meaningful way. Legal and IP ownership depend not on your payment or authorship, but on the LLM provider’s Terms of Service (TOS). And those terms often grant the platform more control than you think.
This is the trap that my client fell into. He paid a developer to write the code, but that code sits on what’s likely rented infrastructure. That would mean his digital clone isn’t necessarily “his.” It could very well be (or become) a subscription depending on how their policies evolve.
The Fine Print
This also leaves ample room for interpretation as to who has a claim to the profits generated by the outputs of his deepfake model. As mentioned, Gen-AI is designed to generate… revenue (at least from most clients’ perspective). The client would lay claim to the deepfake’s outputs and be exceedingly hesitant about disclosing, let alone splitting, those profits.
That said, anyone whose work became training data for a given deepfake model should, in theory, receive royalties on its outputs. However, a portion of that deepfake persona is likely not copyrightable, whether by the client or even by the person(s) it represents, which means the facilitating company effectively holds your other “you.” You might own the code you wrote (or commissioned) around the API, but copyright in the deepfake itself is murkier: it hinges on the LLM provider’s terms and on how the law comes to treat AI-generated content. And that’s before the ethical and legal questions of consent and privacy that deepfakes raise in the first place.
In the case of the anecdote above, because he paid someone else to write the code, this individual essentially donated his (and others’) data to the LLM provider. The third party building the API that produces the deepfake owns the code, and he has purchased the right to use it in his enterprise. This business model is one step away from a subscription. Can you imagine how strange it would be to pay £49.99+ to lease your own digital clone (deepfake) back into your business model? Or the subversive blackmail or extortion that could come from wanting to cancel it?
The questions of authorship and ownership are only just beginning to garner legal attention, although they’ve been in popular conversation since the early 2020s. In Getty Images v. Stability AI (filed 2023), the lawsuit alleges that millions of copyrighted images were scraped without permission to train an image-generation model. Similarly, The New York Times sued OpenAI in late 2023 for unauthorized reproduction of its journalism through ChatGPT. Even within the walls of OpenAI, a group of whistleblowers recently raised red flags about the ethical handling of training data and the opaque deployment of advanced systems. These cases signal that the AI gold rush may soon collide with long-standing copyright, privacy, and liability regimes. Neither side of this equation, from those building with these tools to those being scalped by them, is prepared. Even if the courts uphold copyright law, the next steps to achieve accountability are murky, as the digital ecosystem is changing by the day. And if, by some miracle, such reparations are mandated, the financial and logistical toll would have a massive impact on companies whose business model essentially runs on theft that they turn a blind eye to.
So if no one, except the elite few with the resources, time, and assets to accumulate the vast swathes of data needed to build a true LLM, actually owns the underlying models, then they are the only people who really own anything. If that’s the case, then who are all these people patenting SaaS products and creating AI companies? Legally, they’re able to do so because they have their own API.
What Does It Mean to Have Your Own API?
An API, or Application Programming Interface, acts as a bridge between software programs. For example, OpenAI offers an API for developers who want to integrate GPT models into their own applications. So, having your own API means you can extend the functionality of a technology like ChatGPT, tailoring it to your specific needs.
Ultimately, APIs make it easy to plug powerful models into apps or workflows. That’s their selling point. You don’t need to build your own LLM; you just access someone else’s via API. Again, the illusion is that access means control. Many founders think they’re building proprietary systems, but they’re actually creating thin wrappers around someone else’s infrastructure. So, whether you’re launching a Micro-SaaS product or embedding a chatbot into your platform, you’re likely licensing it, not owning it. In my client’s case, it was unclear whether he subscribed to the wrapper or the API beneath it, but he certainly was not in direct ownership of the LLM.
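To see how thin such a wrapper can be, here’s a minimal sketch of a “branded” product that is really a few lines of code sitting on a provider’s model. It assumes the OpenAI Python SDK (v1+) and an API key in the environment; the model name and persona prompt are placeholders, not anyone’s real product.

```python
# A thin wrapper: the "product" is a persona prompt plus a forwarding call.
# Assumes the OpenAI Python SDK (v1+) and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()  # the capability lives on the provider's servers, not yours

def branded_clone(question: str) -> str:
    """What many 'proprietary' AI persona products amount to under the hood."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # the provider's model: licensed, not owned
        messages=[
            {"role": "system", "content": "Answer in the voice of Dr. X, the executive."},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(branded_clone("What is your philosophy on scaling a business?"))
```

Change the provider’s terms, pricing, or model availability, and the “proprietary” product changes with them; that is the dependency the wrapper inherits.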
Knowing the difference helps you figure out where you fit within the wider ecosystem of ownership. We are now entering an era where there is little that can’t be done; it’s more a question of risk exposure versus opportunity. There are certainly individuals who would be fine with accepting these risks to be among the first to utilize this technology. But it should be a well-informed, conscious choice. No one wants to find out later what they’ve signed away.
The Deepfake Dilemma
My client believed he was building leverage, a digital version of himself to extend his time, voice, and presence. And in many ways, he was. For someone with his reach and ambitions, the upside of deploying a virtual clone might very well outweigh the risks. The point isn’t necessarily that he made the wrong choice. It’s that he didn’t understand the deal before making it.
This is the real hazard of today’s AI landscape: not just misuse, but unaware use. Too many individuals assume that paying for a tool means owning the outcome. But with generative AI, ownership is layered between user and model as well as between interface and infrastructure. What looks like autonomy may in fact be dependence. And what feels like authorship might legally belong to someone else. And what seems like progress may be built on uncredited, unpaid labor from people like me, and even the client, for that matter. Ultimately, once it’s gone, it’s gone.
Still, these technologies offer enormous leverage. The key is knowing what you're trading for it. Read the terms. Ask who owns the layers beneath your build. Decide what you're willing to outsource, and what you're not.
Because if you’re not asking who owns the model (*or who the model is really building), you may already be giving away more than you ever meant to. And you wouldn’t even know it.
Leaping Forward by Stepping Back
This is one of several similar experiences that have prompted me to take a big step back from client editorial work, despite the rising demand. We hear all the time about the risk of AI ‘replacing’ writers; for the time being, that risk generally concerns median-level writers and below. Above this threshold, that’s not the risk I see.
At the other end of the market, the opposite is happening. Highly experienced writers are becoming more sought after, albeit for very different motives. Rather than run-of-the-mill marketing, PR, and influencing, clients are seeking a voice for personal programming purposes.
I’m not sure to what extent other writers have been explicitly commissioned to serve as a bespoke identity rather than to collaborate on research and publications, but this unnerving trend does not seem to be a one-off experiment by just one of my clients. They want a seasoned writer to supply surrogate language patterns for their AI personas.
Too many of my words have drifted into models I never agreed to build. That shift, in part, inspired my turn toward this column, sparked several new initiatives (including a publishing house) into existence, and marked the end of a long chapter in which executive ghostwriting was one of many editorial throughlines.
And while innumerable fragments of my work now live on in experimental systems I’ll never be able to audit or reclaim, at least this one begins with my name.