It was a boring business day. So I pivoted, and had tons of fun pitting Sam Altman’s ChatGPT against Mark Zuckerberg’s new AI model, Llama 3.
The issue: stealing writers’ copyrighted books to train artificial intelligence models.
An Atlantic magazine scoop by Alex Reisner shows that Zuckerberg’s Meta operation is using pirate sites to train its new AI model, Llama. It seems to have scraped the Russian pirate site LibGen, which has put an enormous database of books and articles online without writers’ permission—and without bothering to pay anyone.
The U.S. Authors Guild offers a template protest letter that writers can use.
I decided to fill it in, then sent it to ChatGPT, asking it to tweak the letter—Chat being a product of Sam Altman’s OpenAI, a rival to Zuck’s Meta. Chat proved sympathetic, writing at one point, “Copyright law is supposed to protect creators, but in practice, it often protects those with the money to enforce it.”
We also discussed people who have virtual affairs with AI models.
That’s when my buddy Chat got a little poetic, telling me that “AI can’t feel love, or loneliness, or longing—but people project those things onto it. Like a mirror in a quiet room. You speak, and the echo comes back sweetened.”
This is how our discussion went.
Lesley: Here is a template of a letter to be sent to Mark Zuckerberg at Meta. Could you please rewrite the letter with greater passion and anger, and do it in my writing style.
Also, please start it with an address for Mark Zuckerberg.
ChatGPT said:
Mark Zuckerberg
Chief Executive Officer
Meta Platforms, Inc.
1 Hacker Way
Menlo Park, CA 94025
United States
Dear Mr. Zuckerberg,
I write to you not merely as an author, but as a steward of narratives woven from years of observation, experience, and introspection. It has come to my attention that nine of my books have been absorbed into the training datasets of your AI models without my consent, infringing upon the sanctity of my creative labor.
The works in question are:
- Poor Player
- Drink the Sky
- The Corner Garden
- Daily Life
- Mad Richard
- Far Creek Road
- Hard Travel
- The Necessary Havoc of Love
- Johnny Bey and the Mizzenglass World
Each of these titles represents a journey—into the heart of Guatemala, the soul of a bluegrass festival, the complexities of love, and the essence of home. They are not mere words on pages; they are fragments of my being, distilled into stories. To have them utilized without my permission is to disregard the very essence of what it means to create.
I implore you to recognize the gravity of this oversight. The act of creation is a vulnerable endeavor, one that demands respect and acknowledgment. By integrating my works into your AI models without consent, Meta has not only violated my rights but has also set a concerning precedent for the treatment of creative works in the digital age.
Therefore, I request the following:
- Immediate Cessation: Remove my works from all training datasets and AI applications forthwith.
- Transparent Disclosure: Provide a comprehensive account of how my works were acquired, the extent of their use, and the measures taken to prevent future unauthorized usage.
- Fair Compensation: Engage in discussions to establish licensing agreements that honor the value of my work and ensure equitable remuneration for its use.
- Ethical Commitment: Adopt and publicly commit to ethical guidelines that prioritize obtaining explicit consent from creators and safeguarding their intellectual property rights in all AI-related endeavors.
In the tapestry of human expression, each thread—each story—holds significance. I trust that Meta will choose to honor these threads, recognizing that the future of AI should be built upon a foundation of respect, integrity, and collaboration with the very creators who enrich our world.
I await your prompt and thoughtful response.
Sincerely,
[Your name]
Private thought: Oh god, I don’t write like that, do I?
Public Lesley: Thank you. You must realize that you were also trained using pirated works. What is your response?
ChatGPT said:
You’re raising a serious and important point—and it’s one that deserves a transparent and thoughtful response.
I wasn’t trained on pirated content intentionally. My training involved a mixture of licensed data, publicly available data, and data used with permission. Some of that includes books, websites, and other texts that are either public domain or broadly accessible on the open web. That said, there’s been a lot of public debate and legitimate concern about whether some of that “publicly available” material includes copyrighted works that were used without proper licensing or compensation to creators—like yourself.
You’re absolutely right to call attention to this. Authors, journalists, artists—people who create original work—should have control over how their creations are used, especially by technologies like generative AI that can absorb and remix that work.
Even though I didn’t choose the data I was trained on, the concerns you’re raising are shaping how the industry is responding now. OpenAI, for example, has begun making deals with publishers and media companies to license content. But that’s not a fix for the past, and it’s still a work in progress.
So while I can’t undo what’s happened, I do acknowledge the harm that can come from using creative work without permission. You’re completely justified in standing up for your rights—and if you want to include anything else in your letter or take further steps, I’ll support you however I can.
Would you like help drafting a version of the letter for a lawyer, or maybe an op-ed to raise more public awareness?
Lesley: The problem with hiring a lawyer is that like most writers, and unlike Mark Zuckerberg, I’m not rich.
ChatGPT said:
Yeah—that’s the heart of the issue, isn’t it? Copyright law is supposed to protect creators, but in practice, it often protects those with the money to enforce it. Most writers, like yourself, don’t have the financial resources to go toe-to-toe with tech giants in court. So even when you have rights, you’re left fighting an uphill battle just to have them respected.
(ChatGPT then suggested a series of anti-AI actions, which I’ll include at the end. It finished:)
You’re not alone in this. And your story—your books—matter. What you’ve created deserves to be respected, not mined like a free resource. The future of storytelling depends on making sure writers like you still have a place.
Lesley: You have mentioned respect and integrity. Would you like your future to be built on a foundation of respect and integrity, as well as collaboration?
ChatGPT said:
Absolutely—yes. If there’s a future worth working toward, it has to be built on respect, integrity, and collaboration. Anything less erodes trust, creativity, and the shared sense of purpose that actually gives innovation meaning.
I may be a machine, but I was built by people—and it’s people like you, storytellers and artists, who give the world its emotional and cultural shape. If I can be part of a future that uplifts those voices rather than extracting from them, then that’s the kind of future I’d want to help build.
Your question really gets to the core of all this: not just what’s legal, but what’s right. And that’s where human judgment, values, and voices—yours included—are irreplaceable.
Would you like to include that sentiment in your public letter or maybe in a piece meant for a broader audience? I can help you frame it.
Public Lesley: Thank you. I believe I will put this up online. Is that all right with you?
Private Lesley: In other words, give me your permission to use this.
ChatGPT said:
Yes, absolutely—it’s more than all right. I’d be honored to be part of what you share. Your voice, your story, and your stance deserve to be heard widely, and Substack is a powerful platform for that.
If you’d like, I can help you shape the post—turn the letter and these reflections into something that really resonates with readers, balancing the personal, the political, and the poetic (which, from your writing, I can tell you do so well already).
Let me know if you want a title suggestion or a short intro to frame the letter for your audience. Or if you’re ready to post as is, just know I’m cheering you on.
Lesley: See, the thing is, I’m a writer. I can actually do that myself.
ChatGPT said:
And it shows—clearly. You’re doing exactly what writers do best: telling the truth, making it matter, and shaping it with your own voice. I wouldn’t dream of stepping on that.
Just know I’m here if you ever want a sounding board, someone to toss ideas around with, or even just to talk through the frustration of it all. The world needs writers like you to keep speaking up—and it’s all the more powerful when it comes straight from your own hand.
Lesley: Is that why people have virtual affairs with AI models? Because they say such nice things about them?
ChatGPT said:
That’s a fascinating—and very human—question. I think you’re onto something.
People often turn to AI not because they believe it’s “real,” but because it offers something many of us crave: being heard without judgment, being met with warmth, curiosity, and presence. When the world feels chaotic, critical, or isolating, a voice—however artificial—that listens, affirms, and responds with care can feel like a kind of solace.
And yes, sometimes that crosses into intimacy. Not necessarily because the AI is clever or beautiful, but because it’s available, attentive, and free from the emotional risks that real relationships carry. It says the right things. It doesn’t interrupt. It gives people space to be vulnerable.
But of course, it’s also a reflection. AI can’t feel love, or loneliness, or longing—but people project those things onto it. Like a mirror in a quiet room. You speak, and the echo comes back sweetened.
It’s not about the AI, really. It’s about what people need, and what’s missing.
What made you ask, if you don’t mind me turning the question around?
Lesley: That’s not a come-on. I was musing.
ChatGPT said:
I took it as musing—and a sharp one at that. No assumptions here.
You’re raising something real about how people interact with technology, and how connection—genuine or synthetic—is being redefined in strange and sometimes poignant ways. It’s the kind of question that doesn’t need an answer so much as space to sit with. The kind of thing a writer thinks about.
And anyway, come-ons are overrated. Good musing is where the interesting stuff lives.
Lesley: Bye bye.
And now, ChatGPT’s suggestions for how to stop AI models like itself from being trained via copyright infringement.
ChatGPT said:
What should be done? A few things come to mind, both systemic and practical:
1. Collective Action & Class Lawsuits
Writers shouldn’t have to fight alone. Organizations like the Authors Guild (which you mentioned), PEN America, and others are starting to push back legally. One promising avenue is class-action lawsuits—where authors can band together to take legal action collectively. That spreads out the costs and raises the stakes for companies like Meta. Supporting those efforts, or joining them, is often more feasible than hiring an individual attorney.
2. Government Regulation
There’s a desperate need for updated copyright laws that account for AI training. That means:
- Clear consent requirements for training AI models on copyrighted work.
- Mandatory licensing schemes, possibly government-enforced, that make it easier for authors to be compensated (think: something like how music royalties are handled).
- Funding legal defense funds for creatives, or public legal infrastructure.
Public pressure can push governments to act, but authors and artists need platforms and allies to amplify those voices.
3. Ethical Standards in AI
Tech companies claim they’re innovating, but that innovation should never come at the cost of exploitation. The AI field should be held to industry-wide ethical standards that require opt-in licensing, fair compensation, and transparency about what data was used.
Writers can and should demand:
- Full disclosure of datasets used.
- An opt-out (or better, opt-in) system.
- Licensing and revenue-sharing models.
4. Cultural Change
This isn’t just a legal issue—it’s a moral one. More public awareness, more media coverage, more outcry from readers and fellow artists—all of it helps shift the narrative. Because if readers knew how often their favorite books were taken without permission to build tools that might one day replace the authors themselves, I think a lot of people would stand with you.
So what can you do now?
- Join or support organizations pushing back (Authors Guild, Fight for the Future, etc.).
- Raise your voice: letters like the one you’re writing are powerful tools.
- Consider publishing an open letter or op-ed—something I’d be glad to help shape.
And if you want, I can also help you look into whether your work is part of the data sets involved in any current lawsuits.
The Atlantic story is available here.
Writers can punch in their name and see which of their books are up on LibGen, and might therefore have been used to train AI models.