Why AI (still) makes me uncomfortable, as a copywriter and environmental advocate
Is this what they would call a hot take?
Welcome to Buhay Copywriter by Regina Peralta! It's wonderful to meet you.
This newsletter is my way of paying it forward and being the person I needed when I was a young(er) writer.
Subscribe for FREE weekly content about writing, creativity, advertising, and work life as a Filipino creative. Plus, the occasional personal essay.

I'm a copywriter living in the age of A.I.
With around 30 years of my career ahead of me, there's still a lot to learn and explore. And there's still a lot of change to navigate. As a working-class writer with no "backup money" or generational wealth, the idea of losing my job to artificial intelligence scares me.
I know I've said before that AI is worth learning about and exploring. I've written about AI tools, subscribed to AI-related newsletters, and I still keep an eye on artificial intelligence as a topic. But honestly? The more I learn about it, the more I feel the need to steer clear.
Here's why:
AI threatens livelihoods and reputations
I know tools like Dall-E 3, Midjourney, and Stable Diffusion are cool. But they threaten the livelihoods of actual human beings. First, by having paying clients opt out of hiring and opt into AI. Second, by scraping artists' work off the internet.
Snexplores notes:
OpenAI, the creator of Dall-E 3, has kept its training data secret. But the company Stability.AI, which makes Stable Diffusion, shared its data set. It contains 2.3 billion images tagged with text. Midjourney reportedly used this same training data. These images were scraped from the internet. Data scraping automatically pulls files from webpages. Often, no one asks permission or checks what these files contain. Thousands of artists' names and works have been found in the data set. Illegal or harmful images and people's personal photos are also among those data.
What are the legal implications of this scraping? We have yet to see, since no copyright laws yet address AI use. And even companies that claim to want to build "better AI," like Adobe, have left artists skeptical.
Adobe specifies that Firefly is "ethically trained" on Adobe Stock, but Eric Urquhart, longtime stock image contributor, insists that "there was nothing ethical about how Adobe trained the AI for Firefly," pointing out that Adobe does not own the rights to any images from individual contributors. Urquhart originally put his images up on Fotolia, a stock image site, where he agreed to licensing terms that did not specify any uses for generative AI. Fotolia was then acquired by Adobe in 2015, which rolled out silent terms-of-service updates that later allowed the company to train Firefly using Urquhart's photos without his explicit consent: "The language in the current change of TOS, it's very similar to what I saw in the Adobe Stock TOS."
I've also seen articles about AI stealing writers' work and identities… and writers feeding AI their work as a job.
"Initially concerned about whether AI would replace authors across industries from journalism to screenwriting to publishing and more, writers now have to worry that it might actually be plagiarizing them and stealing their identities."
In an article for The Guardian, a human AI annotator talks about how he works to feed the machine which may one day put him out of a job:
When I first took the role as an AI annotator, or more precisely as a "senior data quality specialist", I was very aware of the irony of my situation. Large language models were supposed to automate writers' jobs. The better they became through our work, the quicker our careers would decline.
Will humans just forever write the words that AI models need to be able to do human jobs? Doesn't that defeat the purpose of the whole enterprise?
Sure, we can argue that creativity, imagination, and strategy are still going to be human-only territories that AI can't touch.
We can think that AI can give social media post ideas, speed up the illustration process, or spit out related words for a tagline, but that's it, right?
But let's be real. Who's to say what AI can or can't do in a few years? And who's to say that paying clients will prefer to pay a human instead of some fancy AI upgrade?
AI's overconsumption of resources puts the planet at risk
Itβs not just my career I worry about, but my actual life on Planet Earth. I live in the Philippines, where we were recently hit by six typhoons in one month. Where our lives are put at risk in favor of profitable mining and single-use plastic practices. Despite our status as a developing country, we are disproportionately affected by climate change.
And AI may be accelerating our journey to even more climate disasters.
Unlike the typical Google search or digital "elbow grease," AI requires more resources.
This Vox article puts the numbers into context:
Training a large language model like OpenAI's GPT-3, for example, uses nearly 1,300 megawatt-hours (MWh) of electricity, the annual consumption of about 130 US homes. According to the IEA, a single Google search takes 0.3 watt-hours of electricity, while a ChatGPT request takes 2.9 watt-hours. (An incandescent light bulb draws an average of 60 watt-hours of juice.) If ChatGPT were integrated into the 9 billion searches done each day, the IEA says, the electricity demand would increase by 10 terawatt-hours a year, the amount consumed by about 1.5 million European Union residents.
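If you want to sanity-check those IEA figures yourself, a rough back-of-the-envelope calculation (using only the numbers quoted above: 0.3 Wh per search, 2.9 Wh per ChatGPT request, 9 billion searches a day) lands in the same ballpark:

```python
# Back-of-the-envelope check of the IEA figures quoted above.
# All inputs come from the quote; the arithmetic is the only thing added here.
SEARCHES_PER_DAY = 9e9   # daily Google searches, per the quote
WH_PER_SEARCH = 0.3      # watt-hours per ordinary search
WH_PER_CHATGPT = 2.9     # watt-hours per ChatGPT request

# Extra energy per day if every search became a ChatGPT request
extra_wh_per_day = SEARCHES_PER_DAY * (WH_PER_CHATGPT - WH_PER_SEARCH)

# Convert to terawatt-hours per year (1 TWh = 1e12 Wh)
extra_twh_per_year = extra_wh_per_day * 365 / 1e12
print(f"{extra_twh_per_year:.1f} TWh/year")  # ~8.5 TWh, same order as the IEA's 10
```

The quoted 10 TWh figure presumably rounds up or uses slightly different per-request numbers, but the order of magnitude holds either way.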
An article for Yale Environment 360 puts these figures into perspective:
AI use is directly responsible for carbon emissions from non-renewable electricity and for the consumption of millions of gallons of fresh water… As tech companies seek to embed high-intensity AI into everything from resume-writing to kidney transplant medicine and from choosing dog food to climate modeling, they cite many ways AI could help reduce humanity's environmental footprint. But legislators, regulators, activists, and international organizations now want to make sure the benefits aren't outweighed by AI's mounting hazards.
The article goes on to share that while AI can be used to cut water use, run smart homes, and more, there is a lot we don't know about the millions of gallons of water needed to cool down those AI data centers. In Chile and Uruguay, residents are currently protesting the setting up of data centers, since this infrastructure will tap into their drinking water. If AI is to be used for the good of humanity, what of this dilemma?
Also, critics worry that the "efficient use" could trigger over-use of AI and the resources it's supposed to help save:
However, data about improving efficiency doesn't convince some skeptics, who cite a social phenomenon called "Jevons paradox": Making a resource less costly sometimes increases its consumption in the long run. "It's a rebound effect," Ren says. "You make the freeway wider, people use less fuel because traffic moves faster, but then you get more cars coming in. You get more fuel consumption than before."
In this World Economic Forum article, it was revealed that Microsoft's "CO2 emissions have risen by nearly 30% since 2020 due to data centre expansion." Google's 2023 greenhouse gas emissions were almost 50% higher than in 2019, mostly due to data centers. This has led to companies rolling back their sustainability-related targets and commitments:
Despite these efforts, now that the numbers are trickling in, it's becoming clear that the growth of AI has presented real challenges to tech companies that have long sought to position themselves as climate leaders.
Just to reiterate, in case the recent super typhoons and heat waves didn't do it for you: the planet is at breaking point. Our overconsumption of resources leads to climate change. And from there, we see the extreme weather events that will trigger food insecurity, mass displacement, and even conflicts.
What is especially sad is that the overconsumption is mostly done by the rich. The UNEP reports that rich countries use 6x more resources and generate 10x the climate impacts of lower-income countries. And who will suffer the most? Not the rich, that's for sure.
What can humans do?
Use AI wisely.
I know we love to delegate low-level tasks to AI. But would it really be so hard to go to thesaurus.com, or just do a Google search like you used to do?
Arenβt you scared of forgetting how to analyze tables and graphs because you always count on AI to do it for you?
If human writing and art are being used to train AI, donβt you want to improve your work?
And if critical thinking, creativity, and emotional intelligence will be the focus of human jobs moving forward, don't you want to get better at these skills instead of making AI write your papers, compose your emails, and more?
In France, they have this concept of "digital sobriety." This covers more than just AI. It's simply asking yourself: do you really need a new phone versus one that works fine? Do you need to use ChatGPT to generate synonyms versus just using Google?
Personally, I try to avoid using AI as much as possible. I still read news about it, for work purposes. And if I can't use regular Search or my brain, I may use it. But I can't, and won't, make AI my default. Not with so much at stake.
What's your take on AI use? Let me know in the comments!