Generative AI & Exploitation
Art by @mattmcgillvray
2025 has barely been here long enough to overcome that weird feeling you get when writing out the year and putting the wrong number at the end, and I have already seen enough articles about how it will be the “year of AI” to make me physically ill. That, along with the current hubbub about China’s DeepSeek AI model outperforming options from American companies at a lower cost, has made it pretty clear to me that it is probably time to finally put together the entry on “Generative AI” that I have been telling people has been coming for a few months now.
I want to preface this discussion by acknowledging that A LOT is going on right now—in the US and around the world—but it is important to recognize those details that we do have control over so that we can set realistic and achievable goals for ourselves (for more, see this previous entry). AI isn’t the most pressing issue, especially not among the host of issues that contribute to climate change, but, as you’ll see, it is an issue that we can have some impact on and one that can help us gain some much-needed control, as designers, in seeking to make positive change on the climate/design front.
With all of that said, I understand that discussing Generative AI (just “Gen AI” going forward) will be polarizing, especially as I am linking it to words like “exploitation,” so I would like to be as direct and to the point as possible, as quickly as possible. First, I’ll be looking at two basic stances that I have heard associated with the topic: 1) Generative AI is worth it because it increases productivity, and 2) the roll-out of Generative AI is just like introducing any other technology (e.g., Photoshop) and it is unfairly maligned. Second, we’ll touch briefly on what a non-exploitative AI might look like. By the end of this entry, I hope you’ll understand why designers should have a wary attitude (at best) about the technology and how it operates—and even thrives—off of exploitative practices that we should all be concerned about. So, without further prompting*:
The productivity stance
AI advocates often tout the potential for increased productivity as a chief reason why Gen AI is an unalloyed Good Thing™ for businesses to invest in. But is increased productivity an actual problem that needs to be solved? It seems reasonable to think that not everyone is convinced that efficiency or productivity levels need improvement. Most of us would probably say that finding ways to be responsible for producing even more designs, elements, objects, experiences, or gadgets isn’t at the top of our list of concerns (especially if we aren’t getting a raise). So, who is that productivity promise even for? I think we all know who it’s for—our bosses. So, with context, the stance looks like this: Gen AI companies seem to be promising greater productivity to the people who would benefit most from it—the bosses and executives whose own work isn’t threatened. What we need to ask ourselves instead is this: Is increasing productivity enough of a benefit to justify an industry-wide shakeup?
A quick lesson: According to the Economic Policy Institute¹, a nonpartisan think tank focused on creating economic opportunities for low- and middle-income American families, national productivity has risen by 80.9% since 1979, while hourly pay has lagged, rising by only 29.4%; in other words, since ’79, productivity has increased 2.7 times more than our paychecks have.² What does that mean? It means that while the corporations that employ us have profited from an ever-larger share of our labor, those of us doing the work have not shared in those gains. Again, according to the EPI³, over roughly that same period, inflation in the US has outpaced any growth in hourly wages, which means that over the last four-plus decades many of us have nothing to show for all of the extra work that we’ve done. Probably unrelatedly, compensation for CEOs has risen by 1,085% in that same time frame.⁴
So, do we really need to increase productivity?
We’ve got production levels covered, so why would so many C-suites want to invest in Gen AI rather than continue to pay creatives and designers a premium for their years of expertise and know-how? (Editor’s note: I’ve just re-read the previous paragraph and I think I know why they might lean toward increasing productivity while finding ways to avoid paying for skilled labor.) So what is really happening here and how would we describe it? What word comes to mind when we think of instances in which one party treats someone else unfairly in order to benefit from that other party’s work? Exploitation. It’s exploitation, plain and simple.
Let’s take a second to remember how Gen AI works at a basic level: a program is trained—by humans, per this report by 60 Minutes—to identify objects or concepts and then perform a task (write an essay, create a logo, generate images) by looking at and absorbing information gleaned from millions of hours of creative labor—our labor—to produce a facsimile of that work in very little time and, crucially, at very little cost to those who want it. It’s not hard to see a trajectory here in which humans are removed from the process of creating as much as possible. That certainly doesn’t have to be the case, and it isn’t our only possible future—as we’ll see later on—but history seems to make clear that labor is always more highly prized than the actual laborers themselves. So, with that in mind, how can we, with open arms, run toward Gen AI as a potential boon to our industry? And knowing that exploitation is already baked into the process, how could we even imagine that that exploitation won’t extend to our own jobs someday?
Ask yourself this: “If we use AI to increase productivity even more, will it actually improve our lives or further widen the disparity between the rich and the poor?” History has told us the answer: it will not benefit those who do the work. And how could it? Unlike a machine that, say, makes shaping sheet metal easier, improving the experience for the factory worker (even if it does not make the job more lucrative for that same worker), AI is actively taking people’s jobs away and generating billions of dollars for venture capitalists and CEOs, none of whom perform the work that they are seeking to automate. This leads us to the next common stance:
The “it’s just like Photoshop” stance
When is a tool no longer a tool? I’d argue it’s when the tool itself replaces the person using it. In some instances this can be a good thing: telephones eliminated the need for telegraph operators, for example, but everyone benefitted from quicker communication and the ability to hear a human voice on the other end of the line. As we can see with Gen AI, though, this “tool” has the very real possibility of eliminating our jobs entirely, and with no benefit to those who have been replaced. I’ve heard it said that Gen AI is no different than when the automobile replaced the horse-drawn carriage or when Photoshop replaced traditional photo-editing processes, but this argument is fundamentally ahistorical and falls apart on closer inspection, so let’s do just that by looking closer at those two examples.
Context matters: the Industrial Revolution sped our lives up considerably. Goods could be produced more quickly and efficiently, and the engines that powered these machines could be adapted to power transportation, speeding it up (and thus providing even more of a reason to produce things quickly). Speedy distribution became a priority, and horse-drawn carriages couldn’t fill that need. What was needed were other vehicles—other tools—and so they were created. Eventually, as the technology became accessible to everyone, enough people could satisfy the basic desire for quicker (non-business) transportation that those carriages could be retired. The horse-drawn carriage was a tool that became antiquated, and eventually nearly non-existent, not because the automobile that replaced it was a better version of the horse-drawn carriage but because what people desired (faster and more frequent travel) required a different tool entirely.
Photoshop was created in the late ’80s and first released commercially in 1990; the context here is that by then, our lives were quickly becoming dominated by digital processes. Software was needed to perform digital tasks, and the functions it performed were replicated from what are now called “analog” ones. In terms of photography, a digital photo’s chief benefit was that the image could be cheaply and quickly produced, easily stored, and easily copied and reproduced. With the emergence of Photoshop, photographers could digitally alter digital photos; however, the software (at its inception, at least) still required its users to understand the analog processes necessary to produce high-quality images; the knowledge needed to improve an image was still derived from traditional practices. As with the carriage above, the needs of the people were changing, and it was this change that necessitated new tools. Unlike the carriage, however, traditional photography exists to this day; it never became antiquated, and the two realms, digital and analog, exist side by side.
Gen AI is unlike both examples.
The automobile and Photoshop were technologies that arose in specific contexts and in response to specifically new needs that people had. Cars and digital editing software allow users to accomplish their desired tasks themselves and with varying levels of skill. Gen AI does not; it replaces people who still want to achieve the same set of goals—writing essays, creating logos, generating images—but it takes the person out of the equation. It’s a tool that removes the person using it, making their skill or knowledge superfluous or even useless altogether, and it does that by learning from all of our years of exercising those skills as paid laborers. In the last couple of years, AI companies have raised hundreds of billions of dollars in funding while marketing and creative budgets have fallen. Designers are ripe for exploitation in this scenario, and we need to be aware of that.
Imagining AI as a creative ally
It doesn’t have to be all bad, however. Gen AI is marketed as a tool, and used correctly—and with caution—it certainly can be. The key is recognizing the system's inherent exploitation and minimizing it as much as possible.
It comes down to our insistence on keeping AI as a tool. That’s why I’m making a distinction between what we have referred to as Generative AI and what I will call “Additive” AI. Where Generative AI creates from top to bottom, Additive AI supplements the creative process by performing time-saving or iterative work. For example—though this isn’t an example of AI—variable fonts can be designed by establishing keystone styles (extra bold, regular, ultra-wide, etc.) and then letting a program figure out all the in-between, or interpolated, styles. If AI were used as a tool that allowed designers to free up time by iterating, interpolating, remixing, or rearranging designed content so that we could look upon a design with “fresh eyes,” then it could be legitimately helpful in ways that don’t endanger a designer’s job. Copyediting tools like Grammarly help me write these entries, but they don’t threaten to do it for me (because, where’s the fun in that?).
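Though this is only an analogy (real variable-font interpolation is handled by font tooling along registered axes like `wght`, not by AI), the interpolation idea itself is simple enough to sketch. Here is a minimal, hypothetical example in Python of how a program can compute all the in-between values once a designer has established two keystone styles:

```python
def interpolate(light: float, bold: float, t: float) -> float:
    """Linearly interpolate one design value (e.g. a stem width)
    between two keystone ("master") styles.
    t=0 returns the light master, t=1 returns the bold master."""
    return light + t * (bold - light)

# Hypothetical stem widths (in font units) for two masters:
light_stem, bold_stem = 40.0, 180.0

# The program fills in the in-between weights automatically:
for t in (0.0, 0.25, 0.5, 0.75, 1.0):
    width = interpolate(light_stem, bold_stem, t)
    print(f"t={t:.2f} -> stem width {width:.0f}")  # 40, 75, 110, 145, 180
```

The designer supplies the creative decisions (the two masters); the program only does the tedious in-between math. That division of labor is exactly what I mean by “additive.”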
However, you can see how interest in any AI tool would decrease if it were limited to these types of situations. And why? Because the value is in the exploitation. AI could be an additive ally that even increases productivity—as we discussed earlier, a supposedly important reason to support the technology—but we already know that venture capitalists wouldn’t be able to fundraise off of that type of program. So, in a sense, imagining AI as a non-destructive tool rather than human-replacement software makes it clear that the driving value being sought in developing these programs is exploitation.
Why exploitation matters for climate designers
Ok, but The Field Guide to Climate Design is about climate issues, right? So where does climate come in? Here’s one way that Gen AI is a big climate problem: it is extremely resource-intensive. From the Harvard Business Review:
“...the training process for a single AI model, such as a large language model, can consume thousands of megawatt hours of electricity and emit hundreds of tons of carbon. This is roughly equivalent to the annual carbon emissions of hundreds of households in America. Furthermore, AI model training can lead to the evaporation of an astonishing amount of freshwater into the atmosphere for data center heat rejection, potentially exacerbating stress on our already limited freshwater resources.”⁵
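To put “thousands of megawatt hours” in rough perspective, here is a quick back-of-envelope calculation. The specific numbers are my own illustrative assumptions, not figures from the HBR piece: roughly 1,300 MWh is a commonly cited published estimate for training one GPT-3-class model, and the US EIA puts average annual household electricity use at roughly 10.5 MWh:

```python
# Illustrative assumptions (not figures from the HBR article):
training_energy_mwh = 1300     # rough published estimate for one large-model training run
household_mwh_per_year = 10.5  # approximate US average annual household electricity use

# One training run uses about as much electricity as 100+ US households do in a year.
households = training_energy_mwh / household_mwh_per_year
print(f"One training run ≈ a year of electricity for {households:.0f} US households")
```

And that is a single training run for a single model; retraining, inference at scale, and the data-center cooling the quote mentions add considerably more on top of it.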
But, as this is an entry about human exploitation, my focus has been on our relationship to AI. Understanding exploitation in this context helps us better understand how that same exploitation fuels—and indeed even causes—the climate change we are so concerned about. To that end, I’ve written before about design’s connection to and relationships with sacrifice zones and marginalized communities; AI, however, offers us another tangible connection to those subjects: exploitation makes them all possible.
Gen AI has the potential to turn the design industry into a sacrifice zone and designers into marginalized communities. Let me explain: one method that capital consistently utilizes to undermine the importance of individual laborers is to invest in ways that separate the skill involved in any kind of specialized job from the person employed to perform that job; if a specialized task can be performed without the need to invest in training someone to do it, that money can be allocated somewhere else (usually into the pockets of the wealthy). Because Gen AI is becoming a storehouse for our skills, it devalues the designers we rely on to exercise those skills. Actual human designers become antiquated and expensive when a machine can produce an object based on those same skills in less time and for less money. While AI is capable of exploiting creatives, it also serves as a lens through which we can examine and address the systemic issues that have contributed to climate change. In advocating for ethical AI practices and sustainable solutions, we can all work towards a future that values both creative expression and environmental integrity, ultimately fostering a more equitable world for us all.
I hope that this brush with exploitation will cause us to rethink our relationship to, or reliance on, Gen AI; exploitation should always be called out, not just because it can make for good reading, but because it is so important to remember that a threat to some of us (designers, citizens, etc.) is eventually a threat to all of us. AI has its uses, and as long as it can be made to work for us and not instead of us, we may be able to find a place for it. I, however, doubt it; the environmental cost is too high, and the risk of weakening our relationships to our careers, skills, and livelihoods is too great for me to feel comfortable. Exploitative practices are destroying our planet and its people, all for the benefit of a wealthy few. Can we now see that AI isn’t merely doing the same thing but is, in fact, just the latest extension of that exploitation? And, honestly, isn’t that the core of AI? To learn from the past so that it can be more quickly and easily generated into the future?
So, call-to-action time
Has AI affected your career or role? How so? Was there anything you could do—or, if you haven’t been affected yet as a designer, is there anything you can do in the present—to avoid being exploited? Think about what other kinds of exploitation we can find as designers and how we can work to protect ourselves and others from its harm.
“Injustice anywhere is a threat to justice everywhere.” — Martin Luther King Jr.
Looking for ways to help the Field Guide and Climate Designers grow at the same time? Last year, the FGCD released an ebook and a portion of the proceeds go directly back to Climate Designers. Pick it up here: https://climatedesigners.gumroad.com/l/fieldguide
* Pun very much intended
¹ https://www.epi.org/about/
² https://www.epi.org/productivity-pay-gap/
³ https://www.epi.org/publication/charting-wage-stagnation/
⁴ https://www.epi.org/publication/ceo-pay-in-2023/
⁵ https://hbr.org/2024/07/the-uneven-distribution-of-ais-environmental-impacts
Be part of the conversation
Perspective is a gift, and with each new perspective the Field Guides get better.
Whether you are a prospective writer/contributor, a commenter, or a reader: new experiences, new connections, and ways of seeing the world leave us richer than before.
This entry was written by
Matt McGillvray
Matt is a designer and illustrator living near Portland, Maine, and has been working for more than a decade doing branding, illustration, web design, print design, social media posts, and even a little SEO. He’s the creator of the Field Guide to Climate Design (and author of the companion ebook) and is trying to establish an International Panel on Climate and Design.
When not designing he’s usually reading, writing, or running. His current big writing project is a book on design’s intersection with climate change via its relationship to waste. It will be called What we design to throw away. He loves puns, his cats, Star Wars, and typography—possibly even in that order.