🌎 Yes, We Use AI

Here's why



Welcome back.

As promised, every other week we will be featuring a guest essay instead of an essay from Quinn.

This week we are cheating a little bit because the essay below was written by me, Willow. I’m not a guest per se, since I work for INI full-time, but I’m also not Quinn, so it counts (more essays from actual guests are coming up!)


A few weeks ago, we ran a poll asking if you consider AI art to be “art”, and boy did you all respond with gusto (that poll drew more respondents than any other poll we’ve run, by far).

Your answers were so thoughtful, and so varied, that we decided to dedicate an entire essay to the topic.

Full disclosure: we use AI tools at INI.

Keep reading for more insight into why we do, and my mixed feelings about this contentious issue.

I’m Quinn Emmett, and this is science for people who give a shit.

Every week, I help 28,000+ humans understand and unfuck the rapidly changing world around us. It feels great, and we’d love for you to join us.


Why I Use AI

By Willow Beck
(real Willow Beck, not AI Willow Beck)

Let’s get this out of the way at the start: I am by no means an expert in AI or ethics or technology or art for that matter.

Most of my education (both formal and informal) was spent in the forest or on the ocean thinking about moss and mycelium and invertebrates in the Pacific Northwest (surprise, I’m a nerd), and the rest was spent thinking about how to best communicate the importance of all of that stuff, and more, to more people.

Like many of you, I’m grappling with trying to keep up with a technology that is changing every day (in the time between running the AI poll and now, text-to-video dropped, so), while also trying to untangle the mass of information and construct some semblance of an informed opinion (and if I may be so bold, hopefully to help you do the same).

And also like many of you, when it comes to categorizing AI art as “art” and AI in general, I fall into the “it’s complicated” camp.

To be clear, in my (unpopular, based on the poll results) opinion, AI-generated art is art. Or at least, it can be.

But, it’s complicated because the critiques and concerns that come up around generative AI are 100% valid, and need to be taken seriously, and yet…I still use it.

Let me explain before we get into the art debate.

As I mentioned up top, we have been using generative AI tools to help us run this small, independent, bootstrapped business.

In full transparency (a key component of AI ethics, both in building AI tools and in using them), I use Midjourney to create the episode art for our podcast conversations and audio essays, and we both have experimented with Claude and Perplexity to assist us with brainstorming and in some cases, copywriting.

When you have a business of two people, inevitably both of you end up wearing many different hats, and there are always a million different projects to focus on.

Using AI tools helps us unlock many of these projects by freeing up our time to focus on recording more podcast episodes, writing more essays, developing our Membership and community, and creating new tools to help us reach more Shit Givers and deliver resources that help them answer the question, “what can I do?”

And to me (despite some caveats that I’ll go into below), that’s worth it.

What You Said

As I mentioned briefly before, most of you (51%) said that calling AI art “art” is complicated, but some of you said it definitely is art (18%), and others said it definitely isn’t (31%).

I did my best to pull out the main themes or arguments from your responses, and will respond to them noooow.


Even without AI, what makes art “art” is subjective.

Art itself is difficult to define, and just because a piece might not qualify as art to you, doesn’t mean that someone else doesn’t find meaning in it.

That's one of the great things about music. You can sing a song to 85,000 people and they'll sing it back for 85,000 different reasons.

Dave Grohl

It’s why some people go to a modern art museum, scoff at a canvas painted plain red, and say “Anyone could make this” and then their friend says “But the point is only this person did make it.”

Is that answer kind of pretentious? Sure, but somebody somewhere will shell out thousands of dollars for it, and good for them.

If a created work can communicate something or provoke a thought or feeling, isn’t that art, regardless of the method of creation?

Does something only qualify as art if it is “good” art? Who decides if it’s good?

As one reader said, “What is art is the wrong question. Why do we make art?”

I think this is true, and I think particularly in the case of AI, the “why” (and the “how”, and the “who decides”) is especially important.

Creativity and The Prompt

Many of you suggested that for art to be “art”, there needs to be some level of human skill, creativity, and imagination applied to the piece.

In other words, art is human because to create is human.

I think in many ways this is why AI can be so divisive — we humans like to think of ourselves as unique, and AI can feel like a threat to that, even though AI is a tool created by humans.

In the case of generative AI, the creative, human element lies in The Prompt.

One could argue that what makes AI art “art” is the prompt. Creating a good prompt takes skill and practice, especially when trying to convey a specific concept or idea rather than a simple image.


Generative AI is trained on billions of pieces of existing data. Does that mean that the output is original? Or is it all plagiarism?

In the art world, originality is blurry. Artists across mediums adapt other works, are influenced and inspired by previous works, restore original works, make a living off impersonating other works, and so on.

The 2024 Wicked movie is a movie based on a musical based on a book based on another book.

Vegas has hundreds of Elvis impersonation shows you can pay to see.

“Edge of Seventeen” by Stevie Nicks has been sampled by both Destiny’s Child and Miley Cyrus.

Andy Warhol’s The Last Supper is based on da Vinci’s version, maybe you’ve heard of it.

(Quinn: one of my favorite Apple Music playlists is Crate Junkie. This is how they describe it: “The heart of so much great hip-hop production is the sample: a piece of found sound that, when recontextualized, becomes the scaffolding around which a beat gets built. Sometimes the art is in taking something familiar and framing it in new ways. (Will James Brown’s “Funky Drummer” ever get boring? A rhetorical question.) But more often it’s about going into the dustbins of history to pull out something unusual or obscure—a process the great DJ Shadow once described as “urban archaeology.”)

You get the idea.

Of course, in the examples cited above, the owners of the previous works are usually being paid for their artistic contribution.

But here’s the thing with AI-generated art and the Stable Diffusion model: it’s likely that the majority of use cases, from training to the image output, fall under fair use.

From EFF:

Is it generating and storing infringing derivative works of all of the images in the training data? 

Probably not, for at least three reasons: 

First, a derivative work still has to be “substantially similar” to the original in order to be infringing. If the original is transformed or abridged or adapted to such an extent that this is no longer true, then it’s not a derivative work. A 10-line summary of a 15,000-line epic isn’t a derivative work, and neither are most summaries of books that people make in order to describe those copyrighted works to others. 

Second, copyright doesn’t grant a monopoly on a genre or subject’s tropes and motifs, including expressive elements like wavy lines to denote shaking, giving animals more human facial expressions, and similar common—even if creative—choices. What’s more, copyright does not apply at all to non-creative choices—like representing a cat as having four legs and a tail. Much of the information stored by and produced by an AI art generator falls into these categories. 

Third, the amount of copyrightable expression taken from each original image in the training set could be considered “de minimis,” a legal term that means “too minimal to qualify as infringing.”

They outline how each step in the Stable Diffusion model could fall under fair use (I encourage you to read the full article), but essentially, given the vast amount of data generative AI is trained on, it’s not likely that Stable Diffusion models are generating derivative works.

Copyright infringement is still possible — but the most likely occurrence is a user of a generative AI tool intentionally creating a derivative work through their prompt (and safeguards should be implemented across all tools to prevent that).

Finally, copyright deliberately leaves room for artists to be inspired by other works, and so class action suits against AI tools could potentially put all artists at legal risk.


Isn’t this what everything always comes down to?

There’s something to be said about how capitalism contributes to our mindset and to how we measure what is worthwhile (like the amount of “work” we put into something), whether that’s in art or anything else.

This, again, is not something unique to AI art.

There will always be art created for the masses, lining the aisles of Ikea and probably most of our homes, and decorating Airbnbs worldwide.

And there will always be art that moves people and changes culture.

Both of those things can exist at once, and both AI-generated and traditionally created art can and will be derivative and subversive and hacky and unique.

Yes, automation probably will make it harder for artists working in traditional mediums to get paid for their work, especially by corporations, and that sucks. As usual, economic powers will prioritize their bottom line and squeeze out artists.

But I don’t think that means there will no longer be a market for “handmade” art. Look at the success of Etsy. It’s never been easy to make a living as an artist, but it comes back to the question “Why do we make art?” and what do we, as individuals and as a society, find valuable?

Ethical Considerations

All that being said, artists absolutely deserve to be paid and recognized for their work. I do think that for-profit companies should pay for the data they scrape, like Getty Images says they are doing. Data is a resource, and companies should not be able to extract it for free.

From a user standpoint, I think it is possible to use generative AI responsibly (by not intentionally prompting to recreate a work). The main difference I see in using a generative AI tool vs searching the internet myself for inspiration is that AI is putting together the mood board of inspiration pics for me, in the background, and then giving me options of what something new, based on those (billions of) inspirations, could look like.

Workers absolutely deserve to have agency in the workplace, and we can do more than strengthen copyright laws and provide training.

Safety regulations that protect against abusive and violent content*, biased results, or content that contributes to misinformation and disinformation absolutely need to be better.

*(Reddit opening up to Google for training data doesn’t make me feel great about safeguarding against violent and abusive content, based on the number of death threats that are spawned in the Bachelor subreddit, but anyway)

I do believe that a world exists where traditionally generated and AI-generated art can coexist, and each offers its own unique value.

Ok, now let’s get to the final big ethical elephant in the room.

AI requires a shit ton of energy and water to run (by some estimates, as much energy as is required to power the Netherlands).

And especially as we transition to an economy that is fully electrified, that is worrying.

But, despite my best instincts to be wary of Big Tech, the level of investment companies like Google and Microsoft have in generative AI, and the level of competition driving them gives me some hope.

These market forces (ugh, who am I) are strong incentives (paired with government regulations) for making AI more efficient, training smaller language models, and also, inadvertently, financing the clean energy transition through investment in clean power sources like nuclear, specifically fusion.

So for me, because we try to see problems as opportunities here, and because I try to be a realist about things we probably can’t put back in the box, AI is one of many things we need to make more efficient, which is yet another reason why we need to upgrade the grid and speed up the transition to renewable energy.

So, now what?

I do not believe that tech is the answer to every problem, or that tech will be our downfall. I am excited by the opportunity to do more with our limited time and resources that tech, used responsibly, can afford us.

I’m writing this, not to persuade you either way about generative AI or change your mind about art, but because I think it’s good, and necessary, that we are having these discussions and asking these questions as all of this develops.

As Meredith Broussard discusses in her book Artificial Unintelligence, culture changes in response to both technology and art (actually she’s talking about journalism, but I think the same thing can be said about art), and if goals aren’t aligned at the beginning, it’s not easy to go back and fix it because culture has already shifted.

This is all so new, and moving so fast, that no one has all the answers.

We need to take the concerns about AI seriously, by listening to workers, listening to artists, and iterating on solutions as we go.

Luckily many people with way more experience and expertise than me are advocating for and working on AI ethics (check out our conversations with some of them on the podcast, including Rumman Chowdhury, Abhishek Gupta, and Emma Pierson).

Our job here is to amplify their work and find the most effective avenues for you to support it.

Technology isn’t inherently bad or good — it’s how we use it, how we build it, and if we are asking the right questions as we are building and using it.

Machines do not apply meaning to symbols, humans do. It is the responsibility of all of us to use these tools to make sure the meaning that is being extracted from the data is fair and accurate.

New tech is a human problem, but if we can find ways to be tech-empowered instead of tech-led, then we can use it to accelerate progress across many different fields, from drug discovery and climate-resistant kelp to weather forecasting and education assistance, and yes, in art too.

It’s up to us to avoid binary narratives of a techno-utopia or techno-apocalypse by carefully considering how using generative AI can best serve humanity, and including the nuance and context that AI lacks on its own by always asking what we should do, instead of what we can do.

— Willow

How To Give A Shit

🌎️ = Global Action Step

🤝 Support Our Work

We’re 100% independent and proudly supported by readers like you.

Members get:

  • Vibe Check: Our news homepage, curated daily just for you. Never doomscroll again, thx

  • Half Baked: Weekly briefs to help you think and act on specific, timely issues as they happen

  • The Thunderdome: Join us on INI Slack to connect, build, and share dog pics

  • Lifetime thanks for directly supporting our work

🙋‍♀️ Vote!

In last week’s poll, 47% of you said you probably only got one dose of the MMR vaccine.

How do you think AI will impact jobs?

Tell us why!


Want to talk climate strategy, investing, or anything else?

Want to sponsor the newsletter?

Get your brand, product, or service in front of 28,000+ sustainably-minded consumers:

