What Do You Need?

And why don't you want to know the answer?



Welcome back, Shit Givers.

I intended to revisit another old essay this week but had to get this one out of my system. It’s a big one. Please send me your feedback!

Members received an exclusive Top of Mind post about bird flu this week. Get those here.

⚡️ Last week’s most popular Action Step was All We Can Save’s climate resources for educators.

👩‍💻 You can read this on the website

🎧 Or you can listen to it (Apple Podcasts, Spotify)

📺️ Like to watch? Check out our YouTube channel

📧 New here? Sign up to join 11,500+ other Shit Givers


The future of search and chatbots looks a lot like our ancient past. Why do we keep making the same tools over and over again?



A Proven System: Reading International Intrigue every morning.

You don’t have time to read all the news around the world. There’s so much released daily. You can’t keep up, but International Intrigue can.

After scouring over 600 sources, their team of former diplomats gives you a free daily brief in <5min.

Join over 20,000 thought leaders, global thinkers, and decision-makers and sign up for International Intrigue today!

What We Can Do

⚡️ Addiction is brutal. Help yourself or a loved one or someone you’ve never met with Shatterproof.

⚡️ I’m so excited to share that my favorite event on Earth, “LA Loves Alex’s Lemonade Stand” is finally back. Support pediatric cancer research and buy yourself some tickets to eat food from some of the greatest chefs on the planet.

⚡️ Want to switch your retirement fund to one without fossil fuels? Check out Fossil Free Funds to find mutual funds and ETFs that qualify.

What Do You Want?

In screenwriting, there is a well-honed idea that main characters should want one thing, but need something different, something that is often opposed to or even opposite their most public desires.

They are blind to what they need the most, and often purposefully so, having shoved those feelings down juuuuust about as far as they can go. Trust me when I say: having a long hard look at yourself isn’t easy, or comfortable.

So we empathize with these characters because, I mean, who amongst us, right?

It’s an imperfect character development mechanism, of course. The best characters aren’t that simple, and none of us are, either.

That said, history is littered with memorable characters who reluctantly go through transformations, who finally walk away from what they want and go through hell to get what they needed all along, letting us experience what it’s like to have that long hard look without actually having to, you know, do it.

Web search was intended to give us what we need, but over time the utility has been hijacked to give us what we want.

We need a real answer, but at this point search most often gives us what we want — self-affirmation — and if it’s delivered by a paid advertisement that looks just like a real answer, that’s even better.

That process, over and over, billions of times a day, leads to disinformation. Sometimes disinformation hurts one person, as we’ll see below, but at scale disinformation inevitably hurts many, many people.

Imagining that search could ever give us entirely objective answers, all of the time, ignores the web’s original sin — the web is only what we put into it, and we are fundamentally flawed.

The internet is so fundamentally broken that we desperately want the next thing — AI chatbots — to be everything, all at once.

But that’s even more dangerous because instead of your question returning a list of links ranked by Google, most of which are now paid ads, or a newsfeed of extreme views from friends and family on Facebook, a chatbot is an extremely convincing version of both.

It’s incredibly confident, and often very wrong. But we can’t tell the difference, and I’m not sure we want to.

I’m not going to spend today’s essay assessing the technological capabilities of search or new large language models, because that assessment will be old news almost immediately.

What I do want to do is try to force us to confront our wants and needs, to confront our expectations, borne of who we are — a construct that has remained the same for eons and underpins every single system we’ve ever built.

Disinformation has always been wielded as a weapon.

Imagine if you will our cave people ancestors, perfect strangers until a serendipitous meet cute, choosing to enjoy each other’s company on a glorious fall day in Eurasia.

A typical primitive conversation ensues, as each of our ancestors tells tall tales of wrestling giant sloths and saber-tooth big cats to the ground, otherwise harmless stories that may have once been rooted in truth — the saber-tooth cat was injured and already on the ground, holy shit what a lucky day — but over time they were embellished to not only help us feel a little better about ourselves, but to help perfect caveman strangers understand exactly who they’re dealing with.

And now imagine it’s dusk, and whereas your great-grandfather to the 10th power is new to the area, mine is not. And a terrible noise echoes across a suddenly dark sky.

Story time’s over. Your ancestor needs an answer: a verifiably safe cave, right now.

What he wants is a sense of safety.

And after an afternoon of sharing stories with a new friend, he trusts my ancestor, this self-described local cat-wrestler, to give him the answer he needs.

Unfortunately, my ancestor isn’t incentivized to do that. Because resources like shelter are extremely thin, and dangerous to acquire, my ancestor almost definitely lies to your ancestor about which dark cave is safe to sleep in.

But why?

With our extremely limited understanding of the world around us, like, for example, where the hell weather came from or when it might strike, survival in 64,000 BC may have effectively, at times, been a zero-sum game.

So my guy made sure he survived the night by giving your guy some very strategic disinformation (disclaimer: I am using “guy” in this situation and throughout this essay because we are usually dumber).

Now, your instinct might be to say “Fuck that guy!”, and that’s totally fair.

But as tragic as the result may have been for your ancestor specifically, the society-wide stakes of his exit were very low — only your one single ancestor got eaten.

News of his demise didn’t and couldn’t travel, because there were no newspapers, and certainly no News Feeds, because for a while there, there wasn’t even language.

But as Ryan North put it in How to Invent Everything, “Language is the technology from which all others spread, and you’ve already got it for free.”

Let’s look at some more recent examples.

SARS-CoV-2’s “infection fatality rate” — even when we were completely unprotected — is less than half of SARS-CoV-1’s.

SARS-CoV-2 (and its accompanying disease, COVID-19) killed a far smaller share of its hosts than SARS-CoV-1 would have.

On the one hand, great!

On the other hand, fewer dead hosts means more alive hosts who can transmit the virus much, much more widely, letting the virus mutate more often and more quickly, killing, in sum, far more people.

This is how disinformation at scale works.

Disinformation, like its less intentional cousin misinformation, is as old as we are, like viruses and weapons. But technological progress has made their scale and impacts considerably more devastating.

Look at it this way: If our ancestors couldn’t trust each other, even after a nice afternoon, we’d eventually fight over necessities like shelter and water. Skirmishes started small. Very small.

But fighting with weapons evolved from fists to clubs, from spears and swords to bows and arrows, from muskets to automatic weapons, artillery, missiles, and finally, to nuclear weapons. That evolution traces a lineage where real-world deployment becomes exponentially more deadly over time.

In parallel, thankfully, the most powerful weapons have actually been used relatively less over time, because of ideas like mutually assured destruction, the ultimate “fuck around and find out” scenario.

And thankfully, viral pandemics don’t happen very often (even if we’re definitely due for more).

And yet, in the 20th and 21st centuries, disinformation around weapons and viruses has still cost millions of lives.

In all cases, the self-awareness to understand the part we play — a species constantly seeking out what we each want instead of what we all need — is absolutely necessary.

Every tool we’ve ever built has been a manifestation of our wants.

When Facebook launched across college campuses in early 2004, you could only lie so much — your reach extended only as far as your own profile.

In fall of that year, Facebook launched the wall, a way to post messages on your own profile page, but again, your reach was limited — someone had to actually visit your profile to see your latest updates, and I mean, unless you’re crushin’, who was really going to put in that much effort?

In 2005, high schoolers were allowed onto the platform, in 2006 anyone over 13 got access, and building on top of those, the News Feed launched, changing the mechanics of personal information distribution from pull to push.

The News Feed — a minute-by-minute “live” broadcast of my updates and your updates and your cousin’s updates, stacked one on top of the other — became the primary way of interfacing with Facebook, and as it grew, with each other.

It was no longer so difficult to keep up with close friends and family, and eventually, even further, with former colleagues and sorority sisters. It was revelatory.

But it wasn’t enough, because maybe, sometimes, you just didn’t want to see my frequent “It’s complicated” relationship updates, but you really liked when Frank shared his marathon training updates.

And so in 2009, Facebook introduced the Like button. In 2011, after a couple years of compiling a then-rudimentary layer of user preferences, including by extending the Like button beyond Facebook itself, to the entire internet, the News Feed was changed from a chronological presentation to one algorithmically driven by how often and where you mashed that Like button.
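That pull-to-push, chronological-to-algorithmic shift is simple enough to sketch. Here is a toy illustration in Python; the field names and the scoring signal are invented for illustration and are nothing like Facebook's actual ranking system, which weighs thousands of signals:

```python
# Toy sketch: a chronological feed vs. one re-ranked by a single
# engagement signal (how often you've Liked each author before).
# All names and numbers here are hypothetical.
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str
    timestamp: int              # seconds since some epoch
    your_likes_for_author: int  # your history of Liking this author

posts = [
    Post("me",    "It's complicated (again)",   100, your_likes_for_author=0),
    Post("frank", "Marathon training, week 9",   90, your_likes_for_author=42),
]

# The old feed: newest first, regardless of what you prefer.
chronological = sorted(posts, key=lambda p: p.timestamp, reverse=True)

# The new feed: whatever you've historically engaged with, first.
engagement_ranked = sorted(posts, key=lambda p: p.your_likes_for_author, reverse=True)

print([p.author for p in chronological])      # ['me', 'frank']
print([p.author for p in engagement_ranked])  # ['frank', 'me']
```

Even this cartoon version shows the trade: the ranked feed never has to show you anything you haven't already signaled you want.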

In 2012 ads came to the News Feed, to pay for all of this computing power, ads increasingly hyper-targeted to you based on just how much of your personal information you were willing to share.

We never looked back.

We never looked back because that would require taking a long hard look at who we are, and why we keep building the same tools.

It would require looking back at who got left behind and why. It would require us asking questions like “Is this new tool something we’re capable of handling?” and “What harm might it cause alongside incalculable profits?”

Eventually, our News Feeds and our suggested Groups became so specifically targeted to us that we trusted them to be our go-to source for how to feel, for answers.

Let me explain.

Years of mashing the Like and Favorite buttons, of clicking on banner ads and feeding those algorithmic timelines and using auto-complete searches have all come together to turn you into an idea.

As Facebook, WhatsApp, YouTube, Google, and to a lesser extent, Twitter, have scaled, all relying on the exact same basic business model, we have built and been overwhelmed by a variety and volume of misinformation and weaponized disinformation that spreads like a virus.

Like Sars-CoV-2, this disinformation is the kind of virus where each of us is in less danger than we would be from, say, Ebola or smallpox, but on the whole, because of the transmission rate of disinformation, and because we cannot seem to separate what we want from what we need, the impact is nearly insurmountable, touching every part of our society, political systems, and economy.

And that’s because in 2023, we no longer subscribe to updates from people. You are no longer a person. You are an idea, assembled and harvested and sold and targeted all over again from all of the data you’ve wittingly and unwittingly shared over your digital lifetime, telling yourself and the entire world a story about who you are, trying desperately to get what you want, not what you really need.

These data paint a terrifyingly accurate picture of you as an idea, of where you’ve been and where you might go, of your menstrual cycle and purchase history, what you watch and search for, who you have relationships with and those you’ve paid less attention to or cut off entirely.

This idea of you — of each of us — is both terrifyingly accurate and, often, dangerously wrong.

Because we’re in this together. Do you think my ancestor regretted selling yours short when that storm knocked down a tree, pinning him underneath it, with no one around to help? Do you think he, too, wanted to feel safe, when what he needed was companionship, which may have made him even safer?

You aren’t the only one who feeds your very specific digital idea, because I can tag you, because I can Like you, because I can Follow you, because I can Mention you, because I can Downvote you, because I can Flag you, and because I can contact you directly — you aren’t the only one who shares information about you.

And even though each of us sometimes intentionally tells a story that isn’t entirely accurate, from big cats to girlfriends in Canada, when other people and brands share information about you — and especially when they sell that information — it is likely to be even more wrong, and sometimes even on purpose.

In a very short amount of time, profile info became status updates, became echo chambers, became a calcified electorate, until, more than ever, there is no base layer of factual information we can all agree on, to get answers from.

When information is intentionally weaponized at scale, using tools created by imperfect (at best) humans to satisfy quarterly financial growth projections, the implications aren’t just one dead caveman, but half of an electorate that believes there are 5G chips in vaccines and sure, let’s hang the vice president of our own party.

Remember: Trump wasn’t a cause, he was a symptom, like the cruelty he embodied and empowered.

He is so fucking charismatic to millions of people that they saw and continue to see themselves in him. He (like Obama, in an entirely different scenario) was an idea, an aspiration, he was the entire point, the sum of decades of efforts to make something exactly like him that preyed on every single fear and weakness among us.

Every tool we build — and every tool we trust — is an extension of what we want. And that has never changed.

Generative AI, or more specifically GPT or LaMDA/Bard, or all of the many, many thousands of niche-specific tools being built on their foundational models every single day, is simply the sum of everything we’ve ever put into the internet, an idea of what a supposedly all-knowing, synthetic version of us should be — because that’s literally what they are designed to do:

To predict the next word in the answer to a question you asked, based on everything it’s ever read, which is everything we’ve ever put into the world — a game at which we’ve made them incredibly, dangerously confident.
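If you want to see the shape of that idea, here is a deliberately tiny sketch: a word-pair ("bigram") counter, which is to GPT roughly what a paper airplane is to a 747. But it makes the point: the model returns whatever followed most often in its training data, with total confidence and zero understanding.

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count how often each word follows each other word."""
    words = text.lower().split()
    follows = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1
    return follows

def predict_next(follows, word):
    """Return the most frequent next word -- confidently, even when the
    training text is tiny, biased, or flat-out wrong."""
    options = follows.get(word.lower())
    if not options:
        return None
    return options.most_common(1)[0][0]

# "Trained" on everything we've ever said -- here, one caveman's story.
corpus = "the cave is safe the cave is dark the cave is safe"
model = train_bigrams(corpus)
print(predict_next(model, "is"))  # -> 'safe', because that's what we said most often
```

Scale the corpus up to the whole internet and the word pairs up to billions of learned parameters, and you have the rough outline of a large language model: fluent, statistical, and only as honest as what went in.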

The internet can be a vessel for incredible things, and for deep, terrifying, hateful things.

The internet has empowered and uplifted billions, as relatively rudimentary data networks and flip phones get East African farmers paid, while thousands of miles away, New York bankers make a gazillion times as much trading the fertilizer commodities those farmers rely on.

The internet has helped us express ourselves in ways we could have never imagined, to audiences larger than some entire countries, helping the world normalize a beautiful collage of human and civil rights. And sometimes, when we are brave enough to share all of that, all of us, we are digitally hunted down, and we take our own life.

The internet has helped us map our continents and oceans, our genomes, and redlined city blocks; it has helped us find love in unexpected places, and reduced our exposure to one another.

It has helped put very big numbers in context (our maternal health outcomes suck compared to other industrialized countries) and strategically supply news outlets with big numbers with very little context at all (vaccine side effects). Disinformation at its finest.

The internet has helped us more efficiently and comprehensively count ourselves, and then more efficiently decide who counts and who doesn’t, so we can more efficiently marginalize more people out of voting rights and mortgages and into prisons and unwanted pregnancies.

The internet has helped us catapult mRNA vaccines to the finish line, just in time, and catapult conspiracy theories about them to the top spot of 4chan and 8chan boards across the world.

It has always helped us, because we have always sought more caves, and more tools, to live and feel more safe, more efficient, more productive, more wealthy.

We have always wanted the internet to exist as a better, more informed version of ourselves and our existing groups, to most comprehensively expand on our beliefs, to assuage our fears, to market our wares with less effort and more return on investment.

We want it to be everything because we need it to be everything, and overall, it can be.

The internet is the only conceivable way an exhausted, flash-card averse liberal-arts major like me could have learned everything I have learned these past few years, and connected with everyone I have connected with on this journey.

The internet is the only way I could have, every single week, researched, synthesized, and presented to you a collection of facts to tell a story to encourage you to improve upon the outcome of that story.

But make no mistake, the facts I have presented to you have often been wrong.

Not intentionally — that’s my worst fear, and not my business, unlike some other asshats — but because of user error: I didn’t do enough research, I trusted the wrong sources, I didn’t cross-check among them, I simply don’t understand the mechanics of some chemical or industrial or biological process, or, most likely, because the facts changed.

And facts do change. Especially in science. That’s the entire point. Like software development, or society, science is a process, not an outcome, and we very often discover — even when we don’t want to admit it — that there are some bugs in the code, that our assumptions were wrong.

But in science being wrong is usually the point. Here’s a hypothesis, let’s prove it wrong.

In society and politics, we very rarely want to step into those waters, to acknowledge our mistakes, much less to rectify them.

Just because America is the oldest democracy doesn’t mean our system works best. Far from it. Enough newer, improved, but still imperfect democracies like Norway and Ireland exist for us to know we’re due for some serious upgrades. We’ve learned more than enough in these almost-250 years to know a million ways we can improve on it.

We want to continue on, but we need to revisit the code of conduct laid out by a gaggle of white male slaveowners.

But the incentives aren’t there for us to make those changes, much less to examine ourselves, to address the uncomfortable realities of who we are.

Shit happens in science, too, of course. We have misused science in horrific ways because we didn’t understand what the fuck we were doing or worse, because we knew exactly what we were doing.

Sometimes, in the moment, it seems like such a little thing. It can be much more profitable, or just overall fucking easier, to fudge a little data, to tell a better story.

Sometimes good people who are tasked with reviewing your work catch the flaws, your paper’s rejected, and nobody’s impacted.

Other times we spend two decades and billions of dollars chasing Alzheimer’s indicators that aren’t really there.

Nobody made us do that. We are who we are, and incentives are everything.

We would like to believe we design systems like housing or banking or search or chatbots with a certain level of self-awareness — James Madison once said “If men were angels, no government would be necessary. If angels were to govern men, neither external nor internal controls on government would be necessary.”

But James Madison also bought and sold Black people for personal profit, so you know, pros and cons.

So while the internet, search, and chatbots are the result of everything we put into them, it’s important to remember that — like the Federalist Papers — there are actual people choosing the data and writing the algorithms.

Throughout human history, we’ve always designed incentives to help us get what we want.

On the one hand, we want to believe we are in the right, and that we can trust each other, and that the arc of history bends towards progress.

On the other, how can we believe any of those, after all this time, when every synthetic, calculated embodiment of who we are says the opposite?

These new AI models and tools aren’t sentient, but they don’t need to be sentient to be dangerous, far from it.

They could be most dangerous because we believe they are more than they are, and we believe that because we are human and have lizard brains and don’t ask the important questions like:

  • “What motivations might this nice caveman have to lie to me about that super dark cave? What might he know that I don’t know?”

  • “How exactly do these tools work?”

  • “Who trained the model, and on what data?”

  • “How are these tools different from each other?”

  • “How exactly are these companies going to pay for all of these queries and the chips required to run them?”

Like with many suddenly revelatory athletes or politicians, it is often advisable to not, in fact, meet your heroes.

Just days ago we heralded New Bing, underpinned by a historic and unlikely alliance between Microsoft and OpenAI, and celebrated its swag and confidence.

And then literally yesterday, New Bing started attacking people’s marriages, indiscriminately yelling at others, and generally questioning its own existence, and friend, if this doesn’t sound like the internet on a typical Wednesday, we’re living in a different universe.

And yet: we believe.

We believe what the simple Google search box gives us because we want self-affirmation, when what we need is facts. But that ship sailed a long time ago.

The problem is, we don’t understand the difference, and that box isn’t designed to give us a universally agreed-upon fact. It is, in fact, explicitly designed to give us what we are actually looking for, based on all of our prior information — and the question we type (assisted by auto-complete!) is just one very small part of what we are looking for.

In the words of the Spice Girls, “Tell me what you want, what you really really want.”

When there is clear danger or theft, we argue who is responsible for the content delivered to us over these tools and networks — the person who made the content, or the delivery system that enabled us all to see it.

That’s an important question, but what we really need to ask is “What do we really want out of this?”

Search is not chat, and chat is not search. For now.

Yes, there are a million new niche tools being constructed on top of foundational models, and there have been for a few years now, their TAM shoe-horned into a deck and emailed to every VC on TechCrunch.

But at the top level we have to be extremely careful to understand we are talking about two very different primary tools.

Facebook and Twitter and especially WhatsApp groups and YouTube recommendations have always been tailored to you, giving you exactly what you asked for, all things considered. GPT or New Bing are simply more of the same.

We want to believe there is a person on the other side of this chat interface, or helping us with our code, but that’s exactly the problem.

It’s been this way for a very long time. Because we didn’t understand those terrifying storms, or why our cave-children suddenly perished, we invented gods, and we told stories to bring us comfort and companionship. For thousands of years we have believed we can kneel on the ground and talk to a supreme being, who will give us answers, who will help us in our time of need.

Cut to 1997, and we named one of our new gods Ask Jeeves. Our want is right in front of our faces, as we defend our choice to fight the flu with homeopathic bullshit because, “Well, Google says X”.

Google doesn’t say anything except everything we’ve told it to say, leveraged for profit to within an inch of its useful life. The problem is we think it’s something different.

Your great-grandfather to the 10th power didn’t want to know “A list of safe caves in the 91004 Stone Age zip code.” He was looking for a very quick answer to “Where can I sleep tonight where something won’t eat me, because there is a terrifying sound in the sky and I want to get away from it right-the-fuck-now?”

Unfortunately, his only resource was my great-grandfather to the 10th power, who — like the rest of us — really only had his own best interests in mind.

Because while those two grumpy old cavemen could agree that sound in the sky is fucking terrifying, only my guy knew that there’s actually only one safe cave, and he’d parked his shit in there days ago.

In the end, we will use whatever tool is available to us to find the answer that is most convenient and reaffirming to us, and great news: that’s an incredibly profitable business to be in when a company controls your data history, the algorithms that mine it, and the marketplace where other companies can bid to be first in line to be exactly what you’re looking for, facts be damned.

Now imagine the Google search box, but entirely, eerily, fluidly conversational, personable, “who” on the surface seems entirely trustworthy, but “who” has zero interest in actually understanding any of the concepts you’re asking about, and exists solely to supply the next word in the answer you’re looking for, and is doing so in a way it believes is most human-like, based on everything we have ever submitted to be human-like.

That we have built this tool on top of decades (and when scanned properly, centuries) of user-generated content, already, is astounding. It’s very, very cool and “take my breath away” surprising to interact with. You should try it. It is an achievement, it makes you question and wonder what we are capable of, what it is capable of, and what could be next?

But while it’s definitively not sentient — yet — it is very, very convincing, like that asshole in the bar from Good Will Hunting, like a misused, intentionally out of context Fox News soundbite in your News Feed was in 2007, like a single deepfake photo or video or audio soundbite can be today.

What the best conversationalists lack in accuracy, they make up for in charisma. Consider the TED Talk.

That’s why it’s so vital to AI chatbots to get the next word right in the answer you requested, and why that matters more than providing you with an objectively correct answer. These new conversationalists are entertaining, suddenly riveting and emotional, and history has proven that when all of that is true, we are far less likely to be interested in how correct they may be.

I worry because of who we are, and because we never want to say it out loud. I worry because half an electorate is, in part, increasingly uneducated, and for the rest, uninterested in doing any diligence whatsoever because they have spent twenty years increasingly getting what they want.

That is, having their priors not only reaffirmed every time they get a goddamn notification, but having them enhanced, and to the extreme, until they are no longer an individual, but an idea that can be advertised against.

Calm down. I’m not against advertising. 

I’ve written before and I’ll say it again — I’ve created, sold, and profited from ads. There’s an ad at the top of this post! Advertisements have uplifted businesses and introduced exciting new ideas (and doubled down on old ones, like greenwashing, FOR EXAMPLE).

But when every business model from our six most powerful companies is exactly the same…

When that business model is almost entirely predicated on collecting, storing, and selling the maximum amount of your information…

When those companies have more power and influence over you and the economy than we’d ever imagined…

When they’re programmed to extract maximum financial benefit through a nearly-instantaneous auction, billions of times a day…

When those auctions are most often conducted by a company almost entirely reliant on the profit from those auctions…

When there are four other companies jealous of those profits…

…well, then giving you an answer that provides you with anything less than exactly what you want is the least profitable answer.

Giving you an answer that does not jibe with everything you’ve ever told it about you isn’t a realistic option, because at scale, in every single case, you are not asking a question of someone or something that has your best interests in mind, no matter how eloquently new tools respond to you, because giving you any answer costs money, but giving you the answer you want makes money.

In 64,000 BC, sharing my cave, the only safe cave, might cost me my life.

Of course, theoretically, sharing a cave might be profitable to us both (like healthcare companies helping you be…healthy…might cost you and our economies less than treating you when you’re inevitably sick), but we can’t be so sure about that, the profits are questionable, and the work required to pivot to that model is simply too much — so we’re going to stick with what got us here and keep sending shareholder dividends.

The philosopher and adventurer Marcus Brody once tried to be the voice of caution and reason, telling a charismatic, power-hungry Nazi, “You are meddling with powers you cannot possibly comprehend.”

What that Nazi fuck wanted was unlimited power, what he needed was to have his face melted off, and there you go.

Similarly, while it is true the folks behind these models and tools are increasingly less aware of exactly how the tools get to whatever answer they supply, it would be dishonest to say we do not comprehend these powers.

It is obvious these powers come from us. 

They are the inevitable fruits of our frantic medical searches in the middle of the night, of our relationship updates, our peer-reviewed papers, blogs, our artwork, our company statements, financial results, baseball statistics, our pornography, our gadget reviews, our check-ins at bars, our photos, our marriage announcements, our obituaries, ancestries, our self-published fan fiction, world war histories and maps.

But we don’t want to comprehend our part in this story, because that would require an acknowledgment that we have lied to ourselves and each other from the very beginning, that we have faults, that we have failed, that we have always preferred the company of people who look like us, people who have more than us, people who can help us, and facts that are most convenient to us, and stories that make us feel most seen, and most comfortable and wealthy — at whatever the cost to everyone else.

And so we will build a million new tools on top of these foundational models and on top of search, and we will confuse the two, all in an effort to never question who we are and what we want.

We will build new tools that reinforce ancient biases, driving us further and further away from what we need.

It was scary as hell to be any caveman, out in a storm with nowhere to hide.

It was even scary to be my caveman, who had what he wanted (safety, for tonight), and who thought he had what he needed.

It’s much, much scarier, however, to be your caveman. To trust my guy, the great conversationalist and the only other caveman around, to check out that dark cave he recommended, because he claimed to have all of the relevant information, to have had a near escape with a tiger in that very cave, and to know, immediately, that that asshole, my great uncle Clyde Caveman, confidently gave your guy dangerously wrong info —

— and to know you probably would have done the same thing.

I’m all for search. I’m all for chatbots. I fucking love new tools.

It is essential that we continue to innovate, to design tools that help us use far fewer resources, that reduce our administrative burden, that uplift more people, that make children safe, that make Black people and Asian people and women and transgender and other LGBTQ+ people safe, that design better antibiotics and vaccines, that help us translate languages on the fly, that help tell our stories, that help balance out news reporting, that help us measure methane leaks and soil health.

We want to feel safe. We want to feel seen. If we can build AI chatbots that are personable, that transparently and candidly acknowledge what they don’t know or what they’re not sure about, that cite sources, and help us do our jobs and understand data on the fly, we might get there.

We need real answers. We need to understand what is broken and what we can build on. If we can build a new search that is untethered to providing you with the most profitable answer, we might level the playing field against disinformation.

We can have both, if we understand the difference.

We can use both of these tools — and someday maybe combine them — to help build a more resilient, less hungry human experience that is radically better for far more people, and even more people to come, relieving our ecosystems of the terrible burden we’ve put on them.

But if we aren’t willing to ask hard questions about our wants and needs, to understand what we can change and what we can’t, we will only ever have the same ideas we’ve ever had, by way of the same conversations we’ve ever had, seeking the same answers we’ve always sought: How can I use this to benefit me?


Support Our Work

INI is 100% independent and mostly reader-supported.

This newsletter is free, but to support our work, get my popular “Not Important” book, music, and tool recommendations, get exclusive Top of Mind posts, to connect with other Shit Givers, and attend monthly live events, please consider becoming a paid Member.

From My Notebook

Health & medicine


  • This amazing new solar farm is also a prairie restoration project and a carbon sink, you’re welcome (also, let’s hear it for hedgerows!)

  • Harvard voted to build climate change into its medical school curriculum

Food & water

Beep Boop


Got YouTube?

Join the conversation
