Tomorrow is another one
How quickly can we adapt?
I’m Quinn Emmett, and this is science for people who give a shit.
Every week, I help 23,000+ humans understand and unfuck the rapidly changing world around us. It feels great, and we’d love for you to join us.
WELCOME TO THE UNKNOWN UNKNOWNS
Take a look at your calendar. Note the date. Today is the day the world changed forever.
Let’s step back into the human past for a moment, to 1992.
Even if you’ve never seen Jurassic Park, you know the quote, the one by a soon-to-be-gloriously-shirtless Dr. Ian Malcolm: “Your scientists were so preoccupied with whether or not they could that they didn’t stop to think if they should.”
Dr. Ian Malcolm’s quote has stayed relevant because it has applied to many of the technological advancements we’ve made since 1992, and it applies very much to what’s happening in “artificial intelligence” — but not in the way you think.
To be clear: I’m not here to bash innovation, not by a long shot, friend.
The world is better in almost every measurable way since 1992 not just because of environmental protections, lawsuits, anti-smoking campaigns, and the Sustainable Development Goals, but also because of advancements in genome science, big data, medical devices, targeted cancer therapies, pharmacology, assisted reproductive technology, heart-disease treatment, and so many more.
To answer Dr. Malcolm, we absolutely should have done these, because we could. We were finally, marvelously, technologically able, and millions of lives could have been — and were subsequently — improved by them.
Today, we can do many, many fruitful things, and we should do those, as well, because we understand them, and they are necessary.
For example, we have virtually every technology we need to build a world powered by renewable energy sources. And with climate change here, of course we “should” build them, to bring meaningful relief to billions of people, animals, and ecosystems. The only “can” holding us back is the political will to overcome trillions in fossil-fuel subsidies and industry lobbying to build what the hell we need to build.
Once we overcome those, we can, should, and will spend the next few decades building an abundant, incredible world, relieving the devastating burdens we’ve put upon the planet’s ecosystems and most marginalized people, and honestly accounting and paying for new tradeoffs along the way.
We can do those things, now, and simultaneously, because we have the information we need to understand them, and (some) time to most ethically execute them. These are the known knowns.
With AI, “can” is no longer a question of technical ability — we’re well past that — but a much more urgent question of how much change we can possibly absorb. It requires us to shift Malcolm’s question from “Should we do X?” to “Can we afford to do X, all things considered?”
This new AI era supersedes everything before it: the abundance born of its utter ubiquity will rapidly compound into known unknowns — which we have some experience with, but don’t handle very well — and soon into unknown unknowns, where fundamental assumptions about how society works evaporate and future shock becomes the status quo.
Why time matters.
One of the driving mechanisms of both the pandemic and climate crisis has been a refusal to calculate — much less pay for — the costs of a couple hundred years of progress, of building what we want to build, where we want to build it, and of whatever materials we please.
After decades of lies and lobbying, companies, governments, and rich people are finally doing the math on their daily emissions, and they’re throwing millions of dollars at carbon offsets that aren’t real, in a dishonest attempt to forestall the future.
The point is: The Industrial Era built the world we take for granted every minute of every day. It lifted billions out of poverty, but with perspective — with time — we understand just how much damage we’ve caused along the way.
We understand now what carbon does to the atmosphere, and that we can remove carbon — and that our ocean and trees have been doing it for us all along. But there are many species we cannot bring back, and we cannot put sea level rise back in the box. It will march on for the rest of our lives, and our descendants, too.
Progress comes with tradeoffs. It can take time to understand what they are, and the first and second-order effects of those. We’ve built some mechanisms to speed these up — massive, randomized clinical trials, for example. But as humans, we can only know so much, and project so far into the future, much less travel to it.
There’s a reason we pull ice cores from the Antarctic and Greenland — to give us a better understanding of what happened the last time things were like this.
But with AI, there are no ice cores to pull. There is no precedent in the geological record. The only factor that is relevant about the past is an understanding of how we as humans have made decisions, given enormous change, given time to adjust.
With AI, we have no time to adjust, to assess what is necessary to who we are, and how the pieces of our society and economy fit together, what’s worth preserving and what is not, and what our descendants lose when we let options be taken away.
To be clear: it’s not like AI overlords are going to tell people they can’t write fiction anymore. In the best case scenario, we’ll have even more time to write fiction. The question is who will pay us for it.
With AI, we can do wonderful, imaginative, and soon, impossible to imagine things. But the biodiversity of human contributions as we know it is at risk, and we don’t know what will happen when those contributions go away.
In Thinking in Systems, Donella Meadows described what happens when we willingly sacrifice biodiversity.
To say that we don’t all agree on this framing yet would be an understatement: we've normalized industrial meat to the tune of one soccer field of rainforest lost a minute, every minute, and air pollution that kills eight million people a year, every year, because they are convenient.
Barring an asteroid or supervolcano explosion or both, climates usually change over millennia, or longer. We have sped ours up, and in the wrong direction, but we still have some time, some room for error, to do as much as we can.
We’ve begun to course correct, like Captain Jack Aubrey in the Southern Ocean, chased by some massive Dutch ship-of-the-line, freezing cold water rising in the hold, our masts splitting down the middle, who knows how many midshipmen already flung over the side or exploded by cannon balls or grape shot, but somehow we push on with the belief that we’ll make it out of this, that clear skies and calm waters are just around the corner.
We’ve (barely) enough time to turn the proverbial ship around — knowing, of course, that millions have already suffered and many more will during the transition.
There are plentiful known unknowns when it comes to the climate crisis — heat, drought, flooding, storms, and of course, what we can build with unlimited renewable energy, and what the contributions might be from eight million people a year who would have otherwise died from air pollution.
We don’t know, but we’re sure as hell going to find out. These are incredibly complex systems we’ve fucked with, and there are real tipping points with inevitable outcomes we can’t understand yet.
But we’ve triangulated the information we do have, and many of us are operating at maximum warp to build a radically better future and atone for the past, to multisolve with shit like solar panels over dwindling reservoirs, to shut off the gas, to map the ocean floor, to protect it and the waters and creatures above it.
We can use AI to move faster on those. But what will be the costs to access the power we want?
I got into sci-fi writing because I wanted to help imagine what’s just beyond our reach — not too close, not too far — and to question how we get there.
Years later, I find myself here, trying to help tens of thousands of readers more effectively put a dent in the universe.
Used ethically, AI can help us put one hell of a dent in the universe.
The future-positive known unknowns of AI are abundant:
New medicines, but for which diseases?
New ways of learning languages, but what might be the most effective way to do so?
New ways for less educated workers to compete (and contribute) alongside more educated ones? But where? Will it require Microsoft Office, Google Docs, or something else?
More productivity and more free time (for some), but more time to do…what?
To find meaning?
Is it, as Viktor Frankl wrote, a new opportunity to “transcend subjective pleasures by doing something that points, and is directed, to something, or someone, other than oneself … by giving himself to a cause to serve or another person to love”?
As it stands, the current pace of AI doesn’t give us much time. Time to react, much less to plan.
But like the climate crisis on fast forward, AI is only going to compound on itself until the clock is ticking so fast that time doesn’t mean what it used to.
Recognizing we cannot slow AI’s progress now, it is essential we ask of ourselves, our money, our tools, and our time:
What’s it all for?
In A Wizard of Earthsea, series protagonist Ged is one of a few special wizards, a teen who feels his potential and powers are criminally unappreciated. So one day at school he lashes out at a rival, showing off in front of peers and mentors, never stopping to question what may come of it.
It goes poorly.
Ged spends the rest of the series tempering his mighty powers, and atoning for what he wrought, because he increasingly understands the wide-ranging implications of that one decision, and because he has the time to make recompense.
We’re not going to temper anything — with notable, muddied exceptions for nuclear weapons, cloning, and germ line editing, we don’t temper progress.
Even with those, we are mostly dealing in known unknowns.
In The Three-Body Problem, Ken Liu translated Cixin Liu’s brief summary of human progress into English.
I don’t say this lightly: Today’s AI-copilots might be obvious, but we have manifested a future of unknown unknowns.
Describing the scope of real AI as anything but everything, everyone, everywhere, all at once, would be a disservice, and we are not prepared for the transition.
It’s one thing to adapt to a sea that is rising slowly but surely over decades and centuries.
It’s another to adapt to a novel coronavirus for which we have no natural immunity, or AI tools that have literally just this week unlocked vast educational and productivity improvements, but which could quickly overturn our understanding of education and productivity, of employment, of inequality, of biological research, and a million other building blocks of society that we can’t possibly foresee.
AI — or really just fancy machine learning — has been a part of your life for a decade now, from social media to online advertising to Siri to mortgages and policing.
But compared to just this week, those tools were primitive at best, with results that have been, well, decidedly mixed. Known knowns, our most obvious instincts and biases at work, more connected and made faster.
I cannot believe I am saying this, but as relatively limited as these new tools are compared to what we’ve always imagined artificial general intelligence, or AGI, would be, we are not far off from…a version of it. Something. Something even more disruptive.
There is a very long way to go, but time-space does not mean the same thing to AI as it does to us, and LLMs that can inhabit different personalities on demand, all while somewhat accurately posing as a law student, a radiologist, a musical historian, a micro-economist, and an action-movie screenwriter, are a paradigm shift we are not ready for.
As of today we have entered a world and empowered a technology that we simply do not understand, much less are able to control or rein in. We barely know how to handle a late-pandemic, early-climate crisis economy, with known, measurable inputs and outputs.
Many things about AI are out of our control, but knowing what we can control and operating with purpose can upvote fantastical opportunities, and alleviate some of the inevitable and unimaginable losses.
We have to ask all of the hard questions right now.
I firmly believe we can celebrate a new era like this one while simultaneously questioning the ethics of who makes the underlying technology, what (and for something like face-scanning, who) it’s made from, who profits from it, and who will suffer from it.
Which is already getting more difficult to answer.
I intended this to be a more timeless piece — if that’s possible in the AI era — but it’s important to understand for a moment how one of the primary players, OpenAI, has evolved from a well-funded open research non-profit to, in part, a closed for-profit.
Google has always been for-profit, so OpenAI’s pivot and feedback loop aren’t difficult to understand, or even a new idea: they’ve said becoming a for-profit entity enables them to compete for talent, who can use access to increased funding for more research, the subsequent intellectual property from which becomes a further profit mechanism, enabling them to hire even more talent, and so on.
But they’ve also refused to share any more research and said sharing in the past was a mistake, because doing so alongside their effort to bring about AGI would give bad actors too many pieces to put together on their own. That is, it’s not because it would torpedo their partner and sugar daddy Microsoft’s new business model, the way Microsoft torpedoed their ethics and safety team just this week.
These moves require more pointed questions from us: is their backtracking a way to win the arms race against Google and Meta and others? Is it for safety? Is it to preemptively eliminate opportunities for regulation and enforcement?
Without more context, I think we can safely assume all of these are true. But AI doesn’t operate in isolation — the exact opposite — so without answers to these, asking broader questions becomes more difficult, too:
What are the raw mineral and climate impacts of NVIDIA’s chips?
What are the power requirements for a day of use even now, at the beginning?
How much should our precious water cost to cool the data centers we’ll somehow become even more reliant on?
Who should regulate these things? States, countries, the UN? No one? The “self-regulating” market?
How will they self-regulate for ethics and safety without an ethics and safety team? Google set the pace by laying off their ethical AI team years ago. We can only assume this is the way forward.
And to paraphrase The Mandalorian himself, this is not the way, and certainly not when we’re dealing with unknown unknowns.
Maybe governments will step up? Before legislation and regulation comes understanding — not simply how something works, but what its potential may be and who it could affect, to protect the vulnerable and still leave room for innovation and to maximize the universal good it can do.
A compromised, octogenarian Congress isn't the answer. But that’s the obvious dig, and if it isn’t clear, I don’t think anyone has the answer. And willful ignorance DEFINITELY isn’t the fucking answer.
Which is why it’s so vital we ask better questions. Big questions. Hard questions.
One analogous climate-era example would be, “How can tens of millions of people continue to live in the American West knowing it’s well into desertification?”
A more future-positive AI question would be, “If the cost for pharmaceutical companies to research new medicines drops 90%, how can we cap consumer costs for new medicines (or devices) to provide for universal access (especially if AI’s going to make so many jobs suddenly expendable)?”
Here’s what we know. Here’s where we start.
The known knowns: Training these foundational models requires very specific chips, and enormous amounts of power, both of which are enormously tenuous geopolitical questions right now. Derivative versions of the models, from the API or public research, require far fewer chips and far less power — they can be trained more specifically and run right on your phone — because the broader work is already done. For those who seek to profit from them, the work will never be done.
These tools will struggle at times to live up to all of the hype, including what I’ve posited here. But they will eventually, technically make jobs and entire industries like graphic design, screenwriting, editing, non-fiction writing, accounting, architecture, software development, data science, market research, legal, customer service, and many, many others expendable, and soon. Do not delude yourself. The copilots of today will become the pilots of tomorrow. There is no going back.
These tools, like the workers they are replacing, are very imperfect, often inaccurate, and biased. We’d like to believe they know more than they do, when in reality they are incapable so far of making decisions on their own. But they will inevitably grow, and change. As they grow, they surprise us, so we expect more from them than they are capable of. This is what we do.
But this is also what we do:
We are a species that finds enormous meaning in work, in creativity, in expression. We are most happy when we are connected live and in person, and when we have a purpose to work towards, even if a rare few of us get to actually choose our work and that purpose, and on the other hand, even if many of us could stand to work a little less.
We can electrify cars, but what about the tens of thousands of people who service combustion engines because they love working on machines?
We can automate checkout, customer service, or food prep but what is the cost of less human interaction?
How do we accommodate both versions?
If there are tools that let us spend more meaningful time with young people and the elderly, to rebuild our relationships with nature, to make it easier to converse with one another in whatever language, to personalize learning, to increase crop yields, to distribute our clean energy more efficiently, to increase access to financial services and essential infrastructure services, to provide for a more robust safety net, to predict natural disasters and speed recovery, and to make wellness more universal with an increased emphasis on preventative health — we should use those.
Those tools are here, or coming, and that’s wonderful. But we have to try to understand the known tradeoffs as best we can, and steel ourselves for the rest, considering our most basic needs.
So now is the time to ask big questions about social safety nets, about reinvigorating hands-on-work industries, improving labor standards, economic diversification, trade schools, re-training, and more, to support one another, to make for a soft landing.
Look. After all of this, time might not actually be real — long story, I’ll call you later — but until we make some serious advances in theoretical physics, the past has already happened, and tomorrow is always right around the corner.
There’s no going back.
So we have no choice but to go into tomorrow with our eyes wide open, to make sure we don’t automate away what gives us life, or take away the livelihoods of those who rely on the act of creation to find meaning for themselves, and whose creations often provide it for the rest of us.
Last week’s most popular Action Step was finding out what your bank is doing with your money using Mighty Deposits.
Donate to the World Resources Institute to support their work developing practical solutions to improve lives worldwide.
Volunteer with Energy Justice to support communities most impacted by pollution and waste.
Be heard about child hunger and urge your representative to co-sponsor the Universal School Meal Program Act of 2023.
Invest in climate solutions with The Climate Finance Fund.
Support Our Work
INI is 100% independent and mostly reader-supported.
This newsletter is free, but to support our work, get my popular “Not Important” book, music, and tool recommendations, connect with other Shit Givers, and attend exclusive monthly live events, please consider becoming a paid Member.