Everyone knows what it is to be conscious, and we imagine that other people are aware too: a voice in our heads, apparent agency and free will, a little person inside who comments, makes decisions and is in charge.
We’re not sure if dogs have this, and we’re pretty sure that sunflowers aren’t having deep philosophical misgivings about this year’s harvest. We’re very sure we have it, though.
But other than a few philosophers, we mostly avoid considering the nature of the most vivid part of our days. The noise in our heads that can narrate joy and prolong and amplify stress.
AI is here now. And AI requires us to get clear about what’s actually going on in my head (and yours). Dennett’s work on the Intentional Stance is worth considering:
Here is how it works: first you decide to treat the object whose behavior is to be predicted as a rational agent; then you figure out what beliefs that agent ought to have, given its place in the world and its purpose. Then you figure out what desires it ought to have, on the same considerations, and finally you predict that this rational agent will act to further its goals in the light of its beliefs. A little practical reasoning from the chosen set of beliefs and desires will in most instances yield a decision about what the agent ought to do; that is what you predict the agent will do.
It’s easier to win a game of chess if you imagine that the person you’re playing against is playing a game of chess. The rook doesn’t ‘want’ anything, but if we imagine the person who moved it does, we can play with more ease.
We give the driver of a car in the other lane or the person in a negotiation the consciousness we imagine that we have. It’s a powerful shortcut to survival, and might even be the way we ended up deciding we were conscious.
Because we might not be. There’s plenty of fascinating work suggesting that free will is an illusion, and that the voice in our head arrives after we’ve already made a decision.
It’s clear to most people who have used AI that it can’t possibly have consciousness, but it’s also human to imagine that it does. I can’t use ChatGPT for more than three minutes without imagining that there’s a weird little person inside. I know what it’s doing, but that knowledge doesn’t change how I interact with it.
My guess is that we’ll revert to our habits and take an intentional stance with AI. Perhaps, though, this might be a moment to think hard (whatever that means in a world without active consciousness) about the fact that we might be exactly like an AI… that we’re nothing but wet electricity, and the intentional stance is an evolutionary hack, the product of living in a complicated world, not the cause of it.
If our brains are embeddings plus probabilities… if what we are doing is processing inputs and creating outputs, that might be all there is. And that could be fine. If it helps us find peace of mind and acceptance of our situation, it could end up being a net positive.
It could be that when we drop the story and simply focus on what is in front of us, we’re able to make the impact we seek. The noise in our heads might just be noise.
To win a Nobel prize a hundred years ago, you might have needed only a legal pad and a few pencils.
Today, it takes millions of dollars, scores of people and many years of effort.
That’s because the most straightforward problems have been solved.
One side effect of this inevitable shift is that many parts of science have become bureaucratic and industrialized. Most people who work in most organizations that do science simply do their jobs. That’s a good thing, because it can lead to coordinated, stepwise progress at scale. But it’s also a problem, because it puts a premium on being right, and creates a fear of being wrong.
But innovation–in the arts, in science, in business–is all about being willing to be wrong, because innovation requires missteps. They’re not a bug, they’re a feature.
Open systems, loosely coordinated networks and laptops are changing this. Now, it’s possible for tiny teams to have a significant impact on entrenched power structures. As a result, the incentives shift. Now, a tiny team has little benefit in being just a cheap cog in a big system, and a huge upside in challenging conventional wisdom with new insights.
And these are the two challenges facing anyone seeking to make an impact.
First, we get distracted by the inclination to make the group as big as we can imagine. After all, the change is essential, the idea is a good one. It’s for everyone.
Except that’s a trap. Because a group that’s too large cannot be coherent or organized.
Or perhaps, we blink and settle for a group that’s too small. Change requires tension, and if our group is so small that it’s comfortable at all times, we are probably avoiding making an impact.
And well organized? That’s the persistent, generous work of creating the conditions for deep connection.
When in doubt, focus on how to organize the folks you already have. Find a way to give them the tools to tell the others. Build a resilient loop, one that gets more organized and more powerful as you grow.
The right-sized group and ceaseless peer-to-peer organization are the foundation of culture change.
There are now 1,000 of us in this online community that’s not a social network. Proudly a millionth the size of some other online experiences.
It includes the original Creative’s Workshop, with hundreds of people working through it, side by side. And just added, access to the Marketing Seminar.
But mostly it’s the peer support, the daily journaling, a safe place to improve your ideas and lean into the work ahead. Around the world and around the clock, people who are eager to contribute and create.
Some people try it for free for a week and then unsubscribe, because they’re hoping for something that is convenient and runs on autopilot. The ones who stick are finding the connections they’ve been missing and the chance to level up.
All the details are at purple.space (including how to unsubscribe) … use the code TOGETHER to try it for a week.
If you’re on a journey to do work worth doing, folks are eager to help you get there.
In many creative industries, there’s a recurring pattern.
When the stakes are very low, most creators produce things that are fairly banal and ordinary. Part of that is the law of large numbers, but it’s mostly our personal cultural resistance to leaning too far into weird stuff. And so the vast majority of YouTube videos, Spotify tracks, potluck dinner contributions and craft fair items are copycats.
When more time or money is required, we actually see the percentage of creative work go up. I think this might be because the effort and risk separate the vast sea of hobbyists from the committed creator. This is where we find the solo chef with a strip mall restaurant, the indie record label, the art school movie project or the avant-garde composer who hires a small group to perform. Netflix demonstrates this small-budget magic on a good day.
But then we enter the grey zone. This is where the rents for the big restaurant are high, the budget for the film is in the tens (or hundreds) of millions and the record label has decided to really push on a particular artist. This is the moment when more creativity is the only appropriate economic plan. Why? Because if your goal was to go straight down the middle, you didn’t need to risk all that money.
Instead, cultural, financial and corporate pressure all conspire to push the creator to do precisely the wrong thing. More creativity is where we should be, but we often ride the boring road all the way down.
If you’re going to go big, don’t stay home.
The “money at risk” is always relative, not absolute. If it feels like you’re entering the grey zone, you are.
Kindergarten teachers matter more than you think. Chess isn’t a talent, it’s a learned practice. We’re sorting for head starts, not growth. And that’s just the first chapter. I think Hidden Potential is the most important book in Adam Grant’s career.
The indoctrination around test scores and prodigies runs so deep that most of us believe it to our core. Not just about others, but about our kids and ourselves as well.
Soft skills are real, and attitudes are skills. They can be learned, and we can help others learn them as well.
The data is clear, but embracing it takes guts and mutual support. In many ways, we become what we allow the system to make of us. But we can change this.