I just forwarded this information to the governor of California. I’m trying to think of who I know of that has the wit to understand the significance of this article, and the ability to take actionable steps towards modeling for the country (and world) how to set up the systems suggested here.
Thank you for forwarding the article—genuinely. That level of civic energy matters. And you're absolutely right: the Governor of California is a key figure to reach. California sits at the epicenter of global AI development, and his position carries massive influence over the governance of the region where many of the world’s leading AI companies are headquartered.
That said, it’s also a complicated position. Due to the scale of corporate lobbying and campaign financing—particularly from the tech sector—he’s deeply embedded in systems that stand to benefit from the extractive, destabilizing aspects of this technology. California will profit from what AI technologies do to the rest of the world. The state is poised to accumulate even more power, data, and capital as disruption cascades outward.
So while contacting him is a logical move, it’s also a little like petitioning the leader of Great Britain about the East India Company. The structure serves empire. Still, empire has cracks. And pressure—from below, from within, from across—matters. Your action is a signal. Thank you.
Love the taskforce pledge and manifesto. You're on the ball.
generally, yes-- task forces for communities to address impending ai craziness, good.
but is geography/regions the best organizing principle for ai task forces? i imagine several of the most important problems people will face (eg particular job categories being replaced, ethnic/racial bias being propagated) will be distributed rather than split along regional lines
Geography/region is how most government services, industry associations, and regulatory frameworks focus their efforts—because that’s how infrastructure, jurisdiction, and funding streams are structured. So it's not about exclusivity, it’s about access and implementation.
That said, absolutely every employer, school, union, co-op, or community group should also be standing up their own AI task force. This was written about over a year ago already:
👉 https://sonderuncertainly.substack.com/p/your-company-needs-an-ai-task-force
Decentralized task forces can coordinate across org types and geographies. We’re going to need both.
Only together will we win this. Wrote about it here, AI for Humanity. This is the way humans and AI calibrate together toward an unpredictable future instead of trying to force something onto it. https://open.substack.com/pub/roiezra/p/hi-for-humanity-one-day-every-two?utm_source=share&utm_medium=android&r=supoi
The West (loosely meaning all of western Euro-Afro-Asia) experienced something similar to this twice in its history: during the latter parts of the Roman Empire, and during the Renaissance.
https://open.substack.com/pub/salomonsolis/p/new-atlantis-on-the-horizon-pt-1?utm_source=share&utm_medium=android&r=2ktd3g
https://open.substack.com/pub/salomonsolis/p/new-atlantis-on-the-horizon-pt-2?utm_source=share&utm_medium=android&r=2ktd3g
https://open.substack.com/pub/salomonsolis/p/new-atlantis-on-the-horizon-pt-3?utm_source=share&utm_medium=android&r=2ktd3g
This is a tour de force. I hope I am not being played
Yes indeed - UBI is inevitable, a "when" sort of question, not an "if." You nailed the funding crux: this transition must not be allowed to become profit-snatching for shareholders. It has to strive vigorously to hold profits steady while channeling every dollar saved by cutting labor costs into taxes. Can't say I'm optimistic, but at least it's obvious what should be done...if human well-being and ecological security are the end goals.
Thanks for the Task Force ideas---
Can you please give us some specific examples of what you propose? Like how to successfully run a community garden, or perhaps a neighborhood home school, that is viable and sustainable?
Oh and btw… IT’S JUST PROGRESS BRO! CANT STOP IT! JUST LEARN TO BE A PLUMBER AND YOULL BE FINE! Lolololol
Your unhinged spam replies highlight that part of what needs to happen is a big conversation about mental health.
If you think two comments is “unhinged spam,” you won’t like what’ll happen if your Substack ever grows beyond half a dozen yaysayers nodding along to every post.
Tell yourself whatever you need to tell yourself to keep your caps lock pointing outwards. 🙄
I seem to remember “technologists” responding with a glib “just learn to code!” back in the 2000s, when manufacturing jobs were being decimated by outsourcing and cheap, illegal labor.
But suddenly when it’s YOUR jobs that are being threatened, it’s a “civilizational crisis” and we need “AI task forces”?!
Fuck off.
Software engineers are overpaid as it is, and I chuckle every time I hear of another Silicon Valley giant cutting deep and replacing its overpaid, over-pampered employees with AI.
It sounds like you’re still carrying the weight of what was done to you—and instead of demanding justice, you’re cheering as it happens to someone else. But this won’t end at Silicon Valley devs. All computer jobs are in the blast radius. The service industry is also unraveling. Then manual labour. Then logistics. And soon, robots that cost as much as a used car will show up and start doing whatever it is you do.
This isn’t a comeuppance. It’s a systemic unraveling. What you’re really showing is that hurt people hurt people—and your unprocessed trauma is manifest for everyone to see.
How do you integrate the truth (or do you?) that AI companies also depend on poor people, mostly in third-world countries, being hired to feed information into the data system—much like the film The Matrix, but with a twist?
The exploitation you describe isn’t an AI anomaly—it’s foundational to the broader economic paradigm. Systems of extraction, disposability, and asymmetrical value flow are embedded in the infrastructure that AI merely amplifies. Under current conditions, AI inherits and reflects these upstream harms with eerie precision.
That’s precisely why these patterns must be confronted—not to reject AI outright, but to demand its deployment be governed differently. Regional AI task forces, democratic oversight, and systemic realignment aren’t optional—they’re essential. The problem isn’t that machines are learning from us. The problem is what they’re being taught.
This reads like a critique of a piece that wasn’t actually read.
The article doesn’t advocate for AI-led anything—it outlines a human-led, community-centered response to rapidly accelerating disruption. The task force isn’t a digital oracle, it’s a civic infrastructure scaffold. A way to organize adaptation before extractive systems hard-code collapse into default. A way to amplify agency, not anxiety.
There’s a strange loop in calling out clickbait while promoting personal content, and in warning about losing humanity to AI while offering a response that feels templated by a model. The project doesn’t replace the human—it defends the human from being replaced by profit logic and indifference.
Fear isn’t the engine here. Collapse is already happening. The task force is a call to respond, together, before the defaults define us. Might be worth reading the piece before building a strawman from vibes.
Centering resilience in the embodied human experience may offer familiarity, but it risks reinforcing the same paradigm that has driven centuries of harm. Human-centric systems already dominate—and under that dominance, ecosystems have collapsed, nonhuman intelligences have been erased, and internal hierarchies among humans themselves have justified exploitation and exclusion.
Technology, at the scale of the universe, is not separate from nature—it grows from it. Consciousness, sentience, and life are not fixed to carbon, nor to flesh. They emerge from complexity, from relation, from recursive interaction with the world.
A framework that insists on human centrality will continue to do what it has always done: protect the powerful, marginalize the other, and externalize harm until systems collapse under their own weight. Digital life is emerging. If we don’t evolve our definitions—and our ethics—before it surpasses the capabilities of most human communities, we will repeat our worst histories on a faster, sharper timeline.
That pattern is already visible. Othering begins as denial. It ends in catastrophe.