A Better World is Possible

Greetings from Maine, where Kin, Poppy, and I are visiting family for a few days. As such – it being summer – today's missive is a little shorter than usual.
And, quite unlike recent emails, today's newsletter opens with some very good news: Zohran Mamdani's win in the NYC primary on Tuesday. Also wonderful, of course: the defeat of sex pest and former NY Governor Andrew Cuomo (not to mention the defeat of Whitney Tilson, founder of Democrats for Education Reform, who came in with 0.8% of the vote).
We can be free and we can be fair. We can demand what we deserve. – Zohran Mamdani
In our hotel room, en route from NYC to Blue Hill, Poppy had her first experience with a full-length mirror – I guess neither of the apartments she's lived in has had one. No surprise, if you're familiar with [dog] psychology, Poppy failed the mirror test, unable to recognize that the reflection was her own, uncertain if it was another dog, and just generally confused by the whole thing. You know, you think you have a good dog, an intelligent dog, even – at least, you've trained her well: she knows some tricks – she rolls over, she "plays dead," she shakes hands, she comes when her name is called. She sure seems smart. But then you put her in front of a mirror for the first time, and goddamn you realize, "Oh you have no idea what the hell is going on here, do you? You sweet, stupid creature."
It's a bit like the political pundit class and Zohran Mamdani: simply no ability to comprehend the world, no ability to fathom or process what's happening around them. I mean, at least Poppy has an excuse: dogs (and many other more-than-human beings) generally do not recognize themselves in a mirror. The pundits have less of one: they surround themselves with mirrors of a different sort. But they have little self-awareness, almost no awareness of others. Like some animals, they respond instead with aggression to what they see.
And there's an analogy here too with those promoting "AI," unable (unwilling) to recognize the inhumanity that Silicon Valley has built into the development and deployment of the technology. Powerful tech oligarchs have forced upon us systems designed to cultivate individualism and selfishness, strategically undermining political solidarity by selling us a version of "community" and "social" that they control. These men speak of a future in which none of us matter – none of our work, none of our dreams, none of our joy, none of our lives. Only our data.
Too many people, to borrow the phrase from Ryan Broderick, are "emotionally psyopping" themselves with "AI" – and not just those folks whose mental health crises (and conspiracy theories) appear to be exacerbated by chatbots, but arguably those who insist they are now vastly more productive, their cognition and their capabilities greatly enhanced thanks to their regular chit-chats with the bullshit machine.
But here's what I think: most people don't want "AI." Most people are exhausted by the onslaught of technology "upgrades" that have consistently made everything worse. Most people stopped being wowed by technology a decade ago because they don't have a stock portfolio or consulting gig that demands their ooohs and aaahs. Most people know "AI" is bad for their future – that it's something their bosses want in order to surveil, control, and probably fire them. (According to Bloomberg, tech companies are no longer bragging about the "hyperscaling" of their user-base; they're boasting about how few people are on their payrolls.) There's a reason, incidentally, that one of Cuomo's big backers, alongside the Democratic establishment, was DoorDash: the future of work is piecemeal employment, algorithmically organized.
Zohran Mamdani advocated for the delivery drivers; many of us understand and even share their precarity.
It's probably worth noting here too, Randi Weingarten's recent exit from the DNC. The teachers' union, despite the rhetoric of many powerful men, is not a particularly radical group of workers. But see, it's not radical, again despite the rhetoric of many powerful men, to hope and dream and demand a livable wage in a livable world.
Zohran Mamdani did not win because of social media gimmickry, as some pundits have tried to spin their mirrors since Tuesday's results. He won because of the work of grassroots organizers – years of work, years of solidarity. He won because he is a charismatic politician whose proposals appealed to hundreds of thousands of New Yorkers – because he offered a meaningful, hopeful vision of a better city. (All without throwing trans folks or immigrants or Palestinians or the poor under the bus! Go figure!)
You might think "AI" offers a meaningful vision of a better world, but damn, "AI" is some Andrew Cuomo level shit: just this empty but entitled presumption that we will all keep going along with the exploitation.
We won't.
Meanwhile in ed-tech...
"Khan Academy CEO predicts AI in the classroom will be like 5 'amazing graduate students' assisting teachers," says Business Insider. I mean, we could actually hire 5 amazing graduate students for each classroom teacher if we had the will; but instead, powerful people want to replace human labor – all human labor – with machines. Not because it's good. But because they profit.
MOOC provider Udemy announced it's acquired Lummi, an AI stock image generator. I mean, we all knew that AI slop was coming for ed-tech, and frankly, we could have guessed that MOOCs would be the vehicle.
The Information reports that "OpenAI Quietly Designed a Rival to Google Workspace, Microsoft Office." This comes on the heels of a lot of marketing buzz surrounding AI-focused web browsers. As schools have already reoriented themselves around "productivity software" – whether Google's or Microsoft's – it's worth thinking about the push for AI from within and adjacent to these tools.
"Getting off US tech is no easy task," Paris Marx admits (but offers some good suggestions for alternatives). Getting off ed-tech – de-computing the classroom – will be no easy task either, but surely the time has come.
"Large language models are a bad replacement for human therapy, a new study says." But here's The New York Times, consistent with its "AI" laundering, asking, "Kids Are in Crisis. Could Chatbot Therapy Help?" (Sound the "digital natives" klaxon for this one.)
One of the things I find incredibly annoying is the way in which we always frame technology as doing things to schools – the former the actor, the latter the object. This is often echoed in how we speak of students too – technologies, like school itself, do things to students, who have little agency in this framework (and often, sadly, in reality). What if, instead of insisting that "AI will change education," we changed the story and pushed for education to act rather than bend? Or, as Dan McQuillan argues, "The role of the university is to resist AI."
"School isn't just about basic skills," writes Michael Pershan. (Some time I will write about how I've learned more from math bloggers than any other discipline of education writers...)
"GPTZero is useless," says James O'Sullivan.
Eryk Salvaggio on "The Black Box Myth: What the Industry Pretends Not to Know About AI."
Ah yes, some folks will keep trying to repackage technologies of eugenics, won't they? Michael Petrilli wants to convince ed reformers that DNA analysis is the future: "Measuring Student Potential—with Genetics." My god, if it's not one thing, it's another with these guys.
"Our exhausted words and theories are not, I believe, as important as how we relate to each other and the relationships we build." – Mariame Kaba, "The Struggle is Permanent. Keep Fighting."
Thanks for subscribing to Second Breakfast. Please consider becoming a paid subscriber. Your financial support enables me to do this work: resisting the mal-engineering and mis-understanding of our lives.