US senator Bernie Sanders amplified his recent criticism of artificial intelligence on Sunday, explicitly linking the financial ambition of “the richest people in the world” to economic insecurity for millions of Americans – and calling for a potential moratorium on new datacenters.
Sanders, a Vermont independent who caucuses with the Democratic party, said on CNN’s State of the Union that he was “fearful of a lot” when it came to AI. And the senator called it “the most consequential technology in the history of humanity” that will “transform” the US and the world in ways that had not been fully discussed.
“If there are no jobs and humans won’t be needed for most things, how do people get an income to feed their families, to get healthcare or to pay the rent?” Sanders said. “There’s not been one serious word of discussion in the Congress about that reality.”



The “reasoning” models aren’t really reasoning; they are generating text that resembles a train of thought. If you examine reasoning chains that contain errors, you can see that many errors are completely isolated: there is no lead-up, and the chain carries on as if the mistake never happened. In an actual human reasoning chain, errors propagate.
LLM reasoning chains are essentially fanfics of what reasoning would look like. It turns out that expending tokens to generate more text and then discarding it does make the retained text more likely to be consistent with the desired output, but “reasoning” is more a marketing term than a description of what is really happening.
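To make the “generate more text, then discard it” mechanic concrete: some open reasoning models emit their chain inside a delimited span that is stripped before the answer is shown to the user. A minimal sketch, assuming a `<think>…</think>` convention (the tag name and format are assumptions; specific models differ):

```python
import re

def split_reasoning(raw: str) -> tuple[str, str]:
    """Separate a delimited reasoning span from the retained answer.

    Assumes the model wraps its chain in <think>...</think>. The
    reasoning text consumes output tokens but is discarded; only the
    remainder is kept as the user-visible answer.
    """
    match = re.search(r"<think>(.*?)</think>", raw, re.DOTALL)
    if not match:
        # No reasoning span found: the whole output is the answer.
        return "", raw.strip()
    reasoning = match.group(1).strip()
    answer = (raw[:match.start()] + raw[match.end():]).strip()
    return reasoning, answer

reasoning, answer = split_reasoning(
    "<think>Check divisors up to 6; none divide 37.</think>37 is prime."
)
```

The point of the sketch is only that the “reasoning” tokens are an intermediate product that never reaches the reader, which is compatible with either reading of what those tokens are doing.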
LLMs do not reason in the human sense of maintaining internal truth states or causal chains, sure. They predict continuations of text, not proofs of thought. But that does not make the process ‘fake’. Through scale and training, they learn statistical patterns that encode the structure of reasoning itself, and when prompted to show their work they often reconstruct chains that reflect genuine intermediate computation rather than simple imitation.
Stating that some errors appear isolated is fair, but the conclusion drawn from it is not. Human reasoning also produces slips that fail to propagate because we rebuild coherence as we go. LLMs behave in a similar way at a linguistic level. They have no persistent beliefs to corrupt, so an error can vanish at the next token rather than spread. The absence of error propagation does not prove the absence of reasoning. It shows that reasoning in these systems is reconstructed on the fly rather than carried as a durable mental state.
Calling it marketing misses what matters. LLMs generate text that functions as a working simulation of reasoning, and that simulation produces valid inferences across a broad range of problems. It is not human thought, but it is not empty performance either. It is a different substrate for reasoning, emergent, statistical, and language-based, and it can still yield coherent, goal-directed outcomes.
That’s some buzzword bingo there… A very long-winded way of saying it isn’t human-like reasoning but you want to call it that anyway.
Even if you accept that the reasoning often fails to show continuity, well, then there’s also the lying.
Consider a reasoning chain around generating code for an embedded control scenario. At one point it says the code may affect how a motor is controlled, and so it will test whether the motor operates.
Now, the truth of the matter is that the model has no access to perform such a test. But the reasoning chain is just a fiction, so it described a result anyway, asserting that it performed the test and that it passed, or failed. Not based on an actual test, but on text prediction. So sometimes it says the test failed, then carries on as if it passed; sometimes it decides to redo some code to address the error but leaves it broken in real life. And of course it can claim the code works when it didn’t at all. It can show how “reasoning” helps, though: the code was generated based on one application, but when people applied it to a motor control scenario they had issues, and generating the extra text caused the model to zero in on a Stack Overflow thread where someone had made a similar mistake.
I didn’t call it human-like reasoning? Just that reasoning isn’t limited to human-like reasoning.
I’ve already covered your other points in this comment.