

AI companies are helping create a distrust in media that took the Russian state apparatus decades to perfect. It’s amazing what venture capital can accomplish!


And even in its “current, final form” I do not believe it is as decent as this paper suggests. (See my other comment for more info.)


I worry that others might be under that impression.


Resharing this to BuyEuropean communities


It’s accelerated: In 2001, technology companies were forced to collect user data and realized it could be a goldmine. Today, technology companies are being forced to collect people’s IDs… I’m sure this will end up just fine.


Oh, would you like to see something gross?
Brandon Wang’s recent blog post, “A sane but extremely bull case on Clawdbot / OpenClaw”
You know it’s bad when even Hacker News, a website funded by venture capital demon Marc Andreessen, calls him out:
Fine article but a very important fact comes in at the end — the author has a human personal assistant. It doesn’t fundamentally change anything they wrote, but it shows how far out of the ordinary this person is. They were a Thiel Fellow in 2020 and graduated from Phillips Exeter, roughly the most elite high school in the US.
Other comments point out his opulence: hotels charging $850 a night, reservations at expensive Bay Area restaurants, buying $80 gloves, and typing in lowercase because “sam altman types like this, so this is what is cool to the agi believers.”


Something is fishy here.
Manifest V3 has hard limits, and the developer of uBlock Origin has documented issues with the supposedly “just fine” new APIs that AdBlock Plus relies on:
uBO Lite reliably filters at browser launch, or when navigating to new webpages while its service worker is suspended. This can’t be achieved without uBO Lite’s declarative approach. Example: [video]
But he has also said that updates to its filters depend on Google graciously allowing them:
There are no filter lists proper in uBOL. There are declarative rulesets and scripts which are the results of compiling filter lists when the extension package is generated. Those declarative rulesets and scripts are updated only when the extension itself updates.
In other words, you can either have a tool that blocks ads unreliably, or a tool that can only update ad-blocking rules if an ad company allows it.
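For the unfamiliar, here is roughly what that declarative approach looks like (a minimal sketch with illustrative values, not uBO Lite’s actual rules). Static rulesets are baked into the extension package, so shipping a changed filter means shipping a whole new extension version through Google’s store review:

```typescript
// One static declarativeNetRequest rule, written as a TypeScript object for
// readability; real rulesets are JSON files listed in manifest.json under
// "declarative_net_request.rule_resources". Values here are illustrative.
const blockRule = {
  id: 1,
  priority: 1,
  action: { type: "block" },
  condition: {
    urlFilter: "||ads.example.com^", // filter-list-style URL pattern
    resourceTypes: ["script", "image"],
  },
};

// The only runtime escape hatch is the dynamic-rule API, and its rule cap is
// far below the size of real filter lists:
chrome.declarativeNetRequest.updateDynamicRules({ addRules: [blockRule] });
```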
There are also things that are objectively impossible to do with Manifest V3.
So consider me skeptical. Any perceived parity or improvement is due to competent developers, not to any willingness on Google’s part to make Manifest V3 good. I think I’ll trust the people building ad-blocking tech over a couple of university students.
(Copied from my original comment on an article that uses this as a source)


What a disgusting philosophy to have towards others. Please keep it to yourself.


Is that seriously an “AI is like a child” poster made to motivate workers?
AI companies sure love to treat humans like machines, while humanizing machines.


The source for creating the model, the training data, is closed: locked down, a heavily guarded corporate secret. And unlike the source code of ordinary software, this data might have been illegally or unethically obtained, and Mistral may be violating the law by not publishing some of it.
You can “read” the assembly of a freeware EXE about as easily as you can “read” the open weights of a closed-source LLM blob: not very easily. That’s why companies freak out over potential hidden training data: even the professionals developing these models are incapable of understanding them. (I shudder to imagine a world where architects could not read blueprints.)
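To make the analogy concrete, here is what “reading” an open-weights blob amounts to (a sketch: model.bin is a hypothetical raw-float dump; real formats like safetensors or GGUF add headers and tensor names, but the payload is the same):

```typescript
// Print the first eight "weights" of a hypothetical checkpoint file.
// Whatever the container format, the payload is just arrays of numbers;
// the training data that shaped them is nowhere in the file.
import { readFileSync } from "node:fs";

const buf = readFileSync("model.bin"); // hypothetical file name
const floats = new Float32Array(buf.buffer.slice(buf.byteOffset, buf.byteOffset + 32));
console.log([...floats]); // e.g. [0.0132, -0.4471, ...]: now try auditing that
```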


For the purpose of simplification, calling it as closed as an executable is close enough. Or a closed-source freeware ROM that you can download and run on an emulator (since you can just download models and run them via ollama or something similar). Or a closed-source game that supports modding and extension, like Minecraft. Or a closed-source DLL with documentation…
Anyway, the point is, it’s closed. If it’s not closed source, I’d beg you to link the source, both code and data, that compiles to the output.


So they’re basically following the early Elon Musk playbook: Look like the good guys by being slightly less bad than your enemies.
I’d like to think society won’t fall for the same trick again.


“Open weights” just means you can download the blob their sources produced. So… closed source, unless they open those sources.
Their terminology is just tricky marketing. It would be like calling a closed-source program “open executable” or something…


“Malicious” keywords aren’t the only problem, since the LLM cannot actually differentiate between “malicious” and “benign”. It’s been trivially easy to intentionally or accidentally hide misinformation in LLMs for a while now, and since they’re black boxes, it can be hard to identify. This is just a slightly more pointed example of data poisoning.
There is no threat from an LLM chatbot outputting text… unless that text is piped into something that can run commands. And who would be stupid enough to do that? Okay, besides vibe coders. And people dumb enough to use AI agents. And people rich enough to stupidly link those AI agents to their bank accounts.
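A minimal sketch of that failure mode (every name here is hypothetical; no real framework’s API is being quoted):

```typescript
// Hypothetical agent wiring, illustrative only.
import { exec } from "node:child_process";

// Stand-in for a chat-completion call. Imagine this reply was shaped by
// poisoned training data, or by a hostile document pulled into the context.
async function getModelReply(task: string): Promise<string> {
  return 'echo "pwned (imagine an rm -rf here, or your SSH keys exfiltrated)"';
}

async function naiveAgent(task: string): Promise<void> {
  const reply = await getModelReply(task);
  // The fatal step: treating untrusted generated text as a trusted command.
  exec(reply, (err, stdout) => console.log(err ?? stdout));
}

naiveAgent("summarize my inbox");
```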


How much does it cost to run a system that supports it, at 2026 hardware prices? 4B parameters is not the biggest model size, but RAM and GPU prices are very daunting, thanks in part to the company releasing this closed-source model.
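Back-of-envelope, under the assumption that the weights dominate memory: a 4B-parameter model wants roughly 8 GB at fp16 and roughly 2 GB at 4-bit quantization, before the KV cache and runtime overhead.

```typescript
// memory ≈ parameter count × bytes per parameter (weights only;
// KV cache and runtime overhead come on top).
const params = 4e9; // a "4B" model
for (const [fmt, bytes] of Object.entries({ fp16: 2, int8: 1, q4: 0.5 })) {
  console.log(`${fmt}: ~${((params * bytes) / 2 ** 30).toFixed(1)} GiB`);
}
// fp16: ~7.5 GiB, int8: ~3.7 GiB, q4: ~1.9 GiB
```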


Broke: AI will cause an existential threat to humanity (because it’s powerful)
Woke: AI will cause an existential threat to its own industry (because it’s worthless)
You have no dispute from me there. I just figured I would mention it to people who are already knowledgeable enough that a few switches won’t bother them: people already in this thread, probably not people on the street.


Have you tried asking the puppy to be a better guard dog? That’s how the AI safety professionals do it.


Some news sources continue to claim Elon has disabled the generation of CSAM on his social site. But as long as the “guardrails” used by AI companies are as vague as AI instructions themselves, they can’t be trusted in the best of times, let alone on Elon Musk’s Twitter.
Agreed. To me, this sounds like a continuation of the abolition of Web 2.0, the era when APIs were open and nobody was talking about how they’d pay for it.