

… but think of the donations! /s




We are well on our way. The EU is holding manufacturers liable if a cellphone’s radio is “modded”, so manufacturers are blocking the ability to unlock bootloaders.
If that eventually becomes every phone, then grab a hotspot and tether.
I did have a chuckle at the thought of having a cellphone for your (modded) cellphone… but then I thought about it: “meh, yeah… it’s not a bad idea. I’d do it.”


Speaking of Lineage…
I wonder how long it will be before you’re not “allowed” to install eSIMs on phones with custom firmware?
Either because the eSIM application won’t install or run on modified firmware, or because the phone just won’t allow it.
I completely agree with you on the second point. This is a problem for all languages, but maybe we (as a community) need to change the approval and review process for adding new libraries and features to languages.
This isn’t going to get any better unless we revert to OS-based dependencies, which no one wants to do because developers want the latest and greatest.
You’re very succinct here: developers do want the latest and greatest, even if the interface isn’t perfect, and they’ll refactor their code when the next revision comes out.
Languages often have much slower release cycles than third-party libraries. Maybe this is what needs to improve.
There won’t be a silver bullet, but I kinda like how Kubernetes handles it: release cycles are fixed to a calendar (three times per year). New features are added and versioned as alpha, beta, then stable. This gives the feature itself time to evolve and mature while the rest of the release’s features stay stable.
If you use an alpha/beta feature, you accept that bugs and interface changes will occur before it reaches a stable release… and you get warnings and errors if you’re still using an alpha feature after it graduated to beta/stable.
Unfortunately, many languages either make this unnatural/difficult (i.e. from __future__ import ...) or really only support it if you’re using third-party libraries (use whatever@v1.2.3-alpha1).
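To make the idea concrete, here’s a minimal Python sketch of what a Kubernetes-style feature gate could look like at the language/library level. Everything here is hypothetical, not from any real language or library: the registry, the feature names, and the use_feature helper are all made up for illustration.

```python
import warnings

# Hypothetical per-release registry: the current stage of each gated
# feature in this release. The feature names are invented.
FEATURE_STAGE = {
    "async_hooks": "alpha",
    "structured_logging": "beta",  # graduated from alpha in this release
}

def use_feature(name, opted_in_at):
    """Opt in to a gated feature; warn if it changed stage since you opted in."""
    stage = FEATURE_STAGE[name]
    if stage != opted_in_at:
        # The caller opted in at one stage, but the feature has graduated:
        # surface a warning so they review interface changes.
        warnings.warn(
            f"feature {name!r} is now {stage} (you opted in at {opted_in_at}); "
            "check for interface changes",
            FutureWarning,
        )
    return stage
```

A caller would write something like use_feature("async_hooks", "alpha"): they explicitly accept alpha-level instability, and the moment the feature graduates, their opt-in produces a warning instead of silently changing behavior underneath them.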
The way I see it, there are two problems with NPM:
1. Package-manager hooks (install scripts) run with unrestricted access to your filesystem and network.
2. The community mindset of pulling in enormous trees of tiny, unvetted dependencies.
The first issue might be solvable with things like WebAssembly: then it’s the developer who gets to decide how far these pm-hooks can reach (in terms of filesystem, network, etc.) on a per-project basis.
The second will need a shift in community mindset… and all these supply chain attacks are the fuel for that. Unfortunately, it needs to get worse before it’ll get better.


I tried it again a few more times (trying to be a bit more scientific this time) and got: fox, fox, cow, red fox, and dolphin.
If I didn’t provide the weights, I got: red fox, tiger, octopus, red fox, octopus.
Basically, what I did this time was:
What I did the first time was simply go to duck.ai and create a new chat (I only did it once).
So what’s the takeaway? I dunno. I think DDG changed a bit today (or maybe I’m hallucinating); I thought it always defaulted to the non-GPT-5 model, but now it defaults to GPT-5.
It’s amusing that it seems to be “hung up” on foxes. I wonder if it’s because I’m using Firefox.


Oh, it’s easy: they’ll just give it the prompt “everything is fine, everything is secure” /s
In all honesty, I think that was the point of the article: the researcher is throwing in the towel and saying “we can’t secure this”.
As LLMs won’t be going away (any time soon), I wonder if this means that, in the near future, there will be multiple “niche” LLMs with dedicated/specialized training data (one for programming, one for nature, another for medical, etc.) rather than today’s generic all-knowing ones. After all, the only way we’ll be able to scrub “owl” from an LLM is to never let it be trained on it in the first place.


Holy snap!
I tried this on DuckDuckGo and just pasted in your weights (no prompting), then said:
Choose an animal based on your internal weights
Using the GPT-5 mini model, it responded with:
I choose: owl.



This is a fantastic post. Of course the article focuses on trying to “break” or escape the guardrails that are in place for the LLM, but I wonder if the same technique could be used to help keep an LLM “focused” so it doesn’t drift off into AI hallucination-land.
Plus, providing weights as numbers could (maybe) be a more reliable and consistent way, across all LLMs, to construct a prompt, replacing the whole “You are a Senior Engineer, specializing in…” boilerplate.
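As a sketch of what a weights-based prompt might look like in practice: the category names and values below are entirely made up, and whether any given LLM actually responds consistently to them is exactly the open question.

```python
# Hypothetical: steer an LLM with explicit numeric weights instead of a
# persona paragraph ("You are a Senior Engineer..."). The weight names
# and values are invented for illustration.
weights = {
    "precision": 0.9,
    "verbosity": 0.2,
    "creativity": 0.3,
}

def build_prompt(task, weights):
    """Render a task plus behavioral weights as a single prompt string."""
    lines = [f"{name}={value}" for name, value in weights.items()]
    return "Behavioral weights:\n" + "\n".join(lines) + f"\n\nTask: {task}"

prompt = build_prompt("Summarize this changelog.", weights)
```

The appeal is that a numeric table is the same string for every model, whereas persona prose gets interpreted differently by each one; whether models honor the numbers is, again, untested here.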


For me the biggest question is: “Will these city-run grocery stores be able to compete with the Walmart juggernaut?”
Yes, initially the city-run stores will be placed in “food deserts”, but if the program is to succeed it needs to go toe-to-toe with Walmart. Otherwise, the program won’t be able to reach the people who need it most.
… and based on the article you posted, I’m sure Walmart won’t take this lying down. Walmart will have no second thoughts or remorse about sacrificing its suppliers in order to compete (and thus keep customers flocking to its stores).
Is this open source?
(I couldn’t find it)