After the controversial news shared earlier this week by Mozilla’s new CEO that Firefox will evolve into “a modern AI browser,” the company now revealed it is working on an AI kill switch for the open-source web browser.
On Tuesday, Anthony Enzor-DeMeo was named the new CEO of Mozilla Corporation, the company behind the beloved Firefox web browser used by almost all GNU/Linux distributions as the default browser.
In his message as new CEO, Anthony Enzor-DeMeo stated that Firefox will grow from a browser into a broader ecosystem of trusted software while remaining the company’s anchor, and that Firefox will evolve into a modern AI browser and support a portfolio of new and trusted software additions.
What was not made clear is that Firefox will also ship with an AI kill switch that will let users completely disable all the AI features that are included in Firefox. Mozilla shared this important update earlier today to make it clear to everyone that Firefox will still be a trusted web browser.

The real issue is not whether we are going to be force-fed this features or not, but the fact that a foundation with limited resources is going to spend any sizable amount of them developing a solution its users are not interested in.
Waiting for Ladybird at this point.
The trust was lost when they said nonsense like “AI browser” as of that means anything concrete.
The reason the “kill-switch” wasn’t made clear originally was because it literally didn’t exist until users very vocally tool them where to shove their AI crap.
It was added on afterwards.
What? They’ve been talking about features that are now being called the “kill switch” for the better part of a year. Literally all they did that’s new was give it a dumb name.

only shipped it because of the backlash, they will quitely remove it eventually.
No, why would they?
$
Already ahead of ya, about:config is a great thing
I don’t really know what an ‘ai browser’ is and at this point I feel like i really need to ask. What makes a browser “AI”?
Serious and long answer because you won’t find people actually providing you one here: in theory (heavy emphasis on theory), an “agentic” world would be fucking awesome.
Agents
You know how you have been programmed that when you search something on Google, you need to be to terse and to the point? The worst you get is “Best Indian restaurants near me” but you don’t normally do more than that.
Well in reality most of the times when people just love rambling on or providing lots of additional info, so the natural language processing capabilities of LLMs are tremendously helpful. Like, what you actually want to do is “Best Indian restaurants near me but make sure it’s not more than 5km away and my chicken tikka plate doesn’t cost more than ₹400 and also I hope it’s near a train station so I can catch a train that will take me home by 11pm latest”. But you don’t put all that on fucking Google do ya?
“Agents” will use a protocol that works in completely in the background called Model Context Protocol (MCP). The idea is that you put all that information into an LLM (ideally speak into it because no one actually wants to type all that) and each service will have it’s own MCP server. Google will have one so it will narrow down your filters to one being near a train station and less than 5km away. Your restaurant will have one, your agent can automatically make a reservation for you. Your train operator will have one, so your agent can automatically book the train ticket for you. You don’t need to pull up each app individually, it will all happen in the background. And at most you will get a “confirm all the above?”. How cool is that?
Uses
So, what companies now want to do is leverage agents for everything, making use of NLP capabilities.
-
Let’s say you maintain a spreadsheet or database of how your vehicle is maintained, what repairs you have done. Why do you want to manually type in each time? Just tell your agentic OS “hey add that I spent ₹5000 in replacing this car part at this location in my vehicle maintenance spreadsheet. Oh and also I filled in petrol on the way.” and boom your OS does it for you.
-
You are want to add a new user to a Linux server. You just say “create a new user alice, add them to these local groups, and provide them sudo access as well. But also make sure they are forced to change their password every year”.
-
You have accounts across 3 banks and you want to create a visualisation of your spendings? Maybe you want to also flag some anamolous spends? You tell your browser to fetch all that information and it will do that for you.
-
You can tell your browser to track an item’s price and instantly buy it if it goes below a certain amount.
-
Flying somewhere? Tell your browser to compare airline policies, maybe checkout their history of delays and cancellations
-
And because it’s natural language, LLMs can easily ask to clarify something
Obvious downsides
So all this sounds awesome, but let’s get to why this will only work in theory unless there is a huge shift:
-
LLMs still suck in terms of accuracy. Yes they are decent but still not at the level where it’s needed and still make stupid errors. Also currently they are not making as generational upgrades as before
-
LLMs are not easy to self host. They are one of the genuine use cases of making use of cloud compute.
-
This means they are going to be expensiveeeeee and also energy hogs
-
Commercial companies actually want you to land on their servers. Yes its good that your OS will do it for you and they get a page hit but as of now that is absolutely not what companies want. How are they going to serve you ads?
-
People will lose their technical touch if bots are doing all the work for them
-
People do NOT want to trust a bot with a credit card. Amazon already tried that with Alexa/Echo devices and people just don’t like saying “buy me a roll of toilet paper” because most people want to see what the fuck is actually being bought. And even if they are okay, because LLMs are still imperfect, they are going to make mistakes now and then.
-
There are going to be clashes of what the OS will do agentically vs what a browser will do. Agentic browser makers like Perplexity want you in their ecosystem but if Windows ships with that functionality out of the box then how much reason is there really to get Perplexity? I expect to see anti-competitive lawsuits around this in the future.
-
This also means there is going to be a huge lock-in to Big Tech companies.
My personal view is that you will see some of these features 5-10 years down the line but it’s not going to materialise in the way some of these AI companies are dreaming it will.
-
Not entirely clear, but my best guess is that it will basically have an MCP implementation so that the browser can be controlled directly by an LLM
I think that’s basically what e.g. the chatgpt browser is. Despite the… hostile… response on the fediverse, I suspect it will end up being the way a lot of people interact with the internet in a few years.
The implementation challenge currently is that they’re extremely vulnerable to prompt injection.
Using existing LLMs functionality with fewer steps. You can have a chatbot in the side bar, no doubt keeping track of all your browsing habits to better assist you which incidentally builds a very valuable profile of the user companies would love to buy. Summarizing large texts so AI generated slop and search algorithm filler content can be filtered out more efficiently vs a decent chance at introducing errors. Rewording text so you can make it more simple, translated, adhering to your world view. All of this with minimal clicks, automatically done if possible.
Marketing
It will goon for you that’s what make the browser AI
clearly some damage control strategy here… but good news if true
The news of being able to use or disable all of the AI features was in the original announcement as well, but it was pretty clear that most of Lemmy just read the headline and leaned into it.
Firefox just can’t win with their users.
- Mozilla makes decisions based on market data
- Users complain they never wanted those features
- Mozilla makes a decision based on user feedback
- Users shit on them for backpedaling or damage control
It’s absurd.
No, it’s not. 1. Nobody wanted AI as a feature. 2. They didn’t even completely backpedal, that would be not implementing AI. This sounds like it will be opt out maybe. They may remove it if they feel like it.
In my books that comment was far from complainong about damage control.
Just a objective observation.OP said that they are happy if true.
What have they decided based on market data?
I think in this particular case at least Mozilla decided to introduce something that their users didn’t want without asking, and our backpedaling and are being mocked for having done the thing in the first place.
Frankly I don’t know what’s going on in their collective brains. What Firefox needs more than anything else is refinement. There are no features that it’s missing as far as I can think of.
- Mozilla makes decisions based on market data
Firefox has had one hidden away in about:config since they started adding AI. Are they going to put it in the settings page now?
Too late - they already lost me.
You can also disable ai via toggling browser.ml.enable to false on about:config. For now at least…
For the record a quick web search for how to disable AI in firefox gave me this list of items to set to false in
about:config:browser.ml.enable browser.ml.chat.enabled browser.ml.chat.sidebar browser.ml.chat.shortcuts browser.ml.chat.page.footerBadge browser.ml.chat.page.menuBadge browser.ml.linkPreview.enabled browser.tabs.groups.smart.enabled extensions.ml.enabledI don’t think you need to set all to false, all except the first look like granular settings
I might never get around to flipping whatever kill switch they claim to be working on, so I’m turning off as much as I can now
What is it actively doing now with AI? There is the ai sidebar, but if you don’t use that it isn’t used, right?
I think there’s some alt-text generation for websites that don’t have proper accessibility, though not certain if it’s released yet
I think the biggest issue people have with it is if you can’t trust a company not to shill AI then you also can’t trust them not to shove it even further down your throat and train it on you.
It’s just a bottom line trust issue, regardless of actual features or capability.
The way they talked about making an Agentic Browser implies they want AI to eventually be the primary default method of interaction.
There’s the slow-and-not-very-capable link preview thing… and I could’ve sworn the “what’s new” page the other day said they were adding an on-device model to improve search results or something, but I can’t find the reference to it now.
Maybe they removed it after all the AI backlash. 😬
Does anyone even talk about what the “AI features” are?
Could I, liked recolor webpages? Automate ublock filters? Detect SEO/AI slop? Create a price/feature table out of a shopping page?
See, this would all be neat like auto translate is neat.
But I’m not really interested in the 7 millionth barebones chatbot UI. I’m not interested in loading a whole freaking LLM to auto name my tabs, or in some cutsie auto navigation agent experiment that still only works like 20% of the time with a 600B LLM, or a shopping chatbot that doesn’t do anything like Amazon/Perplexity.
That’s the weird thing about all this. I’m not against neat features, but “AI!” is not a feature, and everyone is right to assume it will be some spam because that’s what 99% of everything AI is. But it’s like every CEO on Earth has caught the same virus and think a product with “AI” in the name is like a holy grail, regardless of functionality.
You reminded me that one use for AI I’d really like is removing all photos of Trump, Musk and Putin from my screen. Another is filtering the twenty reposts of every event in US politics and the incessant whining about prices. Alas, I need these in phone apps more than the browser.
You don’t need LLMs for that. An iPhone is plenty powerful enough for image recognition and text classification.
That’s sorta the funny thing about AI. There’s tons of potential, but it’s just unimplemented. Even on PC, you pretty much have to have some Nvidia GPU and fight pip setting up python repos to get anything working.
Right right. If they had real innovation, they would have defined it clearly as you suggested. But they didn’t, so they don’t. It’s all snake oil, again, because that’s the entire AI industry.
The term snake oil is actually especially fitting for this, due to its origins.
In Britain in the 1700s there was a somewhat common recommendation for using rattlesnake oil from the fat of the snake for skin diseases/rheumatism. The efficacy is debated but it’s got some amount of potential for change (if not help).
This turned into people in the US selling mineral oil as “snake oil” as a total panacea. So a product that actually could do stuff being used as the poster child for a completely useless product that can solve every issue ever, buy as much as you can today.
Snake oil indeed.
- AI chatbot in sidebar (you can choose which chatbot you want, similar to how you choose default search engine)
- Shake to summarize page (on mobile)
- AI Window (separate from Normal and Private window, upcoming). Apparently it lets you chat with an AI agent to power-browse the internet.
So, nothing.
Can’t wait to power browse.
Are we goong to get power browse or drunk browse?
The last feature is the mildly interesting one, but in my experience just not useful enough to do much, even on specific browsing finetunes or augmented APIs.
I guess shake to summarize is mildly interesting, but not really? I simply can’t trust it. And I can just paste the (much more concise) relevant text into a chat window and get a much better answer.
Apparently it lets you chat with an AI agent to power-browse the internet.
… I feel I have an idea of what this means, but it still breaks my brain just a little bit.
Not buying it. Kill switch will migrate further and further into about:config until it eventually too goes away without notice in an update six months from now.
No that’s way too paranoid. Honestly 20 years not 6 months. And by then ladybird will be viable so nbd
No six months to a year is probably about right. They’ll have enough data by then to say “most people don’t turn it off” because realistically most people will use the default, which is on.
Twenty years from now Firefox will be in a new controversy that we can’t even begin to guess.
Plus, while I can’t predict when the AI bubble will pop, whatever they add in the next year will be removed within the next five years. AI isn’t like browser tabs, or extensions, stuff that will always be a great idea, it’s just the current fad.
Well time will tell won’t it, but we’re both just guessing at the end
IDK guys, do you think a web browser should be a “broader ecosystem of trusted software” or a web browser?
I wouldn’t mind a web browser being part of a broader system of trusted software, but shoving an AI chatbot into my web browser does not make me trust it more.
I like the accessibility features like offline page translation
Why not just ship it without any of the AI stuff and give users the option to install and use it instead of bloating the application? This also confirms that the stuff is essentially OPT OUT instead of OPT IN
The bubble is AI and they want some of that bubble investor money is my guess, so they put optional AI
“On by default unless you run down a setting buried in a menu” is the thinnest type of optional in computing.
That’s fair, but also if you search AI in the settings it shows you all the options
And also … will the kill switch turn off the AI entirely … or partially? Since the AI system is baked in, will elements of it still operate in the background even if you turn off the switch?
Not sure what you mean by “will it operate in the background”? The current (and planned) features collect no data. The “operate” when you use them. Disabling them will remove them from the UI.
lol … so they won’t change how they function … just remove them from sight
out of sight, out of mind, right?
Whenever I trust big corporations … or even big organizations with a lot of power in their hands … it’s never usually good for common people like me and you.
What he wrote doesn’t seem ambiguous on this at all. But we’ll see.
So you agree that it will be baked in and impossible to actually turn off. Yep.
Otherwise, they would have made it an extension, right? If it’s optional, it needs to actually be optional … that’s what am extension is. That’s the whole point of them.
No
You can not push the button that says AI.
You can also hit the kill switch that completely removes that button.
That’s opt-in enough.
If it starts reading pages or doing things without you pushing a button, that’s an issue.
If it starts reading pages or doing things without you pushing a button, that’s an issue.
And therein lies the rub. The question is whether or not people trust that it won’t be doing that regardless of whether or not you hit the kill switch.
In their defense a very tiny percentage of users even open options and of those an even smaller actually change stuff.
Maybe slighlty different for Firefox as probably more power user use it than other random programs. But basically if something is not enabled by default, it doesn’t exist.
All AI features will also be opt-in. I think there are some grey areas in what ‘opt-in’ means to different people (e.g. is a new toolbar button opt-in?), but the kill switch will absolutely remove all that stuff, and never show it in future. That’s unambiguous.
Sounds like they will be opt in, not opt out
No, go deeper into that mastodon thread.
The dev has a really hinky defention of “opt-in” thats basically “yes we push all this on by default and realize it will be the norm for most of our users because of that, but you technically dont have to interact with it so thats opt-in.”
Somehow, eventually having a buried menu option that “opts out” of AI is also part of how it will be opt-in as well? Its a self serving mess of rationaliztions and doublethink, no matter the claim on the tin.
I mean yeah, that’s a fair point, and the dev said that themselves, that the definition of opt in is ambiguous. The definition they seem to use is that AI won’t run unless you explicitly tell it to, and I think that’s ok. There’ll be a button that you can press to do some AI action and you can hide it using the kill switch.
I do hope the kill switch isn’t hidden behind 5 layers of menus
Thats not ambuguity. AI will be opt out in firefox, which is them abandoning core principles like user choice and privacy.
They can do that, but playing like they aren’t by redefining well established terms in UI/UX is disengenious, and cuts right through the “we will earn your trust back” messaging made by the same dev.
I think it’s quite clear there’s ambiguity (hence this discussion). How would you define opt in? Should a user not even see the button for an opt in feature?
I think the big defining question is what will the AI features that they will implement do exactly and how will they run. If it’s something that runs in the background (even as unintrusive as the summaries on a search engine like DDG), then it’s opt out by default as it’s constantly running whether you want it to or not. If it specifically and exclusively runs when you hit the button to activate it and doesn’t run at any other time, then I’d say it’s unequivocally opt in. And regardless of what a company says that their software will do, at this point I won’t believe it until somebody has done a full teardown and discerned what exactly it does behind the scenes. I’ve seen enough nonsense like the Epic Games Store accessing your browser history and recording keyboard inputs or whatever the other absurd incident was.
Nah, I think it should be optional. Some AI features may even be useful — like an AI script to get rid of AI slop or something, idk.
I don’t see why there is a big outrage. Sure I’m not a fan of the AI features and I certainly will disable them but it’s tot like they’re forced upon me. Some people like (want) AI in the browser and good for them, this makes the browser better and easier to use for them. For me, it doesn’t change my experience at all
(Commented this separately on purpose)
Come to think of it, I do enjoy the translation feature in Firefox
I’ve been thinking the same thing. The online tech community is a very small part of a much larger pie and they need to serve multiple audiences. As long as it can be turned off and truly be off, who cares?
People don’t trust that it can be truly turned off and that it won’t act maliciously in some way. That’s really the crux of the whole saga. We’re at a point where phone companies are getting survey results that say that 80% of users either don’t care about AI nor use it or find that it actively makes their user experience worse.
Because they’re counting on people who know nothing about technology using the AI stuff when it’s placed in front of them.
Getting ready to ditch this crappy browser. I used to love Firefox but now it’s been enshitified. I’m a tab-o-holic. Have too many tabs open, think I’ll slow down. Leave the browser open too long. I think I’ll slow down. Oops. Looks like your page crashed. Wanna go to this site? Sorry it’s not compatible with your browser.
People seem to hate Brave for the crap they pulled, rightly so I guess, but at least it worked. It seems like there are no good browsers now. I tried Mulvad but due to it’s safety features, it won’t open where I last left off.
Chrome is apparently safe, if you consider Google safe. Microsoft? Puhlease.
Ngl I just use a fork like waterfox and librewolf instead of switching to brave or chromium.















