This sounds like a great idea, I might finally be able to use Linux at work in the future.
(With react native)
Replace the spaghetti with slop. Sounds like a great idea.
This is coming from someone pretending to think Microsoft is currently doing well and could afford to lose some quality in exchange for shipping the new features users want faster, to sell an image of technology innovators or, at a baseline, a user-friendly experience.
What is actually happening is that they are trying really hard to sell Microsoft after the Windows 11 launch pushed a lot of users away, or at least gave them an accurate impression of how MS caters to corporations and advertisers and doesn’t give a shit about users as long as they keep buying computers with Windows pre-installed thanks to its monopoly. The ‘AI is going to be good enough to replace developers’ longshot could come in 10 years or 10,000 years, and is a hand-wave from not understanding the problem or the currently available solutions.
Also the interview only covers corpo interests and doesn’t include any labor or software union leaders.
“Our strategy is to combine AI and Algorithms to rewrite Microsoft’s largest codebases,” he added. “Our North Star is ‘1 engineer, 1 month, 1 million lines of code.’”
Easy to achieve if the AI just wraps all the code in an unsafe block ^^
Honestly, migrating from one language to another might actually be one of the best use cases for AI. If you don’t change the architecture much it should be doable, especially if it’s a well tested codebase.
Maybe if the languages are very similar. If you convert C to Rust using AI it might work well, but it will most definitely not leverage the unique features of Rust. Might as well stay with C in that case. Migrating from an object-oriented language like C++ to a language with another paradigm (such as Rust) will most likely produce a burning pile of shit.
especially if it’s a well tested codebase.
So not for microsoft products then.
In theory.
But there’s no doubt all their tests will also be shat out from an LLM.
That’s funny because using unsafe might be a hint that Rust is not the right tool for the job. Yet we have Rust in the kernel, Rust coreutils… I just can’t wrap my head around it, yet.
You shouldn’t have said that. Now the rust zealots are going to come for you.
Got downvoted, here they come…
Unsafe is expected and necessary in certain situations. Using other languages doesn’t change that fact.
It’s talked about in the docs
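To tie this back to the “just wrap everything in unsafe” joke above: here is a rough, made-up sketch (not anything Microsoft has shown) of what a purely mechanical C-to-Rust translation tends to look like. The C idioms survive, just wrapped in unsafe, so the compiler is satisfied but you get none of Rust’s memory-safety guarantees. Idiomatic Rust would take a slice and need no unsafe at all.

```rust
// Hypothetical example, not from Microsoft's tooling.
//
// Original C:
//   int sum(const int *buf, size_t len) {
//       int total = 0;
//       for (size_t i = 0; i < len; i++) total += buf[i];
//       return total;
//   }

// A word-for-word "translation": raw pointer, manual index, unsafe throughout.
pub unsafe fn sum(buf: *const i32, len: usize) -> i32 {
    let mut total = 0;
    let mut i = 0;
    while i < len {
        total += *buf.add(i); // unchecked pointer read, exactly as in C
        i += 1;
    }
    total
}

// What an actual Rust programmer would write: a slice, bounds known, no unsafe.
pub fn sum_idiomatic(buf: &[i32]) -> i32 {
    buf.iter().sum()
}
```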
Incoming brainrot-video from Bryan Lunduke in 3…2…1
Will this finally be the end of Windows?
Also, fun fact: Windows uses a lot of COM interfaces for its APIs, which in my opinion often makes developing for Windows a better experience than developing for Linux. Rust does not have anything OOP-related by default; it’s often emulated with macros instead, like in C.
That’s a crazy take. I hate COM and OOP with a passion. If you like COM that much, there’s GObject for you on Linux.
I hate everything GTK/GNOME, they didn’t even call it gnobject.
Rust has traits and reference counting which map nicely to COM objects.
By the way, the Linux Kernel is OOP. That’s a good choice for things like queues, file systems, and device drivers.
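For what it’s worth, here is a minimal sketch of the “traits plus reference counting map nicely to COM” point, in plain Rust rather than the actual windows crate bindings; the interface and type names are made up for illustration.

```rust
// Made-up illustration: a trait plays the role of a COM interface (pure
// behaviour, no data), and Arc plays the role of AddRef/Release.
use std::sync::Arc;

trait TextRenderer {
    fn draw(&self, text: &str);
}

struct GdiRenderer; // stand-in for a concrete COM class

impl TextRenderer for GdiRenderer {
    fn draw(&self, text: &str) {
        println!("drawing: {text}");
    }
}

fn main() {
    // Cloning the Arc bumps a reference count ("AddRef"); dropping it
    // decrements the count, and the object is freed at zero ("Release").
    let renderer: Arc<dyn TextRenderer> = Arc::new(GdiRenderer);
    let second_handle = Arc::clone(&renderer);
    second_handle.draw("hello from a shared interface pointer");
} // both handles dropped here, object destroyed
```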
I work for a company developing software for Windows and deal with COM all the time. How do you communicate across dynamic libraries and languages in Linux?
Linux usually just uses C APIs, most of which read horribly. libevdev is quite notorious for using files and for the docs not telling you that reading is blocking. Some additional things use C++ classes, though.
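To make that concrete, a rough sketch of the “plain C ABI” approach from the Rust side (made-up names; you would set crate-type = ["cdylib"] in Cargo.toml): the shared library only exposes C-compatible types, so C, C++, Python via ctypes, or anything else that can dlopen the .so can call it.

```rust
// Made-up example of exporting a C-callable function from a Rust shared
// library on Linux. Only C-compatible types cross the boundary, which is
// what makes the library usable from other languages.
//
// From C you would declare:  int32_t example_add(int32_t a, int32_t b);

#[no_mangle]
pub extern "C" fn example_add(a: i32, b: i32) -> i32 {
    a + b
}
```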
Could you provide a link to the documentation? It’s strange that it’s so barebones.

Surely there’s no way for Microsoft to vibe-fuck their OS.
Surely.
Surely.
“Our strategy is to combine AI and Algorithms to rewrite Microsoft’s largest codebases,” he added. “Our North Star is ‘1 engineer, 1 month, 1 million lines of code.’”
That’s OK. I’m using Linux. Perhaps this will drive more people to Linux. The fewer people using corporate-owned tools, the better.
This could have been good news; however, Microsoft’s insistence on using AI, and its general incompetence even without it, makes me very doubtful this will be successful.
They are going to try and replace C and C++ written by actual experts a few decades ago, with Rust written by idiots. Expect tons of logic bugs, and very little measurable difference in memory corruption.
little measurable difference? the last time they rewrote something they replaced the start menu with fucking react
the difference will be measurable and enormous
All the black hats are going to have a field day uncovering all manner of zero-day exploits…
No no no you see, they’re using rust, which is a ‘safe’ language. That means it’s not possible to have security issues…
so glad i switched to linux in time to avoid this clusterfuck. at least on my private machines.
Just fucking stop please

Yet another good time to get off of Windows
It’s always a good time to get off of Windows!
reimplement … with help from AI
Meaning, it will have more bugs and fewer features afterwards.
I tried vibe coding a rust project and it was total ass.
AI doesn’t reason, so it heavily depends on what’s been presented in the training set.
Python is everywhere, and most importantly whatever you can think of exists in Python, from critical bioinformatics tools to somebody learning programming for the first time and posting their prime number finder or sorting algorithm online.
Rust? Not at that point yet, so the AI fails.
Yeah, from everything I’ve seen it’s just a classic case of overfitting. I only tried it because it was recommended to me by a coworker. It failed at problem solving and at choosing comparable dependencies. Completely jarring because, like you said, it could likely do it in JS and Python. But clearly not Rust. I often wonder if the code you get from AI is 85%+ stolen verbatim.
In Python it can work, but sometimes with crazy inefficient methods incorporated. In obscure geospatial stuff it often loses the plot. It still occasionally recommends functions that don’t exist.
JS and Python code is equally garbage.
I dunno man, I tried coding a simple HTTP listener with an LLM one time in Python (a language I’m unfamiliar with). Just something to sit on a port, listen for a request, and run a script.
I ended up spending more time troubleshooting the maybe two dozen lines of code than I would have spent just looking up a tutorial online.
“My goal is to eliminate every line of C and C++ from Microsoft by 2030,” Microsoft distinguished engineer Galen Hunt wrote in a recent LinkedIn post.
“Our strategy is to combine AI and Algorithms to rewrite Microsoft’s largest codebases,” he added. “Our North Star is ‘1 engineer, 1 month, 1 million lines of code.’”
Well, I expect it’ll be exciting, one way or another.
I always love how business bros use the term "Algorithm(s)" (and now also "AI") as if it were just a magic incantation, something you switch on and it immediately solves whatever problem you might have.
All that’s needed is that the wizard comes up with the right spell and then everything just works and the business is generating infinite money!
You know it’s going to be successful when they go back to antiquated productivity metrics like lines of code per unit of time. We all know AI is fucking spectacular at generating overly verbose code.
I think the 1M target is the number of lines to be deleted…
yeah it’s “replace 1M lines of current code with whatever it takes to do the same task.”
“Our strategy is to combine AI and Algorithms to rewrite Microsoft’s largest codebases,” he added. “Our North Star is ‘1 engineer, 1 month, 1 million lines of code.’”
That’s insane. Even a good engineer will frequently need years to fully understand one million lines of code - even if the code is organized very, very well.
To compare, one million lines of program code might have around 100,000 to 200,000 unique symbols whose meanings and complex connections an engineer working with the code has to learn and memorize. That’s far more than the vocabulary one will learn in five years of studying a foreign language to a high skill level. Doing it in a month would be like learning to read and write fine Japanese or Arabic literature in a month when you have never spoken a word of the language before.
The Linux kernel has now passed 40 million lines of code, written over 30 years by tens of thousands of master programmers. And that’s kind of a historic achievement. The thing is, complexity increases sharply with each doubling of the amount of code.
Kinda still your point, but if you have one engineer producing 1M SLOC, how many do you have for code review?
I hate how everyone nowadays is acting like reviews are not important, like actual oversight over the codebase is way less important than shipping random code. Which is insane.
GitHub Copilot performs PR reviews now, doesn’t it?
Yeah, that’s exactly what we need.
LLM producing the code
LLM approving the change
LLM agent pushing to prod
Then wonder why the users are gone
Replace users with LLMs too.
it is not much better than just running a linter over your code.
Not disagreeing with you there
It’s not great.
So to be paid I have to delete 1M lines of code every month. Gotcha.
they are speedrunning enshittification 🐧
Enshittification does not mean making things suck in general. It specifically means the business model of making a good product for users, then making the product bad for users and good for advertisers or data purchasers or retailers or whatever, and then when you have a captured market, making it worse for everyone to squeeze more money faster.
Microsoft is not doing this. They might be sucking, and making a worse product, but it’s not following the enshittification playbook.
points to ads in the start menu and surveillance in the OS
Pardon me, but that is exactly what MS is doing.
Well, I expect it’ll be exciting, one way or another.
This gives "may you live in interesting times" curse vibes.
It’s well known in the industry that you don’t assess programmers by lines of code. You kind of want them to be efficient and clean, to spend their day thinking and designing clever solutions… not pumping out lots of unmaintainable, low-quality stuff and having a million lines of it by tomorrow. But yeah, I guess every aspect of this aligns well. You should be using Linux by now. Or at least make the switch in the near future.
This is what you get when AI fanaticism combines with Rust fanaticism.
1 million lines a month is 2-ish lines per second. That “engineer” is just someone to blame when things don’t work. They aren’t going to be contributing anything.
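Back-of-envelope, assuming a working month of roughly 22 eight-hour days (my assumption, the quote doesn’t specify hours):

1,000,000 lines ÷ (22 days × 8 h × 3,600 s) ≈ 1,000,000 ÷ 633,600 s ≈ 1.6 lines per second, sustained for the entire month, with no time left over to read, review, or test any of it.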
I was about to say that surely it’s not just 1 person they are talking about. Then I read, “Our North Star is ‘1 engineer, 1 month, 1 million lines of code.’”
WTF
I mean, if this is true and it works, it is not too far-fetched. You’d mostly be checking that tests still make sense and that they pass.
Microsoft scientists have worked on a tool that automatically converts some C code to Rust.
But when they don’t pass, then you have to dissect a bunch of AI pasta, right?
No, you go to your manager and be like: your machine for turning C code into Rust code does not work. If you want to keep the pace of 1M LOC per month and keep your boss happy, I need double pay and 10 people working on it at all times.
But when your boss tells you that you have to keep doing it this way, then you don’t have much choice in the matter. You either keep asking AI for new code and hope it gets it right, or you have to actually delve into the code and spend your time correcting it.
The 1 million lines of code is just untenable, assuming they want code that actually works.
Well, if that’s the case you do the job in the way you yourself judge best. Maybe that tool is good at some tasks, and you apply it to those. Bill Gates will be sad for a couple of months and then likely forget about the expectations that had been set, and you yourself have got a stable job with a safe position for years to come.
The expensive autocomplete can’t do this.
All the AI marketing wants us to believe that spoon technology is this close to space flight. We just need to engrave the spoons better. And gold-plate them thicker.
Dude who wrote that doesn’t understand how LLMs work, how Rust works, how C works, and clearly jack shit about programming in general.
Rewriting from one paradigm to another isn’t something you can delegate to a million monkeys shitting into typewriters. The core and time-consuming part of the work itself requires skilled architectural coding.
LLMs are - by the nature of how they work - only able to achieve 90-95% accuracy. That’s the theoretical best they can do, according to the people behind OpenAI. And worse, the output will be presented as 100% accurate, even going so far as to make up sources out of whole cloth.
That’s an insane and completely unacceptable error rate for any system even pretending to be mission critical.
Can you imagine sending people to space with a system that has a 1 in 20 chance of just being completely unfit for service?
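Back-of-envelope, taking the 95% figure above at face value and assuming changes fail independently (both are assumptions): a rewrite made of just 100 separate changes would come out fully correct with probability 0.95^100 ≈ 0.6%. At the scale of a million lines, that rounds to never.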
Well, in that case they’re overstating their capabilities. Which is not too surprising.
You’d mostly be checking that tests still make sense and that they pass.
Nah, in my experience most of your time goes to finding out which parameter or function call it made up because it’s mathematically a good answer.