The US dictionary Merriam-Webster’s word of the year for 2025 was “slop”, which it defines as “digital content of low quality that is produced, usually in quantity, by means of artificial intelligence”. The choice underlined the fact that while AI is being widely embraced, not least by corporate bosses keen to cut payroll costs, its downsides are also becoming obvious. In 2026, a reckoning with reality for AI represents a growing economic risk.

Ed Zitron, the foul-mouthed figurehead of AI scepticism, argues pretty convincingly that, as things stand, the “unit economics” of the entire industry – the cost of servicing the requests of a single customer against the price companies are able to charge them – just don’t add up. In typically colourful language, he calls them “dogshit”.
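
(For illustration, here is that unit-economics arithmetic as a minimal sketch; every figure below is hypothetical and comes from neither Zitron nor this article.)

```python
# Unit economics in miniature: does the price one subscriber pays cover
# the cost of serving that subscriber? All figures below are invented
# placeholders for illustration only.

subscription_price = 20.00   # $/month charged to one customer (hypothetical)
queries_per_month = 600      # hypothetical usage by that customer
cost_per_query = 0.05        # hypothetical inference cost per request, in $

serving_cost = queries_per_month * cost_per_query
margin = subscription_price - serving_cost

print(f"serving cost: ${serving_cost:.2f}/month")  # $30.00
print(f"margin:       ${margin:.2f}/month")        # -$10.00: a loss on every user
```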

Revenues from AI are rising rapidly as more paying clients sign up, but so far not by enough to cover the wild levels of investment under way: $400bn (£297bn) in 2025, with much more forecast in the next 12 months.

Another vehement sceptic, Cory Doctorow, argues: “These companies are not profitable. They can’t be profitable. They keep the lights on by soaking up hundreds of billions of dollars in other people’s money and then lighting it on fire.”

  • FudgyMcTubbs@lemmy.world

    I wouldn’t pay money for access to AI. The convenience is not worth a single cent to me. But am I the average person? Is the average person sold on this nonsense enough to subscribe to it? The first hit is free to get you hooked. So if the plan is to get the average person dependent on it while it’s free and then eventually charge for it, I’m not buying, and I wonder how many people will. AI output is fucking garbage.

    • mrgoosmoos@lemmy.ca

      You have to remember how dumb the “average” person is: someone who absolutely thinks AI chatbots give good answers and never notices or thinks about their accuracy.

    • Kaiserschmarrn@feddit.org

      Two of my friends are paying for it. One works as a developer and one in DevOps. Currently, both of them have a ChatGPT subscription. The first one now shares a lot of DALL-E images of his dog, and the other one recently, and proudly, showed us how he could tell ChatGPT about our DnD session so that it would generate a summary for us. The latter took forever and was full of funny errors.

      I really don’t get why people are paying over 20€/month for this shit.

      • prodigalsorcerer@lemmy.ca

        20 bucks a month is basically nothing for a developer who’s making $100 an hour.

        My employer pays for copilot, and yeah, it makes mistakes, but if you pretend it’s a junior developer and double check its code, it can easily save time on a lot of tedious work, and will turn hours of typing into fewer hours of reading.

        • ashughes@feddit.uk

          but if you pretend it’s a junior developer

          Where do these geniuses think they’ll get senior developers from when the current cohort retires? How does someone become a senior developer? Surely not through years of experience as a junior developer under the mentorship of a senior.

          This mentality is like burning down an apple tree after one harvest. Fucking idiots, the whole lot of them. I can’t wait for the day all these people wake up and start wandering around confused about why their new talent pool is empty.

          [GIF: A man in a black jacket and white shirt, a brown coat draped over his arm, looks around, confused]

          • prodigalsorcerer@lemmy.ca

            Should cars have been outlawed because they put farriers and stables out of business? Were shipping containers a bad idea because they required fewer longshoremen?

            Technology comes along and makes jobs obsolete. It happens all the time. It just happens that this technology has arrived in a big, visible way, and many of the ways it’s used and marketed are useless and/or awful. That doesn’t mean it’s entirely bad, and there’s certainly no way to stop it now.

            AI will replace jobs. We can’t get around that fact. Companies that fail to adapt will fall behind. Whether I use it at my job or not has no bearing on the industry, and I’m not in a position to push for industry-wide change (nor is the company I work for). So we can either use it, or also fall behind.

            I work for a mid-sized company. We still hire junior developers. I don’t think we have any plans to get rid of them entirely, but I’m not involved in that process. But after a couple decades of huge growth in the industry, developers (especially junior ones) are going to have a rough few years as the industry realigns with the new normal. There will be job losses, there will be companies that disappear entirely because they either depended too much upon AI, or didn’t adapt fast enough. But pretending that AI isn’t a useful tool when used in specific ways is just sticking your head in the sand.

        • CeeBee_Eh@lemmy.world

          You have no idea the long-term impact such a tool has on a codebase. The more it generates, the less you understand, regardless of how much you “check” the output.

          I work as a senior dev, and I’ve tested just about all the foundational models (and many local ones through Ollama) for both professional and personal projects. In 90% of the cases I’ve tested, it came back to: “if I had just done the work myself from the beginning, I would have had a working result that’s cleaner and functions better, in less time”.

          Generated code can work for a few lines, for some boilerplate, or for some refactoring, but anything beyond that is just asking for trouble.

          • Aceticon@lemmy.dbzer0.com

            In my experience, you need to be a senior developer who has seen their own code go through a full project lifecycle (most importantly, including the support, maintenance and even expansion stages) to really feel in your bones, not just know intellectually, how much the right coding practices reduce lifetime maintenance costs. Those are exactly the practices LLMs don’t reproduce, even with code reviews and fixes: even when cloning only “good” code, they can’t manage things like consistency, especially at the design level.

            • Inexperienced devs count only the time cost of LLM generation and think AI really speeds up coding.
            • Somewhat experienced devs count that plus code-review costs and think it can sometimes make coding a bit faster.
            • Very experienced devs look at the inconsistent, multiple-style, disconnected mess (even after code review) that results when all those generated snippets get integrated, add the costs of maintaining and expanding that codebase to the rest, and conclude that “even in the best case, in six months this shit will have cost me more time overall, even if I refactor it, than doing it properly myself in the first place would have” (a toy version of this accounting is sketched below).
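
            (A toy version of that accounting, with made-up developer-hours; the only point is how the comparison flips once lifetime costs are counted.)

            ```python
            # Toy cost model for one feature, in developer-hours.
            # All numbers are hypothetical placeholders.

            def total_cost(write, review, maintain):
                return write + review + maintain

            # Hand-written: slower to produce, but consistent, so cheaper to live with.
            hand_written = total_cost(write=8, review=1, maintain=10)    # 19 hours

            # LLM-generated: near-instant to produce, longer to review, and the
            # inconsistent result costs more to maintain and expand.
            llm_generated = total_cost(write=0.5, review=3, maintain=25) # 28.5 hours

            print(hand_written, llm_generated)  # the "fast" option costs more overall
            ```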

            It’s very much the same problem as having junior developers do part of the coding, only worse. At least junior devs are consistent, and hence predictable, in how they fuck up: you know what to look for, and once you find it, you know to look for more of the same. You can also actually teach junior developers so they get better over time, especially at avoiding their worst mistakes, whilst LLMs are unteachable and will never get better, and their mistakes are pretty much randomly distributed across the error space.

            You give coding tasks to junior devs in a controlled way, handling the impact of their mistakes, because you’re investing in them; doing the same with an LLM has a higher chance of returning high-impact mistakes and yields no such return on the investment at all.

          • shalafi@lemmy.world

            can work for a few lines, for some boilerplate, or for some refactoring

            I highly doubt the person you’re replying to meant anything else. We’re all kinda on the same page here.

            • CeeBee_Eh@lemmy.world

              I hope so, but you’d be surprised. I know some devs who basically think LLMs can do their work for them, and treat them as such. They get them to do multi-hundred-line edits from a single prompt.

      • ScoffingLizard@lemmy.dbzer0.com

        If you work in tech, it’s very useful. It’s either get on board or get left behind. But I absolutely hate how it’s used in a lot of cases. It’s really gross to look at imagery on the internet now. People post AI slop as advertisements and don’t realize that some of the faces in them are bashed in and disfigured. It’s disgusting.

        • Kaiserschmarrn@feddit.org

          I am a software engineer, and nearly every time I used a model for something, it made up stuff that didn’t work that way or didn’t even exist. I always ended up reading the documentation and fixing the problem myself.

          The only thing where AI is somewhat decent in the context of software development is code completion. JetBrains’ models do an OK job in that regard.

          • Tar_Alcaran@sh.itjust.works

            Oh, you wanted code that actually works with your existing codebase? Sorry, here’s some additional sludge sprinkled on top of the previous crappy answer.

    • SGG@lemmy.world

      I set up a local Ollama instance to look for ways to integrate it into my regular work. I do IT stuff, from basic helpdesk to Office 365 configs, and almost anything in between.

      At best I just use it as a sounding board, basically rubber duck debugging.

      I prefer the rubber duck.
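
      (For the curious, a minimal sketch of that setup: one “rubber duck” question sent to a local Ollama instance over its HTTP API. It assumes Ollama is running on its default port with a model already pulled; the model name and prompt are only examples.)

      ```python
      # One-shot question to a local Ollama instance (default port 11434).
      # Assumes the model below has already been pulled with `ollama pull`.
      import requests

      resp = requests.post(
          "http://localhost:11434/api/generate",
          json={
              "model": "llama3",   # example model name
              "prompt": "Why might an Office 365 mail rule silently not fire?",
              "stream": False,     # return a single JSON object
          },
          timeout=120,
      )
      print(resp.json()["response"])
      ```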

    • Piatro@programming.dev

      I know a few people who subscribe who I never would have expected to do so, but I also know people who have started asking “why does Google show me an AI summary all the time when I don’t need it?” I think any sheen it had is diminishing, slowly but surely.

    • morto@piefed.social

      At my university, people are paying for it and raving about it… that’s terrifying.

      They even talk daily about which model is best, just like children arguing over which superhero is stronger.

      • WamGams@lemmy.ca

        My management has fallen in love with it, but is considering dropping it because the commercial Copilot fees haven’t produced a return on investment.

        Of the three people at my company who pay for ChatGPT or Grok, the two ChatGPT users are over-reliant on it, while the Grok user believes he is talking to a living superintelligence.

      • Joe@lemmy.world

        Well, no one is paying shit for my uni thesis, or for most others like it. University reports below PhD level are of absolutely no commercial value, and even most PhD theses are not commercially viable.

        So what you’re saying is that AI is good at getting a good grade on an exam, but that doesn’t mean it can make a commercially viable product.

    • OpenStars@piefed.social

      Things that used to be free: Google searches, YouTube, Android, Reddit - all have enshittified in different ways (e.g. Reddit is still free of direct monetary charge, but now restrictive rather than “free”).

      AI is simply following this well-trodden path, or rather people are claiming that is what is happening.

      • Tar_Alcaran@sh.itjust.works

        Google Search, YouTube and Android at least make their owners money. ChatGPT just burns dump trucks full of cash to create the option to burn more cash.

          • ThirdConsul@lemmy.zip

            Investing in OpenAI makes Altman money, not ChatGPT itself. They spent last year trying to decrease the cost of a query, after all.

    • gandalf_der_12te@discuss.tchncs.de

      My dad never had the patience to write a single Python program. Last year (2025), with the help of ChatGPT, he wrote an entire Android app that graphically displays values from some hardware sensor (with a neat animation), in about a week. It does help people. It makes mistakes, but so do humans. The question is: is it more productive to work with it than without? And I’d say that for some use cases, it is.

  • TrackinDaKraken@lemmy.world

    They’ve gone into deep, deep debt, and the pay-off is looking more like vaporware every day. They dun fuckt up bad.

  • kelpie_returns@lemmy.world

    Off topic, but describing Ed Zitron as “foul-mouthed” is very funny when you compare how tame his word choice is next to Robert Evans (his coworker/employer). Like, sure, he cusses at times, but calling that foul-mouthed is setting the bar pretty gd low imo.

  • shalafi@lemmy.world

    Honestly, I’d love to see how these companies figure they’ll make a profit in any given future. There are not enough humans with the money, or who care enough, to pay even a minimal subscription. This is why they’re jamming it up our ass. An AI subscription will have to be the next internet or phone bill for this thing to even think about making a profit.

    But what about commercial uses? There are plenty, but not enough to make a profit. Companies are already cautiously rolling back subscriptions.

    • Tar_Alcaran@sh.itjust.works

      An AI subscription will have to be the next internet or phone bill for this thing to even think about making a profit.

      Not really, since even the paying subscribers are costing the companies money. If they were bakers, they’d be doing the equivalent of selling $1 loaves of bread that cost $2.50 to knead and bake, and that’s not even counting the fact that you have to buy the flour first.
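
      (The analogy as arithmetic, using the loaf figures above plus an invented up-front “flour” cost, since training is paid for before the first query is served.)

      ```python
      # The baker analogy, worked through. Loaf price and baking cost are the
      # figures from the comment; the up-front "flour" (training) cost is an
      # invented placeholder.

      price_per_loaf = 1.00
      cost_per_loaf = 2.50
      loaves_sold = 1_000_000
      flour_upfront = 5_000_000   # hypothetical fixed cost, paid before loaf one

      operating_loss = (cost_per_loaf - price_per_loaf) * loaves_sold
      total_hole = operating_loss + flour_upfront

      print(f"loss per loaf:  ${cost_per_loaf - price_per_loaf:.2f}")  # $1.50
      print(f"operating loss: ${operating_loss:,.0f}")                 # $1,500,000
      print(f"total hole:     ${total_hole:,.0f}")                     # $6,500,000
      ```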

    • ApatheticCactus@lemmy.world

      It’s not a ‘product’ in the conventional sense. It’s a gateway to an intelligent astroturf machine. Buy a ton of fake accounts on every social media platform, make them appear ‘legit’, then have bots comb for anything they can shoehorn a message into and have your AI bot army manipulate public perception. That’s the only use case I could see companies actually willing to pay that kind of money for.

      • shalafi@lemmy.world

        Looks like consumer subscriptions are a third of total revenue, and that revenue doesn’t come close to covering a third of operating costs. Yikes! Worse than I thought.

  • Hackworth@piefed.ca

    The bar for discourse re: AI is pathetically low. I guess par for dis course.