When people ask me what artificial intelligence is going to do to jobs, they’re usually hoping for a clean answer: catastrophe or overhype, mass unemployment or business as usual. What I found after months of reporting is that the truth is harder to pin down—and that our difficulty predicting it may be the most important part of the story.

https://web.archive.org/web/20260210152051/www.theatlantic.com/magazine/2026/03/ai-economy-labor-market-transformation/685731/

In 1869, a group of Massachusetts reformers persuaded the state to try a simple idea: counting.

The Second Industrial Revolution was belching its way through New England, teaching mill and factory owners a lesson most M.B.A. students now learn in their first semester: that efficiency gains tend to come from somewhere, and that somewhere is usually somebody else. The new machines weren’t just spinning cotton or shaping steel. They were operating at speeds that the human body—an elegant piece of engineering designed over millions of years for entirely different purposes—simply wasn’t built to match. The owners knew this, just as they knew that there’s a limit to how much misery people are willing to tolerate before they start setting fire to things.

Still, the machines pressed on.

    • stormeuh@lemmy.world · ↑1 · 5 hours ago

      But it can be sold as good enough to credulous management, thereby still doing damage by getting people laid off in the short term.

      There’s this famous quote about investing which goes: “the market can remain irrational longer than you can remain solvent”. I think that equally holds for the labor market. Just because you and everyone around you knows your job can’t be replaced by AI, doesn’t mean there won’t be an attempt to replace you which lasts long enough for you to lose your house.

  • Formfiller@lemmy.world · ↑27 · 15 hours ago

    The owner of The Atlantic is in the Epstein files. They also wrote an article shaming America’s reaction to the Brian Thompson killing with no acknowledgement of the trauma we all experience in this corrupt system. Not going to give them any traffic.

  • Kairos@lemmy.today · ↑68 · 20 hours ago

    The TLDR of this article is “we can’t predict the impact of AI because we can’t predict the future.” It apparently takes 15,000 words to say that. It just talks about what people are saying about AI without any purpose, along with random irrelevant things. This article is a waste of time.

    • XLE@piefed.social · ↑31 · 18 hours ago

      Based on your description, I expected the article to be worthless (and it definitely was worthless!), but I didn’t expect the author to start breathlessly talking about Steve Bannon as if he’s some paragon of populist “AI safety” wisdom that transcends the Republican and Democrat parties.

      For anybody who’s not aware, Steve Bannon is a key architect of the first and second Trump administrations. Bannon being part of the AI-safety grift should be a red flag that it’s a bad thing, but this author twists it into a green flag that Bannon might be a good guy after all.

    • Lost_My_Mind@lemmy.world · ↑9 · 17 hours ago

      I don’t feel at all like I’m the smartest person in any given room, but lately I feel like I’m in the movie Idiocracy. Where I’m just some average guy, and the rest of the world is letting AI do their thinking for them. The end result is, crops won’t grow, because the lot of you are trying to water them with Gatorade. Top scientists in the country are blinded as to why science fails them, never realizing it’s because Gatorade controls the farming industry and helps write the laws to ensure a further grasp on control. Regardless of results.

      And everybody else just goes with it. What will happen in the future? Click this article to read about it! Answer: No one knows what would happen if you water plants with water.

      Here is how the AI experiment plays out…

      Corporations cling to this stuff and force it down our throats, despite it not working. They do this for 2–3 generations to normalize it. With time and tech advancements they continue to develop it.

      They keep using it where people don’t push back. Which for AI, is most things. I don’t see a major pushback on Google including AI in search results. I don’t see a major pushback from MOST people on AI being in every element of Windows 11. I see people here hating on Microsoft, but Linux users are like 4% of the market.

      So they continue using the stuff people don’t rock the boat over, while not improving services. Eventually they get more and more of these AI services in every aspect of your life.

      The one place they spend all their effort improving is surveillance. Watching you watch yourself, and sending them the data.

      Alexa could listen for “Hey Alexa” or it could listen for sneezing. Then send that information to HQ where they can now sell that data, that you sneeze 37 times per day in the spring, or 3 times a day in the winter.

      Now your insurance rates go up for allergy medication before you even see your doctor.

      That’s just one example. Like one dot of a painting of millions of dots. But it all starts with people who don’t have critical thinking skills. They just don’t even question why TVs in the 90s were expensive, but by 2020 they were basically free.

      So they buy their cheap smart TVs, and smart fridge, and everything else. Happy as can be. Not even realizing that it’s all just corporations bringing us closer and closer to 1984.

      And in 30 years, not having a smartphone will be illegal. Not having a trackable device with you 24/7 will be illegal. They’ll justify it by saying “think of the children!”. And people will fall for it, yet again. Just as they always do.

      • Kairos@lemmy.today · ↑1 · 17 hours ago

        Well, the U.K. recently tried to require citizens to own and maintain a proprietary device completely beholden to U.S. companies in order to be alive (effectively), so.

        • Lost_My_Mind@lemmy.world · ↑3 ↓1 · 17 hours ago

          …in the words of Ian Malcolm:

          “God damn do I hate always being right all the time…”

          Also in the words of Ian Malcolm:

          sexy growling and laughing noises

    • jqubed@lemmy.world · ↑6 · 18 hours ago

      I’ve found that to be the case more and more with The Atlantic in recent years: long articles that might sound impressive but don’t actually say much or could’ve said things much more succinctly. I usually don’t read their articles anymore.

  • Binturong@lemmy.ca · ↑21 ↓2 · 18 hours ago

    AI is snake oil and the ones ruining the jobs are the corporations and billionaires. AI will be a net positive for society once we make it a public project and reclaim the stolen wealth of the oligarchy, who use it to maximize their extraction and destroy society. Cool article, or whatever.

  • Kairos@lemmy.today · ↑32 ↓1 · 20 hours ago

    “There are gobs of money to be made selling enterprise software, but dulling the impact of AI is also a useful feint. This is a technology that can digest a hundred reports before you’ve finished your coffee, draft and analyze documents faster than teams of paralegals, compose music indistinguishable from the genius of a pop star or a Juilliard grad, code—really code, not just copy-paste from Stack Overflow—with the precision of a top engineer. Tasks that once required skill, judgment, and years of training are now being executed, relentlessly and indifferently, by software that learns as it goes.”

    Literally not true.

    It can’t “analyze” documents. There’s no thinking involved with these machines. It outputs the statistically most likely thing that looks like analysis.

    And it’s not even close to as good as a top engineer. If it were, there would be no engineers TODAY.
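The “statistically most likely thing” framing above can be made concrete with a toy next-token model. This is a minimal sketch with an invented three-line corpus, not how any real LLM is built (real models use neural networks over huge corpora), but the core move is the same: pick the most probable continuation.

```python
# Toy illustration of "output the statistically most likely thing":
# a bigram model that always emits the most frequent next token.
# The corpus and all counts here are invented for illustration.
from collections import defaultdict

corpus = "the model predicts the next token the model repeats".split()

# Count how often each token follows each other token.
bigram_counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    bigram_counts[prev][nxt] += 1

def most_likely_next(token):
    """Return the statistically most likely successor of `token`, or None."""
    followers = bigram_counts[token]
    return max(followers, key=followers.get) if followers else None

def generate(start, n):
    """Greedily extend `start` by up to `n` most-likely tokens."""
    out = [start]
    for _ in range(n):
        nxt = most_likely_next(out[-1])
        if nxt is None:
            break
        out.append(nxt)
    return " ".join(out)

print(generate("the", 3))  # → "the model predicts the"
```

The output looks locally fluent while involving no understanding at all, which is roughly the point being argued above; frontier models differ in scale and architecture, not in this basic objective.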

    • criss_cross@lemmy.world · ↑2 · 7 hours ago

      This is why I get so frustrated when people demand I integrate this stuff into every workflow. It’s not thinking at all. It’s just regurgitating text based on input and hoping for the best.

    • forrgott@lemmy.sdf.org · ↑11 · 18 hours ago

      And let’s not forget the asinine claim about music composition. Yeah, this is a bullshit fluff piece to keep attention on AI.

      • XLE@piefed.social · ↑4 · 18 hours ago

        Could AI blow up the world tomorrow? Who knows! The future is unpredictable, so it’s basically a 50-50, right? /s

    • dparticiple@sh.itjust.works · ↑4 ↓4 · 19 hours ago

      LodeMike, I’m curious about something. What’s the latest set of AI models and tools you’ve used personally? Have you used Opus 4.5 or 4.6, for instance?

      I am not disagreeing with the points you’ve made, but it’s been my experience that the increase in capabilities over the last six months has been so rapid that it’s hard to realistically evaluate what the current frontier models are capable of unless you’ve used them meaningfully and with some frequency.

      I’d welcome your perspective.

      • criss_cross@lemmy.world · ↑1 · 7 hours ago

        Not OP but I use these on the regular.

        I’d still agree with the OP that there are hard limits to what these can do. I’ve gotten Claude stuck in loops before on removing unrelated code, then adding it back, then removing it again hoping it’ll fix something.

        And OP is still correct. At the heart of all of this it’s “given input X, guess the probability of response Y.” Even frontier models don’t think. They can output tokens to call tools to try and get more input, but it’s still a best guess.

        You can also give them too much context and get “context rot,” which makes their output absolutely horrible too. I think Cursor had a problem with that, where too many Claude skills caused it to hallucinate and go nuts.

  • ruuster13@lemmy.zip · ↑4 ↓3 · 17 hours ago

    To everyone shitting on the article because of where AI is now: remember how little time passed between Will Smith spaghetti and Sora 2?

    • XLE@piefed.social · ↑1 · 6 hours ago

      Sora 2, the product that cost $1.6 billion and hasn’t recouped even a thousandth of that yet?

      Yeah, it’s as financially unviable as ever.

    • Kairos@lemmy.today · ↑2 · 15 hours ago

      Those gains won’t continue into the future. Transformers are a mostly fleshed-out technology, at least from the strictly tech/math side. New use cases or specialized sandboxes are still new tech (a keyboard counts as a sandbox).

      • ruuster13@lemmy.zip · ↑1 ↓3 · 15 hours ago

        Moore’s Law isn’t quite dead. And quantum computing is a generation away. Computers will continue getting exponentially faster.

        • Kairos@lemmy.today · ↑6 · 15 hours ago

          No.

          We know how they work. They’re purely statistical models. They don’t create; they recreate training data based on how well it was stored in the model.

        • squaresinger@lemmy.world · ↑3 · 13 hours ago

          The problem is that hardware requirements scale exponentially with AI performance. Just look at how the RAM and compute consumption of the models has grown compared to their performance.

          Anthropic recently announced that, since the performance of one agent isn’t good enough, it will just run teams of agents in parallel on single queries, thus multiplying the hardware consumption.

          Exponential growth can only continue for so long.
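The scaling argument above can be put in rough numbers. All figures in this sketch are invented for illustration (they are not Anthropic’s actual costs); the point is only the shape of the curve: parallel agents multiply cost, and each doubling of per-agent cost compounds on top of that.

```python
# Back-of-envelope illustration of the scaling argument above.
# All numbers are invented for illustration, not measured figures.
def total_cost(base_cost, agents, doublings):
    """Hardware cost per query if per-agent cost doubles `doublings` times
    and `agents` copies run in parallel on each query."""
    return base_cost * (2 ** doublings) * agents

# One agent at today's (normalized) cost vs. a 4-agent team
# after three per-agent cost doublings:
today = total_cost(1.0, agents=1, doublings=0)  # 1.0
later = total_cost(1.0, agents=4, doublings=3)  # 32.0
print(f"cost multiplier: {later / today:.0f}x")  # → cost multiplier: 32x
```

Under these toy assumptions a 32x cost increase buys whatever performance gain the extra compute delivers, and if that gain is sublinear, the economics stop working well before the hardware does, which is the “exponential growth can only continue for so long” point.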