• pastermil@sh.itjust.works · 2 days ago

    Definitely not a big fan of it, but realistically speaking, it’s here to stay. It is wise for them to govern and regulate it rather than outright ban it. Especially with a project as big as this one, people will try. Saying that the responsibility falls on the human is definitely the right move.

  • stylusmobilus@aussie.zone · 3 days ago

    any resulting bugs or security flaws firmly onto the shoulders of the human submitting it.

    Watch Americans and their companies pull some mad gymnastics on apportioning blame for this

  • Jankatarch@lemmy.world · 3 days ago

    Maintainers’ only responsibility is to ensure quality and shouldn’t have to check for rogue AI submissions.

    Tho I still miss consistent fucking weather so year of the netbsd?

  • sonofearth@lemmy.world · 3 days ago

    I am the c/fuck_ai person, but at this point I’ve made peace with the fact that we can’t avoid it. I still don’t want it doing artsy stuff (image gen, video gen), and I don’t want it used blindly in critical systems, because humans should be doing that work or providing constant oversight. I think the team’s logic is correct here: there is no way to know whether code came from an LLM or a human unless something in it screams LLM or the contributor explicitly mentions it. Mandating the latter seems like a reasonable move for now.

    • DaleGribble88@programming.dev · 3 days ago

      I consider myself to be more pro AI than not, but I’m certainly not a zealot, and I mostly agree with the take that it shouldn’t be used in artistic pursuits. However, I love using AI to help me create art. It can give great critiques, often offers good advice on how to improve, and is great for rapid experimentation and prototyping. I actually used it this weekend to see what a D&D mini might look like with different color schemes before painting it. I could have done the same with Gimp, but it would have taken much longer for worse results, and it was ultimately just for a brainstorming session. How do you feel about my AI usage from your perspective? I suppose from an energy-conservation perspective all of it was bad, but I’m more interested in a less trivial take.

      • sonofearth@lemmy.world · 3 days ago

        Yes, the energy consumption is bad. My main gripe about LLM-generated art is that it will not be original: it draws on training data scraped from uncredited artworks. Art is usually made by humans to express or convey something in a creative way, and LLMs fail at that. What LLMs can actually be helpful at is making learning art more accessible to everyone. Art schools and private art classes can be expensive; this lowers the barrier to entry.

        As for you using generated art: it might be really beautiful, but it will be very difficult to maintain that style, and even more difficult to convince anyone that it is your style. The artist doesn’t get much recognition with LLM-generated art. Using it for critique also seems misguided, because LLMs will always try to give an objective view rather than a subjective one. Your art won’t trigger an emotion in it, and it might just say it’s bad or “do this to make it more understandable”, and that’s where you lose as an artist.

        My mom likes to paint as a hobby. What she does is search for stuff on Pinterest (which is mostly AI generated), use it as inspiration for her own style, and maybe give it some spin. She keeps all of it for herself.

        • MeekerThanBeaker@lemmy.world · 3 days ago

          I’m a writer. I got paid to write on a few things here and there, but mostly there are just huge barriers for people without connections.

          I plan on using AI to turn my writing into a visual animated format for people to consume. I don’t much care about the style of art, I just want my work to be seen. I can’t afford to pay for artists. If I could, I would. But at least, this would give me an opportunity to show my work without some execs saying no a hundred times.

          When I look at the art for cartoons in the 70s/80s, there is so much crap animation with mistakes and duplications, you would think it’s “a.i. slop.” I understand that these were done overseas, pumped out quickly so quality control was overlooked for speed… but it wasn’t the animation I was interested in, it was the stories and characters.

          I still think original artists will continue to exist. A.I. is just another tool. People will get bored of the same old stuff and want originality. I really hope it’ll make our lives better in the long run, but we’re just in the weird middle stage of A.I. crawling before running.

          • sonofearth@lemmy.world · 3 days ago

            I can’t afford to pay for artists

            You can afford LLMs right now because all of the LLM companies are losing money on them. If they decide they want to make a profit, they will raise their prices significantly, so you still end up in the same situation. And you don’t have much control over what an LLM spits out, while with manual animation you have total control, or can at least sit with an actual animator to make it look how you envision it.

            I plan on using AI to turn my writing into a visual animated format for people to consume.

            What makes you think that people will respond the same way, and in the same numbers, to LLM-generated animation as they would if it were crafted by an artist? I reckon the response will be much lower. I see it on YouTube constantly. I watched a video about a topic, then got recommended something related to it from a different channel. Guess what? The script and the animation were so damn similar, and the shit they were spewing wasn’t even true in the end. Everything both channels made was slop. Sure, they spit out more content than conventional methods allow, got a few thousand views per video, and made decent money on it. But they aren’t going to last long if they care about audience retention.

            Since then I have been more mindful of which videos I click on, even going to the extent of disabling recommendations and watch history.

            • MeekerThanBeaker@lemmy.world · 3 days ago

              I have downloaded my own LLM that runs on my own computer… so the only cost is electricity, since I upgraded my computer before the prices went to shit. Newegg even gave me free RAM with the purchase of a motherboard, so I lucked out on that. Storage is not an issue either, since I got that back in 2024 knowing Trump would fuck everything up.

              And no, people might not respond the same way to my work, but then again I’m not taking work away from anyone else, because otherwise the work would not exist at all. If you want to fund me and an artist for our work, then okay. Show me the money.

              One thing I’ve noticed is that I see many more people complain about slop than slop itself. It’s so annoying at this point that it’s making me go in the opposite direction. Hey everyone, slop here… Microsoft slop here… Use Linux Linux Linux. Slop slop slop. Sloppy joes. It’s like candlestick makers complaining to Nikola Tesla.

              • Cataphract@lemmy.ml · 2 days ago

                Another great example of how AI is just wreaking havoc on people’s brains.

                • Wants to show an enticing product to execs, doesn’t want to invest in paying an artist
                • realizes they have to have connections but doesn’t want to network
                • wants recognition of their hard work, hasn’t sought out a community or collaboration but states “show me the money”

                AI will fix everything for me! Slop doesn’t exist! (Ignores the very article we’re in, any platform algorithm feed, the US president shitposting, all the slop that gets presented here.) Go get ’em Nik, don’t let the haters stop your brilliance.

              • sonofearth@lemmy.world · 2 days ago

                my own LLM that can be used on my own computer

                May I ask how many billion parameters it has? Because the paradox here is:

                1. If it is weak, then you will be getting much, much worse results than even the big models the corpos run (we don’t even know how big those are, tbh), let alone the quality of an actual artist.
                2. If you have a respectably powerful model, then your PC probably cost thousands of dollars (even ignoring the price hikes), which eliminates the excuse of not being able to pay an actual artist.

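                Point 2 can be put in rough numbers. A back-of-the-envelope sketch (the bytes-per-parameter figures are assumptions: ~2 bytes for fp16 weights, ~0.5 for 4-bit quantization; real usage also needs room for the KV cache and runtime overhead):

```python
# Rough estimate of the memory needed just to hold a model's weights.
# params_billions: model size in billions of parameters.
# bytes_per_param: ~2.0 for fp16, ~0.5 for 4-bit quantized weights (assumed).
def weight_memory_gb(params_billions: float, bytes_per_param: float) -> float:
    # params_billions * 1e9 parameters * bytes each, expressed in GB (1e9 bytes),
    # so the factors of 1e9 cancel.
    return params_billions * bytes_per_param

for params in (7, 70):
    for label, bpp in (("fp16", 2.0), ("4-bit", 0.5)):
        print(f"{params}B @ {label}: ~{weight_memory_gb(params, bpp)} GB")
```

                By this estimate a 4-bit 7B model fits on a midrange GPU, while a 70B model at fp16 needs on the order of 140 GB of memory, which is firmly server-class hardware.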
  • Seth Taylor@lemmy.world · 3 days ago (edited)

    Bad actors submitting garbage code aren’t going to read the documentation anyway, so the kernel should focus on holding human developers accountable rather than trying to police the software they run on their local machines.

    “Guns don’t kill people. People kill people”

    Torvalds and the maintainers are acknowledging reality: developers are going to use AI tools to code faster, and trying to ban them is like trying to ban a specific brand of keyboard.

    The author should elaborate on how exactly AI is like “a specific brand of keyboard”. Last I checked a keyboard only enters what I type, without hallucinating 50 extra pages. And if AI, a tool that generates content, is like “a specific brand of keyboard”, does that mean my brain is also a “specific brand of keyboard”?

    I get their point. If you want to create good code by having AI create bad code and then spending twice the time to fix it, feel free to do that. But I’m in favor of a complete ban.

    • Miaou@jlai.lu · 3 days ago

      The (very obvious) point is that this cannot be enforced. So might as well deal with it upfront.

    • Simulation6@sopuli.xyz · 3 days ago

      The keyboard thing is sort of a parable: it is as difficult to determine whether code was generated in part by AI as it is to determine which keyboard was used to write it.

    • Electricd@lemmybefree.net · 2 days ago

      You’re the one comparing AI and guns/killing people, and then saying their metaphorical comparison isn’t accurate? Lol

    • Shayeta@feddit.org · 3 days ago (edited)

      AI is a useful tool for coding as long as it’s used properly. The problem isn’t the tool; the problem is the companies who scraped the entire internet, trained LLMs on it, and then put them behind paywalls with no option to download the weights for self-hosting. Brazen, unaccountable profiteering off the goodwill of many open source projects without giving anything back.

      If LLMs were community-trained on available, open-source code with weights freely available for anyone to host there wouldn’t be nearly as much animosity against the tech itself. The enemy isn’t the tool, but the ones who built the tool at the expense of everyone and are hogging all the benefits.

      • Electricd@lemmybefree.net · 2 days ago

        Eh, trust me, anti AI people don’t think this much about it

        Also, there are a lot of open weight models out there that are pretty good

      • cartoon meme dog@lemmy.zip · 3 days ago

        There are hundreds of such LLMs with published training sets and weights available on places like HuggingFace. Lots of people run their own LLMs locally; it’s not hard if you have enough VRAM and a bit of patience to wait longer for each reply.

    • BigPotato@lemmy.world · 3 days ago

      Wooting and Razer had a macro function that let Counter-Strike players set up a binding that always produced a perfect counter-strafe. Valve decided that was a bridge too far and banned “hardware-level” exploits.

      So, Valve once banned a keyboard.

    • bassow@lemmy.world · 2 days ago

      Torvalds and the maintainers are acknowledging reality: developers are going to use AI tools to code faster, and trying to ban them is like trying to ban a specific brand of keyboard.
      

      The author should elaborate on how exactly AI is like “a specific brand of keyboard”. Last I checked a keyboard only enters what I type, without hallucinating 50 extra pages. And if AI, a tool that generates content, is like “a specific brand of keyboard”, does that mean my brain is also a “specific brand of keyboard”?

      It’s about the heritage of code not being visible from the surface. I don’t know about your brain.

    • ede1998@feddit.org · 3 days ago

      Last I checked a keyboard only enters what I type

      I’ve had (broken) keyboard “hallucinate” extra keystrokes before, because of stuck keys. Or ignore keypresses. But yeah, that means the keyboard is broken.

    • ziproot@lemmy.ml · 3 days ago

      Last I checked a keyboard only enters what I type

      I’m assuming the author is talking about mobile keyboards, which have autocomplete and autocorrect.

    • alyth@lemmy.world · 3 days ago

      Out of curiosity how much code have you contributed to the Linux kernel?

  • theherk@lemmy.world · 4 days ago

    Seems like a reasonable approach. Make people be accountable for the code they submit, no matter the tools used.

    • ell1e@leminal.space · 4 days ago

      If the accountability cannot be practically fulfilled, the reasonable policy becomes a ban.

      What good is it to say “oh yeah you can submit LLM code, if you agree to be sued for it later instead of us”? I’m not a lawyer and this isn’t legal advice, but sometimes I feel like that’s what the Linux Foundation policy says.

      • ViatorOmnium@piefed.social · 4 days ago

        But this was already the case. When someone submitted code to Linux they always had to assume responsibility for the legality of the submitted code, that’s one of the points of mandatory Signed-off-by.

        • badgermurphy@lemmy.world · 4 days ago

          But now even the person submitting license-breaching content may be unaware that they are doing it, so the problem is surely worse: contributors can easily and unwittingly end up on the wrong side of the law.

          • Traister101@lemmy.today · 4 days ago

            That’s their problem. If they are using an LLM and cannot verify the output they shouldn’t be using an LLM

            • jj4211@lemmy.world · 4 days ago

              Problem is that, broadly, most GenAI users don’t take that risk seriously. So far no one can point to a court case where a rights holder successfully sued someone over LLM infringement.

              The biggest case so far is Getty’s, with very blatantly obvious infringement. They lost in the UK, so that’s not a good sign.

            • hperrin@lemmy.ca · 4 days ago

              Nobody can verify that the output of an LLM isn’t from its training data except those with access to its training data.

            • badgermurphy@lemmy.world · 4 days ago

              It is their problem until the second they submit it; then it is the project’s problem. You can lay the blame for the bad actions wherever you want, but the reality is that the work of verifying the legality and validity of these submissions is being abdicated, crippling projects under the increased workload of going through ever more submissions that amount to junk.

              What is the solution for that? The fact that it is the fault of the lazy submitter doesn’t clean up the mess they left.

              • Traister101@lemmy.today · 4 days ago

                Frankly, I expect the kernel dudes to be pretty good about this; their style guides alone are quite strict, and any funny business in a PR that isn’t marked correctly likely means a ban from making PRs at all. How it worked beforehand, as others have already said, is the author says “I promise this follows the rules” and that’s basically the end of it. Giving an official avenue for generated code is a great way to reduce the negatives of what will happen anyway. We know this from decades of real-life experience trying to ban things like alcohol or drugs: time after time, providing a legal avenue with some rules makes things safer. Why wouldn’t we see a similar effect here?

                • badgermurphy@lemmy.world · 4 days ago

                  I do think that some projects will fare better than others, particularly ones like you mentioned, where the team is robust and capable of handling the filtering of increased submissions from these new sources.

                  I believe we are going to end up having to see some new mechanism for project submissions to deal with the growing imbalance between submission volume and work hours available for review, as became necessary when viruses, malware, and spam first came into being. It has quickly become incredibly easy for anyone to make a PR, but not at all easier to review them, so something is going to have to give in the FOSS world.

    • hperrin@lemmy.ca · 4 days ago

      No, it’s not a reasonable approach. Making people the authors of the code they submit is reasonable, because then it can be released under the GPL. AI-generated code is public domain.

      • theherk@lemmy.world · 4 days ago

        I suppose there should be no code generators, assemblers, compilers, linkers, or LSPs then either? Just etching 1s and 0s?

        • hperrin@lemmy.ca · 4 days ago

          The copyright office has made it explicitly clear that those tools do not interfere with the traditional elements of authorship, and that the use of LLMs does. So, if you don’t want to take my word for it, take the US Copyright Office’s word for it.

          • theherk@lemmy.world · 4 days ago (edited)

            As the agency overseeing the copyright registration system, the Office has extensive experience in evaluating works submitted for registration that contain human authorship combined with uncopyrightable material, including material generated by or with the assistance of technology. It begins by asking “whether the ‘work’ is basically one of human authorship, with the computer [or other device] merely being an assisting instrument, or whether the traditional elements of authorship in the work (literary, artistic, or musical expression or elements of selection, arrangement, etc.) were actually conceived and executed not by man but by a machine.” In the case of works containing AI-generated material, the Office will consider whether the AI contributions are the result of “mechanical reproduction” or instead of an author’s “own original mental conception, to which [the author] gave visible form.” The answer will depend on the circumstances, particularly how the AI tool operates and how it was used to create the final work. This is necessarily a case-by-case inquiry.

            If a work’s traditional elements of authorship were produced by a machine, the work lacks human authorship and the Office will not register it. For example, when an AI technology receives solely a prompt from a human and produces complex written, visual, or musical works in response, the “traditional elements of authorship” are determined and executed by the technology—not the human user. Based on the Office’s understanding of the generative AI technologies currently available, users do not exercise ultimate creative control over how such systems interpret prompts and generate material. Instead, these prompts function more like instructions to a commissioned artist—they identify what the prompter wishes to have depicted, but the machine determines how those instructions are implemented in its output. For example, if a user instructs a text-generating technology to “write a poem about copyright law in the style of William Shakespeare,” she can expect the system to generate text that is recognizable as a poem, mentions copyright, and resembles Shakespeare’s style. But the technology will decide the rhyming pattern, the words in each line, and the structure of the text. When an AI technology determines the expressive elements of its output, the generated material is not the product of human authorship. As a result, that material is not protected by copyright and must be disclaimed in a registration application.

            In other cases, however, a work containing AI-generated material will also contain sufficient human authorship to support a copyright claim. For example, a human may select or arrange AI-generated material in a sufficiently creative way that “the resulting work as a whole constitutes an original work of authorship.” Or an artist may modify material originally generated by AI technology to such a degree that the modifications meet the standard for copyright protection. In these cases, copyright will only protect the human-authored aspects of the work, which are “independent of ” and do “not affect” the copyright status of the AI-generated material itself.

            This policy does not mean that technological tools cannot be part of the creative process. Authors have long used such tools to create their works or to recast, transform, or adapt their expressive authorship. For example, a visual artist who uses Adobe Photoshop to edit an image remains the author of the modified image, and a musical artist may use effects such as guitar pedals when creating a sound recording. In each case, what matters is the extent to which the human had creative control over the work’s expression and “actually formed” the traditional elements of authorship.

            https://www.copyright.gov/ai/ai_policy_guidance.pdf

            What this makes clear is that it certainly isn’t as black and white as you say. Nevertheless, automation converting an input to an output simply cannot be the only mechanism used in determining authorship.

            And that wouldn’t change my statement anyway, but rather supports it. The person submitting a patch must be accountable for its contents.

            An outright ban would need to carefully define how an input gets converted to an output, and that may not be so clear. To be effectively clear, one would potentially have to end the use of many tools that have been used in the kernel for years, including snippet generation, spelling and grammar correction, and IDE autocompletion. So such a reductive view simply will not suffice.

            Additionally, copyrightability and licensability are wholly different questions. And it does not violate the GPL to include public-domain content, since the license applies to the aggregate work.

            • hperrin@lemmy.ca · 4 days ago

              If a work’s traditional elements of authorship were produced by a machine, the work lacks human authorship and the Office will not register it. For example, when an AI technology receives solely a prompt from a human and produces complex written, visual, or musical works in response, the “traditional elements of authorship” are determined and executed by the technology—not the human user. Based on the Office’s understanding of the generative AI technologies currently available, users do not exercise ultimate creative control over how such systems interpret prompts and generate material. Instead, these prompts function more like instructions to a commissioned artist—they identify what the prompter wishes to have depicted, but the machine determines how those instructions are implemented in its output. For example, if a user instructs a text-generating technology to “write a poem about copyright law in the style of William Shakespeare,” she can expect the system to generate text that is recognizable as a poem, mentions copyright, and resembles Shakespeare’s style. But the technology will decide the rhyming pattern, the words in each line, and the structure of the text. When an AI technology determines the expressive elements of its output, the generated material is not the product of human authorship. As a result, that material is not protected by copyright and must be disclaimed in a registration application.

              That seems very clear to me. Generative AI output is not human authored, and therefore not copyrighted.

              The policy I use also makes very clear the definition of AI generated material:

              https://sciactive.com/human-contribution-policy/#Definitions

              I’m not exactly sure how you can possibly think there is an equivalence between a tool like a spelling and grammar checker and a generative AI, but there’s a reason the copyright office will register works that have been authored using spelling and grammar checkers, but not works that have been authored using LLMs.

              • theherk@lemmy.world · 4 days ago

                Just read the next two paragraphs; don’t stop because you got to something that you like. The equivalence I draw is clear. You don’t like it, and that’s okay. But one would have to clarify exactly what the ban entails, and that wouldn’t be as clear as you might think. LLMs only? Transformers specifically? What about graph generation, or other ML models? Is it just ML? If so, is that because a matrix lattice was used to get from input to output? Could other deterministic math functions trigger the same ban? What if a spell checker used an RNG to select the best replacement from a list of correct options? What if a compiler introduces into its assembled output an optimization not of the author’s writing?

                Do you see why they say “The answer will depend on the circumstances, particularly how the AI tool operates and how it was used to create the final work. This is necessarily a case-by-case inquiry”?

                And that still affects copyrightability, not license compliance.

                • hperrin@lemmy.ca · 4 days ago (edited)

                  Do you want to explain to me what, in those two paragraphs, means that the use of spell checkers and LLMs is equivalent with regard to copyrightability? It seems like those paragraphs make it clear that the use of spell checkers is not the same as LLMs.

                  The policy I use bans “generative AI model” output. Generative AI is a pretty well defined term:

                  https://en.wikipedia.org/wiki/Generative_AI

                  https://www.merriam-webster.com/dictionary/generative AI

                  If you have trouble determining whether something is a generative AI model, you can usually just look up how it is described in the promotional materials or on Wikipedia.

                  Type: Large language model, Generative pre-trained transformer

                  - https://en.wikipedia.org/wiki/Claude_(language_model)

                  I never said it violates GPL to include public domain code. I’m not sure where you got that from. What I said is that public domain code can’t really be released under the GPL. You can try, but it’s not enforceable. As in, you can release it under that license, but I can still do whatever I want with it, license be damned, because it’s public domain.

                  I did that with this vibe coded project:

                  https://github.com/hperrin/gnata

                  I just took it and rereleased it as public domain, because that’s what it is anyway.

      • ziproot@lemmy.ml · 4 days ago

        Isn’t that the rule? The author has to be a human?

        The new guidelines mandate that AI agents cannot use the legally binding “Signed-off-by” tag, requiring instead a new “Assisted-by” tag for transparency. Ultimately, the policy legally anchors every single line of AI-generated code and any resulting bugs or security flaws firmly onto the shoulders of the human submitting it.
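
        As a sketch, a commit message under such a policy might carry trailers like the following (the subject line, names, and tool description here are illustrative, not taken from the actual kernel documentation):

        ```
        foo: fix buffer handling in example driver

        The refactor was drafted with an LLM, then reviewed, tested,
        and reworked by hand before submission.

        Assisted-by: Claude (Anthropic LLM)
        Signed-off-by: Jane Developer <jane@example.org>
        ```

        The human contributor supplies the Signed-off-by line, taking on the Developer’s Certificate of Origin obligations, while the Assisted-by line only records that a tool was involved.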

  • NewNewAugustEast@lemmy.zip
    link
    fedilink
    English
    arrow-up
    58
    arrow-down
    5
    ·
    4 days ago

    Copilot? You mean the AI with terms of service that are in bold and explicit: “for entertainment purposes only”?

    Which is why it’s in the title and not the article? EntertainBait?

  • CanIFishHere@lemmy.ca
    link
    fedilink
    English
    arrow-up
    69
    arrow-down
    13
    ·
    4 days ago

    AI is here, another tool to use…the correct way. Very reasonable approach from Torvalds.

    • Newsteinleo@infosec.pub
      link
      fedilink
      English
      arrow-up
      33
      arrow-down
      1
      ·
      4 days ago

      I don’t have a problem with LLMs as much as the way people use them. My boss has offloaded all of his thinking to LLMs to the point he can’t fix a sentence in a slide deck without using an LLM.

      It’s the people that try to use LLMs for things outside their domain of expertise that really cause the problems.

      • NotMyOldRedditName@lemmy.world
        link
        fedilink
        English
        arrow-up
        5
        arrow-down
        1
        ·
        4 days ago

        It’s the people that try to use LLMs for things outside their domain of expertise that really cause the problems.

        That seems too general. I’m a mobile developer and sometimes I need a simple script outside my knowledge area. I needed to scrape a website recently, not for anything serious, but to save me time. Claude wrote it and it works. It’s probably trash code, but it works and it helped. But you wouldn’t want me using Claude to do important work outside my specific area of focus either, or I’m sure I’d cause problems.
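
        That kind of throwaway scraper really can be tiny. A minimal standard-library sketch (the page content and the choice of scraping links are made up for illustration, not the commenter’s actual script):

        ```python
        from html.parser import HTMLParser

        class LinkScraper(HTMLParser):
            """Collect href values from every <a> tag fed to the parser."""

            def __init__(self):
                super().__init__()
                self.links = []

            def handle_starttag(self, tag, attrs):
                if tag == "a":
                    for name, value in attrs:
                        if name == "href" and value:
                            self.links.append(value)

        # A real script would fetch the page with urllib.request.urlopen();
        # a static snippet stands in for the network call here.
        page = '<html><body><a href="/one">first</a> <a href="/two">second</a></body></html>'
        scraper = LinkScraper()
        scraper.feed(page)
        print(scraper.links)  # ['/one', '/two']
        ```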

        • boraginoru@lemmy.zip
          link
          fedilink
          English
          arrow-up
          2
          ·
          3 days ago

          I’m also a mobile app dev, and at my workplace they’re having non-mobile devs submit code to my codebases, totally vibe-coded with no understanding behind it. It’s absolutely causing problems, especially for me, who is one of the only lines of defense keeping stuff even remotely maintainable.

          So yes basically you’re right. If people only used it to learn and do initial code review passes and other reasonable things we’d be totally fine. But that’s unfortunately not the reality 🙈

          • NotMyOldRedditName@lemmy.world
            link
            fedilink
            English
            arrow-up
            1
            ·
            3 days ago

            It’s absolutely causing problems, especially for me, who is one of the only lines of defense keeping stuff even remotely maintainable.

            The next step is the CEO saying: look at how good these non-mobile devs are, they’re submitting 10x the commits to the mobile repo compared to boraginoru, our mobile dev! We should fire him and just let the backend devs keep vibe coding it!

        • Newsteinleo@infosec.pub
          link
          fedilink
          English
          arrow-up
          3
          ·
          3 days ago

          I’m talking about people who are accountants who now think they can create software. Or engineers who think they can now write legal briefs for court.

      • CanIFishHere@lemmy.ca
        link
        fedilink
        English
        arrow-up
        4
        arrow-down
        2
        ·
        4 days ago

        Very frustrating for sure. Like any tool, it’s up to humans to know when the tool is useful.

        • filcuk@lemmy.zip
          link
          fedilink
          English
          arrow-up
          3
          ·
          4 days ago

          Partly a marketing issue.
          Companies keep advertising their new AI’s as destroyers of worlds, and something that’s too dangerous to even release.
          As with anything else, the average user will have only the most surface-level understanding of the tool.

      • InternetCitizen2@lemmy.world
        link
        fedilink
        English
        arrow-up
        14
        arrow-down
        2
        ·
        4 days ago

        This is a big point. People need to understand that LLMs are more like a fancy graphing calculator; they are very good and can handle many things, but it’s on you to understand why the calculation is meaningful. At a certain point no one wants to see your long division or factorial. We want the results, and for students and professionals to focus on the concept.

        • NekoKoneko@lemmy.world
          link
          fedilink
          English
          arrow-up
          3
          arrow-down
          1
          ·
          3 days ago

          I get the metaphor but it’s not a great one for AI in mathematics especially. A statistical word generator is not going to perform reliable math and woe to anyone who acts otherwise.

          I would call it an autistic sycophantic savant with brain damage. It’s able to perform apparent miraculous feats of memory and creativity but then be unable to tell reality from fiction, to tell if even the simplest response is valid, and likely will lie about it to make itself seem more competent to please you.

          If you have a use for an assistant like that, then great. But a calculator - simple and cheap and reliable - it definitely is not.

    • null@lemmy.zip
      link
      fedilink
      English
      arrow-up
      6
      ·
      4 days ago

      Clickbait got me. No mention of “Yes copilot” which I assumed was a joke anyway.

  • catlover@sh.itjust.works
    link
    fedilink
    English
    arrow-up
    60
    ·
    4 days ago

    I’d still be highly sceptical about pull requests with code created by LLMs. Personally, what I’ve noticed is that the author of such a PR doesn’t even read the code, and I have to go through all the slop.

    • kcuf@lemmy.world
      link
      fedilink
      English
      arrow-up
      20
      ·
      4 days ago

      Ya, I’m finding myself being the bad code generator at work. I’m scattered across so many things at the moment due to attrition, and AI can do a lot of the boilerplate work, but it’s such a time and energy sink to fully review what it generates. I’ve found basic things I missed that others catch, which shows the sloppiness. I usually take pride in my code, but I have no attachment to what’s generated, and that’s exposing issues with trying to scale out using this.

      • Repple (she/her)@lemmy.world
        link
        fedilink
        English
        arrow-up
        18
        ·
        edit-2
        4 days ago

        Same. There’s reduction in workforce, pressure to move faster, and no good way to do that without sloppiness. I have never been this down on the industry before; it was never great, but now it’s terrible.

        • Danitos@reddthat.com
          link
          fedilink
          English
          arrow-up
          10
          ·
          edit-2
          3 days ago

          Some thought I had the other day: LLMs are supposed to make us more productive, say by 20%. Have you gotten a 20% pay rise since you adopted them? I haven’t.

      • Feyd@programming.dev
        link
        fedilink
        English
        arrow-up
        4
        arrow-down
        6
        ·
        4 days ago

        Just fucking stop using it? Wtf? Tell your boss to pound sand! They’re going to blame you when it goes south anyway, so you might as well stay honest.

    • jj4211@lemmy.world
      link
      fedilink
      English
      arrow-up
      2
      arrow-down
      1
      ·
      4 days ago

      I suspect the answer will be that such large requests as you frequently see with LLM codegen will just be rejected.

      Already I see changes broken up and suggested bit by bit, so I presume the same best practice applies.

    • terabyterex@lemmy.world
      link
      fedilink
      English
      arrow-up
      4
      arrow-down
      12
      ·
      edit-2
      4 days ago

      Did we all forget about stackoverflow?

      People blindly copy/pasted from there all the time.

      • Railcar8095@lemmy.world
        link
        fedilink
        English
        arrow-up
        10
        ·
        4 days ago

        Couple of years back I got a PR at work that used a block of code that read a CSV, used some stream method to convert it to binary, then fed it to pandas to make a dataframe. I don’t remember the exact steps, but it was just crazy when pd.read_csv existed.

        On a hunch I pasted the code in google and found an exact match on overflow for a very weird use case on very early pandas.

        I’m lucky: if people send obvious shit at work I can just cc their manager. But I feel for the volunteers at large FOSS projects, or even paid employees.
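
        For contrast, the roundabout pattern and the one-liner can be put side by side. This is a hypothetical reconstruction of the kind of code described, not the actual PR:

        ```python
        import io

        import pandas as pd

        csv_text = "a,b\n1,2\n3,4\n"  # stand-in for the CSV file on disk

        # The roundabout Stack Overflow-style pattern: take the text,
        # re-encode it to bytes, wrap it in a binary stream, then hand
        # that to pandas.
        binary_stream = io.BytesIO(csv_text.encode("utf-8"))
        df_roundabout = pd.read_csv(binary_stream)

        # The direct approach: pd.read_csv accepts a path or any
        # file-like object, so none of the detour is needed.
        df_direct = pd.read_csv(io.StringIO(csv_text))

        assert df_roundabout.equals(df_direct)
        ```

        Both produce the same dataframe; the detour only adds copies and confusion.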

  • null@lemmy.org
    link
    fedilink
    English
    arrow-up
    43
    arrow-down
    4
    ·
    4 days ago

    Ah, the solution that recognizes there’s no way to eliminate AI from the supply chain after it’s already been introduced.

    • sunbeam60@feddit.uk
      link
      fedilink
      English
      arrow-up
      8
      arrow-down
      11
      ·
      4 days ago

      You make it sound as if there was another choice if people just had better principles. Pray tell us, what would you have done, now? Not in the past; now.

      • null@lemmy.org
        link
        fedilink
        English
        arrow-up
        18
        ·
        4 days ago

        That wasn’t my intent. This is me saying, “of course that’s what they’re going to do because there’s nothing else they can do.”

      • Feyd@programming.dev
        link
        fedilink
        English
        arrow-up
        10
        arrow-down
        2
        ·
        4 days ago

        You’re agreeing with the comment you replied to. Why the fuck are you trying to be so smug???

  • gandalf_der_12te@discuss.tchncs.de
    link
    fedilink
    English
    arrow-up
    35
    arrow-down
    2
    ·
    4 days ago

    I agree. If AI becomes outlawed, it will simply be used without other people knowing about it.

    This approach, at least, means that people will label AI-generated code as such.

    • emmy67@lemmy.world
      link
      fedilink
      English
      arrow-up
      19
      ·
      4 days ago

      Maybe. There’s still strong disapproval around it. I can imagine many will still hide it.

    • truthfultemporarily@feddit.org
      link
      fedilink
      English
      arrow-up
      22
      arrow-down
      11
      ·
      4 days ago

      Where does slop start? If you use autocomplete and it is just adding a semicolon or some braces, is it slop? Is producing, character by character, what you would have written yourself slop?

      How about using it for debugging?

      • hperrin@lemmy.ca
        link
        fedilink
        English
        arrow-up
        13
        ·
        4 days ago

        You don’t need AI to autocomplete code. We’ve had autocomplete for over 30 years.

      • ell1e@leminal.space
        link
        fedilink
        English
        arrow-up
        15
        arrow-down
        3
        ·
        4 days ago

        If you would have written it yourself the same way, why not write it yourself? (And there was autocomplete before the age of LLMs, anyway.)

        The big problems start with situations where it doesn’t match what you would have written, but rather what somebody else has written, character by character.

      • BoxOfFeet@lemmy.world
        link
        fedilink
        English
        arrow-up
        10
        arrow-down
        4
        ·
        4 days ago

        To me, it starts at anything beyond correcting spelling for individual words or adding punctuation. I don’t even want it suggesting quick reply phrases.

        Is producing character by character what you would have written yourself slop?

        Yes.

      • badgermurphy@lemmy.world
        link
        fedilink
        English
        arrow-up
        3
        ·
        4 days ago

        There’s the rub. When establishing laws and guidelines, every term must be explicitly defined. Lack of specificity in these definitions is where bad-faith actors hide their misdeeds by technically obeying the letter of the law due to its vagueness, while flagrantly violating its spirit.

        It’s why, today in the USA, corporations are legally people when it’s convenient and not when it’s not, and the expenditure of money is government-protected “free speech”.

      • FauxLiving@lemmy.world
        link
        fedilink
        English
        arrow-up
        7
        arrow-down
        15
        ·
        edit-2
        4 days ago

        There is a certain brand of user (who may or may not be a human) who draws the Venn diagram of ‘AI slop’ and ‘AI output’ as a single circle.

        They’ve taken the extremist position that AI should be uninvented and any use of AI is the worst thing that could possibly happen to any project and they’ll have an entire grab bag of misinformation-based memes to shotgun at you. Engaging with these people is about as productive as trying to convince a vaccine denier that vaccines don’t cause autism.

        I’m not saying that the user you replied to believes this, but the comment they wrote is indistinguishable from the comments of such a user.

        e: I’d also like to point out that these users are very much attracted to low-effort activism. This is why you see comments like mine being heavily downvoted but not many actual replies. They want to influence the discussion but don’t have the capability or motivation to step into the ring, so to speak, and defend their opinions.

        • ell1e@leminal.space
          link
          fedilink
          English
          arrow-up
          13
          arrow-down
          2
          ·
          4 days ago

          It’s less extremist if you look at how easily these LLMs will just plagiarize 1:1, apparently:

          https://github.com/mastodon/mastodon/issues/38072#issuecomment-4105681567

          Some see “AI slop” as “identified by the immediate problems of it that I can identify right away”.

          Many others see “AI slop” as bringing many more problems beyond the immediate ones. Then seeing LLM output as anything but slop becomes difficult.

          • FauxLiving@lemmy.world
            link
            fedilink
            English
            arrow-up
            7
            arrow-down
            7
            ·
            4 days ago

            It’s extremist to take the fact that you CAN get plagiaristic output and to conclude that all other output is somehow tainted.

            You personally CAN quote copyrighted music and screenplays. If you’re an artist then you also CAN produce copyright violating works. None of these facts taint any of the other things that you produce that are not copyright or plagiarized.

            In this situation, and in the current legal environment, the responsibility to not produce illegal and unlicensed code is on the human. The fact that the tool that they use has the capability to break the law does not mean that everything generated by it is tainted.

            Photoshop can be used to plagiarize and violate copyright too. It would be just as absurd to declare all images created with Photoshop are somehow suspect or unusable because of the capability of the tool to violate copyright laws.

            The fact that AI can, when specifically prompted, produce memorized segments of the training data has essentially no legal weight in any of the cases where it has been argued. It is a fact that is of interest to scientists who study how AI represent knowledge internally and not any kind of foundation for a legal argument against the use of AI.

            • badgermurphy@lemmy.world
              link
              fedilink
              English
              arrow-up
              5
              arrow-down
              1
              ·
              4 days ago

              Sure, but if they can be demonstrated to ever plagiarize without attribution, and the default user behavior is to pencil-whip the output, which it is, then it becomes statistically certain that users are unwittingly plagiarizing other works.

              It’s like using a tool that usually bakes cookies, but every once in a great while, it knocks over the building it’s in. It almost never does that, though.

              • FauxLiving@lemmy.world
                link
                fedilink
                English
                arrow-up
                2
                arrow-down
                3
                ·
                4 days ago

                Plagiarism and copyright violation are two different things, one is ethical and the other is legal.

                Copyright has a body of case law which helps determine when a work significantly infringes on the copyrighted work of another. Plagiarism has no body of law at all, it is an ethical construct and not a legal one.

                You can plagiarize something that has no copyright protection and you can infringe on copyright protection without plagiarizing. They’re not interchangeable concepts.

                In your example, some institutions would not allow such a device to operate on their property but it would not be illegal to operate and the liability would be on the person and not on the oven.

                To further strain the metaphor, Linus is saying that you can use (possibly) exploding ovens, because he isn’t taking a moral stance on the topic, but you are responsible for the damages if they cause any because the legal systems require that this be the case.

        • hperrin@lemmy.ca
          link
          fedilink
          English
          arrow-up
          2
          arrow-down
          2
          ·
          edit-2
          4 days ago

          According to the US Copyright Office, AI generated material cannot be copyrighted (unless of course it’s plagiarized copyrighted code). That’s reason enough to leave it out of the kernel. If the kernel’s license becomes unenforceable because of public domain code, the kernel is tainted.

          Edit: I don’t know why people are downvoting this. It’s literally just the truth: https://sciactive.com/human-contribution-policy/#More-Information

          • FauxLiving@lemmy.world
            link
            fedilink
            English
            arrow-up
            1
            arrow-down
            1
            ·
            4 days ago

            Copyright and License terms are two different categories of law. Copyright is an idea created and enforced by the laws of the country which has jurisdiction. Licenses are a contract between two parties and is covered by contract law.

            A thing can be unable to be protected by copyright and also protected by the terms of the license that it is provided under. If a project contains copyrighted code that does not mean that you cannot be held to the terms of the license. Your use of licensed works is granted under the agreement that you follow the terms of the license. You cannot be held liable for copyright violations for using the code, but using the code in a manner that is not allowed by the license makes you liable for violation of the contract that is the license agreement.

            • hperrin@lemmy.ca
              link
              fedilink
              English
              arrow-up
              1
              ·
              4 days ago

              I think you’re misunderstanding what I’m saying. Any portions of the kernel that are public domain can be used by anyone for any purpose without following the terms of the GPL. AI generated code is public domain. To make sure all parts of the kernel are protected by the GPL, public domain code should not be accepted unless absolutely necessary.

              • FauxLiving@lemmy.world
                link
                fedilink
                English
                arrow-up
                1
                ·
                3 days ago

                I don’t see the problem. GPL protects all of the code that is copyrighted, i.e. 100% made by humans. Accepting a submission created with AI tools doesn’t change this. It’s not going to be a simple task for someone who has decided to violate the GPL license to only use the generated/uncopyrighted portions without using any other GPL code and thus being subject to GPL licensing terms.

                These hypothetical GPL violating people will have a hard time using lines 27-38 of ./kernel/events/ring_buffer.c to do anything even if they technically can do so without releasing their code under the GPL. If they use any piece of GPL code, at all, anywhere, their entire project is required to follow the GPL. So while they could, technically, take 27-38 of ring_buffer.c and build an entire proprietary non-GPL Linux kernel… it is, in practice, not feasible even if it technically possible.

                • hperrin@lemmy.ca
                  link
                  fedilink
                  English
                  arrow-up
                  1
                  ·
                  3 days ago

                  So what happens thirty years from now when 95% of the kernel code is AI generated? It’ll be a lot easier to rewrite the parts that aren’t, and have a fully closed source kernel that you can use without following the GPL.

    • femtek@lemmy.blahaj.zone
      link
      fedilink
      English
      arrow-up
      11
      arrow-down
      2
      ·
      4 days ago

      I mean, I don’t use Copilot, but a self-hosted Claude at work for debugging and creating templates. I still run through and test what it produces. I’m only doing crossplane, kyverno, and kubernetes infra things though, and I started without it, so I have an understanding. But someone’s crossplane composition written in Go is now running, and when I asked him about an error he just said to get the AI to fix it, which was worrying since his last day is next week.

    • chilicheeselies@lemmy.world
      link
      fedilink
      English
      arrow-up
      4
      arrow-down
      3
      ·
      4 days ago

      It’s only slop if you accept slop. What I mean is that it can and does generate perfectly fine code. It also generates code that is OK but needs a human touch. It also generates verbose garbage.

      It’s only slop if you approve the slop. It’s perfectly fine to let it generate the boilerplate of what you want and tweak it. If it’s prompted well enough, you get less slop.

      Ultimately I am with Linus on this one. The genie is out of the bottle. Use it responsibly.

  • ell1e@leminal.space
    link
    fedilink
    English
    arrow-up
    29
    arrow-down
    2
    ·
    edit-2
    4 days ago

    Ultimately, the policy legally anchors every single line of AI-generated code

    How would that even be possible? Given the state of things:

    https://dl.acm.org/doi/10.1145/3543507.3583199

    Our results suggest that […] three types of plagiarism widely exist in LMs beyond memorization, […] Given that a majority of LMs’ training data is scraped from the Web without informing content owners, their reiteration of words, phrases, and even core ideas from training sets into generated texts has ethical implications. Their patterns are likely to exacerbate as both the size of LMs and their training data increase, […] Plagiarized content can also contain individuals’ personal and sensitive information.

    https://www.theatlantic.com/technology/2026/01/ai-memorization-research/685552/

    Four popular large language models—OpenAI’s GPT, Anthropic’s Claude, Google’s Gemini, and xAI’s Grok—have stored large portions of some of the books they’ve been trained on, and can reproduce long excerpts from those books. […] This phenomenon has been called “memorization,” and AI companies have long denied that it happens on a large scale. […]The Stanford study proves that there are such copies in AI models, and it is just the latest of several studies to do so.

    https://www.twobirds.com/en/insights/2025/landmark-ruling-of-the-munich-regional-court-(gema-v-openai)-on-copyright-and-ai-training

    The court confirmed that training large language models will generally fall within the scope of application of the text and data mining barriers, […] the court found that the reproduction of the disputed song lyrics in the models does not constitute text and data mining, as text and data mining aims at the evaluation of information such as abstract syntactic regulations, common terms and semantic relationships, whereas the memorisation of the song lyrics at issue exceeds such an evaluation and is therefore not mere text and data mining

    https://www.sciencedirect.com/science/article/pii/S2949719123000213#b7

    In this work we explored the relationship between discourse quality and memorization for LLMs. We found that the models that consistently output the highest-quality text are also the ones that have the highest memorization rate.

    https://arxiv.org/abs/2601.02671

    recent work shows that substantial amounts of copyrighted text can be extracted from open-weight models. However, it remains an open question if similar extraction is feasible for production LLMs, given the safety measures […]. We investigate this question […] our work highlights that, even with model- and system-level safeguards, extraction of (in-copyright) training data remains a risk for production LLMs.

    How does merely tagging the apparently stolen content make it less problematic, given I’m guessing it still won’t have any attribution of the actual source (which for all we know, might often even be GPL incompatible)?

    But I’m not a lawyer, so I guess what do I know. But even from a non-legal angle, what is this road the Linux Foundation seems to embrace of just ignoring the license of projects? Why even have the kernel be GPL then, rather than CC0?

    I don’t get it. And the article calling this “pragmatism” seems absurd to me.

    • FauxLiving@lemmy.world
      link
      fedilink
      English
      arrow-up
      11
      arrow-down
      6
      ·
      edit-2
      4 days ago

      Given the research that you’ve done here I’m going to assume that you’re looking for an answer and not simply taking us on a gish gallop.

      Your premise, and what appears to be the primary source of confusion, is built on the idea that this is ‘stolen’ work which, from a legal point of view, is untrue. If you want to dig into why that is, look into the precedent setting case of Authors Guild, Inc. v. Google, Inc. (2015). The TL;DR is that training AI on copyrighted works falls under the Fair Use exemptions in copyright law. i.e. It is legal, not stealing.

      The case you linked from Munich shows that other country’s legal systems are interpreting AI training in the same way. Training AI isn’t about memorization and plagiarism of existing work, it’s using existing work to learn the underlying patterns.

      That isn’t to say that memorization doesn’t happen, but it is more of a point of interest to AI scientists that are working on understanding how AI represents knowledge internally than a point that lands in a courtrooom.

      We all memorize copyrighted data as part of our learning. You, too, can quote Disney movies or Stephen King novels if prompted in the right way. This doesn’t make any work you create automatically become plagarism, it just means that you have viewed copyrighted work as part of your learning process. In the same way, artists have the capability to create works which violate the copyright of others and they consumed copyrighted works as part of their learning process. These facts don’t taint all of their work, either morally or legally… only the output that literally violates copyright laws.

      The pragmatism here is recognizing that these tools exist and that people use them. The current legal landscape is such that the output of these tools is as if they were the output of the users. If an image generator generates a copyrighted image then the rightsholder can sue the person, not the software. If a code generator generates licensed code then the tool user is responsible.

      This is much like how we don’t restrict the usage of Photoshop despite the fact that it can be used to violate copyright. We, instead, put the burden on the person who operates the tool.

      That’s what is happening here. Linus isn’t using his position to promote/enforce/encourage LLM use, nor is he using his position to prevent/restrict/disallow any AI use at all. He is recognizing that this is a tool that exists in the world in 2026 and that his project needs to have procedures that acknowledge this while also ensuring that a human is the one responsible for their submissions.

      This is the definition of pragmatism (def: action or policy dictated by consideration of the immediate practical consequences rather than by theory or dogma).

      e: precedent, not president (I’m blaming the AI/autocorrect on this one)

      • bss03@infosec.pub
        link
        fedilink
        English
        arrow-up
        1
        ·
        3 days ago

        The TL;DR is that training AI on copyrighted works falls under the Fair Use exemptions in copyright law

        This judgement was reversed by the next federal judge that reviewed AI, in the Meta case.

        It is far from legally settled whether training is fair use or not.

        • FauxLiving@lemmy.world
          link
          fedilink
          English
          arrow-up
          2
          ·
          3 days ago

          Well, cynically, the Supreme Court will decide and Team AI has more money to buy RVs and luxury vacations.

      • mimavox@piefed.social
        link
        fedilink
        English
        arrow-up
        4
        arrow-down
        1
        ·
        4 days ago

        Training AI isn’t about memorization and plagiarism of existing work, it’s using existing work to learn the underlying patterns.

        Thank you. This is exactly what people misunderstand. LLMs aren’t gigantic databases that just shuffle information they’ve copied from the internet.

        • anarchiddy@lemmy.dbzer0.com
          link
          fedilink
          English
          arrow-up
          3
          arrow-down
          2
          ·
          4 days ago

          LLMs themselves being products of copyrighted work isn’t the legal question at issue; it’s the downstream use of that product.

          If I use a copyright-infringing work as a part of a new creative work, does that new work infringe copyright by default? Or does the new work need to be judged itself as to the question of infringing a copyrighted work?

          And if it is judged as infringing, who is responsible for the damage done? Can I pass the damages back to the original infringing work? Or should I be held responsible for not performing due diligence?

          • FauxLiving@lemmy.world
            link
            fedilink
            English
            arrow-up
            3
            ·
            4 days ago

            If I use a copyright-infringing work as a part of a new creative work, does that new work infringe copyright by default?

            No, see reaction content, parody content, etc. They all undoubtedly use copyrighted work and they don’t automatically infringe on copyright by default.

            And if it is judged as infringing, who is responsible for the damage done? Can I pass the damages back to the original infringing work? Or should I be held responsible for not performing due diligence?

            The infringing party is the human that used the tool which generated the infringing work. Everything after that is exactly the same application of copyright law, just as if you were selling pictures of Mickey Mouse that you drew yourself. Disney can sue you; they can’t sue the pencil manufacturer.

            • anarchiddy@lemmy.dbzer0.com
              link
              fedilink
              English
              arrow-up
              4
              arrow-down
              1
              ·
              4 days ago

              Yup

              People want to pretend as if everything that flows downstream from the creation of LLMs is illegal, but that’s just not the reality.

        • FauxLiving@lemmy.world
          link
          fedilink
          English
          arrow-up
          1
          ·
          4 days ago

          You’re confusing two separate legal issues.

          Copyright is created and enforced by copyright law.

          Licenses are created and enforced by contract law.

          You can violate a contract without violating a copyright and you can violate a copyright without agreeing to a license. You can also license works that are not able to be protected by a copyright because they are two separate categories of law.

          • hperrin@lemmy.ca
            link
            fedilink
            English
            arrow-up
            1
            ·
            edit-2
            4 days ago

            Sure, you can license them, but that license is unenforceable, because you don’t own the copyrights, so you can’t sue anyone for copyright infringement. And you’d have to be a fool to agree to a license for public domain material. You can do whatever you want with it, no license necessary.

            • FauxLiving@lemmy.world
              link
              fedilink
              English
              arrow-up
              2
              ·
              3 days ago

              because you don’t own the copyrights, so you can’t sue anyone for copyright infringement.

              You can’t sue for copyright infringement.

              You can, however, use content which is not able to be copyrighted and still license (under contract law/EULAs) your product, including terms prohibiting copying of the non-copyrightable information.

              This was settled in: https://en.wikipedia.org/wiki/ProCD%2C_Inc._v._Zeidenberg

              On Zeidenberg’s copyright argument, the circuit court noted the 1991 Supreme Court precedent Feist Publications v. Rural Telephone Service, in which it was found that the information within a telephone directory (individual phone numbers) were facts that could not be copyrighted. For Zeidenberg’s argument, the circuit court assumed that a database collecting the contents of one or more telephone directories was equally a collection of facts that could not be copyrighted. Thus, Zeidenberg’s copyright argument was valid. However, this did not lead to a victory for Zeidenberg, because the circuit court held that copyright law does not preempt contract law. Since ProCD had made the investments in its business and its specific SelectPhone product, it could require customers to agree to its terms on how to use the product, including a prohibition on copying the information therein regardless of copyright protections.

              You can’t copyright phone numbers, just like you can’t copyright generated code, but you can still create a license which protects your uncopyrightable content and it can be enforced via contract law.

              • hperrin@lemmy.ca
                link
                fedilink
                English
                arrow-up
                1
                ·
                3 days ago

                Sure, but if it’s open source, I can just take that code without agreeing to your contract. Since it’s public domain, I can do whatever I want with it. You can only enforce a contract if I agree to it.

                • FauxLiving@lemmy.world
                  link
                  fedilink
                  English
                  arrow-up
                  1
                  ·
                  3 days ago

                  It doesn’t have to be open source.

                  If someone 100% generates code to make software then the software isn’t protected by copyright.

                  That software could be distributed and licensed under an EULA and the fact that it isn’t protected by copyright means absolutely nothing as far as the EULA is concerned.

                  The copyright status and the ability to license a piece of software under contract law do not depend on one another.

        • anarchiddy@lemmy.dbzer0.com
          link
          fedilink
          English
          arrow-up
          2
          arrow-down
          4
          ·
          4 days ago

          The Linux Kernel is under a copyleft license - it isn’t being copyrighted.

          But the policy being discussed isn’t about allowing the use of copyrighted code - they’re simply requiring that any code submitted by AI be tagged as such, so that the human using the agent is ultimately responsible for any infringing code, instead of allowing that code to go undisclosed (and even ‘certified’ by the dev submitting it, even if they didn’t write or review it themselves).

          Submissions are still subject to copyright law - the law just doesn’t function the way you or OP are suggesting.

          • AeonFelis@lemmy.world
            link
            fedilink
            English
            arrow-up
            1
            ·
            3 days ago

            they’re simply requiring any code submitted by AI be tagged as such so that the human using the agent is ultimately responsible for any infringing code, instead of allowing that code go undisclosed

            This makes zero sense, because the article says that this new tagging will replace the legally binding “Signed-off-by” tag. Wouldn’t that old tag already put that responsibility on the person submitting the code?

            Also - what will holding the submitter responsible even achieve? If an infringement is detected, the Linux maintainers won’t be able to just pass all the blame to the submitter of that code while keeping it in the codebase - they’ll have to remove the infringing code regardless of who’s responsible for putting it in.

            • anarchiddy@lemmy.dbzer0.com
              link
              fedilink
              English
              arrow-up
              1
              ·
              2 days ago

              Kinda, but they’re specifically saying that the AI agent cannot itself tag the contribution with the sign-off - like someone using Claude Code to submit PRs on their behalf. The developer must add the tag themselves, indicating that they at least reviewed and submitted the code, and that it wasn’t just an agent going off-prompt or some other shit and submitting it without the developer’s knowledge. This is saying ‘the dog ate my homework’ is not a valid excuse.

              The developer can use AI, but they must review the code themselves, and the agent can’t “sign off” on the code for them.
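
              To make that concrete, the commit trailers might look something like this (the AI attribution tag name below is an assumption - the exact wording has varied across proposals; Signed-off-by is the kernel’s existing convention from the Developer’s Certificate of Origin):

              ```
              Fix null pointer dereference in example driver probe path

              Co-developed-by: <AI tool name>
              Signed-off-by: Jane Developer <jane@example.com>
              ```

              The Signed-off-by line is the part the human must add themselves, after actually reviewing the change.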

              Also - what will holding the submitter responsible even achieve?

              What does holding any individual responsible on a development team do? The Linux project is still responsible for anything they put out in the kernel just like any other project, but individual developers can be removed from the contributing team if they break the rules and put it at risk.

              The new rule simply makes the expectations clear.

          • hperrin@lemmy.ca
            link
            fedilink
            English
            arrow-up
            2
            arrow-down
            1
            ·
            4 days ago

            Copyleft doesn’t mean it’s not copyrighted. Copyleft is not a legal term. “Copyleft” licenses are enforced through copyright ownership.

            Did you read the quotes from the copyright office I linked to? I am going to go ahead and trust the copyright office over you on issues of copyrightability.

            • anarchiddy@lemmy.dbzer0.com
              link
              fedilink
              English
              arrow-up
              1
              arrow-down
              1
              ·
              4 days ago

              Even if this were true, it would only mean that the GNU license is unenforceable, not that the Linux kernel itself is infringing copyright.

              • hperrin@lemmy.ca
                link
                fedilink
                English
                arrow-up
                2
                ·
                4 days ago

                Unless the code the AI generated is a copy of copyrighted code, of course. Then it would be copyright infringement.

                I can cause the AI to spit out code that I own the copyright to, because it was trained on my code too. If someone used that code without including attribution to me (the requirement of the license I release my code under), that would be copyright infringement. Do you understand what I mean?

                • anarchiddy@lemmy.dbzer0.com
                  link
                  fedilink
                  English
                  arrow-up
                  1
                  arrow-down
                  1
                  ·
                  4 days ago

                  That would be true even if they didn’t use AI to reproduce it.

                  The problem being addressed by the Linux Foundation isn’t the use of copyrighted work in developer contributions, it’s the assumption that the code was actually authored by the submitting developer just because it’s submitted in their name and tagged as verified.

                  Does that make sense?