• dreadbeef@lemmy.dbzer0.com
    link
    fedilink
    English
    arrow-up
    6
    ·
    3 hours ago

    Get off of Github and I bet you those drop to nearly zero. Using Github is a choice, with all of the AI slop it enables. They aren’t getting rid of it any time soon. They want agents and people making shitty code PRs; that’s money sent Microsoft’s way, in their minds.

    Now that they see what the cost of using Github is, maybe Godot will (re?)consider Codeberg or a self-hosted Forgejo instance that they control.

  • order216@lemmy.world
    link
    fedilink
    English
    arrow-up
    35
    arrow-down
    2
    ·
    23 hours ago

    Why do people try to contribute when they don’t even work with the code? AI slop isn’t helping at all.

  • GreenKnight23@lemmy.world
    link
    fedilink
    English
    arrow-up
    23
    arrow-down
    1
    ·
    22 hours ago

    just deny PRs that are obvious bots and ban them from ever contributing.

    even better if you’re running your own git instance and can outright ban IP ranges of known AI shitlords.
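
    On a self-hosted instance, that kind of deny-list can boil down to a CIDR membership check in front of the forge. A minimal sketch using Python’s stdlib ipaddress module; the ranges below are documentation placeholders, not real crawler netblocks:

```python
import ipaddress

# Placeholder CIDR ranges -- substitute the netblocks you actually want to block.
BANNED_RANGES = [
    ipaddress.ip_network("203.0.113.0/24"),   # TEST-NET-3 (documentation range)
    ipaddress.ip_network("198.51.100.0/24"),  # TEST-NET-2 (documentation range)
]

def is_banned(client_ip: str) -> bool:
    """Return True if client_ip falls inside any banned CIDR range."""
    addr = ipaddress.ip_address(client_ip)
    return any(addr in net for net in BANNED_RANGES)

# A reverse proxy or middleware would call this per request and return 403 on a hit.
```

    In practice you would usually push this into the reverse proxy (nginx deny rules, fail2ban, or similar) rather than application code; the sketch just shows the core test.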

  • BitsAndBites@lemmy.world
    link
    fedilink
    English
    arrow-up
    90
    arrow-down
    1
    ·
    1 day ago

    It’s everywhere. I was just trying to find some information on starting seeds for the garden this year and I was met with AI article after AI article just making shit up. One even had a “picture” of someone planting some seeds and their hand was merged into the ceramic flower pot.

    The AI fire hose is destroying the internet.

    • maplesaga@lemmy.world
      link
      fedilink
      English
      arrow-up
      21
      ·
      1 day ago

      I fear the day they learn a different layout. Right now they seem to be usually obvious, but soon I won’t be able to tell slop from intelligence.

      • jj4211@lemmy.world
        link
        fedilink
        English
        arrow-up
        2
        ·
        2 hours ago

        You will be able to tell slop from intelligence.

        However, you won’t be able to tell AI slop from human slop. We’ve had human slop around, and it was already overwhelming, but nothing compared to LLM slop volume.

        In fact, reading AI slop text reminds me a lot of human slop I’ve seen, whether it’s ‘high school’ style paper writing or clickbait word padding of an article.

      • badgermurphy@lemmy.world
        link
        fedilink
        English
        arrow-up
        16
        arrow-down
        1
        ·
        23 hours ago

        One could argue that if the AI response is not distinguishable from a human one at all, then they are equivalent and it doesn’t matter.

        That said, the current LLM designs have no ability to do that, and so far all efforts to improve them beyond where they are today have made them worse at it. So I don’t think any tweaking or fiddling with the model will ever do anything toward what you’re describing, except possibly produce a different, but equally cookie-cutter, way of responding that looks different from the old output but is much like all the other new output. It will still be obvious and predictable shortly after we learn its new tells.

        The reason they can’t make it better anymore is that they are trying to do so by giving it ever more information to consume, in the misguided notion that once it has enough data it will be overall smarter. But that is not true, because it has no way to distinguish good data from garbage, and they have already read and consumed the whole Internet.

        Now, when they try to consume new data, a ton of it was actually already generated by an LLM, maybe even the same one, so it contains nothing new but still takes more compute to read and process. That redundant data also reinforces what the model thinks it knows, counting its own repetition of a piece of information as another corroboration that the data is accurate. It treats conjecture as fact because it saw a lot of “people” say the same thing. It could have been one crackpot talking nonsense that was then repeated as gospel on Reddit by 400 LLM bots. 401 people said the same thing; it MUST be true!
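
        That “401 corroborations” failure is easy to demonstrate: counting every repetition as independent evidence inflates one duplicated claim far past a handful of genuinely independent ones, while deduplicating identical sources collapses it back to a single vote. A toy illustration; all claims and counts here are invented:

```python
from collections import Counter

# One crackpot claim repeated by 400 bots, vs. three independent phrasings
# of the accurate claim. All strings are invented for illustration.
sources = ["the moon is made of cheese"] * 401 + [
    "the moon is mostly silicate rock",
    "lunar samples show the moon is silicate rock",
    "the crust of the moon is silicate rock",
]

def naive_corroboration(texts):
    """Count every occurrence as independent support."""
    return Counter(texts)

def deduped_corroboration(texts):
    """Count each distinct phrasing once -- repetition adds no new evidence."""
    return Counter(set(texts))

naive = naive_corroboration(sources)
deduped = deduped_corroboration(sources)
# Naively the cheese claim has 401 "corroborations"; deduplicated, it has 1.
```

        Real deduplication would need fuzzy matching of paraphrases, which is exactly what makes the problem hard; exact-string dedup only catches verbatim bot repeats.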

        • Urist@lemmy.ml
          link
          fedilink
          English
          arrow-up
          7
          ·
          11 hours ago

          I think the point is rather that it is distinguishable for someone knowledgeable on the subject, but not for someone who is not, thus making it harder to evolve from the latter into the former.

  • Hemingways_Shotgun@lemmy.ca
    link
    fedilink
    English
    arrow-up
    83
    ·
    1 day ago

    This was honestly my biggest fear for a lot of FOSS applications.

    Not necessarily in a malicious way (although there’s certainly that happening as well). I think there’s a lot of users who want to contribute, but don’t know how to code, and suddenly think…hey…this is great! I can help out now!

    Well meaning slop is still slop.

  • MystikIncarnate@lemmy.ca
    link
    fedilink
    English
    arrow-up
    66
    ·
    1 day ago

    Look. I have no problems if you want to use AI to make shit code for your own bullshit. Have at it.

    Don’t submit that shit to open Source projects.

    You want to use it? Use it for your own shit. The rest of us didn’t ask for this. I’m really hoping the AI bubble bursts in a big way very soon. Microsoft is going to need a bailout, OpenAI is fucking doomed, and X/Twitter/Grok could go either way, honestly.

    Who in their right fucking mind looks at the costs of running an AI datacenter, and the fact that it’s more economically feasible to buy a fucking nuclear power plant to run it all, and then says, yeah, this is reasonable?

    The C-whatever-O’s are all taking crazy pills.

  • Bongles@lemmy.zip
    link
    fedilink
    English
    arrow-up
    14
    ·
    1 day ago

    I don’t contribute to open source projects (not talented enough at the moment; I can do basic stuff for myself sometimes), but I wonder if you could implement some kind of requirement to prove that your code works, to avoid this issue.

    Like, you’re submitting a request that fixes X thing or adds Y feature, show us it doing it before we review it in full.

    • selfAwareCoder@programming.dev
      link
      fedilink
      English
      arrow-up
      14
      ·
      1 day ago

      The trouble is just volume and time: even just reading through the description and the “proof it works” would take a few minutes, and if you’re getting tens of these a day it can easily eat up the time needed to find the ones worth reviewing. (And these volunteers are working in their free time after a normal work day, so wasting 15 or 30 minutes out of a volunteer’s one or two hours is throwing away a lot of time.)

      Plus, when volunteering is annoying, the volunteers stop showing up, which kills projects.

      • Cryxtalix@programming.dev
        link
        fedilink
        English
        arrow-up
        1
        ·
        edit-2
        4 hours ago

        If you want to get a programming job, you want a good-looking CV, and Github’s popularity and fancy profile system make contributions to prominent open source projects look real good on one.

        Github is a magnet for lazy vibe coders spamming their shit everywhere to farm their CVs. On other git hosts without such a fancy profile system, there’s less of an incentive to do so, so the slop-to-good-code ratio should be lower and more manageable.

      • Routhinator@lemmy.ca
        link
        fedilink
        English
        arrow-up
        38
        ·
        1 day ago

        No, but they are actively not promoting or encouraging it. Github and MS are. If you keep staying on the pro-AI site, you’re going to eat the consequences of that. Github is actively encouraging these submissions with profile badges and other obnoxious crap. It’s not an appropriate environment for development anymore. It’s gamified AI crap.

      • woelkchen@lemmy.world
        link
        fedilink
        English
        arrow-up
        29
        ·
        1 day ago

        No (just like Lemmy isn’t immune to AI comments), but Github is actively working towards AI slop.

  • brucethemoose@lemmy.world
    link
    fedilink
    English
    arrow-up
    50
    ·
    1 day ago

    Godot is also weighing the possibility of moving the project to another platform where there might be less incentive for users to “farm” legitimacy as a software developer with AI-generated code contributions.

    Aahhh, I see the issue now.

    That’s the incentive to just skirt the rules of whatever their submission policy is.

  • Luden@lemmings.world
    link
    fedilink
    arrow-up
    34
    ·
    1 day ago

    I am a game developer and a web developer and I use AI sometimes just to make it write template code for me so that I can make the boilerplate faster. For the rest of the code, AI is soooo dumb it’s basically impossible to make something that works!

    • rumba@lemmy.zip
      link
      fedilink
      English
      arrow-up
      16
      arrow-down
      1
      ·
      1 day ago

      The context windows are only so large. Once you give it too much to juggle, it starts doing crazy shit.

      Boilerplates are fine, they can even usually stub out endpoints.

      Also the cheap model access is often a lot less useful than the enterprise stuff. I have access to three different services through work and even inside GPT land there are vast differences in capability.

      Claude Code has this REALLY useful implementation of agents. You can create agents with their own system prompts. Then the main context window becomes an orchestrator: you tell it what you’re looking for and tell it to use the agents to do the work. The main window becomes a project manager with a mostly empty context window; it farms out the requests to the agents, which each have their own context window. Each task stays individual, the orchestrator makes sure the agents get the job done, and none of the workloads grows so large that stuff goes insane.

      It’s still not like you can say “go make me this game,” argue with it for a couple of hours, and end up with something good. But if you keep the windows small, it can crap out a decent function/module if you clarify that you want to focus on security, best practice, and code reusability. They’re also not bad at writing unit tests.
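
      The orchestrator-plus-agents pattern described here can be sketched independently of any one vendor: the orchestrator holds only the task list, and every subtask runs against a fresh, small context. The fake_llm function below is a stand-in for a real model API call, and all names are hypothetical illustration:

```python
# Sketch of the orchestrator pattern: each subtask gets its own fresh context
# instead of accumulating into one giant window. `fake_llm` stands in for a
# real model call; the agent names and prompts are hypothetical.

def fake_llm(system_prompt: str, task: str) -> str:
    """Stand-in for a real LLM call; a real one would hit a model API."""
    return f"[{system_prompt}] completed: {task}"

AGENTS = {
    "security": "You review code for security issues only.",
    "tests": "You write unit tests only.",
}

def orchestrate(tasks):
    """Farm each (agent, task) pair out with a fresh, empty context.

    The orchestrator never sees the agents' working context, only their
    final results -- so no single window grows without bound.
    """
    results = []
    for agent_name, task in tasks:
        # New call == new context window; nothing carries over between tasks.
        results.append(fake_llm(AGENTS[agent_name], task))
    return results

results = orchestrate([
    ("security", "audit the login handler"),
    ("tests", "cover the parser edge cases"),
])
```

      The design point is simply that context isolation, not model size, is what keeps each subtask coherent.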

    • Pyr@lemmy.ca
      link
      fedilink
      English
      arrow-up
      14
      arrow-down
      1
      ·
      1 day ago

      Yes, I feel like many people misunderstand AI capabilities.

      They think it somehow comes up with the best solution, when really it’s more like lightning: it takes the path of least resistance. It finds whatever works the fastest, if it even can, without making things up and then lying that they work.

      It by no means creates elegant and efficient solutions to anything.

      AI is just a tool. You still need to know what you are doing to tell whether its solution is worth anything, and then you still need to be able to adjust and tweak it.

      It’s most useful for giving you an idea of how to do something, by suggesting a method or solution you may not have known about or wouldn’t have considered. Having it test your own stuff, or make slight adjustments, is useful too.

      • AnUnusualRelic@lemmy.world
        link
        fedilink
        English
        arrow-up
        8
        ·
        1 day ago

        It finds whatever works the fastest

        For a very lax definition of “works”…

        Kind of agree with the rest of your points. Remember, though, that the suggestions it gives you for things you’re not familiar with may very well be terrible ones that are frowned upon. So it’s always best to triple-check what it outputs, and only use it for broad suggestions.

      • ILikeBoobies@lemmy.ca
        link
        fedilink
        English
        arrow-up
        3
        ·
        1 day ago

        Works in this case doesn’t mean the output works but that it passes the input parameter rules.

  • ZeroOne@lemmy.world
    link
    fedilink
    English
    arrow-up
    53
    arrow-down
    2
    ·
    edit-2
    1 day ago

    So I guess it is time to switch to a different style of FOSS development?

    The cathedral style, as used by Fossil: in order to contribute, you have to be manually added to the group. It’s a high-trust environment where devs know each other on a first-name basis.

    Oh BTW, Fossil is a fully-fledged alternative to Git & Github. It has:

    • Version-Tracking
    • Webserver
    • Bug-tracker
    • Ticketing-system
    • Wiki
    • Forum
    • Chat
    • And a Graphical User-Interface which you can theme

    All in One binary

    • ThirdConsul@lemmy.zip
      link
      fedilink
      English
      arrow-up
      22
      ·
      1 day ago

      What if I want to contribute to a FOSS project because I’m using it, but I don’t want to make new friends?

    • RemADeus@thelemmy.club
      link
      fedilink
      English
      arrow-up
      14
      arrow-down
      1
      ·
      2 days ago

      That is a wonderful method, because it works similarly to how many Fediverse server administrators manually approve new accounts. This way the slop is immediately filtered away.

      • ZeroOne@lemmy.world
        link
        fedilink
        English
        arrow-up
        4
        ·
        edit-2
        1 day ago

        Why would your code be embarrassing? Yes, I get it, but so what? At least it’s not AI slop; you fork it & do your own thing.

        It’s not a perfect solution.

        • Ænima@lemmy.zip
          link
          fedilink
          English
          arrow-up
          2
          ·
          11 hours ago

          I doubt my skills are sufficient in anything I make, feel less confident in it, and feel judged by others critiquing me for it. I know I don’t suck at what I’ve done so far, but I never feel good enough to share my work with the public at large.

  • tabular@lemmy.world
    link
    fedilink
    English
    arrow-up
    237
    ·
    2 days ago

    Before hitting submit I’d worry I’ve made a silly mistake which would make me look a fool and waste their time.

    Do they think the AI-written code Just Works™? Do they feel so detached from that code that they don’t feel embarrassment when it’s shit? It’s like calling yourself a fiction writer and putting “written by (your name)” on the cover when you didn’t write it, and it’s nonsense.

    • kadu@scribe.disroot.org
      link
      fedilink
      English
      arrow-up
      181
      arrow-down
      9
      ·
      2 days ago

      I’d worry I’ve made a silly mistake which would make me look a fool and waste their time.

      AI bros have zero self-awareness and shame, which is why I keep arguing that the best tool for fighting back is making it socially shameful.

      Somebody comes along saying “Oh look at the image I just genera…” and you cut them off with “Looks like absolute garbage, right? Yeah, I know, AI always sucks, imagine seriously enjoying that, hahah. So anyway, what were you saying?”

          • k0e3@lemmy.ca
            link
            fedilink
            English
            arrow-up
            15
            ·
            2 days ago

            Yeah but then their Facebook accounts will keep producing slop even after they’re gone.

        • Tyrq@lemmy.dbzer0.com
          link
          fedilink
          English
          arrow-up
          7
          ·
          2 days ago

          the data eventually poisons itself when it can do nothing but refer to its own output from however many generations of hallucinated data

    • Feyd@programming.dev
      link
      fedilink
      English
      arrow-up
      114
      ·
      2 days ago

      LLM code generation is the ultimate Dunning-Kruger enhancer. They think they’re 10x ninja wizards because they can generate unmaintainable demos.

        • NotMyOldRedditName@lemmy.world
          link
          fedilink
          English
          arrow-up
          30
          ·
          2 days ago

          Sigh, now in CSI when they enhance a grainy image, the AI will make up a fake face and send them searching for someone who doesn’t exist, or it’ll use the face of someone in the training set and they’ll go after the wrong person.

          Either way, I have a feeling there’ll be some ENHANCE failure episode due to AI.

    • atomicbocks@sh.itjust.works
      link
      fedilink
      English
      arrow-up
      81
      arrow-down
      1
      ·
      2 days ago

      From what I have seen, Anthropic, OpenAI, etc. seem to be running bots that go around submitting updates to open source repos with little to no human input.

      • Notso@feddit.org
        link
        fedilink
        English
        arrow-up
        56
        arrow-down
        2
        ·
        2 days ago

        You guys, it’s almost as if AI companies are trying to kill FOSS projects intentionally by burying them in garbage code. Sounds like they took a page from Steve Bannon’s playbook: flood the zone with slop.

        • SkaveRat@discuss.tchncs.de
          link
          fedilink
          English
          arrow-up
          10
          ·
          1 day ago

          that’s the annoying part.

          LLM code can range from “doesn’t even compile” to “it actually works as requested”.

          The problem is, depending on what exactly was asked, the model will move mountains to get it running as requested, and will absolutely trash anything in its way, from “let’s abstract this with 5 new layers” to “I’m going to refactor this whole class of objects to get this simple method in there”.

          The requested feature might actually work. 100%.

          It’s just very possible that it either broke other stuff, or made the codebase less maintainable.

          That’s why it’s important that people actually know the codebase and know what they/the model are doing. Just going “works for me, glhf” is not a good way to keep a maintainable codebase.

          • turboSnail@piefed.europe.pub
            link
            fedilink
            English
            arrow-up
            8
            ·
            1 day ago

            LOL. So true.
            On top of that, an LLM can also take you on a wild goose chase. When it gives you trash, you tell it to find a way to fix it; it introduces new layers of complication and installs new libraries without ever really approaching a solution. It’s up to the programmer to notice a wild goose chase like that and pull the plug early.

            That’s a fun little mini-game that comes with vibe coding.

        • Björn@swg-empire.de
          link
          fedilink
          English
          arrow-up
          5
          ·
          1 day ago

          Reminds me of one job where, shortly after I started there, my boss asked if their entry test was too hard. They had gotten several submissions from candidates that wouldn’t even run.

          I envision these types of people are now vibe coding.

    • JustEnoughDucks@feddit.nl
      link
      fedilink
      English
      arrow-up
      7
      ·
      2 days ago

      I would think they’ll have to combat AI code with an AI-code-recognizer tool that auto-flags a PR or issue as AI; then they can simply run through and auto-close them. If the contributor doesn’t come back to explain the code and show test results demonstrating that it works, the PR is auto-closed after a week or so.
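
      The close-after-a-week half of that workflow is mechanical enough to sketch against Github’s REST API. This is a hedged sketch: the label name, repo, and grace period are assumptions, and the hard part (deciding what counts as AI) is left to humans or a separate classifier.

```python
from datetime import datetime, timedelta, timezone

GRACE_PERIOD = timedelta(days=7)  # assumed policy, not a GitHub default

def prs_to_close(flagged_prs, now):
    """Given dicts with 'number' and 'last_author_reply' (datetime or None),
    return the PR numbers whose authors went silent past the grace period."""
    stale = []
    for pr in flagged_prs:
        reply = pr["last_author_reply"]
        if reply is None or now - reply > GRACE_PERIOD:
            stale.append(pr["number"])
    return stale

# Wiring it up would use GitHub's REST API, roughly:
#   GET   /repos/OWNER/REPO/issues?labels=suspected-ai&state=open
#   PATCH /repos/OWNER/REPO/issues/{number}   with body {"state": "closed"}
# (the "suspected-ai" label is a placeholder name)

now = datetime(2025, 1, 15, tzinfo=timezone.utc)
flagged = [
    {"number": 101, "last_author_reply": None},  # author never replied
    {"number": 102, "last_author_reply": datetime(2025, 1, 14, tzinfo=timezone.utc)},  # fresh
    {"number": 103, "last_author_reply": datetime(2025, 1, 1, tzinfo=timezone.utc)},   # stale
]
stale = prs_to_close(flagged, now)
```

      Keeping the stale-PR decision as a pure function makes the policy easy to test separately from the API calls.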