• cactusfacecomics@lemmy.world · 13 days ago

    Seems reasonable to me. If you’re using AI then you should be required to own up to it. If you’re too embarrassed to own up to it, then maybe you shouldn’t be using it.

      • eldebryn@lemmy.world · 12 days ago

        IMO if your “A*”-style algorithm is used for a chatbot, or for any kind of user interaction or content generation, it should still be explicitly declared.

        That being said, there is some nuance here about A) the use of copyrighted material and B) non-deterministic behaviour, neither of which is (usually) a concern in more classical non-DL approaches to AI.

    • technocrit@lemmy.dbzer0.com · 13 days ago

      I’m stoked to see the legal definition of “AI”. I’m sure the lawyers and costumed clowns will really clear it all up.

      • MajorasTerribleFate@lemmy.zip · 13 days ago

        Prosecution: “Your Honor, the definition of artificial is ‘made or produced by human beings rather than occurring naturally,’ and as all human beings are themselves produced by human beings, we are definitionally artificial. Therefore, the actions of an intelligent human are inherently AI.”

        Defense: “The defense does not argue this point, as such. However, our client, FOX News, could not be said to be exhibiting ‘intelligence.’ Artificial they may be, but AI they are clearly not. We rest our case.”

  • pHr34kY@lemmy.world · 14 days ago

    It would be nice if this extended to all text, images, audio and video on news websites. That’s where the real damage is happening.

    • BrianTheeBiscuiteer@lemmy.world · 14 days ago

      Actually, it seems easier (probably not at the state level) to mandate that cameras and such digitally sign any media they create. No signature or verification, no trust.
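
      Not spelling out anyone’s actual proposal, just a minimal sketch of what “the camera signs at capture, everyone verifies before trusting” could look like, assuming an Ed25519 keypair and Python’s cryptography package (all names here are illustrative):

          # A camera signs a photo when it is taken; anyone holding the
          # maker's public key can later check the file wasn't altered.
          from cryptography.exceptions import InvalidSignature
          from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

          # On a real device the private key would sit in a secure element,
          # not in application code.
          camera_key = Ed25519PrivateKey.generate()
          public_key = camera_key.public_key()

          photo = b"...raw image bytes..."        # stand-in for the actual file
          signature = camera_key.sign(photo)      # happens on the camera

          # A newsroom or reader verifies before trusting the image.
          try:
              public_key.verify(signature, photo)  # raises if tampered with
              print("valid signature: unchanged since capture")
          except InvalidSignature:
              print("no valid signature: don't trust it")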

      • cley_faye@lemmy.world · 14 days ago

        “No signature or verification, no trust”

        And the people that are going to check for a digital signature in the first place, THEN check that the signature emanates from a trusted key, then, eventually, check who’s deciding the list of trusted keys… those people, where are they?

        Because the lack of trust, validation, verification, and more generally the lack of any credibility hasn’t stopped anything from spreading like a dumpster fire in a field full of dumpsters doused in gasoline. Part of my job is providing digital signature tools and creating “trusted” data (I’m not in sales, obviously), and the main issue is that nobody checks anything, even when faced with liability, even when they actually pay for an off-the-shelf solution to do so. And I’m talking about people who should care, not even the general public.

        There are a lot of steps before “digitally signing everything” even gets on people’s radar. For now, a green checkmark anywhere is enough to convince anyone, sadly.
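
        For what it’s worth, the chain of checks described above (is there a signature at all, does it verify, is the signing key on a trust list) is only a few lines of code; the genuinely hard part is the last step the comment points at, deciding who curates that list. A rough sketch, again assuming Python’s cryptography package, with hypothetical helper names:

            from cryptography.exceptions import InvalidSignature
            from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

            # Raw 32-byte public keys somebody has decided to trust
            # (placeholder entry; curating this set is a policy problem, not code).
            TRUSTED_KEYS: set[bytes] = {b"\x01" * 32}

            def is_trustworthy(data: bytes, signature: bytes | None,
                               signer_key: bytes) -> bool:
                if signature is None:               # 1. is it signed at all?
                    return False
                if signer_key not in TRUSTED_KEYS:  # 2. do we trust the key?
                    return False
                try:                                # 3. does the signature hold up?
                    Ed25519PublicKey.from_public_bytes(signer_key).verify(signature, data)
                    return True
                except InvalidSignature:
                    return False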

  • sem@lemmy.blahaj.zone · 11 days ago

    -“If you’re an AI Cop, you have to tell me. It’s the law.”
    -“I’m not a cop.”

    • shane@feddit.nl · 13 days ago

      I mean, we call the software that runs computer players in games AI, so… ¯\_(ツ)_/¯

      • Hungry_man@lemmy.world · 13 days ago

        The AI chatbot brainrot is way worse tbh. Someone legit said to me, “Why doesn’t ChatGPT cure cancer?” Like, wtf.

        • Leon@pawb.social · 13 days ago

          As if taking all of 4chan, scrambling it around a little, and pouring the contents out would lead to a cure for cancer. lmao

      • potpotato@lemmy.world · 13 days ago

        Do we? Aren’t they just bots? Like I’m not looking at an NPC and calling it AI.

  • cley_faye@lemmy.world · 14 days ago

    Be sure to tell this to “AI”. It would be a shame if this turned out to be a technically nonsensical law.

  • w3dd1e@lemmy.zip · 13 days ago

    But Peter Thiel said regulating AI will bring the biblical apocalypse. ƪ(˘⌣˘)ʃ

    • 🔍🦘🛎@lemmy.world · 13 days ago

      Hi there, Cancer Robot here! Excellent question, iopq! We state that we cause cancer first, as is tradition.

  • minorkeys@lemmy.world · 13 days ago

    If you ask ChatGPT, it says its guidelines include not giving the impression that it’s a human. But if you ask it to be less human because it is confusing you, it says that would break the guidelines.

    • markovs_gun@lemmy.world · 13 days ago

      ChatGPT doesn’t know its own guidelines because those aren’t even included in its training corpus. Never trust an LLM about how it works or how it “thinks” because fundamentally these answers are fake.