• Rob200@lemmy.autism.place · 24 hours ago

    Not sure how to feel about this, but if they are honest about the labels and accurate 100% of the time with the labeling, it’s a nice feature for independent fact-checkers.

  • nyan@lemmy.cafe · 1 day ago

    You may be able to prove that a photo with certain metadata was taken by a camera (my understanding is that that’s the method), but you can’t prove that a photo without it wasn’t, because older cameras won’t have the necessary support, and wiping metadata is trivial anyway. So is it better to have more false negatives than false positives? Maybe. My suspicion is that it won’t make much difference to most people.
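    nyan’s point that wiping metadata is trivial can be shown concretely. Below is a toy sketch of EXIF removal from a JPEG byte stream, assuming only the standard marker-segment layout (the “photo” here is hand-built test bytes, not a real image, and `strip_app1` is a hypothetical helper, not anything Google’s system does):

    ```python
    def strip_app1(jpeg: bytes) -> bytes:
        """Drop APP1 (EXIF) segments from a JPEG byte stream.

        Toy illustration of why metadata-based provenance is fragile:
        the EXIF block is just a tagged segment anyone can delete.
        """
        assert jpeg[:2] == b"\xff\xd8", "missing SOI marker"
        out = bytearray(b"\xff\xd8")
        i = 2
        while i < len(jpeg):
            marker = jpeg[i:i + 2]
            if marker == b"\xff\xd9":      # EOI: no length field, we're done
                out += marker
                break
            if marker == b"\xff\xda":      # SOS: entropy-coded data, copy the rest
                out += jpeg[i:]
                break
            length = int.from_bytes(jpeg[i + 2:i + 4], "big")  # includes itself
            if marker != b"\xff\xe1":      # keep every segment except APP1 (EXIF)
                out += jpeg[i:i + 2 + length]
            i += 2 + length
        return bytes(out)

    # A minimal hand-built "JPEG": SOI + APP0 (JFIF) + APP1 (EXIF) + EOI.
    app0 = b"\xff\xe0" + (2 + 5).to_bytes(2, "big") + b"JFIF\x00"
    exif = b"Exif\x00\x00" + b"camera-make-and-gps-here"
    app1 = b"\xff\xe1" + (2 + len(exif)).to_bytes(2, "big") + exif
    photo = b"\xff\xd8" + app0 + app1 + b"\xff\xd9"
    wiped = strip_app1(photo)   # same image data, provenance gone
    ```

    A few lines of byte-walking is all it takes, which is why metadata absence can only ever yield false negatives.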

  • restingboredface@sh.itjust.works · 1 day ago

    It’s of course troubling that AI images will slip through this service unidentified (I’m also not at all confident that Google can do this well or consistently).

    However, I’m also worried about the opposite side of this problem: real images being mislabeled as AI. I can see a lot of bad actors using that to discredit legitimate news sources or stories that don’t fit their narrative.

  • apfelwoiSchoppen@lemmy.world · 1 day ago (edited)

    Google is planning to roll out a technology that will identify whether a photo was taken with a camera, edited by software like Photoshop, or produced by generative AI models.

    So they are going to use AI to detect AI. That should not present any problems.

        • FatCrab@lemmy.one · 1 day ago

          Yes, it’s called a GAN (generative adversarial network), and it has been a fundamental technique in ML for years.

            • FatCrab@lemmy.one · 1 day ago

              My point is just that they’re effectively describing a discriminator. Yeah, it entails a lot more tough problems than that sentence makes it seem, but it’s a known and very active area of ML. Sure, there may be other metadata and contextual features to discriminate on, but those heuristics will inevitably be closed off, and we’ll just end up with a giant, distributed, quasi-federated GAN. Which, setting aside the externalities (that I’m skeptical anyone with the power to address them actually understands), is kind of neat in a vacuum.
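              The discriminator framing can be sketched as a binary classifier trained to separate “real” from “generated” inputs. A minimal toy, assuming hypothetical one-dimensional “artifact scores” in place of real image features (a real detector would learn high-dimensional features, but the logic is the same):

              ```python
              import math

              def sigmoid(z: float) -> float:
                  return 1.0 / (1.0 + math.exp(-z))

              # Hypothetical scalar "artifact scores": positive for camera
              # photos, negative for generated ones. Made-up toy data.
              real_scores = [1.2, 0.8, 1.5, 0.9, 1.1]       # label 1 (camera)
              fake_scores = [-1.0, -0.7, -1.3, -0.9, -1.1]  # label 0 (generated)

              w, b = 0.0, 0.0   # one-feature logistic-regression discriminator
              lr = 0.5
              for _ in range(200):  # plain gradient descent on the log loss
                  for x, y in [(s, 1) for s in real_scores] + [(s, 0) for s in fake_scores]:
                      p = sigmoid(w * x + b)
                      w -= lr * (p - y) * x
                      b -= lr * (p - y)

              def discriminate(x: float) -> int:
                  """1 = looks real, 0 = looks generated."""
                  return 1 if sigmoid(w * x + b) >= 0.5 else 0
              ```

              In a GAN, the generator is then trained against exactly this kind of classifier, which is why a deployed detector doubles as a training signal for better fakes.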

  • tal@lemmy.today · 1 day ago

    looks dubious

    The problem here is that if this is unreliable – and I’m skeptical that Google can produce a system that will work across the board – then you have a synthesized image that now has Google attesting that it is non-synthetic.

    • xenoclast@lemmy.world · 19 hours ago (edited)

      Fun fact about AI products (or any gold-rush economy): the product doesn’t have to work. It just has to sell.

      I mean, this is generally true of anything, but it’s particularly bad in these situations. P.T. Barnum had a few thoughts on this as well.

    • AbouBenAdhem@lemmy.world · 1 day ago

      The problem here is that if this is unreliable…

      And the problem if it is reliable is that everyone becomes dependent on Google to literally define reality.