• frengo_@lemmy.world · 28 days ago

    I wish tools for detecting whether an image is real would become as easy to use and as good as this AI bullshit.

  • zecg@lemmy.world · 28 days ago

    It’s a shitty toy that’ll make some people sorry when they don’t have any photos from their night out without tiny godzilla dancing on their table. It won’t have the staying power Google wishes it to, since it’s useless except for gags.

    But, please, Verge,

    It took specialized knowledge and specialized tools to sabotage the intuitive trust in a photograph.

    get fucked

  • stoy@lemmy.zip · 28 days ago

    TL;DR: The new Reimage feature on the Google Pixel 9 phones is really good at AI manipulation, while being very easy to use. This is bad.

    • BastingChemina@slrpnk.net · 27 days ago

      I really don't have much knowledge on it, but it sounds like it would be an actual good application of blockchain.

      Couldn't a blockchain be used to certify that pictures are original and have not been tampered with?

      On the other hand, if it were possible I'm certain someone would have already started it; it's the perfect investor magnet: "Using blockchain to counter AI".

      • stoy@lemmy.zip · 27 days ago

        How would that work?

        I am being serious. I work in IT and can't see how that would work in any realistic way.

        And even if we had a working system to track all changes made to a photo, it would only work if the author submitted the original image before any changes had been made. But how would you verify that the original copy of a photo submitted to the system has not been tampered with?

        Sure, you could be required to submit the raw file from the camera, but it is only a matter of time until AI can perfectly simulate an optical sensor and produce a simulated raw of a simulated scene.

        Nope, we simply have to fall back on building trust with photojournalists, and trust digital signatures to tell us when we are seeing a photograph modified outside of the journalist's agency.
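
        As a minimal sketch of what those digital signatures buy you (assuming Python's `cryptography` package; photo.raw is a hypothetical file name): the signature proves the bytes haven't changed since they were signed, but it says nothing about whether the scene in front of the sensor was real, which is exactly the limitation described above.

        ```python
        # Minimal sketch: sign a photo's bytes at capture/publication time and
        # verify them later. Assumes the `cryptography` package; "photo.raw" is
        # a hypothetical file name. In practice the private key would live in a
        # camera's secure element or a newsroom's signing service.
        from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
        from cryptography.exceptions import InvalidSignature

        private_key = Ed25519PrivateKey.generate()
        public_key = private_key.public_key()   # published so anyone can verify

        with open("photo.raw", "rb") as f:
            original = f.read()

        signature = private_key.sign(original)

        def is_unmodified(image_bytes: bytes, sig: bytes) -> bool:
            """True only if the bytes are exactly what was signed."""
            try:
                public_key.verify(sig, image_bytes)
                return True
            except InvalidSignature:
                return False
        ```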

        • BastingChemina@slrpnk.net · 27 days ago

          Yep, I think pictures are becoming only as valuable as text, and that's fine; we just need to get used to it.

          Before photography became mainstream, the only source of information was the written word. It is extremely simple to make up a fake story, so people had to rely on trusted sources. Then, for a short period of history, photography became a (kinda) reliable source of information by itself, and this trust system lost its importance.

          In most cases, seeing a photo meant seeing a true reflection of what happened, especially if we were seeing multiple photos of the same event.

          Now we are arriving at the end of that period: we cannot trust a photo by itself anymore, and tampering with a photo is becoming as easy as writing a fake story. This is a great opportunity for journalists, I believe.

  • adam_y@lemmy.world · 28 days ago

    It’s always been about context and provenance. Who took the image? Are there supporting accounts?

    But also, it has always been about the knowledge that no one… absolutely no one… does lines of coke from a woven mat floor covering.

    don't do drugs kids.

    • mctoasterson@reddthat.com · 28 days ago

      Lots of obviously fake tipoffs in this one. The overall scrawny bitch aesthetic, the fact she is wearing a club/bar wrist band, the bottle of Mom Party Select™ wine, and the person's thumb/knee in the frame… All those details are initially plausible until you see the shitty AI artifacts.

      • yamanii@lemmy.world · 27 days ago

        This comment is pure gold: you've already been fooled but think you have a discerning eye. You are not immune to propaganda.

      • Echo Dot@feddit.uk · 28 days ago

        Um, what? The drug powder is the part that has been added in by the AI. What are you talking about?

      • Ilovethebomb@lemm.ee · 28 days ago

        All the details you just mentioned are also present in the unaltered photo though. Only the “drugs” are edited in.

        Didn’t read the article, did you?

  • Ilovethebomb@lemm.ee · 28 days ago

    Meh, those edited photos could have been created in Photoshop as well.

    This makes editing and retouching photos easier, and that’s a concern, but it’s not new.

    • FlihpFlorp@lemm.ee · 28 days ago

      Something I heard in the Photoshop vs. AI argument is that it takes an already existing process, makes it much faster, and puts it in almost anyone's hands, which increases the sheer amount that one person or a group could make, almost like how the printing press made the production of books so much faster (if you're into history).

      I’m too tired to take a stance so I’m just sharing some arguments I’ve heard

      • Ilovethebomb@lemm.ee · 28 days ago

        Making creating fake images even easier definitely isn’t great, I agree with you there, but it’s nothing that couldn’t already be done with Photoshop.

        I definitely don’t like the idea you can do this on your phone.

        • Bimbleby@lemmy.world · 28 days ago

          Exactly, it was already established that pictures from untrusted sources are to be disregarded unless they can be verified by trusted sources.

          It is basically how it has been forever with the written press: just as everyone can now manipulate a picture, everyone can write that we are being invaded by aliens, but whether we should believe it is another thing.

          It might take some time for the general public to learn this, but it should be a focus of general schooling as part of source criticism.

  • Th4tGuyII@fedia.io · 28 days ago

    Image manipulation has always been a thing, and there are ways to counter it…

    But we already know that a shocking number of people will simply take what they see at face value, even if it does look suspicious. The volume of AI-generated misinformation online is already too damn high without it getting more new strings in its bow.

    Governments don't seem to be anywhere near on top of keeping up with these AI developments either, so by the time the law starts accounting for all of this, the damage will already be done.

      • yamanii@lemmy.world · 27 days ago

        Yep, this is a problem of the volume of misinformation: the truth can just get buried by a single person generating thousands of fake photos. It's really easy to lie, and really time-consuming to fact-check.

        • gravitas_deficiency@sh.itjust.works · 27 days ago

          That’s precisely what I mean.

          The effort ratio between generating synthetic visual media and corroborating or disproving a given piece of visual media has literally inverted and then grown by an order of magnitude in the last 3-5 years. That is fucking WILD. And more than a bit scary, when you really start to consider the potential malicious implications. Which you can see being employed all over the place today.

    • RubberDuck@lemmy.world · 28 days ago

      On our vacation two weeks ago my wife took an awesome picture with just one guy annoyingly in the background. She just tapped him and clicked the button… poof, gone, perfect photo.

  • JackGreenEarth@lemm.ee · 28 days ago

    People can write things that aren’t true! Oh no, now we can’t trust trustworthy texts such as scientific papers that have undergone peer review!

    • BalooWasWahoo@links.hackliberty.org · 27 days ago

      I mean… have you seen the scathing reports on scientific papers, psychology especially? Peer review doesn’t catch liars. It catches bad experimental design, and it sometimes screens out people the reviewers don’t like. Replication can catch liars sometimes, but even in the sciences that are ‘hard’ it is rare to see replication because that doesn’t bring the grant money in.

  • Blackmist@feddit.uk · 28 days ago

    We’ve had fake photos for over 100 years at this point.

    https://en.wikipedia.org/wiki/Cottingley_Fairies

    Maybe it’s time to do something about confirming authenticity, rather than just accepting any old nonsense as evidence of anything.

    At this point anything can be presented as evidence, and now can be equally refuted as an AI fabrication.

    We need a new generation of secure cameras with internal signing of images and video (to prevent manipulation), built-in LiDAR (to make sure they're not filming a screen), periodic external timestamps of data (so nothing can be changed after the supposed date), etc.
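
    As a rough sketch of the "periodic external timestamps" part (standard-library hashlib only; photo_signed.jpg is a hypothetical file name): the camera or archive publishes a digest of each file somewhere public and append-only, and if the digest provably existed on a given date, the file cannot have been fabricated after that date.

    ```python
    # Rough sketch: build a record that commits to an image without revealing it.
    # The real security comes from publishing this record with an external,
    # independently dated witness (newspaper, transparency log, etc.).
    import hashlib
    import json
    import time

    def timestamp_record(path: str) -> dict:
        with open(path, "rb") as f:
            digest = hashlib.sha256(f.read()).hexdigest()
        return {"sha256": digest, "claimed_unix_time": int(time.time())}

    print(json.dumps(timestamp_record("photo_signed.jpg")))  # hypothetical file
    ```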

  • Echo Dot@feddit.uk · 28 days ago

    Okay, so it's the Verge, so I'm not exactly expecting much, but seriously?

    No one on Earth today has ever lived in a world where photographs were not the linchpin of social consensus

    People have been faking photographs basically since day one, with techniques like double exposure. Even more sophisticated photo manipulation has been possible with Photoshop, which has existed for decades.

    There's a photo of me taken in the '90s on Thunder Mountain at Disneyland which has been edited to look like I'm actually on a mountainside rather than in a theme park. I think we can deal with fakeable photographs; the only difference here is that the process is automatable, which honestly doesn't make the blindest bit of difference. It's quicker, but so what?

    • TheFriar@lemm.ee · 28 days ago

      It used to take professionals or serious hobbyists to make something fake look believable. Now it’s at the tip of everyone’s fingers. Fake photos were already a smaller issue, but this very well could become a tidal wave of fakes trying to grab attention.

      Think about how many scammers there are. Think about how many horny boys there are. Think about how much online political fuckery goes around these days. When believable photographs of whatever you want people to believe are at the tips of anyone’s fingers, it’s very, very easy to start a wildfire of misinformation. And think about the young girls being tormented in middle school and high school. And all the scammable old people. And all the fascists willing to use any tool at their disposal to sow discord and hatred.

      It’s not a nothing problem. It could very well become a torrent of lies.

      • rottingleaf@lemmy.world · 27 days ago

        Come on, science fiction has had similar technologies for faking things since the '40s. The writing was on the wall.

        It didn't really work outside of authors' and readers' imaginations, but the only reason we're scared is that we're forced into centralized hierarchical systems in which it's harder to defend ourselves.

        • TheFriar@lemm.ee · edited · 26 days ago

          I mean, sure, deception as a concept has always been around. But let me just put it this way:

          How many more scam emails, scam texts, how many more data leaks, conspiracy theories are going around these days? All of these things always existed. The Nigerian prince scam. That one’s been around forever. The door-to-door salesman, that one’s been around forever. The snake oil charlatan. Scams and lies have been around since we could communicate, probably. But never before have we been bombarded with them like we are today. Before, it took a guy with a rotary phone and a phone book a full day to try to scam 100 people. Now 100 calls go out all at once with a different fake phone number for each, spoofed to be as close to the recipient’s number as possible.

          The effort needed for these things has dropped significantly with new tech, and their prevalence has skyrocketed. It's not a new story. In fact, it's a very old story. It's just more common and much easier, so it's taken up by more people because it's more lucrative. Why spend all of your time trying to hack a campaign's email (which is also still happening), when you can make one suspicious picture and get all of your bots to get it trending so your company gets billions in tax breaks? All at the click of a button. Then send your spam bots to call millions of people a day to spread the information about the picture, and your email bots to spam the picture to every Facebook conspiracy theorist. All in a matter of seconds.

          This isn't a matter of "what if." This is kind of just the law of scams. It will be used for evil. No question. And it does have an effect. You can't have random numbers call you anymore without you immediately expecting their spam. Soon, you won't be able to get photo evidence without immediately thinking it might be fake. Water flows downhill, and new tech gets used for scams. It's like a law of nature at this point.

          • rottingleaf@lemmy.world · 26 days ago

            Wise people still teach their children (and remind themselves) not to talk to strangers, say “no” if not sure, mind their own business because their attention and energy are not infinite, and trust only family.

            You can’t have random numbers call you anymore without you immediately expecting their spam.

            You’d be wary of people who are not your neighbors in the Middle Ages. Were you a nobleman, you’d still mostly talk to people you knew since childhood, yours or theirs, and the rare new faces would be people you’ve heard about since childhood, yours or theirs.

            It’s not a new danger. Even qualitatively - the change for a villager coming to a big city during the industrial revolution was much more radical.

            • TheFriar@lemm.ee · 25 days ago

              That’s exactly what I meant when I said:

              It’s not a new story. In fact, it’s a very old story.

              And you just kinda proved my point. As time has gone on, the scale of deception has grown with new technology. This is just the latest iteration. And every new one has expanded the chances/danger exponentially.

              • rottingleaf@lemmy.world · 25 days ago

                What I really meant is that humanity is a self-regulating system. This disturbance will be regulated just as well as those other ones.

                The unpleasant thing is that the example I've given involved lots of new power being created, while our disturbance is the opposite: people/forces that already have power desperately trying to preserve their relative weight, at the cost of preventing new power from being created.

                But we will see if they’ll succeed. After all, the very reason they are doing this is because they can’t create power, and that is because their institutional understanding is lacking, and this in turn means that they are not in fact doing what they think they are. And by forcing those who can create power to the fringe, they are accelerating the tendencies for relief.

                • TheFriar@lemm.ee · 25 days ago

                  I don’t think this is the power redistribution you’re implying it is. I’m not actually sure what you mean by that. The power to create truths? To spread propaganda? I can’t think of any other power this tech would redistribute. Would you mind explaining?

        • TheFriar@lemm.ee · 27 days ago

          Your point being…?

          I mean…we can all see those are inanimate, right? But that doesn’t even change my point. If anything, it kinda helps prove my point. People are gullible as hell. What’s that saying? “A lie will get halfway around the world before the truth has a chance to pull its boots on.”

          A torrent of believable fakes will call into question photographic evidence. I mean, we’ve all seen it happening already. Some kinda strange or interesting picture shows up and everyone is claiming it was AI generated. That’s the other half of the problem.

          Photographic evidence is now called into question readily. That happened with Photoshop too, but like I said, throw enough shit against the wall, with millions and millions of other people also throwing shit at the wall, and some is bound to stick. The probability is skyrocketing now that it's in everyone's hands and actual AI-generated pictures are becoming indistinguishable from photo evidence.

          That low-effort fairy hoax made a bunch of people believe there were 8 in. fairies just existing in the world, regardless of how silly that was. Now stick something entirely believable into a photograph that only barely blurs the lines of reality and it can spread like wildfire. Have you seen those stupid Facebook AI pages? Like shrimp Jesus, the kids in Africa building cars out of garlic cloves, etc. People are falling for that dumbass shit. Now put Kamala Harris doing something shady in a photo and release it in late October. I would honestly be surprised if we're not hit with at least one situation like that in a few months.

    • Halcyon@discuss.tchncs.de · 27 days ago

      The new technique distorts reality on a much larger scale. That hasn't been possible before. When everybody has this in their smartphones, we will be looking at manipulated pics on an hourly basis. That's unprecedented.

  • Melvin_Ferd@lemmy.world · edited · 28 days ago

    TAKING OUR JOBS

    HARASSING WOMEN AND CHILDREN

    A THREAT TO OUR WAY OF LIFE

    THEY’RE SHITTING ON THE BEACHES

    REWRITING HISTORY BY DOCTORING PHOTOS WITH NEVER SEEN BEFORE PHOTO MANIPULATIONS

    Sorry everyone, I keep forgetting which zeitgeist the media is currently using to make us hate and fear something.

  • Diva (she/her)@lemmy.ml · 28 days ago

    If I say Tiananmen Square, you will, most likely, envision the same photograph I do.

    There was film of that exact event. The guy didn’t get run over by the tank, he got on the hood and berated the driver.

    Cops in America would run you over for less

  • conciselyverbose@sh.itjust.works · 28 days ago

    I think this is a good thing.

    Pictures/video without verified provenance have not constituted legitimate evidence for anything with meaningful stakes for several years. Perfect fakes have been possible at the level of serious actors already.

    Putting it in the hands of everyone brings awareness that pictures aren’t evidence, lowering their impact over time. Not being possible for anyone would be great, but that isn’t and hasn’t been reality for a while.

    • reksas@sopuli.xyz · edited · 28 days ago

      While this is a good thing, not being able to tell what is real and what is not would be a disaster. What if every comment here but yours were generated by some really advanced AI? What they can do now will be laughable compared to what they can do many years from now, and at that point it will be too late to demand anything be done about it.

      AI-generated content should have some kind of tag or mark inherently tied to it that can be used to identify it as AI-generated, even if only part of it is used. No idea how that would work, though, or if it's even possible.

        • reksas@sopuli.xyz · 27 days ago

          It wouldn't be a label; that wouldn't do anything, since it could just be erased. It would have to be something like an invisible set of pixels in pictures, or some inaudible sound pattern in audio, that can be detected in some way.
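
          As a toy illustration of that idea (assuming numpy and Pillow; the 8-bit tag is made up), here is the classic least-significant-bit watermark. Real watermarking schemes are far more robust; this naive version is wiped out by resizing or re-compression.

          ```python
          # Toy "invisible pixels": hide a repeating bit pattern in the lowest bit
          # of every channel, then check how much of it survives.
          import numpy as np
          from PIL import Image

          MARK = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)  # hypothetical tag

          def embed(path_in: str, path_out: str) -> None:
              pixels = np.array(Image.open(path_in).convert("RGB"))
              flat = pixels.reshape(-1)
              bits = np.tile(MARK, flat.size // MARK.size + 1)[: flat.size]
              flat = (flat & 0xFE) | bits          # overwrite the lowest bit
              Image.fromarray(flat.reshape(pixels.shape)).save(path_out, format="PNG")

          def detect(path: str) -> bool:
              flat = np.array(Image.open(path).convert("RGB")).reshape(-1)
              expected = np.tile(MARK, flat.size // MARK.size + 1)[: flat.size]
              return bool(np.mean((flat & 1) == expected) > 0.99)  # tolerate a little noise
          ```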

          • conciselyverbose@sh.itjust.works · 27 days ago

            But it’s irrelevant. You can watermark all you want in the algorithms you control, but it doesn’t change the underlying fact that pictures have been capable of lying for years.

            People just recognizing that a picture is not evidence of anything is better.

            • reksas@sopuli.xyz · 27 days ago

              Yes, but the reason people don't already consider pictures irrelevant is that it takes time and effort to manipulate a picture. With AI, not only is it fast, it can be automated. Of course you shouldn't accept something so unreliable as legal evidence, but this will spill over into everything else too.

              • conciselyverbose@sh.itjust.works · 27 days ago

                It doesn’t matter. Any time there are any stakes at all (and plenty of times there aren’t), there’s someone who will do the work.

                • reksas@sopuli.xyz · 27 days ago

                  It doesn't matter if you can't trust anything you see? What if you couldn't be sure whether you were talking to a bot right now?

    • Hacksaw@lemmy.ca · 27 days ago

      I completely agree. This is going to free kids from someone taking a picture of them doing something relatively harmless and extorting them. “That was AI, I wasn’t even at that party 🤷”

      I can't wait for childhood and teenage life to be a bit more free and a bit less constantly recorded.

      • gandalf_der_12te@lemmy.blahaj.zone · 27 days ago

        yeah, every time you go to a party, and fun happens, somebody pulls out their smartphone and starts filming. it’s really bad. people can only relax when there’s privacy, and smartphones have stolen privacy from society for over 10 years now. we need to either ban filming in general (which is not doable) or discredit photographs - which we’re doing right now.

  • WoahWoah@lemmy.world · 27 days ago

    This is a hyperbolic article to be sure. But many in this thread are missing the point. It’s not that photo manipulation is new.

    It’s the volume and quality of photo manipulation that’s new. “Flooding the zone with bullshit,” i.e. decreasing the signal-to-noise ratio, has demonstrable social effect.

    • JaggedRobotPubes@lemmy.world · 27 days ago

      It seems like the only defense against this would be something along the lines of FUTO’s Harbor, or maybe Ghost Keys. I’m not gonna pretend to know enough about them technically or practically, but a system that can anonymously prove that you’re you across websites could potentially de-fuel that fire.