A judge in Washington state has blocked video evidence that’s been “AI-enhanced” from being submitted in a triple murder trial. And that’s a good thing, given the fact that too many people seem to think applying an AI filter can give them access to secret visual data.

  • AnUnusualRelic@lemmy.world · 6 months ago

    Why not make it a fully AI court if they were going to go that way? It would save so much time and money.

    Of course it wouldn’t be very just, but then regular courts aren’t either.

  • dual_sport_dork 🐧🗡️@lemmy.world · 6 months ago

    No computer algorithm can accurately reconstruct data that was never there in the first place.

    Ever.

    This is an ironclad law, just like the speed of light and the acceleration of gravity. No new technology, no clever tricks, no buzzwords, no software will ever be able to do this.

    Ever.

    If the data was not there, anything created to fill it in is by its very nature not actually reality. This includes digital zoom, pixel interpolation, movement interpolation, and AI upscaling. It preemptively also includes any other future technology that aims to try the same thing, regardless of what it’s called.
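
    To make that concrete, here is a minimal sketch (assuming NumPy and SciPy are installed) of how even a plain bilinear upscale manufactures pixel values that no sensor ever recorded:

    ```python
    import numpy as np
    from scipy.ndimage import zoom

    # A 2x2 "sensor capture": these four values are the only real data.
    captured = np.array([[10.0, 50.0],
                         [90.0, 130.0]])

    # Bilinear upscale to 4x4: most of the sixteen output values are
    # computed by the algorithm, not recorded by any sensor.
    upscaled = zoom(captured, 2, order=1)

    # Values that exist only because the algorithm made them up.
    print(np.setdiff1d(upscaled, captured))
    ```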

    • UnderpantsWeevil@lemmy.world · 6 months ago

      No computer algorithm can accurately reconstruct data that was never there in the first place.

      Okay, but what if we’ve got a computer program that can just kinda insert red eyes, joints, and plumes of chum smoke on all our suspects?

    • abhibeckert@lemmy.world · 6 months ago (edited)

      It preemptively also includes any other future technology that aims to try the same thing

      No it doesn’t. For example you can, with compute power, correct for distortions introduced by camera lenses/sensors/etc. and drastically increase image quality. For example, this photo of Pluto was taken from 7,800 miles away - click the link for a version of the image that hasn’t been resized/compressed by lemmy:

      The unprocessed image would look nothing at all like that. There’s a lot more data in an image than you can see with the naked eye, and algorithms can extract/highlight that data. That’s obviously not what a generative AI algorithm does - those should never be used - but there are other algorithms which are appropriate.
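
      To be clear about the kind of processing I mean, here is a rough sketch using OpenCV (the calibration numbers below are placeholders; real values come from calibrating the actual camera, e.g. with a checkerboard target):

      ```python
      import cv2
      import numpy as np

      # Placeholder intrinsics; a real pipeline measures these with
      # cv2.calibrateCamera rather than guessing them.
      camera_matrix = np.array([[1000.0, 0.0, 640.0],
                                [0.0, 1000.0, 360.0],
                                [0.0, 0.0, 1.0]])
      dist_coeffs = np.array([-0.25, 0.08, 0.0, 0.0, 0.0])  # k1, k2, p1, p2, k3

      img = cv2.imread("frame.png")
      # Undo the known, measurable lens distortion: existing pixels are mapped
      # back to where the optics bent them from, nothing new is invented.
      corrected = cv2.undistort(img, camera_matrix, dist_coeffs)
      cv2.imwrite("frame_corrected.png", corrected)
      ```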

      The reality is every modern photo is heavily processed - look at this example by a wedding photographer, even with a professional camera and excellent lighting the raw image on the left (where all the camera processing features are disabled) looks like garbage compared to exactly the same photo with software processing:

      • dual_sport_dork 🐧🗡️@lemmy.world · 6 months ago

        None of your examples are creating new legitimate data out of whole cloth. They’re just making details that were already there visible to the naked eye. We’re not talking about taking a giant image that has too many pixels to fit on your display device in one go and just focusing on a specific portion of it. That’s not the same thing as attempting to interpolate missing image data. In that case the data was there to begin with; it just wasn’t visible due to limitations of the display or the viewer’s retinas.

        The original grid of pixels is all of the meaningful data that will ever be extracted from any image (or video, for that matter).

        Your wedding photographer’s picture actually throws away color data in the interest of contrast and to make it more appealing to the viewer. When you fiddle with the color channels like that and see all those troughs in the histogram that make it look like a comb? Yeah, all those gaps and spikes are actually original color/contrast data that is being lost. There is technically less data in the touched-up image than the original, and if you are perverse and own a high bit depth display device (I do! I am typing this on a machine with a true 32-bit-per-pixel professional graphics workstation monitor.) you actually can stare at it and see the entirety of the detail captured in the raw image before the touchups. A viewer might not think it looks great, but how it looks is irrelevant from the standpoint of data capture.
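
        If you want to see that combing for yourself, here is a rough sketch (assuming NumPy) of how a simple 8-bit contrast stretch leaves gaps in the histogram:

        ```python
        import numpy as np

        rng = np.random.default_rng(0)
        # Fake 8-bit luminance data concentrated in the midtones, like a flat raw frame.
        original = rng.integers(80, 180, size=100_000).astype(np.uint8)

        # A crude "levels"/contrast stretch: map the 80-179 range onto 0-255.
        stretched = np.clip((original.astype(np.float64) - 80) * (255 / 99), 0, 255).astype(np.uint8)

        hist, _ = np.histogram(stretched, bins=256, range=(0, 256))
        # The empty bins are the comb: whole tonal levels that no pixel
        # lands on anymore once the adjusted file is saved back to 8 bits.
        print("empty histogram bins after the stretch:", int(np.sum(hist == 0)))
        ```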

        • Richard@lemmy.world · 6 months ago

          They talked about algorithms used for correcting lens distortions in their first example. That is absolutely a valid use case, and it extracts new data by making certain assumptions with certain probabilities. Your newly created law of nature is just your own imagination and is not the prevalent understanding in the scientific community. No, quite the opposite: scientific practice runs exactly counter to your statements.

    • KairuByte@lemmy.dbzer0.com · 6 months ago

      One little correction: digital zoom is not something that belongs on that list. It’s essentially just cropping the image. That said, I agree that “enhanced” digital zoom should be on that list.
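
      A quick sketch of the difference (assuming Pillow): a plain digital zoom only crops pixels that already exist, while an “enhanced” zoom has to invent new ones when it resamples back up:

      ```python
      from PIL import Image

      img = Image.open("photo.jpg")          # e.g. 4000x3000 pixels
      region = (1000, 750, 3000, 2250)       # the area we want to "zoom" into

      # Plain digital zoom: every pixel in the result existed in the original.
      cropped = img.crop(region)

      # "Enhanced" zoom: resampling back up to the original size creates
      # pixel values that were never captured by the sensor.
      enhanced = cropped.resize(img.size, resample=Image.BICUBIC)

      cropped.save("zoom_crop.png")
      enhanced.save("zoom_enhanced.png")
      ```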

  • Voyajer@lemmy.world · 6 months ago

    You’d think it would be obvious you can’t submit doctored evidence and expect it to be upheld in court.

  • emptyother@programming.dev · 6 months ago

    How long until we get upscalers of various sorts built into tech that shouldn’t have them? For bandwidth reduction, for storage compression, or for cost savings. Can we trust what we capture with a digital camera when companies replace a low-quality image of the moon with a professionally taken picture at capture time? Can sports replays be trusted when the ball is upscaled inside the judges’ screens? Cheap security cams with “enhanced night vision” might get somebody jailed.

    I love the AI tech. But its future worries me.

    • Bread@sh.itjust.works · 6 months ago

      The real question is: could we ever really trust photographs before AI? Image manipulation has been a thing since long before the digital camera and Photoshop. What makes the images we see actually real? Cameras have been miscapturing image data for as long as they have existed. Do the light levels in a photo match what was actually there according to the human eye? Usually not. So what makes a photo real?

    • GenderNeutralBro@lemmy.sdf.org · 6 months ago

      AI-based video codecs are on the way. This isn’t necessarily a bad thing because it could be designed to be lossless or at least less lossy than modern codecs. But compression artifacts will likely be harder to identify as such. That’s a good thing for film and TV, but a bad thing for, say, security cameras.

      The devil’s in the details and “AI” is way too broad a term. There are a lot of ways this could be implemented.

      • Buelldozer@lemmy.today · 6 months ago

        AI-based video codecs are on the way.

        Arguably already here.

        Look at this description of Samsung’s mobile AI for their S24 phone and newer tablets:

        AI-powered image and video editing

        Galaxy AI also features various image and video editing features. If you have an image that is not level (horizontally or vertically) with respect to the object, scene, or subject, you can correct its angle without losing other parts of the image. The blank parts of that angle-corrected image are filled with Generative AI-powered content. The image editor tries to fill in the blank parts of the image with AI-generated content that suits the best. You can also erase objects or subjects in an image. Another feature lets you select an object/subject in an image and change its position, angle, or size.

        It can also turn normal videos into slow-motion videos. While a video is playing, you need to hold the screen for the duration of the video that you want to be converted into slow-motion, and AI will generate frames and insert them between real frames to create a slow-motion effect.
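
        To make that last point concrete: frame interpolation means manufacturing frames the camera never shot. A naive sketch (assuming OpenCV; real interpolators use learned motion estimation rather than a simple blend):

        ```python
        import cv2

        cap = cv2.VideoCapture("clip.mp4")
        ok_a, frame_a = cap.read()
        ok_b, frame_b = cap.read()

        if ok_a and ok_b:
            # A crude in-between frame: a 50/50 blend of its two neighbours.
            # Learned interpolators are far more convincing, but the principle
            # is the same: this frame was never recorded by the camera.
            between = cv2.addWeighted(frame_a, 0.5, frame_b, 0.5, 0)
            cv2.imwrite("interpolated_frame.png", between)
        ```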

      • DarkenLM@kbin.social · 6 months ago

        I don’t think AI codecs will be anything revolutionary. There are plenty of lossless codecs already, but if you want more detail, you’ll need a better physical sensor, and I doubt there’s anything that can be done to get around that (at least not anything that actually represents what exists, rather than a hallucination).

    • MudMan@fedia.io · 6 months ago

      Not all of those are the same thing. AI upscaling for compression in online video may not be any worse than “dumb” compression in terms of loss of data or detail, but you don’t want to treat a simple upscale of an image as photographic evidence in a trial. Sports replays and Hawk-Eye technology don’t really rely on upscaling; we have ways to track things in an enclosed volume very accurately now that are demonstrably more precise than a human ref looking at them. Whether that’s better or worse for the game’s pace and excitement is a different question.

      The thing is, ML tech isn’t a single thing. The tech itself can be used very rigorously. Pretty much every scientific study you get these days uses ML to compile or process images or data. That’s not a problem if done correctly. The issue is that everybody assumes “generative AI” chatbots, upscalers and image processors are all that ML is, and people keep trying to apply those things directly in the dumbest possible way, thinking they are basically magic.

      I’m not particularly afraid of “AI tech”, but I sure am increasingly annoyed at the stupidity and greed of some of the people peddling it, criticising it and using it.

  • AutoTL;DR@lemmings.world (bot) · 6 months ago

    This is the best summary I could come up with:


    A judge in Washington state has blocked video evidence that’s been “AI-enhanced” from being submitted in a triple murder trial.

    And that’s a good thing, given the fact that too many people seem to think applying an AI filter can give them access to secret visual data.

    Lawyers for Puloka wanted to introduce cellphone video captured by a bystander that’s been AI-enhanced, though it’s not clear what they believe could be gleaned from the altered footage.

    For example, there was a widespread conspiracy theory that Chris Rock was wearing some kind of face pad when he was slapped by Will Smith at the Academy Awards in 2022.

    Using the slider below, you can see the pixelated image that went viral before people started feeding it through AI programs and “discovered” things that simply weren’t there in the original broadcast.

    Large language models like ChatGPT have convinced otherwise intelligent people that these chatbots are capable of complex reasoning when that’s simply not what’s happening under the hood.


    The original article contains 730 words, the summary contains 166 words. Saved 77%. I’m a bot and I’m open source!

  • Neato@ttrpg.network · 6 months ago

    Imagine a prosecution or law enforcement bureau that has trained an AI from scratch on specific stimuli to enhance and clarify grainy images. Even if they all were totally on the up-and-up (they aren’t, ACAB), training a generative AI or similar on pictures of guns, drugs, masks, etc for years will lead to internal bias. And since AI makers pretend you can’t decipher the logic (I’ve literally seen compositional/generative AI that shows its work), they’ll never realize what it’s actually doing.

    So then you get innocent CCTV footage that this AI “clarifies” and pattern-matches every dark blob into a gun. Black iPhone? Maybe a pistol. Black umbrella folded up at a weird angle? Clearly a rifle. And so on. I’m sure everyone else can think of far more frightening ideas, like auto-completing a face based on previously searched ones, or just plain old institutional racism.

  • ChaoticNeutralCzech@lemmy.one · 6 months ago (edited)

    Sure, no algorithm is able to extract any more information from a single photo. But how about combining detail caught in multiple frames of video? Some phones already do this kind of thing, getting multiple samples for highly zoomed photos thanks to camera shake.
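
    A rough sketch of that multi-frame idea (assuming OpenCV and a short burst of near-identical frames): every sample being merged is real data from an actual exposure, which is what separates it from single-image “enhancement”:

    ```python
    import cv2
    import numpy as np

    # A few consecutive frames of (nearly) the same scene, e.g. from a short burst.
    paths = ["burst_0.png", "burst_1.png", "burst_2.png", "burst_3.png"]
    frames = [cv2.imread(p, cv2.IMREAD_GRAYSCALE).astype(np.float32) for p in paths]

    reference = frames[0]
    stack = [reference]
    for frame in frames[1:]:
        # Estimate the translation caused by hand shake via phase correlation...
        (dx, dy), _ = cv2.phaseCorrelate(reference, frame)
        # ...and shift the frame back so it lines up with the reference.
        warp = np.float32([[1, 0, -dx], [0, 1, -dy]])
        aligned = cv2.warpAffine(frame, warp, (frame.shape[1], frame.shape[0]))
        stack.append(aligned)

    # Averaging aligned real samples suppresses noise; every contribution
    # came from an actual exposure, nothing is hallucinated.
    merged = np.mean(stack, axis=0).astype(np.uint8)
    cv2.imwrite("stacked.png", merged)
    ```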

    Still, the problem remains that results from a cherry-picked algorithm, or outright hand-crafted pics, may be presented as evidence.

  • Rob T Firefly@lemmy.world · 6 months ago

    According to the evidence, the defendant clearly committed the crime with all 17 of his fingers. His lack of remorse is obvious by the fact that he’s clearly smiling wider than his own face.

  • Stovetop@lemmy.world · 6 months ago

    “Your honor, the evidence shows quite clearly that the defendant was holding a weapon with his third arm.”

  • JackbyDev@programming.dev · 6 months ago

    During Kyle Rittenhouse’s trial the defense attorney objected to using the pinch-to-zoom feature of an iPad because it (supposedly) used AI. The judge sustained the objection, so the prosecution couldn’t zoom in on the video.

  • guyrocket@kbin.social · 6 months ago

    I think we need to STOP calling it “Artificial Intelligence”. IMHO that is a VERY misleading name. I do not consider guided pattern recognition to be intelligence.

    • Gabu@lemmy.world · 6 months ago

      I do not consider guided pattern recognition to be intelligence.

      That’s a you problem. This debate happened 50 years ago, and we decided “intelligence” is the right word.

    • CileTheSane@lemmy.ca · 6 months ago

      I do not consider guided pattern recognition to be intelligence.

      Humanity has entered the chat

      Seriously though, what name would you suggest?

        • CileTheSane@lemmy.ca · 6 months ago

          Calling it Bob is not going to help discourage people from attributing intelligence. They’ll start wishing “Bob” a happy birthday.

          Do not personify the machine.

      • exocortex@discuss.tchncs.de · 6 months ago (edited)

        On the contrary! It’s a very old buzzword!

        AI should be called machine learning. Much better. If I had my way it would be called “fancy curve fitting” henceforth.

        • Hackerman_uwu@lemmy.world · 6 months ago

          Technically speaking AI is any effort on the part of machines to mimic living things. So computer vision for instance. This is distinct from ML and Deep Learning which use historical statistical data to train on and then forecast or simulate.

          • exocortex@discuss.tchncs.de · 6 months ago

            “machines mimicking living things” does not mean exclusively AI. Many scientific fields are trying to mimic living things.

            AI is a very hazy concept imho as it’s difficult to even define when a system is intelligent - or when a human is.

            • Hackerman_uwu@lemmy.world · 6 months ago (edited)

              That’s not what I said.

              What I typed there is not my opinion.

              This is the technical, industry distinction between AI and things like ML and neural networks.

              “Mimicking living things” is obviously not exclusive to AI. It is exclusive to AI as compared to ML, for instance.

              • maynarkh@feddit.nl · 6 months ago

                There is no technical, industry specification for what AI is. It’s solely and completely a marketing term. The best thing I’ve heard is that you know it’s ML if the file extension is cpp or py, and you know it’s AI if the extension is pdf or ppt.

                I don’t see how “AI” counts as mimicking living things but neural networks don’t, given that neural networks are based on neurons - the living things in your head.

        • boeman@lemmy.world · 6 months ago

          I can’t disagree with this… After basing the size off of the vertical pixel count, we’re now going to switch to the horizontal count to describe the resolution.

    • Richard@lemmy.world · 6 months ago

      You, and humans in general, are also just sophisticated pattern recognition and matching machines. If neural networks are not intelligent, then you are not intelligent.

      • Chakravanti@sh.itjust.works · 6 months ago

        You can say what you like, but there is absolutely zero true and full understanding of what human intelligence actually is or how it works.

        “AI”, or whatever you want to call it, is not at all similar.

    • rdri@lemmy.world · 6 months ago

      How is guided pattern recognition different from imagination (and therefore intelligence), though?

      • Natanael@slrpnk.net · 6 months ago

        There are a lot of other layers in brains that are missing in machine learning. To start with, these models don’t form world models or any understanding of facts, and they have no means of ensuring consistency.

        • lightstream@lemmy.ml · 6 months ago

          They absolutely do contain a model of the universe which their answers must conform to. When an LLM hallucinates, it is creating a new answer which fits its internal model.

          • Natanael@slrpnk.net · 6 months ago

            Statistical association is not equivalent to a world model, especially because it is not deterministic and does not even try to prevent giving conflicting answers. It models only the use of language.

            • lightstream@lemmy.ml · 6 months ago

              It models only the use of language

              This phrase, so casually deployed, is doing some seriously heavy lifting. Language is by no means a trivial thing for a computer to meaningfully interpret, and the fact that LLMs do it so well is far more impressive than a casual observer might think.

              If you look at earlier procedural attempts to interpret language programmatically, you will see that time and again, the developers get stopped in their tracks because in order to understand a sentence, you need to understand the universe - or at the least a particular corner of it. For example, given the sentence “The stolen painting was found by a tree”, you need to know what a tree is in order to interpret this correctly.

              You can’t really use language *unless* you have a model of the universe.

              • Natanael@slrpnk.net · 6 months ago (edited)

                But it doesn’t model the actual universe, it models rumor mills

                Today’s LLM is the versificator machine of 1984. It cares not for truth, it cares for distracting you

                • lightstream@lemmy.ml · 6 months ago

                  They are remarkably useful. Of course there are dangers relating to how they are used, but sticking your head in the sand and pretending they are useless accomplishes nothing.

        • rdri@lemmy.world · 6 months ago (edited)

          I mean, if we consider just the reconstruction process used in digital photos, it feels like current AI models are already very accurate and won’t be improved by much even if we made them closer to real “intelligence”.

          The point is that reconstruction itself can’t reliably produce missing details, not that a “properly intelligent” mind would be any better at it than current AI.

      • Jesus_666@lemmy.world · 6 months ago

        Your comment is a good reason why these tools have no place in the courtroom: the things you describe are imagination.

        They’re image generation tools that will generate a new, unrelated image that happens to look similar to the source image. They don’t reconstruct anything and they have no understanding of what the image contains. All they know is which colors the pixels in the output would most probably have, given the pixels in the input.

        It’s no different from giving a description of a scene to an author, asking them to come up with any event that might have happened in such a location and then trying to use the resulting short story to convict someone.

        • rdri@lemmy.world · 6 months ago

          They don’t reconstruct anything and they have no understanding of what the image contains.

          With enough training they, in fact, will have some understanding. But that still leaves us with the “enhance” meme problem, a.k.a. the limited resolution of the original data. There are no means to discover what exactly was hidden between visible pixels, only to approximate it. So yes, you are correct; you just described it a bit differently.

          • lightstream@lemmy.ml · 6 months ago

            they, in fact, will have some understanding

            These models have spontaneously acquired a concept of things like perspective, scale and lighting, which you can argue is already an understanding of 3D space.

            What they do not have (and IMO won’t ever have) is consciousness. The fact we have created machines that have understanding of the universe without consciousness is very interesting to me. It’s very illuminating on the subject of what consciousness is, by providing a new example of what it is not.

    • Hamartiogonic@sopuli.xyz · 6 months ago

      Optical Character Recognition used to be firmly in the realm of AI until it became so common that even the post office uses it. Nowadays, OCR is so common that instead of being considered proper AI, it’s just another mundane application of a neural network. I guess eventually Large Language Models will fall outside the scope of AI too.
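
      You can see how mundane it has become from how little code it takes now; a sketch assuming the Tesseract engine and its pytesseract wrapper are installed:

      ```python
      from PIL import Image
      import pytesseract

      # What used to be a research problem is now a one-liner around a
      # pretrained model shipped with the Tesseract engine.
      text = pytesseract.image_to_string(Image.open("scanned_letter.png"))
      print(text)
      ```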

  • randon31415@lemmy.world · 6 months ago

    Think about how they reconstructed what the Egyptian Pharaohs looked like, or what a victim who was kidnapped at age 7 would look like at age 12. Yes, it can’t make something look exactly right, but it also isn’t just randomly guessing. Of course, it can be abused by people who want juries to THINK the AI can perfectly reproduce stuff, but that is a problem with people’s knowledge of tech, not the tech itself.

    • zout@fedia.io · 6 months ago

      Unfortunately, the people with no knowledge of tech will then proceed to judge if someone is innocent or guilty.