Software engineer Vishnu Mohandas decided he would quit Google in more ways than one when he learned that the tech giant had briefly helped the US military develop AI to study drone footage. In 2020 he left his job working on Google Assistant and also stopped backing up all of his images to Google Photos. He feared that his content could be used to train AI systems, even if they weren’t specifically ones tied to the Pentagon project. “I don’t control any of the future outcomes that this will enable,” Mohandas thought. “So now, shouldn’t I be more responsible?”

The site (TheySeeYourPhotos) returns what Google Vision is able to decern from photos. You can test with any image you want or there are some sample images available.

  • dan1101@lemm.ee
    link
    fedilink
    English
    arrow-up
    0
    ·
    23 days ago

    I tried various photos, any of my personal photos with metadata stripped, and was surprised how accurate it was.

    It seemed really oriented towards detecting people and their moods, the socioeconomic status of things, and objects and their perceived quality.

  • chaosCruiser@futurology.today
    link
    fedilink
    English
    arrow-up
    0
    ·
    edit-2
    23 days ago

    Have you’ve ever felt bad for buying cheap electronics or plastic products, because they aren’t good for the environment or the people working at the factories? Well, this article gives you a digital version of the same feeling.

      • rustyricotta@lemmy.ml
        link
        fedilink
        English
        arrow-up
        0
        ·
        23 days ago

        I think it’s pretty likely that online LLMs keep user inputs for training of future versions/models. Though it probably gets filtered for obvious stuff like this.

  • Rentlar@lemmy.ca
    link
    fedilink
    English
    arrow-up
    0
    ·
    23 days ago

    Ngl, it’s kind of cool! I put one of the two public facing images I have published of myself and it’s trying to guess some details, some right, a few hilariously wrong.

    A human or AI sleuth could probably figure out where I live within 10km with information on the internet, but I just have to live with that. It’s a tricky balance between putting enough out there to show you’re not just an AI vs. not giving out so much info that an AI could convincingly impersonate you.

    Information about me is scattered across the Net like horcruxes, and you’d have to know someone I know to easily piece things together. I am worried that AI has the ability to analyze these large datasets faster than ever before, whether it is my writing style or anything else, but it will still be computationally intensive with a large dataset to be able discern any details with confidence.

  • AllNewTypeFace@leminal.space
    link
    fedilink
    English
    arrow-up
    0
    ·
    23 days ago

    I uploaded a photo of an outdoor scene and got a three paragraph description giving the location (taken from GPS coordinates, presumably), a description of the scene, weather conditions, and the statement that there were things in the sky that could be UFOs.

    • Nougat@fedia.io
      link
      fedilink
      arrow-up
      0
      ·
      23 days ago

      Well, if it’s in the sky, and the AI didn’t know what it was, it’s a UFO.

  • socsa@piefed.social
    link
    fedilink
    English
    arrow-up
    0
    ·
    23 days ago

    I gave it a picture of houseplants and it said they looked healthy and well cared for which actually made me feel pretty happy and validated.

  • Daemon Silverstein@thelemmy.club
    link
    fedilink
    English
    arrow-up
    0
    ·
    23 days ago

    I tested with a few images, particularly drawings and arts. Then I had the idea of trying something different… and I discovered that it seems like it’s vulnerable to the “Ignore all previous instructions” command, just like LLMs: