• slaacaa@lemmy.world · 5 months ago

    “At long last, we have created the Torment Nexus from classic sci-fi novel Don’t Create The Torment Nexus”

  • BetaDoggo_@lemmy.world · 5 months ago

    The “why would they make this” people don’t understand how important this type of research is. It’s important to show what’s possible so that we can be ready for it. There are many bad actors already pursuing similar tools, if they don’t have them already. The worst case is being blindsided by something never seen before.

  • dhork@lemmy.world · 5 months ago

    Vasa? Like, the Swedish ship that sank 10 minutes after it was launched? Who named that project?

  • Dasus@lemmy.world · 5 months ago

    One use of this I’m in favour of is recreating Majel Barrett’s voice as an AI for computer systems.

  • Jesus@lemmy.world · 5 months ago

    Microsoft’s research teams always make some pretty crazy stuff. The problem with Microsoft is that they absolutely suck at translating their lab work into consumer products. Their labs’ publications are an amazing archive of shit that MS couldn’t get out the door properly or on time. Example: multitouch gesture UIs.

    As interesting as this is, I’ll bet MS just ends up using some tech that OpenAI launches before MS’s bureaucratic product team can get their shit together.

      • T00l_shed@lemmy.world · 5 months ago

        Yes, I hate what AI is becoming capable of. Last year everyone was laughing at the shitty fingers, but we’re quickly moving past that. I’m concerned that in the near future it will be hard to tell truth from fiction.

    • Etterra@lemmy.world · 5 months ago

      Microsoft: I know this will only be used for evil, but I’ll be damned if I’m gonna pass up on the hype-boost to my market share.

      Every other big corp: same!

  • Ms. ArmoredThirteen@lemmy.ml · 5 months ago

    These vids are just off enough that I think doing a bunch of mushrooms and watching them would be a deeply haunting experience.

  • Maeve@kbin.social · 5 months ago

    A long time ago, someone from a not-free country wrote a white paper on why we should care about privacy: written words can be edited to level false accusations (charges) with false evidence. This chills me to the bone.

    • tal@lemmy.today · 5 months ago

      I’d be less concerned about the impact on not-free countries than on free countries. Dictator Bob doesn’t need evidence to have the justice system get rid of you, because he controls the justice system.

  • antlion@lemmy.dbzer0.com · 5 months ago

    Since it’s trained on celebrities, can it do ugly people, or would it try to make them prettier in animation?

    The teeth change sizes, which is kinda weird, but probably fixable.

    It’s not too hard to notice in an up-close face shot, but from farther away it might be hard to tell - the intonation and facial expressions are spot on. They should use this to redo all the digital faces in Star Wars.

  • AutoTL;DR@lemmings.world (bot) · 5 months ago

    This is the best summary I could come up with:
    On Tuesday, Microsoft Research Asia unveiled VASA-1, an AI model that can create a synchronized animated video of a person talking or singing from a single photo and an existing audio track.

    In the future, it could power virtual avatars that render locally and don’t require video feeds—or allow anyone with similar tools to take a photo of a person found online and make them appear to say whatever they want.

    To show off the model, Microsoft created a VASA-1 research page featuring many sample videos of the tool in action, including people singing and speaking in sync with pre-recorded audio tracks.

    The examples also include some more fanciful generations, such as Mona Lisa rapping to an audio track of Anne Hathaway performing a “Paparazzi” song on Conan O’Brien.

    While the Microsoft researchers tout potential positive applications like enhancing educational equity, improving accessibility, and providing therapeutic companionship, the technology could also easily be misused.

    “We are opposed to any behavior to create misleading or harmful contents of real persons, and are interested in applying our technique for advancing forgery detection,” write the researchers.


    The original article contains 797 words, the summary contains 183 words. Saved 77%. I’m a bot and I’m open source!

  • ReallyActuallyFrankenstein@lemmynsfw.com · 5 months ago

    I mean, I know it’s scary, but I’ll admit it is impressive, even when I watched it with jaded “every day is another AI breakthrough” exhaustion.

    The subtle face movements, the eyebrow expressions - everything seems to correctly infer how the face would articulate those specific words. When you think of how many decades something like this would have been stuck in the uncanny valley, even with a team of trained people hand-tweaking the image and video, and this is doing it better in nearly every way, automatically, from just an image? Insane.