• Fungah@lemmy.world

That’s basically what they said.

Like, you can compile better or more diverse datasets to train a model on. But you can also have better training code running on the same dataset.

The model is what the code poops out after it’s eaten the dataset. I haven’t read the paper, so I have no idea whether the better training came from some super unique spin on their dataset, but I’m assuming it’s better code.
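For anyone who wants that distinction spelled out in code, here’s a toy sketch (plain Python, entirely made up, nothing to do with the actual paper): two trainers eat the exact same dataset, but because the training code differs, the models they poop out differ too.

```python
# Toy illustration: same dataset, different training code, different models.
# All names and numbers here are hypothetical.
import random

random.seed(0)

# The dataset: noisy samples of y = 3x + 1.
dataset = [(i / 50, 3 * (i / 50) + 1 + random.gauss(0, 0.05)) for i in range(50)]

def train_plain_sgd(data, lr=0.05, epochs=200):
    """Training code A: plain per-sample SGD on squared error."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in data:
            err = (w * x + b) - y
            w -= lr * err * x
            b -= lr * err
    return w, b  # the "model" is just these two learned numbers

def train_momentum_sgd(data, lr=0.05, beta=0.9, epochs=200):
    """Training code B: same dataset and loss, but SGD with momentum."""
    w, b = 0.0, 0.0
    vw, vb = 0.0, 0.0
    for _ in range(epochs):
        for x, y in data:
            err = (w * x + b) - y
            vw = beta * vw + err * x
            vb = beta * vb + err
            w -= lr * vw
            b -= lr * vb
    return w, b

print("model A (plain SGD):   ", train_plain_sgd(dataset))
print("model B (momentum SGD):", train_momentum_sgd(dataset))
```

Same data in, different parameters out; that’s the “better code on the same dataset” angle.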