ylai@lemmy.ml to Technology@lemmy.world · English · 10 months ago
VR Headsets Are Approaching the Eye's Resolution Limits (spectrum.ieee.org)
37 comments
KairuByte@lemmy.dbzer0.com · 10 months ago

> maybe the whole damn thing is outsourced to ChatGPT now, who the fuck knows.

I don't understand why so many people assume an LLM would make glaring errors like this…
drislands@lemmy.world · 10 months ago

…because they frequently do? Glaring errors are like, the main thing LLMs produce besides hype.
KairuByte@lemmy.dbzer0.com · edited 10 months ago

They make glaring errors in logic, and confidently state things that are not true. But their whole "deal" is writing proper sentences based on predictive models. They don't make mistakes like the excerpt highlighted.
drislands@lemmy.world · 10 months ago

Y'know what, that's a fair point. Though I'm not the original commenter from the top, heh.
KairuByte@lemmy.dbzer0.com · 10 months ago

Ah, apologies. I'm terrible with tracking usernames; I'll edit for clarity.
Garbanzo@lemmy.world · 10 months ago

I'm imagining that the first output didn't cover everything they wanted, so they tweaked it, pasted the results together, and fucked it up.
KairuByte@lemmy.dbzer0.com · 10 months ago

That could easily happen when reworking their own writing as well, though.
Zammy95@lemmy.world · 10 months ago

I think he was being sarcastic lol. I…hope