JOMusic@lemmy.ml to World News@lemmy.world · English · 3 days ago
Open-source Deepseek R1 dethrones commercial AI, now allegedly being hit by cyberattack (www.cnbc.com)
cross-posted to: technology@lemmy.world
SoftestSapphic@lemmy.world · 2 days ago
Because there aren’t enough pictures of tail-less cats out there to train on. It’s literally impossible for it to give you a cat with no tail because it can’t find enough to copy and ends up regurgitating cats with tails. Same for a glass of water spilling over: it can’t show you an overfilled glass of water because there aren’t enough pictures available for it to copy. This is why telling a chatbot to generate a picture for you will never be a real replacement for an artist who can draw what you ask them to.
vrighter@discuss.tchncs.de · 2 days ago
so… with all the supposed reasoning stuff they can do, and supposed “extrapolation of knowledge”, they cannot figure out that a tail is part of a cat, and which part it is.
Kuvwert@lemm.ee · 1 day ago
The “reasoning” models and the image generation models are not the same technology and shouldn’t be compared against the same baseline.
SoftestSapphic@lemmy.world · 2 days ago
The “reasoning” you are seeing is it finding human conversations online and summarizing them.
vrighter@discuss.tchncs.de · 1 day ago
I’m not seeing any reasoning; that was the point of my comment. That’s why I said “supposed”.
blakenong@lemmings.world · 2 days ago
Oh, that’s another good test. It definitely failed. There are lots of Manx photos, though.
Manx images: https://duckduckgo.com/?q=manx&iax=images&ia=images