• ContrarianTrail@lemm.ee
    2 months ago

    When it encounters something it didn’t predict, it’ll tell you, “yeah, this happened, and this is why you did that.” Quite often the explanation for an action is made up after the fact.

    There are interesting stories about tests done with split-brain patients, in whom the bridge connecting the left and right brain hemispheres, the corpus callosum, has been severed. There are ways to provide information to one hemisphere, have that hemisphere initiate an action, and then ask the other hemisphere why it did that. It will immediately make up a lie, even though we know that’s not the actual reason. Other than being conscious, we’re not that different from ChatGPT: if the brain doesn’t know why something happened, it’ll make up a convincing explanation.