• bstix@feddit.dk · 14 points · 3 months ago (edited)

    I don’t think it’s the brain but rather our consciousness that is limited. Our sensory inputs are always on and processed by the brain, but our consciousness is very picky and also slow.

    People can sometimes recall true memories that they weren’t aware of, or react to things they didn’t think of and such.

    Consciousness is also somehow lagging behind the actual decision making, but always presents itself as the cause of action.

    Sort of like Windows telling you that you removed a USB stick 2 seconds after you did it, when you were already well aware of it happening. Consciousness is like that, except it takes responsibility for it too…

    When it encounters something that it didn’t predict, it’ll tell you that “yeah this happened and this is why you did that”. Quite often the explanation for doing something is made up after it happened.

    This is mostly a good thing, because it allows you to react faster than if you had to weigh your options consciously. You don't need to, and don't have time to, make a conscious decision to dodge a dodgeball, but you'll still think you did.

    • ContrarianTrail@lemm.ee · 4 points · 3 months ago

      When it encounters something that it didn’t predict, it’ll tell you that “yeah this happened and this is why you did that”. Quite often the explanation for doing something is made up after it happened.

      There are interesting accounts of tests done with split-brain patients, in whom the corpus callosum, the bridge connecting the left and right brain hemispheres, has been severed. There are ways to present information to one hemisphere, have that hemisphere initiate an action, and then ask the other hemisphere why it did that. It will immediately make up a lie, even though we know that's not the actual reason. Other than being conscious, we're not that different from ChatGPT: if the brain doesn't know why something happened, it'll make up a convincing explanation.

    • fine_sandy_bottom@lemmy.federate.cc · 2 points · 3 months ago

      This isn’t really what OP is talking about.

      We really can't see very well at all outside the centre of our focus. This paper says 6 degrees; I've heard it described as about a coin held at arm's length.

      Our minds “render” most of the rest of what we think we see.

      You're right that we discard most of our sensory inputs, but with vision there's much less incoming data than it appears.

      • bstix@feddit.dk · 1 point · 3 months ago (edited)

        You're right. OP's second question is more specifically about vision, while I answered more broadly.

        Anyway, comparing it to data from a camera is not really possible. Analogue vs. digital, for one, but also in the way that we experience it.

        The mind's interpretation of vision develops after birth. It takes several weeks before an infant can recognise anything and use its eyes for any purpose. Infants are probably blissfully experiencing meaningless raw sensory input before that. All the pattern recognition used to focus on things is a learned feature, and so depends on actually having learned it.

        I can't find the source for this story, but allegedly a missionary in Africa came across a tribe who had lived surrounded by dense jungle their entire lives. He took some of them to the savannah and showed them the open view. They then tried to grab animals that were grazing miles away. They had never developed a sense of perspective for things at a distance, because they'd never experienced it.

        I don't know if it's true, but it makes a point. Some people are better than others at spotting things in motion, telling colours apart, etc. It matters how we use vision, even in the moment: if I ask you to count all the red things in a room, you'll notice more red things than you were generally aware of. So focus is not just the 6° angle or whatever; it's also what pattern your brain is currently primed to recognise.

        So the idea of quantifying vision in megapixels and frame rate is kind of useless for understanding either vision or the brain. They're connected.

        Same with sound. Some people have demonstrated echolocation abilities similar to bats'. Blindfolded, they can still make their way through a labyrinth or whatever.

        Testing the senses is difficult because the brain tends to compensate like that. You'd need a very precise testing method to quantify any particular sense.