• 0 Posts
  • 21 Comments
Joined 3 months ago
Cake day: March 16th, 2025

  • A lot of people go into physics because they want to learn how the world works, but then are told that not only is that not the topic of discussion, but that asking the question is actively discouraged. I think, from a purely pragmatic standpoint, there is no problem with this. As long as the math works, it works. As long as the stuff you build with it functions, then you’ve done a good job. But I think there are some people who get disappointed in that. But I guess that’s a matter of personal taste. If you are a pure utilitarian, I guess I cannot construct any argument that would change your mind on such a topic.

    I’m not sure I understand your last question. Of course your opinion on physical reality doesn’t make any difference to reality. The point is that these are different claims and thus cannot all be correct. Either pilot wave people are factually correct that there are pilot waves or they are wrong. Either many worlds people are factually correct that there is a multiverse or they are wrong. Either objective collapse people are factually correct that there is an objective collapse or they are wrong (objective collapse theories also make different predictions, so they are not even the same empirically).

    If we are not going to be complete postmodernists, then we have to admit that only one description of physical reality is actually correct, or, at the very least, if they are all incorrect, that some are closer to reality than others. You are basically doing the same thing religious people do when they say there should be no problem believing a God exists as long as that belief doesn’t contradict any of the known scientific laws. While I see where they are coming from, and maybe this is just personal taste, at the end of the day I personally do care whether or not my beliefs are actually correct.

    There is also a benefit of having an agreement on how to understand a theory, which is it then becomes more intuitive. You’re not just told to “shut up and calculate” whenever someone asks a question. If you take a class in general relativity, you will be given a very intuitive mental picture of what’s going on, but if you take a class in quantum mechanics, you will not only not be given one, but be discouraged from even asking the question of what is going on. You just have to work with the maths in a very abstract and utilitarian sense.


  • No, it’s the lack of agreement that is the problem. Interpreting classical mechanics is philosophical as well, but there is general agreement on how to think about it. You rarely see deep philosophical debates about how to “properly” interpret Newtonian mechanics. Even when we get into Einsteinian mechanics, there are some disagreements on how to interpret it, but nothing too significant. The thing is that something like Newtonian mechanics is largely in line with our basic intuitions, so it is rather easy to get people on board with it, but QM requires you to give up a basic intuition, and which one you choose to give up gives you an entirely different picture of what’s physically going on.

    Philosophy has never been empirical; of course any philosophical interpretation of the meaning of the mathematics gives you the same empirical results. The empirical results only change if you change the mathematics. The difficulty is precisely that it is harder to get everyone on the same page with QM. There are technically, again, some disagreements in classical mechanics, like whether the curvature of spacetime really constitutes a substance that is warping or is just a convenient way to describe the dispositions of how systems move. Einstein, for example, criticized the notion of reifying the equations too much. You also cannot distinguish which interpretation is correct here, as it’s, again, philosophical.

    If we just all decided to agree on a particular way to interpret QM then there wouldn’t be an issue. The problem is that, while you can mostly get everyone on board with classical theories, with QM, you can interpret it in a time-symmetric way, a relational way, a way with a multiverse, etc, and they all give you drastically different pictures of physical reality. If we did just all pick one and agreed to it, then QM would be in the same boat as classical mechanics: some minor disagreements here and there but most people generally agree with the overall picture.




  • What I mean by subjective experience is what you might refer to as what reality looks like from a specific viewpoint or what it appears like when observed.

    So… reality? Why are you calling reality subjective? Yes, you have a viewpoint within reality, but that’s because reality is relative. It’s nothing inherent to conscious subjects. There is no such thing as a viewpoint-less reality. Go make a game in Unity and try to populate it with objects without ever assigning coordinates to any of the objects or speeds to any of their motion, and see how far you get… you can’t; you won’t be able to populate the game with objects at all. You have to choose a coordinate system in order to populate the world with anything at all, and those coordinates are arbitrary, based on an arbitrarily chosen viewpoint. Without picking a viewpoint, it is impossible to assign objects the majority of their properties.

    If you claim that the physical world doesn’t exist independently of observation, and is thus nothing beyond the totality of observed appearances

    No such thing as “appearances.” As Kant himself said: “though we cannot know these objects as things in themselves, we must yet be in a position at least to think them as things in themselves; otherwise we should be landed in the absurd conclusion that there can be appearance without anything that appears,” i.e. speaking of “appearances” makes no sense unless you believe there also exists an unobserved thing that is the cause of the appearances.

    But there is neither an unobserved thing causing the appearances, nor is what we observe an appearance. What we observe just is reality. We don’t observe the “appearance” of objects. We observe objects.

    If there is no object being observed

    Opposite of what I said.

    and the fact that it is apparent from multiple perspectives is simply a consequence of the coherence of observation

    What we call the object is certain symmetries that are maintained over different perspectives, but there is no object independently of the perspectives.

    where do the qualities of those appearances originate from? How come things don’t cease to exist when they’re not being observed?

    They cease to exist in one viewpoint but they continue to exist in others, and symmetries allow you to predict when/how those objects may return to your own viewpoint.

    If you claim that the appearances don’t exist independently of the physical world being observed

    I am claiming appearances don’t exist at all.

    why does the world appear different from different perspectives?

    Reality is just perspectival. It just is what it is.

    How do you explain things like hallucinations (there is no physical object being observed, but still some appearance is present)?

    If they perceive a hallucinated tree and believe it is the same as a non-hallucinated tree, this is a failure of interpretation, not of “appearance.” They still indeed perceived something and that something is real, it reflects something real in the physical world. If they correctly interpret it as a different category of objects than a non-hallucinated tree then there is no issue.


  • There’s no such thing as “subjective experience”; again, the argument for this is derived from the claim that reality is entirely independent of one’s point of view within it, which is just a wild claim and absolutely wrong. Our experience doesn’t “contain” the physical world; experience is just a synonym for observation, and the physical sciences are driven entirely by observation, i.e. what we observe is the physical world. I also never claimed “the experience of redness is the same thing as some pattern of neurons firing in the brain,” no idea where you are getting that from. I don’t know why you are singling out “redness” either. What about the experience of a cat vs. an actual cat?


  • There is no “hard problem.” It’s made up. Nagel’s paper, on which Chalmers bases all his premises, is just awful: it assumes for no reason at all that physical reality is something that exists entirely independently of one’s point of view within it, never justifies this bizarre claim, and builds all of its arguments on top of it, which Chalmers then cites as if they were proven. “Consciousness” as Chalmers defines it doesn’t even exist and is just a fiction.




  • Many-worlds is nonsensical mumbo jumbo. It doesn’t even make sense without adding an additional unprovable postulate called the universal wave function. Every paper just has to assume it without deriving it from anywhere. If you take MWI and subtract away this arbitrary postulate then you get RQM. MWI - big psi = RQM. So RQM is inherently simpler.

    Although the simplest explanation isn’t even RQM, but to drop the postulate that the world is time-asymmetric. If A causes B and B causes C, one of the assumptions of Bell’s theorem is that it would be invalid to say C causes B which then causes A, even though we can compute the time-reverse in quantum mechanics and there is nothing in the theory that tells us the time-reverse is not equally valid.

    Indeed, that’s what unitary evolution means. Unitarity just means time-reversibility. You test whether an operator is unitary by multiplying it by its own time-reverse (its conjugate transpose); if the product is the identity matrix, meaning the operator completely cancels itself out, then it’s unitary.
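    As a rough numerical sanity check of that statement (the rotation angle and single-qubit operator here are just illustrative choices, not anything from the discussion above):

```python
import numpy as np

# Unitarity means the operator cancels against its own time-reverse
# (its conjugate transpose): U† U = I.
theta = 0.7  # arbitrary rotation angle, purely for illustration
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]], dtype=complex)

U_dagger = U.conj().T  # the time-reverse of U

# Multiplying U by its time-reverse yields the identity, so U is unitary.
print(np.allclose(U_dagger @ U, np.eye(2)))  # True
```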

    If you just accept time-symmetry then it is just as valid to say A causes B as it is to say C causes B, as B is connected to both through a local causal chain of events. You can then imagine that if you compute A’s impact on B it has ambiguities, and if you compute C’s impact on B it also has ambiguities, but if you combine both together the ambiguities disappear and you get an absolutely deterministic value for B.

    Indeed, it turns out quantum mechanics works precisely like this. If you compute the unitary evolution of a system from a known initial condition to an intermediate point, and the time-reverse of a known final condition to that intermediate point, you can then compute the values of all the observables at that intermediate point. If you repeat this process for all observables in the experiment, you will find that they evolve entirely locally and continuously. Entangled particles form their correlations when they locally interact, not when you later measure them.

    But for some reason people would rather believe in an infinite multiverse than just accept that quantum mechanics is not a time-asymmetric theory.



  • pcalau12i@lemmy.world to memes@lemmy.world · Determinism
    18 days ago

    Speaking of predicting outcomes implies a forwards arrow of time. As far as we know, the arrow of time is a macroscopic feature of the universe and just doesn’t exist at a fundamental level. You cannot explain it with entropy without appealing to the past hypothesis, which then requires appealing to the Big Bang, which is in and of itself an appeal to general relativity, something which is not part of quantum mechanics.

    Let’s say we happen to live in a universe where causality is genuinely indifferent to the arrow of time. This doesn’t mean such a universe would have retrocausality, because retrocausality is just causality with the arrow facing backwards. If its causal structure were genuinely independent of the arrow of time, then it would follow what the physicist Emily Adlam refers to as global determinism and an “all-at-once” structure of causality.

    Such a causal model would require the universe’s future and past to follow certain global consistency rules, but each taken separately would not allow you to derive the outcomes of systems deterministically. You would only ever be able to describe the deterministic evolution of a system retrospectively, when you know its initial and final state and then subject it to those consistency rules. Given that science is usually driven by predictive theories, this would be useless in terms of making predictions, as in practice we’re usually only interested in predicting the future and not in giving retrospective explanations.

    If the initial conditions aren’t sufficient to predict the future, then any future prediction based on an initial state, not being sufficient to constrain the future state to a specific value, would lead to ambiguities, causing us to have to predict it probabilistically. And since physicists are very practically-minded, everyone would focus on the probabilistic forwards-evolution in time, and very few people would be that interested in reconstructing the state of the system retrospectively as it would have no practical predictive benefit.

    I bring this all up because, as the physicists Ken Wharton, Roderick Sutherland, Titus Amza, Raylor Liu, and James Saslow have pointed out, you can quite easily reconstruct values for all the observables in the evolution of a system retrospectively by analyzing its weak values, and those values appear to evolve entirely locally, deterministically, and continuously; but doing so requires conditioning on both the initial and final state of the system simultaneously and evolving both ends towards an intermediate point to arrive at the value of the observable at that intermediate point in time. You can therefore only do this retrospectively.

    This is already built into the mathematics. You don’t have to add any additional assumptions. It is basically already a feature of quantum mechanics that if you take a known eigenstate at t=-1 and a known eigenstate at t=1 and evolve them towards each other simultaneously until they intersect at t=0, at that intersection you can compute the values of the observables at t=0. Even though the laws of quantum mechanics do not apply sufficient constraints to recover the observables when evolving in a single direction in time, either forwards or backwards, doing both simultaneously gives you sufficient constraints to determine a concrete value.
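    A minimal sketch of what conditioning on both ends looks like, using the standard weak-value formula ⟨f|A|i⟩/⟨f|i⟩; the particular boundary states and observable below are just illustrative choices:

```python
import numpy as np

# Weak value of an observable A for a system pre-selected in |i> and
# post-selected in |f>:  A_w = <f|A|i> / <f|i>.
# Neither boundary state alone pins down sigma_z here, but conditioning
# on both together yields a definite value.
sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)

i_state = np.array([1, 1], dtype=complex) / np.sqrt(2)  # |+x>, the initial condition
f_state = np.array([1, 0], dtype=complex)               # |0>,  the final condition

weak_value = (f_state.conj() @ sigma_z @ i_state) / (f_state.conj() @ i_state)
print(weak_value.real)  # 1.0
```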

    Of course, there is no practical utility to this, but we should not confuse practicality with reality. Yes, being able to retrospectively reconstruct the system’s local and deterministic evolution is not practically useful, as science is more about future prediction, but we shouldn’t conclude from this practical choice that the system therefore has no deterministic dynamics, that it has no intermediate values, or that when it’s in a superposition of states it has no physical state at all or is literally equivalent to its probability distribution (a spread-out wave in phase space). You are right that reconstructing the history of the system doesn’t help us predict outcomes better, but I don’t agree that it doesn’t help us understand reality better.

    Take all the “paradoxes” for example, like the Einstein-Podolsky-Rosen paradox or, my favorite, the Frauchiger–Renner paradox. These are more conceptual problems dealing with an understanding of reality and ultimately your answer to them doesn’t change what predictions you make with quantum mechanics in any way. Yet, I still think there is some benefit, maybe on a more philosophical level, of giving an answer to those paradoxes. If you reconstruct the history of the systems with weak values for example, then out falls very simple solutions to these conceptual problems because you can actually just look directly at how the observables change throughout the system as it evolves.

    Not taking retrospection seriously as a tool of analysis leads people to believe in all sorts of bizarre things, like multiverses or physically collapsing wave functions, that all disappear if you just allow retrospection to be a legitimate tool of analysis. It might not be as important as understanding the probabilistic structure of the theory that is needed for predictions, but it can still resolve confusions around the theory and what it implies about physical reality.


  • pcalau12i@lemmy.world to memes@lemmy.world · Determinism
    18 days ago

    According to our current model, we would probably observe un-collapsed quantum field waves, which is a concept inaccessible from within the universe, and could very well just be an artifact of the model instead of ground truth.

    It is so strange to me that this is the popular way people think about quantum mechanics. Without reformulating quantum mechanics in any way or changing any of its postulates, the theory already allows you to recover the intermediate values of all the observables in any system through retrospection, and they evolve locally and deterministically.

    The “spreading out as a wave” isn’t a physical thing, but an epistemic one. The uncertainty principle makes it such that you can’t accurately predict the outcome of certain interactions, and the probability distribution depends upon the phase, which is the relative orientation between your measurement basis and the property you’re trying to measure. The wave-like statistical behavior arises from the phase, and the wave function is just a statistical tool to keep track of the phase.

    The “collapse” is not a physical process but a measurement update. Measurements aren’t fundamental to quantum mechanics. It is just that when you interact with something, you couple it to the environment, and this coupling leads to the effects of the phase spreading out to many particles in the environment. The spreading out of the influence of the phase dilutes its effects and renders it negligible to the statistics, and so the particle then briefly behaves more classically. That is why measurement causes the interference pattern to disappear in the double-slit experiment, not because of some physical “collapsing waves.”
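    A toy calculation of that last point, assuming two equal-amplitude paths (the numbers are illustrative, not a simulation of an actual double-slit experiment):

```python
import numpy as np

# Two-path statistics: tracking the relative phase between the paths
# produces interference; once the phase's influence is diluted into the
# environment, the cross term drops out and probabilities just add.
phi = np.linspace(0, 2 * np.pi, 5)   # relative phase between the paths
a1 = 1 / np.sqrt(2)                  # amplitude via path 1 (phase reference)
a2 = np.exp(1j * phi) / np.sqrt(2)   # amplitude via path 2, phase-shifted

p_coherent = np.abs(a1 + a2) ** 2                # phase tracked: 1 + cos(phi)
p_decohered = np.abs(a1) ** 2 + np.abs(a2) ** 2  # phase diluted: constant

print(np.round(p_coherent, 3))   # [2. 1. 0. 1. 2.] -- interference fringes
print(np.round(p_decohered, 3))  # [1. 1. 1. 1. 1.] -- no fringes
```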

    People just ignore the fact that you can use weak values to reconstruct the values of the observables through any quantum experiment retrospectively, which is already a feature baked into the theory and not something you need to add, and then instead choose to believe that things are somehow spreading out as waves when you’re not looking at them, which leads to a whole host of paradoxes: the Einstein-Podolsky-Rosen paradox, the Wigner’s friend paradox, the Frauchiger-Renner paradox, etc.

    Literally every paradox disappears if we stop pretending that systems are literally waves, and recognize instead that the wave-like behavior is just the result of the relationship between the phase and the statistical distribution of the system, and that the waves are ultimately a weakly emergent phenomenon. We only see particle waves made up of particles. No one has ever seen a wave made up of nothing. Waves of light are made up of photons of light, and the wave-like behavior of the light is a weakly emergent property of the wave-like statistical distributions you get due to the relationship between the statistical uncertainty and the phase. It in no way implies everything is literally made up of waves that are themselves made of nothing.




  • On the contrary, this breaks semi-classical gravity’s usage of quantum mechanics. The predictions the approximation makes are not compatible with our observations of how quantum mechanics works, and scientists are working on an experiment that can disprove the hypothesis. ( https://doi.org/10.1103/PhysRevLett.133.180201 )

    The paper is interesting and a step in the right direction, but it is just a proposal. It needs to actually be performed, because the results could finally point in the right direction rather than leaving us guessing at what the right direction is.

    I’m afraid you’ve got that precisely backwards. Falsifiability is the core of science

    No, it’s a justification for pseudoscience by allowing anyone to invent anything out of whole cloth based on absolutely nothing at all and call it “science.”

    as it is the method by which factually-deficient hypotheses are discarded

    Except it’s precisely used to justify them.

    If there is no contradiction between the theory and experimental practice then either all false theories have been discarded or we have overlooked an experiment that could prove otherwise.

    Those two, or the third case: that we just haven’t yet conducted the experiment that would contradict current theories (still talking about GR/QFT here specifically).

    That’s distinctly false. The Higgs Boson was only proposed in 1964 and wasn’t measured 'til just 13 years ago.

    I am obviously not defending that position and you know for a fact that is a position that has gained a lot of steam recently, you’re just trying to annoyingly turn it around on me to make it seem like I am defending a position I am not by stating something rather obvious.

    Because we still have falsifiable hypotheses to test.

    And this is exactly why you’re a promoter of pseudoscience: if a theory is “falsifiable” it’s “science” and “needs to be tested,” even if it’s based on literally nothing and there is no good reason anyone should take it seriously. If I claim there is a magical teapot orbiting Saturn that is the cause of some of its currently not-well-understood weather patterns, and that if you just built a specialized 20 billion dollar telescope with a special lens and pointed it at specific coordinates you’d discover the proof, then technically you can falsify this claim, so by your logic it’s “science” and therefore we should go out of our way to investigate it. I don’t get why it is so difficult to just accept that there is more to a reasonable scientific proposal than it merely being technically falsifiable. That is obviously not a sufficient criterion at all, and treating it as one just allows a ton of factually-deficient hypotheses based on nothing to be taken seriously.

    Whatever bullshit nonsense or mysticism someone makes up, as long as there is technically some way to conduct an experiment to falsify it, you will say that’s “science.” Popper has been complete poison to the academic discourse. In the past I would have to argue against laymen mystics, the equivalent of the modern-day “quantum healing” types. But these days I don’t even care about those mystics, because we have much more problematic mystics: those in academia who promote nonsense like “quantum immortality” and “quantum consciousness” or whatever new “multiverse” theory someone came up with based on pure sophistry, and they pass this off as genuine science, and we are expected to take it seriously because “erm, it technically can be falsified.”

    Although, my magic teapot analogy isn’t even good because the analogy says the teapot is proposed to explain not well-understood weather patterns, so it is proposed to explain an actual problem we haven’t solved. A more accurate analogy would be for a person to claim that they believe the hexagon cloud on Saturn should actually be a triangle. Why? No reason, they just feel it should be a triangle, because triangles seem more natural to them. According to you, again, this is technically still science because technically their theory can indeed be falsified by building the special telescope and pointing it at those coordinates.

    It’s impossible to combat pseudoscience mentality in the public and to combat things like quantum mysticism when some of the #1 promoters of quantum mysticism these days are academics themselves. Half the time when I see a completely absurd headline saying that quantum mechanics proves material reality doesn’t exist and “everything is consciousness,” or that quantum mechanics proves we’re immortal, or that quantum mechanics proves we live inside of a multiverse or a simulation, I click the article to see the source and no, it doesn’t go back to a Deepak Chopra sophist, it goes back to “legitimate” publications by an actual academic with credentials in the field who is taken seriously because “falsifiability.”

    How am I supposed to then tell laymen the article they’re reading is baloney? I can’t, because they don’t understand quantum physics, so they wouldn’t even have the tools to understand it if I explained why it’s wrong, so they just trust it because it’s written by someone with “credentials.” Mysticism in academia is way more serious than mysticism among laymen, because even otherwise reasonable laymen who view science positively will end up believing in mysticism if it is backed by an academic.

    We have, actually. The list of unsolved problems in physics on Wikipedia is like 15 pages long and we’re developing new experiments to address those questions constantly.

    Why are you intentionally being intellectually dishonest? We have been talking about a very specific theory and a very specific field of research this whole time, and you are trying to deflect this to science generally. I am sorry I even engaged with you at all, you are not in any way intellectually honest in the slightest and intentionally trying to misrepresent everything I say to “own” me and constantly are trying to pretend my position is something that it is not.

    By criticizing a small handful of pseudoproblems in science, you are now trying to dishonestly pretend I am claiming there are no genuinely unsolved problems, because you don’t want to actually address my point and are just a hack, and I am blocking you after this post for such a ridiculously dishonest way to try and smear me rather than just address my point.

    Likewise, there’s no reason to assume that the universe is not acting the way we’d like it to except where contradicted by observable evidence.

    We should just assume the universe is behaving exactly the way we observe it to behave based on the evidence.

    What we “like” is irrelevant. We should just observe the evidence and accept that is how the universe works until additional evidence shows otherwise.

    If the laws of physics can “break down” then they aren’t “laws”, merely approximations that are only accurate under a limited range of conditions.

    Plenty of laws of physics are only applicable under certain conditions, like the ideal gas law. Although, that’s not the impression I got from this conversation of how you were using “break down” in the first place: we were talking about semi-classical gravity, where you have singularities at black holes, and you were using “break down” in that sense. There is no change in the laws of physics at black holes in semi-classical gravity; the singularity arises from the very structure of the theory and is not in contradiction with it, i.e. its fundamental principles don’t suddenly change at a black hole. The singularity at the black hole is a result of its underlying principles.

    The fact that the universe continues to exist despite the flaws in our theories proves that there must be a set of rules which are applicable in all cases.

    You want them to apply to cases that have not even been demonstrated to be physically possible to probe, so you have not even demonstrated there is an actual “case” at all. I am not claiming it is physically impossible to probe either, before you dishonestly try to turn my statement around to misrepresent me as you love to do. I am saying quite the opposite: that we should try to probe the areas that seem to not make much sense in our current theories. We should be trying to probe quantum effects and gravitational effects at the same time to see how they behave, because that’s how we could actually make progress if semi-classical gravity is indeed wrong.

    We shouldn’t be constantly inventing fake “theories” based on literally nothing that are technically falsifiable, then acting surprised when they are falsified, and then slightly tweaking them so they are no longer falsified by the previous experiment but still technically falsifiable by a future experiment. This would be like if you pointed the expensive telescope at Saturn and did not see the magical teapot, so I just changed my mind and said the teapot is actually orbiting Neptune, so we need a bigger telescope, and then the theory would still be falsifiable!

    I could play this game forever and keep tweaking my nonsensical claim every time it is falsified, and according to you this is science! What I am saying is this is not science because science is not just falsifiability. There are tons of genuinely unsolved problems in science, but there are also a small number of “problems” which are poorly motivated, like the “fine-tuning problem” which is also not a genuine scientific problem.

    Really, like 99.9% of the stuff in physics is perfectly fine. Most people in the real world are actually working on practical problems and not nonsense like “quantum consciousness” or whatever. The handful of people I am criticizing are largely a small minority, but they have a huge impact on public discourse and public understanding of science, as they tend to be very vocal.

    And if the rules can change, then our theories will have to be updated to describe those changes and the conditions where they occur.

    Obviously.


  • I understand that in semi-classical gravity the curvature of spacetime is based on the expectation value of the stress-energy tensor, and so a massive object in a superposition of two possible locations would curve spacetime as if the object were at the midpoint of the two locations, but when the state vector is reduced it would suddenly shift to one of those two points. While this does seem weird, no one has ever demonstrated that measuring this is actually physically possible, so until there is such a demonstration, there isn’t actually a contradiction between theory and experimental practice. All we can say is “that seems weird,” but that’s not a scientific argument against it.

    You say it diverges from reality but… how do you know that? No experiment has ever demonstrated this. It could be that this is just how reality works, or it could also be that it’s just not physically possible to probe this in the first place, and so it’s just a nonphysical quirk of the theory of computing something nonphysical in the first place. We can’t say for certain it is wrong until someone actually conducts an experiment to probe it, and if we find it is wrong, then not only would we rule out semi-classical gravity, but we would have the data needed to actually replace it with something.

    This is my issue with “fundamental physics” these days in general: they do not actually demonstrate any contradiction between theory and experimental practice. The desire to unify quantum mechanics and general relativity is just based on preconceptions that information shouldn’t be destroyed and that gravity should be quantizable like every other force, but how do you know that with certainty? You did not derive this from experimental observation, because semi-classical gravity is currently compatible with all experimental observations. It is more that one begins with a preconception of how reality should work and says the theory is wrong because it does not fit that preconception. Yes, certain aspects of semi-classical gravity are “weird,” but that’s not a scientific argument against it.

    Because of the influence of Karl Popper, people think science = falsifiability, so new theories are constructed not based on experimental evidence but by trying to better fit our preconceptions, while also being made falsifiable because that is “science.” When they are falsified by an experiment that just reconfirms the predictions of semi-classical gravity, they are tweaked a bit so the theory is no longer falsified by that experiment but is still technically falsifiable, and this goes on ad infinitum. You end up doing this for decades, and what do you have? String Theory, which is only applicable to an anti-de Sitter space, a universe we don’t actually live in? Or Loop Quantum Gravity, which can’t even reproduce Einstein’s field equations?

    Popper has been a detrimental influence on the sciences. Science is not falsifiability. Science is about continually updating our models to resolve contradictions between theory and experimental practice. If there is no such contradiction, there is no justification for updating the model. I have seen a mentality growing more popular these days that “fundamental physics hasn’t made progress in nearly a century.” My response is: why should it make progress? We have not encountered a contradiction between experimental practice and theory, so all this “research” into things like String Theory is just guesswork; there is no reason to expect it to actually go anywhere.

    The same is true of the so-called “measurement problem,” which, as physicists like Carlo Rovelli and Francois-Igor Pris have pointed out, only arises because we hold certain metaphysical preconceptions about how reality should work. Applied to quantum theory, these preconceptions lead to absurdities, and so people conclude the theory must be wrong somehow, that it is “incomplete,” that it needs to be replaced by an objective collapse theory, a multiverse theory, or something similar. Yet this is not a scientific criticism; the theory is in no contradiction with the experimental evidence. We should simply drop our preconceptions about how reality should work and accept how it does work. As Bohr said: stop telling God what to do.

    There is no reason to assume the universe acts the way we’d like it to. Maybe the laws of physics really are just convoluted and break down at black holes. While yes, maybe one day we will discover a theory where they do not break down, it is anti-scientific to begin with an a priori assumption that this must necessarily be the case. It could be that the next breakthrough in fundamental physics even makes the mathematics more convoluted! You cannot simply assume, prior to investigation, that this is how nature works; you have to derive it a posteriori through investigation, and currently this is what our best theory derived from investigation states. It may be wrong, but there is no justification for claiming it is wrong without showing a contradiction between theory and experimental practice.

    This is my issue here. The desire to replace semi-classical gravity with something else, the measurement problem, the desire to unify all the forces of nature into a “theory of everything,” attempts to solve the “fine-tuning problem”: these are all ultimately pseudoproblems, because they do not derive from any contradiction between experimental practice and theory. They are not genuine scientific problems. I am not even against people looking into them, because who knows, maybe they will stumble across something interesting. But the issue with treating them as genuine “problems” is that when they go “unsolved” for a century, it looks like there is a “crisis in fundamental physics.” There just isn’t. In fact, it’s quite the opposite: every experimental test reconfirms our current best theories, which is the exact opposite of a “crisis.” People claim we have a “crisis” because our current theories are too good!


  • If I am not mistaken, information loss inside of a black hole comes out of semi-classical gravity. If these symmetries are tied to the assumption that the laws of physics don’t change and the symmetries break down in semi-classical gravity, then does that mean in semi-classical gravity the laws of physics change? Is there a particular example of that in the theory you could provide so I can understand?

    I don’t disagree that information is conserved in general relativity and quantum mechanics taken separately, but when you put them together it is not conserved. My concern is that I don’t understand why we must conclude this is necessarily wrong, rather than that information conservation only holds in limiting cases, when you aren’t considering how gravitational effects and interference effects operate together simultaneously. After all, energy conservation breaks down on cosmological scales as well, in the case of cosmic redshift.
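    The cosmic-redshift example can be made concrete with the standard Friedmann–Robertson–Walker result (this is textbook cosmology, not specific to any quantum-gravity proposal): a freely propagating photon’s wavelength stretches with the scale factor a(t), so its energy is not conserved as the universe expands.

    ```latex
    E_\gamma = \frac{hc}{\lambda}, \qquad \lambda \propto a(t)
    \quad\Longrightarrow\quad
    E_\gamma(t) = E_{\gamma,\text{emit}}\,\frac{a_{\text{emit}}}{a(t)} .
    ```

    A photon emitted when the scale factor was half its present value arrives with half its emitted energy, and there is no generally agreed global energy bookkeeping in an expanding spacetime that restores the difference.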

    Yes, we can experimentally verify these conservation laws, but only because in practice we can only ever observe gravitational effects and interference effects separately, as limiting cases; thus far there hasn’t been an experiment probing them simultaneously and showing how they act upon each other. In semi-classical gravity, “weird” aspects like information loss in a black hole only arise when we consider them together, which is not something we have observed in a lab yet, so I don’t see the basis for thinking the theory is wrong.

    You seem to suggest that thinking it is wrong implies the laws of physics change, but I’m not really sure what is meant by this. Is semi-classical gravity not a self-consistent mathematical framework?


  • I still don’t really understand why the information just can’t be destroyed. It seems like we’re starting from the assumption that it shouldn’t be destroyed, despite it being destroyed in semi-classical gravity, and then devising alternative theories that could preserve it, such as storing it on the boundary or in the black hole’s charge, mass, and spin. Maybe that’s correct, but it seems like speculation, and not speculation based on any actual contradiction between theory and practice, i.e. not because semi-classical gravity has actually made an incorrect prediction in an experiment we can go out and verify, but only because we have certain preconceptions about how nature should work which aren’t compatible with it. So it doesn’t really come across to me as a scientific “problem” but more of a metaphysical one.
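    For what it’s worth, the “information destruction” at issue can be stated compactly. In Hawking’s semi-classical calculation, a black hole formed from a pure state evaporates into thermal radiation at the Hawking temperature, i.e. a mixed state (this is the standard textbook statement of the problem, sketched here without taking a side on whether it is acceptable):

    ```latex
    \rho_{\text{initial}} = |\psi\rangle\langle\psi|
    \;\longrightarrow\;
    \rho_{\text{final}} = \sum_n p_n\,|n\rangle\langle n|,
    \qquad p_n \propto e^{-E_n / k_B T_H},
    \qquad T_H = \frac{\hbar c^3}{8\pi G M k_B} .
    ```

    No unitary evolution can accomplish this, since unitaries preserve the purity \(\operatorname{Tr}\rho^2\): it equals 1 for the initial pure state but is strictly less than 1 for the thermal final state. That loss of purity is precisely what “information is destroyed” means in this debate.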