If I’m comparing ints with floats, it’s my fault in the first place.
Exactly, I’d expect a warning, if not an error.
Well-written 👍
Do we have a JS-style type coercion situation here?
Probably more like the old precision problem. It exists in C/C++ too, and it’s just how floats and ints work.
I don’t think comparisons should be doing type conversion. If I compare a float to an int, I want it to say false because the types are different.
I don’t think that’s how most programmers expect it to work at all.
However most people would also expect 0.1+0.2==0.3 to return true, so what do I know.
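For anyone who hasn’t been bitten yet, this is trivial to check in plain Python (the classic demo, not something from the article):

```python
# 0.1, 0.2 and 0.3 have no exact binary representation,
# so the sum picks up a tiny rounding error.
print(0.1 + 0.2)           # 0.30000000000000004
print(0.1 + 0.2 == 0.3)    # False
```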
Floating point is something most of us ignore until it bites us in the ass. And then we never trust it again.
That’s why I reckon it’s good that it keeps you aware of it. Mind you, I find it’s usually fine as long as my IDE and ChatGPT know what type it is.
I do kinda like the rigidity of types, though. Proper Python type hints are a godsend.
Then most people shouldn’t be writing code, I don’t know what else to tell you. This is probably one of the first things you learn about FP arithmetic, and any decent compiler/linter should warn you about it.
That makes sense, but then you’d just have people converting the int to a float manually and running into the exact same issues.
They wouldn’t be running into an issue but creating one; that’s different.
Meh. Imo anyone comparing an integer to a float and not expecting one of them to be implicitly cast to the other’s type will create that issue for themselves when doing the same thing with an explicit cast.
What I meant is, the former can be a genuine mistake, the latter is a conscious (probably uneducated) decision
I mean, honestly, it’s expected: you should check floats for similarity, not equivalence.
Agreed. But the less experienced programmers I know are surprisingly naive about this.
Yeah, it’s gonna be one of those problems ChatGPT ain’t gonna help with.
Idiots downvoting you (c/technology…), but this is how e.g. Haskell and Rust handle it, and probably most strongly typed languages.
But how far should that be taken? Should 8 == 8 return false because one is an unsigned int and the other is signed? Or 0.0 == 0.0 where one is a float and the other a double? You can make a case for those with a strong type system, but it would go against most people’s idea of what equality is.
If the bits aren’t the same, then I don’t want it to tell me they are the same. And Python just has one implementation each for int and float.
I like Python because everything’s an object. I don’t want different types of objects to evaluate as the same; they are fundamentally different objects. Is that not what you would expect?
Even in Python you can control what types of numbers are used under the hood with NumPy arrays (and chances are, if you are using floats in any quantity, you want to be using NumPy). I would be very surprised if `array([1,2,3], dtype=uint8) == array([1,2,3], dtype=int16)` gave `[False, False, False]`. In general I think `==` for numbers should give mathematical equivalence, with the understanding that comparing floats is highly likely to give false negatives unless you are extremely careful with what you are comparing.
NumPy’s more or less a math wrapper for C, isn’t it?
More Fortran than C, but it’s the same for any language doing those sorts of array mathematics; they will all be calling compiled versions of BLAS and LAPACK. NumPy builds up low-level, highly optimised compiled functions into a coherent Python ecosystem. A NumPy array is a C array with some metadata, sure, but a Python list is also just a C array of pointers to PyObjects.
Way too late for that. Every language I know makes some kind of auto conversion for numeric comparisons… and sometimes for strings as well.
I know of Rust, which is pedantic enough to not allow comparing integers to floats directly.
In certain situations, it even disallows making assumptions about equality and ordering between floats.
I still can’t properly wrap my head around Rust’s borrowing. My ray tracer implementation from that blog on ray tracing was slow as shiiiit.
Not sure what blog post you’re talking about, but there are only really three things you can be doing wrong:
- Tons of cloning.
- Running your application as a debug build rather than a release build.
- All the usual things one can be doing wrong in any programming language. Like, I imagine real-world raytracing is done on the GPU and uses highly optimized algorithms. I doubt a blog post would dive into those depths. And well, any kind of graphics programming is extremely slow, if you don’t issue the exact right incantations that the GPU manufacturer optimized for.
I guess it’s something like: if close enough, set to true.
Now I’ll read the article and discover it’s like 100x more complex.
Edit: It is indeed at least 100x more complex.
It’s not only more complex, it also doesn’t work like you described at all.
TL;DR:
In Python, the following returns False:
9007199254740993 == 9007199254740993.0
The floating point number 9007199254740993.0 is internally represented in memory as 9007199254740992.0 (due to how floating point works).
Python has special logic for comparing ints with floats. Here it will try to compare the int 9007199254740993 with the float 9007199254740992.0. Python sees that the integer parts are different, so it stops there and returns False.
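Both parts are easy to reproduce in a plain Python session (2**53 = 9007199254740992 is the first point where doubles start skipping integers):

```python
big = 9007199254740993                # 2**53 + 1: not representable as a double
print(float(big))                     # 9007199254740992.0 (rounded down)
print(big == 9007199254740993.0)      # False: exact int vs rounded float
print(big - 1 == 9007199254740992.0)  # True: 2**53 itself fits exactly
```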
Comparing floats for equality is generally a bad idea anyways.
Floats should really only be used for approximate math. You need something like Java’s BigDecimal or BigInteger to do exact decimal math.
Looks like this is the equivalent for Python:
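Assuming the Python equivalent meant here is the standard library’s `decimal` module (my assumption), a minimal sketch:

```python
from decimal import Decimal

# Decimal stores base-10 digits exactly, so this compares as expected:
print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))  # True

# Construct from strings, not floats: Decimal(0.1) faithfully captures
# the binary rounding error already baked into the float 0.1.
print(Decimal(0.1) == Decimal("0.1"))  # False
```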
Comparing is fine, but it should be fuzzy. Less than and greater than are fine, so you should basically only be checking within a range of values, not for a specific value.
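In Python, the stdlib helper for that fuzzy check is `math.isclose`, which uses a relative tolerance by default:

```python
import math

a = 0.1 + 0.2
print(a == 0.3)              # False: exact comparison fails
print(math.isclose(a, 0.3))  # True: within the default rel_tol=1e-09

# Near zero a relative tolerance is useless, so pass abs_tol as well:
print(math.isclose(1e-12, 0.0, abs_tol=1e-9))  # True
```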
I assume this is because that number is so large that it loses precision, in which case this is more of a quirk of floating point than a quirk of Python.
Disclaimer: Have not read the article yet.
It’s both. As you said, it’s because of the loss of floating point precision, but it’s also down to some quirks in how Python compares int with float. These two together cause this strange behavior.
Did nobody read the manual?
IEEE 754 double precision: The 53-bit significand precision gives from 15 to 17 significant decimal digits precision.
I’m not sure where the 17 comes from. It’s 15.
The “15 to 17” part is worded somewhat confusingly, but it’s not wrong.
The number of bits in a double’s significand is equivalent to ~15.95 decimal digits. If you want to store a decimal number with a fixed number of significant digits exactly, floor(15.95) = 15 digits is the most you can hope for. However, if you want to store a double exactly by writing it out as a decimal number, you need 17 digits.
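Both numbers are easy to check in Python with standard string formatting (a quick sketch, not from the quoted manual):

```python
# 17 significant digits always round-trip a double exactly...
x = 0.1 + 0.2                       # 0.30000000000000004
print(float(f"{x:.17g}") == x)      # True

# ...while 16 digits can lose it:
print(float(f"{x:.16g}") == x)      # False

# Conversely, any 15-significant-digit decimal survives the trip
# through a double and back:
s = "0.123456789012345"
print(f"{float(s):.15g}" == s)      # True
```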