Binary floating point math is like THIS. In most programming languages, it is based on the IEEE 754 standard.
So everything starts with the IEEE 754 standard, and with how a binary computer simulates decimal arithmetic.
* Is floating point math broken?
https://stackoverflow.com/questions/588004/is-floating-point-math-broken

JavaScript treats decimals as floating point numbers, which means operations like addition might be subject to rounding error. You might want to take a look at this article: What Every Computer Scientist Should Know About Floating-Point Arithmetic
Just for information, ALL numeric types in javascript are IEEE-754 Doubles.
JavaScript uses the IEEE 754 standard for math: it uses 64-bit floating-point numbers. This causes precision errors in floating point (decimal) calculations, in short because computers work in base 2 while decimals are base 10.
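The symptom described above can be reproduced in any JavaScript console; `0.1` and `0.2` are just the usual example values:

```javascript
// 0.1 and 0.2 have no exact binary representation, so their
// IEEE-754 double approximations add up to slightly more than 0.3.
console.log(0.1 + 0.2);          // 0.30000000000000004
console.log(0.1 + 0.2 === 0.3);  // false

// The same 64-bit doubles back every JS number, including "integers":
// above 2^53, consecutive integers are no longer representable.
console.log(2 ** 53 === 2 ** 53 + 1); // true
```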
P.S.
It comes down to the number of bits: the "math broken" situation and numeric precision problems are ultimately questions of how many digits the representation can hold.
C# addresses this with a dedicated decimal data type, a value type — search for "C# do math float or decimal".
https://stackoverflow.com/questions/753948/why-is-floating-point-arithmetic-in-c-sharp-imprecise

C# float only has 7 digit precision, and 99.999999f has 8 digits.
A single-precision IEEE-754 float will only be 32-bits, which gives around 7 decimal digits of precision. If you want better than that, use a double.
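JavaScript only has doubles, but `Math.fround` rounds a value to the nearest 32-bit float, which makes the ~7-digit single-precision limit from the quote above easy to see (`99.999999` is just the example value from the answer):

```javascript
// Round to single precision: only ~7 significant decimal digits survive,
// so the 8th digit of 99.999999 is lost and it collapses to 100.
console.log(Math.fround(99.999999)); // 100
// The 64-bit double keeps the value distinct from 100:
console.log(99.999999 === 100);      // false
```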
Again, C#'s dedicated decimal value type addresses this — search for "IEEE-754 vs C# decimal".
https://stackoverflow.com/questions/9079225/decimal-type-in-c-sharp-vs-ieee-754-standard

C# decimal doesn't follow the same rules as IEEE 754 floating point numbers. The C# 4 specification is quite clear on how it should behave ... Other languages should have similar libraries.
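C#'s decimal isn't available in JavaScript, but the core idea behind such types — store an integer together with a base-10 scale, so decimal fractions are exact — can be sketched with BigInt. Everything here (the fixed 4-digit scale, the helper names) is a made-up illustration, not any library's API:

```javascript
// Sketch of the scaled-integer idea behind decimal types:
// values are stored as BigInt counts of 10^-4 units.
const SCALE = 10000n; // 4 decimal digits, a hypothetical fixed scale

function parseDecimal(s) {
  // "0.1" -> 1000n; assumes a non-negative "int.frac" string
  const [int, frac = ""] = s.split(".");
  return BigInt(int) * SCALE + BigInt((frac + "0000").slice(0, 4));
}

function formatDecimal(units) {
  const int = units / SCALE, frac = units % SCALE;
  return `${int}.${frac.toString().padStart(4, "0")}`;
}

// Integer addition is exact, so the base-10 fractions stay exact too:
const sum = parseDecimal("0.1") + parseDecimal("0.2");
console.log(formatDecimal(sum)); // "0.3000" — no 0.30000000000000004 artifact
```

The trade-off is the one the quotes above hint at: a fixed decimal scale buys exactness for base-10 fractions at the cost of range and speed compared to hardware binary floats.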
On the weaknesses of the C# decimal type itself:
as far as I know, decimal is not compatible with anything - it's a Microsoft only invention - so you can't think of it in the same terms as normal floating point stuff. And it doesn't have subnormal values.
P.S.
The weird console output you see (or the language's "weird" behavior) is the correct way for a language that does floating-point arithmetic per IEEE-754 to handle floating point numbers. Don't make a fuss about it.