• Knusper@feddit.de · 1 year ago

    Yeah, this works especially well for currencies (effectively doing all calculations in cents/pennies), as you do need perfect precision throughout the calculations, but the final result gets rounded to two-digit precision anyway.
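
    For example, a minimal sketch in Python (the prices here are made-up illustration values):

    ```python
    # All arithmetic happens in whole cents; dollars only exist at output.
    prices_cents = [1999, 499, 1250]   # $19.99, $4.99, $12.50

    total_cents = sum(prices_cents)    # exact integer addition: 3748

    # Round/format to two-digit precision only at the very end.
    print(f"${total_cents // 100}.{total_cents % 100:02d}")   # $37.48

    # Naive float addition already carries representation error:
    print(0.1 + 0.2)                   # 0.30000000000000004
    ```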

    • Hotzilla@sopuli.xyz · 1 year ago

      Quite a horrible hack; most modern languages have a decimal type that handles the rounding problems of floating point. And if not, you should just use rounding functions to two digits when working with currency.
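
      In Python, for instance, a sketch with the standard decimal module (the price and tax rate are made up):

      ```python
      from decimal import Decimal, ROUND_HALF_UP

      price = Decimal("19.99")   # construct from strings so values stay exact
      vat = Decimal("0.24")      # made-up tax rate

      total = price * 3 * (1 + vat)   # exact: 74.3628
      print(total.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP))  # 74.36
      ```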

      • em7@programming.dev · 1 year ago

        Not sure what financial applications you develop, but what you suggest wouldn’t pass a code review in any finance-related project I’ve seen.

        Using integers for currency-related calculations and formatting the output is no dirty hack; it’s industry standard, because floating-point arithmetic on contemporary hardware can never exactly represent arbitrary decimal fractions (it can’t, see https://en.wikipedia.org/wiki/IEEE_754 ), whereas integer arithmetic (or integers used to represent fixed-point values) always has the same level of precision across the whole range it can represent. You typically don’t want to round the numbers you work with, you need to round the result ;-) .
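
        To illustrate that last point with integer cents (the line items and the 7 % tax rate are made-up values):

        ```python
        items_cents = [335, 335, 335]   # three items at $3.35
        TAX_NUM, TAX_DEN = 7, 100       # 7% tax kept as an exact fraction

        # Rounding the intermediates: truncate the tax per line, then sum.
        per_line = sum(c + c * TAX_NUM // TAX_DEN for c in items_cents)

        # Rounding the result: stay exact, round once on the total.
        subtotal = sum(items_cents)
        on_total = subtotal + subtotal * TAX_NUM // TAX_DEN

        print(per_line, on_total)       # 1074 1075 -- a one-cent drift already
        ```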

        • BruceDoh@sh.itjust.works · 1 year ago

          Phew. Sometimes I read things and think I’m going crazy. I work in ERP/accounting software and was sure the monetary data type I’ve been using was backed by integers, but the post you’re replying to had me second-guessing myself…

      • Knusper@feddit.de · 1 year ago

        Had to think about it, but yeah, I guess you can’t cleanly do division or non-integer multiplication with integer cents, as integer division discards the fractional part, which forces you to round after every step.
        You could convert to a float for the division/multiplication, and you do get more efficient addition/subtraction as well as simpler de-/serialization, but in most situations it’s probably less trouble to just use decimals.
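
        e.g. a sketch of how you’d split integer cents without losing any (handing the leftover cents to the first shares is just one common choice):

        ```python
        def split_cents(total_cents: int, ways: int) -> list[int]:
            # Integer division truncates, so hand the remainder out cent by cent.
            base, remainder = divmod(total_cents, ways)
            return [base + 1 if i < remainder else base for i in range(ways)]

        print(split_cents(1000, 3))   # [334, 333, 333] -- sums back to 1000
        ```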

        • nous@programming.dev · 1 year ago

          You do not want to use floats for any part of a money calculation. The larger the value, the larger the error - not a trait you want when dealing with money. Fixed-point numbers/decimals/big ints are much better for this. If you want greater-than-cent precision, treat the values as fractions of a cent (i.e. move the decimal point over one more place, or however many places your application needs); the maths is the same no matter where you place the decimal point.
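
          A quick Python sketch of both points (the choice of thousandths of a cent is an arbitrary example):

          ```python
          # Float error grows with magnitude: near 1e15 adjacent doubles
          # are 0.125 apart, so adding a cent is silently lost.
          print(1e15 + 0.01 == 1e15)   # True

          # Sub-cent precision is the same integer maths with the point
          # moved over: here everything is in thousandths of a cent.
          SCALE = 1000                 # thousandths of a cent per cent
          unit_price = 1_999_990       # $19.9999 per unit
          total = unit_price * 7       # exact: 13_999_930

          dollars, rest = divmod(total, SCALE * 100)
          print(f"${dollars}.{rest // SCALE:02d}")   # $139.99 (true value $139.9993)
          ```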