I often find myself explaining the same things in real life and online, so I recently started writing technical blog posts.

This one is about why it was a mistake to call 1024 bytes a kilobyte. It’s roughly a 20-minute read, so thank you very much in advance if you find the time to read it.

Feedback is very much welcome. Thank you.

  • wischi@programming.dev (OP)

    It’s not as simple as that. A lot of “computer things” are not exact powers of two. A prominent example would be HDDs.

    • Lmaydev@programming.dev

      In terms of storage, 1000 and 1024 take the same number of bits to represent. So from a computer’s point of view, 1024 makes a lot more sense.

      It’s just a binary vs. decimal thing. 1000 is not nicely represented in binary, just as 1024 isn’t in decimal.

      Edit: I was talking about storing the actual number.
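
      To illustrate the binary-vs-decimal point, here’s a minimal Python sketch (mine, not from the comment):

      ```python
      # 1000 is "round" in decimal but messy in binary; 1024 is the opposite.
      print(bin(1000))   # 0b1111101000   (10 bits, but 1000 doesn't fill the 10-bit range 0..1023)
      print(bin(1024))   # 0b10000000000  (exactly 2**10)

      # A 10-bit field can index exactly 1024 distinct values (0..1023),
      # so a power-of-two size lines up with it with nothing left over.
      print(2 ** 10)     # 1024
      ```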

      • abhibeckert@lemmy.world

        “In terms of storage 1000 and 1024 take the same amount of bytes.”

        What? No. A terabyte in 1024-based units is 8,796,093,022,208 bits. In 1000-based units it’s 8,000,000,000,000 bits.

        The difference is substantial with larger numbers.
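
        A quick check of that arithmetic in Python (my sketch, not part of the original comment):

        ```python
        TB_BINARY  = 1024 ** 4 * 8   # a terabyte counted in 1024-based units, in bits
        TB_DECIMAL = 1000 ** 4 * 8   # a terabyte counted in 1000-based units, in bits

        print(TB_BINARY)               # 8796093022208
        print(TB_DECIMAL)              # 8000000000000
        print(TB_BINARY / TB_DECIMAL)  # ~1.0995
        ```

        The gap compounds with each prefix step: roughly 2.4% at kilo, 4.9% at mega, 7.4% at giga and 10% at tera.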

        • Lmaydev@programming.dev

          Both again require the same number of bits. So the second one makes more sense for a computer.