Researchers in the UK claim to have translated the sound of laptop keystrokes into their corresponding letters with 95 percent accuracy in some cases.

That 95 percent figure was achieved with nothing more than a nearby iPhone. Remote methods are nearly as effective: accuracy dropped only to 93 percent for keystrokes recorded over Zoom, and Skype calls were still 91.7 percent accurate.

In other words, this is a side-channel attack with considerable accuracy, minimal technical requirements, and a ubiquitous data exfiltration point: microphones, which are everywhere, from our laptops to our wrists to the very rooms we work in.
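
An attack like this plausibly works in stages: record audio near the target, isolate each individual keystroke, convert it into a spectrogram image, and classify those images with a deep-learning model. Below is a minimal sketch of the first two stages in Python; the function names, threshold, and window size are illustrative assumptions, not the researchers' actual parameters.

```python
# Illustrative sketch only: amplitude-threshold segmentation and log-mel
# features are one plausible implementation of a pipeline like the one
# described above, not the paper's exact method or parameters.
import numpy as np
import librosa

def segment_keystrokes(audio, sr, rel_threshold=0.25, min_gap_s=0.1, win_s=0.2):
    """Slice a fixed-length window around each loud onset, treating
    samples closer together than min_gap_s as the same keystroke."""
    win = int(win_s * sr)
    loud = np.flatnonzero(np.abs(audio) > rel_threshold * np.abs(audio).max())
    segments, last = [], -min_gap_s * sr
    for i in loud:
        if i - last >= min_gap_s * sr and i + win <= len(audio):
            segments.append(audio[i:i + win])
            last = i
    return segments

def sound_profile(segment, sr):
    """Log-scaled mel spectrogram: the time-frequency 'fingerprint'
    of a single keystroke that a classifier can be trained on."""
    mel = librosa.feature.melspectrogram(y=segment, sr=sr, n_mels=64)
    return librosa.power_to_db(mel)
```

Each log-mel image would then be labeled with the key that produced it and used to train an image classifier.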

  • Wilzax@lemmy.world · 1 year ago

    The method is not based on timings. It is based on identifying the unique sound profile of each keystroke.
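
    To be concrete, a classifier over those per-keystroke sound profiles needs no timing input at all. A hypothetical sketch (not the paper's actual architecture) that takes a single log-mel spectrogram and scores each key:

    ```python
    # Hypothetical classifier sketch: a small CNN over log-mel spectrograms.
    # Not the researchers' architecture; illustrative only.
    import torch
    import torch.nn as nn

    class KeystrokeCNN(nn.Module):
        def __init__(self, n_keys=36):  # e.g. 26 letters + 10 digits
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(32, n_keys),
            )

        def forward(self, x):  # x: (batch, 1, n_mels, time)
            return self.net(x)

    # One 64-band spectrogram in, one score per key out:
    logits = KeystrokeCNN()(torch.randn(1, 1, 64, 20))
    ```

    Nothing in a model like this sees inter-key timing; it only sees the acoustic image of one press.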

    • ryannathans@aussie.zone · 1 year ago

      How can you make that claim? They used deep learning; does anyone actually know which characteristics the AI is using?

      • Wilzax@lemmy.world · 1 year ago

        I can only make that claim with as much confidence as your claim that a non-standard keyboard layout would protect you. By your own argument, how can you claim that using a different keyboard layout will protect you? Everyone has a different typing style and every keyboard has a different sound profile, so the AI clearly needs to sample enough data from you and compare it against a dictionary to train individually on audio collected of your typing. How do you know that the keyboard layout plays an identifying role in this kind of attack, when the AI has no concept of which sounds correspond to which keys until it has done its training?
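
        To make the dictionary step concrete, here is a toy sketch of my own (assuming the classifier outputs one probability distribution over letters per recovered keystroke; nothing here is from the paper):

        ```python
        # Toy illustration of dictionary comparison: pick the known word
        # whose letters best fit the per-keystroke probability vectors.
        import numpy as np

        ALPHABET = "abcdefghijklmnopqrstuvwxyz"

        def best_dictionary_match(per_key_probs, dictionary):
            same_len = [w for w in dictionary if len(w) == len(per_key_probs)]

            def log_score(word):
                return sum(np.log(per_key_probs[i][ALPHABET.index(c)] + 1e-12)
                           for i, c in enumerate(word))

            return max(same_len, key=log_score) if same_len else None

        # Example: three noisy keystroke predictions resolve to "cat"
        probs = np.full((3, 26), 0.01)
        probs[0, ALPHABET.index("c")] = 0.8
        probs[1, ALPHABET.index("a")] = 0.5
        probs[1, ALPHABET.index("b")] = 0.4  # ambiguous second keystroke
        probs[2, ALPHABET.index("t")] = 0.9
        print(best_dictionary_match(probs, ["cat", "cot", "dog"]))  # -> "cat"
        ```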