SleepyBear

All about me. My Bio.

  • 7 Posts
  • 43 Comments
Joined 1 year ago
Cake day: July 2nd, 2023


  • This is more complex than you’d think, because the USB spec has changed many times over the years, with new connectors and further sub-category changes to cables along the way. There are USB versions 1, 2, 3, and 4 (plus sub-versions), along with different connector types: USB-A comes in the regular variant and a USB 3 variant (blue inside), and USB-C is the latest. Newer specs can transfer much larger amounts of data.

    Power Delivery (PD) is another subset of the specification; with USB4 it currently allows up to 240W (48V at 5A). That’s a lot, enough to charge multiple laptops at once, and vastly more than the 2.5W a basic USB 2 port provides, or the 4.5W of basic USB 3. For more confusion there is also USB Power Delivery Programmable Power Supply (PPS), a further subset that lets devices negotiate charging voltage and current in fine steps.

    Another challenge: USB-C connectors can also support Thunderbolt, which adds a whole other set of capabilities. Whether you get them depends on both the cable and the port.

    This article explains the mess that is USB-C: https://www.androidauthority.com/state-of-usb-c-870996/

    Key part:

    The latest USB data speed protocols are split into several standards. There are legacy USB 1.0 and 2.0, USB 3.0, USB 3.1, USB 3.2, and the latest USB 4.0, all of which can be supported over USB-C. Confusing enough, but these have since been revised and updated to include various sub-standards, which have encompassed USB 3.1 Gen 1, USB 3.1 Gen 2, and USB 3.2 Gen 2, along with the more recent USB 3.2 Gen 1×1, USB 3.2 Gen 1×2, and USB 3.2 Gen 2×2 revisions. Good luck deciphering the differences without a handbook. Hopefully, the graph below helps.

    You’d hope USB4 fixes it, but no. USB4 already boasts Gen 2×1, Gen 2×2, Gen 3×1, Gen 3×2, and Gen 4 variations, with data speeds ranging from 10 to 80 Gbps.

    Cable length also has an impact. The spec only allows passive cables up to a certain length; beyond that you need active cables, which have chips in them to recondition the signal.

    Several years ago a Google engineer started buying USB-C cables from Amazon and reviewing them in a lot of detail: https://www.amazon.com/gp/profile/amzn1.account.AFLICGQRF6BRJGH2RRD4VGMB47ZA

    If you read a few you’ll see there are plenty of manufacturers who simply don’t follow the spec, so it’s not always clear what you’ll actually get. It doesn’t help that some products use custom sockets that need the vendor’s specific cables. I’ve had keyboards, for example, that only work with the vendor’s cable, not a generic USB-C one.

    This means you should stick to a reputable set of brands, or to the cables that came with the product. Decide whether you need to charge something serious with it (e.g. a laptop, vs. just a phone, watch, or small device), and whether you need data connectivity.

    As another poster mentioned, just buy Anker: they’re well made, come with a solid warranty, and aren’t actually that expensive. Don’t buy the cables by the supermarket/CVS checkout, or from some ultra-cheap site. They might work, they might not.

    Oh, and the Google engineer had his laptop fried by a bad cable: https://www.engadget.com/2016-02-03-benson-leung-chromebook-pixel-usb-type-c-test.html




  • No, and no.

    Having your own domain can make it easier to route traffic and control your subdomains, but it’s not necessary.

    Nginx isn’t technically required either, but it can make things easier and more secure, especially when you’re running multiple services.

    E.g. I run Lemmy and Voyager at home. Nginx accepts all HTTPS traffic on port 443 and forwards it to the appropriate backend service on its internal port. It also handles TLS termination, so your traffic is encrypted externally.

    Nginx itself can run in Docker at home too. A minimal sketch of that kind of config is below.
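
    The hostnames, certificate paths, and backend ports here are made-up examples (Lemmy’s default HTTP port is 8536, but check your own setup):

    ```nginx
    # Terminate TLS on 443 and route to backends by hostname.
    server {
        listen 443 ssl;
        server_name lemmy.example.com;        # example hostname

        ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

        location / {
            proxy_pass http://127.0.0.1:8536;  # Lemmy backend
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }

    server {
        listen 443 ssl;
        server_name voyager.example.com;      # example hostname

        ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

        location / {
            proxy_pass http://127.0.0.1:5314;  # Voyager backend (port is an example)
        }
    }
    ```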

    Personally I have my own domain and use ddclient to update Cloudflare, where I host DNS, so my external hostnames stay current if my home IP changes. There are also dynamic DNS services that do the same.
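
    Roughly, the ddclient side looks like this (the zone and token are placeholders, and exact option names vary by ddclient version, so check its docs):

    ```
    # /etc/ddclient.conf: keep a Cloudflare record pointed at the current home IP
    use=web                        # discover the public IP via a web service
    protocol=cloudflare
    zone=example.com               # placeholder zone
    login=token                    # authenticate with an API token
    password=YOUR_CLOUDFLARE_API_TOKEN
    home.example.com               # hostname(s) to keep updated
    ```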

    Getting TLS certificates is also easier with your own domain, since validation can be done through DNS record changes.
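
    For example, certbot’s Cloudflare plugin (certbot-dns-cloudflare) can validate over DNS and even issue a wildcard; the domain is a placeholder, and the credentials file holds an API token with DNS edit rights:

    ```sh
    certbot certonly \
      --dns-cloudflare \
      --dns-cloudflare-credentials ~/.secrets/cloudflare.ini \
      -d example.com -d '*.example.com'
    ```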

    Basically, there are many ways to achieve what you want.


  • Last time my dishwasher died I just had to take it apart and clean the pump underneath. Basically I took the connections underneath apart and scrubbed them out. One tiny bit of plastic was gumming it up, causing some checks to fail and stopping it from running.

    They’re surprisingly simple machines.

    For Samsung I always buy the extended warranty. For our washer and dryer, Asurion must have spent a fortune keeping them running, far more than I ever paid to buy them, and they’re only 8 years old. It’s sad: Samsung appliances work nicely but fail frequently.

    For your next one, buy Bosch. They’re all good; get a base model and it’ll clean well and reliably.


  • To echo the other comment, you’ll probably run far more than expected.

    But you also need to think through and consider usage for each service. Is this BitWarden for you, or you and a thousand friends?

    Most services you run will scale up their hardware usage depending on how much load they’re being subjected to.

    E.g. I run Crafty Controller for my kids to manage their Minecraft servers. There’s a huge load difference for each additional server.

    Wireguard with no traffic uses barely any resources. Pump high volumes of traffic through it across a lot of simultaneous connections and that’ll change.
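
    If your services run in Docker, one way to check real usage rather than guessing is to watch the containers while you exercise them:

    ```sh
    # Live per-container CPU/memory figures
    docker stats

    # One-shot snapshot instead of a live view
    docker stats --no-stream
    ```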



  • Given this is !privacy, and their front page advertises both “works with all your messaging apps” and “end-to-end encryption”, it seems important to flag that those currently aren’t mutually compatible.

    It’s not their fault the apps don’t expose e2e APIs, and it’s a tough problem, but the secrecy and privacy guarantee is just “trust us to stick to our policy”. And they’re a start-up: tooling isn’t perfect (or doesn’t even exist), mistakes happen, etc.

    Their self-hosting option looks interesting, but then it says to use your own clients too, which takes the fun out of it.


  • “For example, if you send a message from Beeper to a friend on WhatsApp, the message is encrypted on your Beeper client, sent to the Beeper web service, which decrypts and re-encrypts the message with WhatsApp’s proprietary encryption protocol.”

    So, not really end to end for most common use-cases.


  • And the billion dollars is to pay the salaries, run the computers, etc

    Running a big credit card company is expensive. Things like PCI compliance make it far more complex than a regular company, with extra staff dedicated to compliance functions.

    I’d also assume Apple requires competent support staff, minimal automation, etc., which raises the cost again but delivers a better product. Support for Apple Card through GS is really good.

    Then making and physically mailing anything adds costs.

    The cost also covers the whole retail unit, which includes Marcus banking and other products, not just Apple Card.









  • Generally you want to model out what the risks are and how acceptable each risk is, compared to the effort to mitigate it.

    One risk is physical access, where someone can break in and get hold of your secret material; full disk encryption mitigates that. If you still want to reboot remotely, you can install dropbear in the initramfs with SSH keys, so you can unlock the encrypted disk over the network.
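
    On Debian/Ubuntu that’s roughly the following (treat it as a sketch: the authorized_keys path varies between releases, and the key is a placeholder):

    ```sh
    # Dropbear in the initramfs lets you SSH in at boot to unlock the disk
    sudo apt install dropbear-initramfs

    # Authorize your key for the pre-boot environment
    # (older releases use /etc/dropbear-initramfs/authorized_keys)
    echo 'ssh-ed25519 AAAA... you@laptop' | sudo tee -a /etc/dropbear/initramfs/authorized_keys

    sudo update-initramfs -u
    ```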

    If you want to control and secure remote access, you care about authentication and authorization, as well as the data in transit. To protect the data in transit, make sure it’s encrypted, e.g. by using SSH (or a similarly encrypted protocol).

    For authentication and authorization you want to minimize the risk. You can control who can log in and from where with allow-lists of IP ranges, or control their keys. For example, for external SSH access I only use keys that are hardware-backed, either through a YubiKey or a secure-enclave generated key. Those are non-extractable, so a connection can only come from the machine holding the key; no one can get in without stealing the device, and the device itself is full-disk encrypted, of course. You can also control how the keys are used: biometric auth, password auth, or both, if you want.
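
    With OpenSSH 8.2+ and a FIDO2 key such as a YubiKey, generating that kind of hardware-backed key looks like this (the comment string is a placeholder):

    ```sh
    # Private key material stays on the YubiKey; the file on disk is just a handle.
    # -O verify-required forces a PIN/touch check on every use.
    ssh-keygen -t ed25519-sk -O resident -O verify-required -C 'you@laptop'
    ```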

    Then, within your network, for automated jobs you can set up SSH certs between accounts that have no access beyond running the specific task. If you need further actions, run them locally, triggered by the transfer itself. Sure, the cert’s key is in memory and on disk, but it’s encrypted on the disk; to read it from memory you’d need a very skilled attacker.
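
    A sketch of that kind of lockdown, with made-up paths and names: sign a short-lived cert with your own SSH CA, then pin the account server-side to a single command.

    ```sh
    # Sign the job's public key: valid for 1 day, principal "backup"
    ssh-keygen -s ~/ca/ssh_ca -I backup-job -n backup -V +1d /etc/backup/id_ed25519.pub
    ```

    And on the server, in sshd_config:

    ```
    # Trust certs signed by your CA, and restrict the backup account to one task
    TrustedUserCAKeys /etc/ssh/user_ca.pub

    Match User backup
        ForceCommand /usr/local/bin/receive-backup
        PermitTTY no
        AllowTcpForwarding no
    ```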

    At that point you’re worried about someone physically breaking in and reading memory on a running system, which is a deeply unlikely scenario. The other risk is someone breaking SSH cert-only auth with strong keys, and that only applies if SSH is publicly open at all.

    Turn off external access to everything and you’re back to local network attacks, which you can mitigate with password-controlled WiFi, MAC filtering if you want (although MACs can be spoofed), and beyond that you can limit communication within your local network. You could put the jobs you care about on their own VLAN that isn’t reachable from WiFi.

    Basically it comes down to making the cert/password low-risk if you lose it, mitigating everything up to that point, and then not worrying.



