• 0 Posts
  • 48 Comments
Joined 11 months ago
Cake day: July 30th, 2023



  • As a general principle, full-extension rails are probably best sourced from the original vendor rather than from universal rails.

    If you have a wall-mounted rack, physics is working against you unless your walls are something sturdier than drywall. It’s already a pretty intense, heavy cantilever, and putting a server in there that can extend past the front edge is only going to make that worse.

    If you want to use full extension rails, you should get a rack that can sit squarely on the floor on either feet or appropriately rated casters. You should also make sure your heaviest items are on the bottom ESPECIALLY if you have full extension rails - it will make the rack less likely to overbalance itself and tip over when the server is extended.


  • Adding one aspect on top of what others have mentioned here.

    I personally have both ports/URLs opened and VPN-only services.

    IMHO, it also depends on the software’s tolerance for exposure, and on what could get compromised if an attacker were to find the password.

    Start by thinking of the VPN itself (Tailscale, WireGuard, OpenVPN, IPsec/IKEv2, ZeroTier) as a service just like the service you’re considering exposing.

    Almost all (working on the “all” part lol) of my external services require TOTP/2FA, and they’re the ones that need to be directly exposed - i.e. VPN gateway, jump host, file server (Nextcloud), git server, PBX, the music reflector I used for D&D, and game servers shared with friends. Those are ones I either absolutely need to be external (VPN, jump host) or keep external so that I don’t have to deal with the complicated networking of per-user firewalls and my friends don’t need to VPN to me to get something done.

    The second part for me is tolerance to be external and what risk it is if it got popped. I have a LOT of things I just don’t want on the web - my VM control panels (proxmox, vSphere, XCP), my UPS/PDU, my NAS control panel, my monitoring server, my SMB/RDP sessions, etc. That kind of stuff is super high risk - there’s a lot of damage that someone could do with that, a LOT of attack surface area, and, especially in the case of embedded firmware like the UPSs and PDUs, potentially software that the vendor hasn’t updated in years with who-knows-what bugs lurking in it.

    So there’s not really a one size fits all kind of situation. You have to address the needs of each service you host on a case by case basis. Some potential questions to ask yourself (but obviously a non-exhaustive list):

    • does this service support native encryption?
      • does the encryption support reasonably modern algorithms?
      • can I disable insecure/broken encryption types?
      • if it does not natively support encryption, can I place it behind a reverse proxy (such as nginx or haproxy) to mitigate this?
    • does this service support strong AAA (Authentication, Authorization, Auditing)?
      • how does it log attempts, successful and failed?
      • does it support strong credentials, such as appropriately complex passwords, client certificates, SSH keys, etc.?
      • if I use an external authenticator (such as AD/LDAP), does it support my existing authenticator?
      • does it support 2FA?
    • does the service appear to be resilient to internet traffic?
      • does the vendor/provider indicate that it is safe to expose?
      • are there well-known unpatched vulnerabilities, or other forum/social media indicators that hosting it even with a sane configuration is a problem?
      • how frequently does the vendor release patches? (both too few and too many can be a problem)
      • how fast does the vendor/provider respond to past security threats/incidents (if information is available)?
    • is this service required to be exposed?
      • what do I gain/lose by not exposing it?
      • what type of data/network access risk would an attacker gain if they compromised this service?
      • can I mitigate risk by placing a well-understood proxy between the internet and the service? (for example, a well-configured nginx or haproxy could absorb some problems like a TCP SYN DoS, or act as an intermediate proxy that enforces independent user authentication if the service doesn’t have all the authentication bells and whistles - see the sketch after this list)
      • what VLAN/network is the service running on? (if you have several VLANs you can place services on, each with a different access class)
      • do I have an appropriate alternative means of accessing this service remotely, rather than exposing it? (is VPN the right option? some services may have alternative connection methods)
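
    To make the reverse-proxy idea concrete, here’s a minimal sketch (not a drop-in config) of nginx terminating TLS and enforcing its own authentication in front of an internal service that has neither; the hostname, certificate paths, and upstream address are all placeholders:

      # Hypothetical nginx front-end: terminates TLS and adds an independent auth
      # layer in front of an internal app that lacks both. All names, paths, and
      # addresses below are placeholders.
      server {
          listen 443 ssl;
          server_name service.example.com;

          ssl_certificate     /etc/nginx/tls/service.crt;
          ssl_certificate_key /etc/nginx/tls/service.key;
          ssl_protocols       TLSv1.2 TLSv1.3;

          # Authentication enforced before traffic ever reaches the app
          auth_basic           "Restricted";
          auth_basic_user_file /etc/nginx/htpasswd;

          location / {
              proxy_pass http://10.0.20.15:8080;
              proxy_set_header Host              $host;
              proxy_set_header X-Forwarded-For   $proxy_add_x_forwarded_for;
              proxy_set_header X-Forwarded-Proto https;
          }
      }

    It won’t fix an insecure application, but it does control what raw internet traffic can actually reach.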

    So, as you can see, it’s not just cut and dried. You have to think about each service you host and what it does.

    Larger well known products - such as Guacamole, Nextcloud, Owncloud, strongswan, OpenVPN, Wireguard - are known to behave well under these circumstances. That’s going to factor in to this too. Many times the right answer will be to expose a port - the most important thing is to make an active decision to do so.




  • I’m probably the overkill case because I have AD+vC and a ton of VMs.

    RPO 24H for main desktop and critical VMs like vCenter, domain controllers, DHCP, DNS, Unifi controller, etc.

    Twice a week for laptops and remote desktop target VMs

    Once a week for everything else.

    Backups are kept on roughly the following schedule (may be plus or minus a bit - there’s a rough sketch in code after the list):

    • Daily backups for a week
    • Weekly backups for a month
    • Monthly backups for a year
    • Yearly backups for 2-3y
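
    For illustration only, that retention works out to something like the sketch below (the cutoffs are my assumptions, not Synology’s exact pruning logic):

      from datetime import date, timedelta

      def keep(backup: date, today: date) -> bool:
          """Rough sketch of the retention tiers above; cutoff days are assumptions."""
          age = today - backup
          if age <= timedelta(days=7):
              return True                                   # daily backups for a week
          if age <= timedelta(days=31):
              return backup.isoweekday() == 7               # one per week (Sundays) for a month
          if age <= timedelta(days=365):
              return backup.day == 1                        # one per month for a year
          if age <= timedelta(days=3 * 365):
              return backup.month == 1 and backup.day == 1  # one per year for 2-3 years
          return False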

    The software I have (Synology Active Backup) captures data using incremental backups where possible, but if it loses its incremental marker (system restore in windows, change-block tracking in VMware, rsync for file servers), it will generate a full backup and deduplicate (iirc).

    From the many times this has saved me from various bad things happening for various reasons, I want to say the RTO is about 2-6h for a VM to restore and about 18h for a desktop, measured from the point at which I decide to go back to a backup.

    Right now my main limitation is my poor quad core Synology is running a little hot on the CPU front, so some of those have farther apart RPOs than I’d like.



  • Going to summarize a lot of comments here with one - VPNs are very powerful tools that can do lots of things. Traffic can be configured to go in several directions. We really have to know more about your use case to advise you as to what config you might need.

    Going to just write a ton of words on paper here - OP, let me know if any of this sounds like what you’re trying to do, and I can try to give a better explanation (or if something was confusing, let me know).

    “VPN that uses the client’s IP when sending data out of the VPN server”

    That’s the specific sentence I’m getting caught on myself. It could mean several things, some of which have been mentioned, some haven’t.

    • Site-to-site VPN: Two (generally) fixed devices operate a VPN connection between them and use some form of non-NAT routing, so that every child device behind each site sees its “real” counterpart without getting NATed. However, NAT is typically still configured for IPv4 facing the internet, so each device shows an internet “exit IP” matching the site it’s on. Typically, the device with the most powerful / most stable / most central / least restricted connection would be the receiver, while the other nodes would be initiators pointed at that receiver. In larger maps, you could build multiple hub/spoke systems as needed.

    • Sub-type of site-to-site possible: one site tunnels all of its data over to the second site, and the second site is the one that provides NAT. This is similar in nature to how GL.iNet routers operate their VPN switch, but IMHO more powerful if you have greater control over the server compared to subscribing to a public VPN service. Notably for your example, the internet NAT exit device can be either the initiator or the receiver.

    • Normal VPN but without NAT: this is another possible expansion of what you’ve written, with one word adjusted - it operates the VPN but preserves the client IP as traffic enters the network. This is how most corporate remote access VPNs operate, since it would be overloaded and pointless to have every remote worker NATed behind a small pool of IP addresses when you don’t even need a NAT engine for intranet traffic. (There’s a rough sketch of the routed, non-NAT idea just below.)
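
    To sketch the routed, non-NAT idea from that last bullet (WireGuard syntax used purely because it’s compact - every address and key below is a placeholder, not my actual setup):

      # Hypothetical "site A" end of a routed site-to-site tunnel, /etc/wireguard/wg-site-a.conf
      [Interface]
      Address    = 10.255.0.1/30          # small transfer network between the two sites
      ListenPort = 51820
      PrivateKey = <site-a-private-key>

      [Peer]
      PublicKey  = <site-b-public-key>
      # Route the whole remote LAN through the tunnel instead of NATing it,
      # so devices on each side see each other's real addresses.
      AllowedIPs = 10.255.0.2/32, 192.168.20.0/24

    Whether that ends up behaving like plain site-to-site, “exit through the other site,” or a no-NAT remote access VPN then comes down to the NAT/firewall rules you put on each end.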

    My remote access VPN for my home lab is of the latter type, and I have a few site-to-site connections floating around with various protocols.

    For mine, I have two VPN servers: one internal server that works tightly with my home firewall, and one remote server running inside a VPS. Both the firewall and VPS apply NAT rules to egress traffic, but internal bound traffic is not NATed and simply passed along the site to site connections to wherever it needs to go. My home-side remote access VPN is simply a “dumb” VPN server that has the VPN protocol port forwarded back to it and passes almost raw traffic to the firewall for processing.

    For routing, since each VPN requires its own subnet, I use FRR with a mixture of OSPF and iBGP (depending on how old the link is).
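
    For a rough idea of what that looks like in FRR (vtysh syntax; the router IDs, subnets, and AS number are made up, not my actual config):

      ! OSPF for the newer links, iBGP for an older one - placeholder addressing throughout.
      router ospf
       ospf router-id 10.0.0.1
       network 10.0.0.0/24 area 0
       network 10.255.0.0/30 area 0
      !
      router bgp 64512
       bgp router-id 10.0.0.1
       neighbor 10.254.0.2 remote-as 64512
       address-family ipv4 unicast
        network 10.0.0.0/24
        neighbor 10.254.0.2 next-hop-self
       exit-address-family
      !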

    For VPN protocols, I currently am using strongSwan for IPsec, but it’s really easy to slap OpenVPN onto that routing stack I already set up and have the routes propagate inward.



  • Any VPN that terminates on the firewall (be it site-to-site or remote access / “road warrior”) may be affected, but not all will - some VPN tech uses ciphers that are cheap to compute even without hardware acceleration. Notably affected VPNs are OpenVPN and IPsec / strongSwan.

    If the VPN doesn’t terminate on the firewall, you’re in the clear. So even if your work provides an OpenVPN client that benefits from AES-NI, the tunnel runs between your work laptop and the work server, so the firewall is not part of the encryption pipeline.

    Another affected technology may be some (reverse) proxies and web servers - software running on the firewall like haproxy, nginx, or squid. See https://serverfault.com/a/729735 for one example. In this variation, you’d be running one of these bits of software on the firewall itself and either exposing an internal service (such as Nextcloud) to the internet, or, in the case of squid, doing some HTTP/S filtering for a tightly locked-down network. However, if you just port forwarded 443/TCP to your Nextcloud server (as an example), your Nextcloud server would be the one doing the AES encrypt/decrypt. As with VPNs, what matters is where the AES encrypt/decrypt actually happens.

    Personally, I’d recommend you get AES-NI if you can. It makes running a personal VPN easier down the road if you think you might want to go that route. But if you know for sure you won’t need any of the tech I mentioned (including https web proxy on the firewall), you won’t miss it if it’s not there.

    Edit: I don’t know what processors you’re looking at that are missing AES-NI, but I think you have to go to some really, really old x86 tech to be missing it. Those (especially AMD FX / Opteron parts from the Bulldozer/Piledriver era) may have other performance concerns. Specifically for those old AMD processors (not Ryzen/Epyc), just hard pass if you need something that runs reasonably fast. They’re just too inefficient.
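
    If you want to check a box you already have (on Linux), the CPU flag is easy to look for - a trivial sketch below; comparing openssl speed aes-256-cbc against openssl speed -evp aes-256-cbc will also show you the practical difference on that hardware:

      # Trivial Linux-only check for the AES-NI CPU flag (a convenience sketch, not a benchmark).
      with open("/proc/cpuinfo") as f:
          for line in f:
              if line.startswith("flags"):
                  flags = set(line.split(":", 1)[1].split())
                  print("AES-NI present" if "aes" in flags else "AES-NI not found")
                  break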


  • Counterpoint: if your system is configured such that the mere act of trying to send an email results in serious delays and regular bounces, you’re doing email wrong. Even push notifications may require third-party routing through Google, Apple, or similar to reach the core OS in some cases.

    Yes, I recognize that hosting an SMTP server is difficult these days and can’t always be done at home due to IP restrictions. But that doesn’t mean you have to have an email server at home. I have third-party email hosting on my domain, and mail I dispatch over SMTP arrives at the expected, non-delayed times, even to Google and Microsoft accounts.

    I honestly wish more software would simply speak to an SMTP server of your choice rather than defaulting to hitting the local CLI mail command or attempting a direct SMTP connection.
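
    For what it’s worth, “just speak SMTP” is a tiny amount of code. A minimal Python sketch against a hypothetical third-party relay (host, port, addresses, and credentials are all placeholders):

      # Minimal sketch: hand the message to an authenticated third-party SMTP relay
      # instead of shelling out to a local mail binary or attempting direct delivery.
      import smtplib
      from email.message import EmailMessage

      msg = EmailMessage()
      msg["From"] = "alerts@example.com"
      msg["To"] = "admin@example.com"
      msg["Subject"] = "Backup job finished"
      msg.set_content("Nightly backup completed without errors.")

      with smtplib.SMTP("smtp.example.com", 587) as smtp:
          smtp.starttls()                       # upgrade the connection before authenticating
          smtp.login("alerts@example.com", "app-password-placeholder")
          smtp.send_message(msg)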


  • Actually, I legally can’t make money off of it for reasons that would dox me.

    I already pay for both VMware and Microsoft licensing, among several others. If I can get my SSO while saving a little money by using a different product, I will. I don’t mind paying for software I use when it makes sense; I only disagree with companies up-charging for features like SSO that should be available to all customers.



  • You’ll find a lot of pessimistic people here because there are few unicorn cases where a commercial company buying an open source project didn’t go badly for the open source side. Most of the time after a sell-out the project ends up under highly restrictive licensing, with features behind paywalls, and with many other problems making it a shadow of its former self.

    The most notable recent examples I can think of are IBM buying Red Hat (which had bought CentOS), which ended with forks like AlmaLinux and Rocky Linux, and Oracle buying MySQL (via Sun), which ended up forked as MariaDB. Businesses love to push their commercial offerings on open source products, and it’s not always in the form of plain old support agreements (like the people behind AlmaLinux). Often (this is especially common in databases) they’ll tax features like SSO, backups, or literally just the privilege of having stable software. Projects like CentOS and VyOS don’t have stable OSS versions, and soooo many databases will put LDAP/Kerberos behind the commercial product, charging monthly or yearly operating costs.

    Even GitHub (which to be clear was closed source to begin with, but is a haven for F/OSS so I’ll give it an honorable mention here) started showing Microsoft-isms after M$ bought the platform.



  • Just because you’ve used it professionally, doesn’t mean it’s OK.

    Run the installation file to install the RDPwrap dynamic link library (DLL). This software provides the necessary functionality to enable Remote Desktop from a Windows 10 Home system.

      begin
        if not Reg.OpenKey('\SYSTEM\CurrentControlSet\Control\Terminal Server\Licensing Core', True) then
        begin
          Code := GetLastError;
          Writeln('[-] OpenKey error (code ', Code, ').');
          Halt(Code);
        end;
        try
          Reg.WriteBool('EnableConcurrentSessions', True);
        except
          Writeln('[-] WriteBool error.');
          Halt(ERROR_ACCESS_DENIED);
        end;
        Reg.CloseKey;
    

    So essentially the RDPwrap software subverts Windows 10 Home security to enable Remote Desktop Connections.

    Even without disassembling their shim DLL, just their readme language and installer code doesn’t give me warm fuzzies about this software’s ability to survive legal scrutiny or a Microsoft audit.

    Just like with backups, in my professional IT admin opinion: if it’s expensive enough to need remote access, it’s expensive enough to do remote access the right way. There are plenty of free remote options on Windows that don’t require monkey-patching core services and using a Home license professionally. Plus, if you have more than a few Windows installs, you probably want Group Policy anyway, so you’re up to a Pro license key for that, plus the Windows Server license key(s) for the AD controller.

    Yeah, Windows is expensive when used professionally. If you need Windows that badly, deal with it - or talk to your software vendors about getting Linux or Mac versions.


  • If you’re OK leaving a monitor plugged in (but it can be off), my go-to is Parsec. Bonus points: it works without needing a VPN (it uses UDP NAT hole punching like Chrome Remote Desktop). If you’ll be far, far away from home, Chrome Remote Desktop tends to be slightly more reliable over high latency than Parsec for me - but that could just be because I tuned mine for super low latency when nearby.

    Good news is, you can run both at the same time and see how they treat ya! (And both are free for base use, but parsec has a handful of premium features you can pay for if you like it) I have Parsec, CRD, RDP, and SSH all set up in various forms to get back “home” when I’m not.


  • (if this comment reads like I feel slighted it’s because I do)

    Their networking ecosystem is very focused on a specific class of prosumer, and once you’re in, it can be very difficult to upgrade out of that bubble to gear with more growth capacity, from both a tech and a learning perspective.

    I have an advanced network with dynamic routing (iBGP and OSPF), as well as several VPN protocols for both site-to-site and access VPN. I also have redundant layer 3 gateways everywhere in the main site. Ubiquiti has had the tech to do redundant layer 3 for YEARS, but they refuse to ship it; instead they stop updating useful, more capable product lines and focus on gimmick products with flashy marketing campaigns. Even on one of their more feature-ful routers (the ER-4), I have to use OpenVPN gateway servers because Ubiquiti doesn’t support the plugins I can get on *sense for full-mesh VPNs.

    I can really only use them at layer 2 because once I hit my network core I need redundancy protocols at L2 (stacking or vPC/MLAG) to maintain a system that can keep vSAN and Ceph happy.

    I’m really glad I went the *sense route instead of taking a chance on a USG-3 and depending on the custom json file to load OSPF, because that’s a feature they removed from newer gateways iirc.