"Buy Me A Coffee"

Cake day: June 13th, 2023

  • Yes it would. In my case, though, I know all of the users who should have remote access, and I’m more concerned about unauthorized access than ease of use.

    If I wanted to host a website for the general public to use though, I’d buy a VPS and host it there. Then use SSH with private key authentication for remote management. This way, again, if someone hacks that server they can’t get access to my home lan.


  • Their setup sounds similar to mine. But no, only a single service is exposed to the internet: wireguard.

    The idea is that you can have any number of servers running on your lan, etc… but in order to access them remotely you first need to VPN into your home network. This way the only thing you need to worry about, security-wise, is wireguard. If there’s a security hole / vulnerability in one of the services you’re running on your network, or in nginx, etc… attackers would still need to get past wireguard first before they could access your network.

    But here is exactly what I’ve done:

    1. Bought a domain so that I don’t have to remember my IP address.
    2. Set up DDNS so that the A record for my domain always points to my home IP.
    3. Run a wireguard server on my lan.
    4. Port forwarded the wireguard port to the wireguard server.
    5. Created client configs for all remote devices that should have access to my lan.

    Now I can just turn on my phone’s VPN whenever I need to access any one of the services that would normally only be accessible from home.

    P.S. there are additional steps I took to ensure that the masquerade of the VPN was disabled, that all VPN clients use my pihole, and that I can still get decent internet speeds while on the VPN. But that’s slightly beyond the original ask here.
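As a sketch of steps 3–5, a client config might look like the following. This is hypothetical: every key, address, subnet, and the domain are placeholders, not values from the post.

```ini
# Hypothetical WireGuard client config (wg0.conf); all values are placeholders.
[Interface]
PrivateKey = <client-private-key>
Address = 10.8.0.2/32
DNS = 10.8.0.1                     # e.g. point clients at a pihole, per the P.S.

[Peer]
PublicKey = <server-public-key>
Endpoint = vpn.example.com:51820   # the DDNS-updated domain from step 2
# Route only the home subnets through the tunnel (split tunnel), which is
# also how you keep decent internet speeds while connected.
AllowedIPs = 192.168.1.0/24, 10.8.0.0/24
```

Setting AllowedIPs to 0.0.0.0/0 instead would send all traffic through home, which is the trade-off the P.S. is alluding to.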




  • I’m also running Ubuntu as my main machine at home. (I have a Mac and do Android development for my day job).

    But at home, I do a lot of website and backend dev.

    1. Code in VSCode
    2. Build using docker buildx
    3. Test using a local container on my machine
    4. Upload the tested code to a feature branch on git (self-hosted server)
    5. Download that same feature branch on a RaspberryPi for QA testing.
    6. Merge that same code to develop.
       6a. That kicks off a CI build that deploys a set of docker images to DockerHub.
    7. Merge that to main/master.
    8. That kicks off another CI build.
    9. SSH into my prod machine and run docker compose up -d
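For the last step, docker compose pulls whatever the CI build pushed. A minimal compose file for that might look like this; the image name, tag, and port are made up, since the post doesn’t name the real services:

```yaml
# Hypothetical docker-compose.yml for step 9; image and port are placeholders.
services:
  web:
    image: example/myapp:latest   # whatever the CI run pushed to DockerHub
    restart: unless-stopped       # come back up after reboots/crashes
    ports:
      - "8080:8080"
```

Then `docker compose pull && docker compose up -d` on the prod machine picks up the new images.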

  • That looks like 8.8.8.8 actually responded. The ::1 is ipv6’s localhost, which seems odd. As for the wrong ipv4, I’m not sure.

    I normally see something like requested 8.8.8.8 but 1.2.3.4 responded if the router was forcing traffic to their DNS servers.

    You can also specify the DNS server to use when using nslookup, like: nslookup www.google.com 1.1.1.1. And you can see if you get any different answers from there. But what you posted doesn’t seem out of the ordinary, other than the ::1.

    Edit: just for shits and giggles, also try nslookup xx.xx.xx.xx, where xx.xx… is the wrong IP from the other side of the world, and see what domain it returns.


  • Another thing that can be happening is that the router or firewall is redirecting all port 53 traffic to their internal DNS servers. (I do the same thing at home to prevent certain devices from ignoring my router’s DNS settings cough Android cough)

    One way you can check for this is to run “nslookup some.domain” from a terminal and see where the response comes from.
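The redirect itself (the “I do the same thing at home”) is typically a NAT rule on the router. A hypothetical iptables version, with the LAN interface name and resolver address made up for illustration:

```shell
# Hypothetical: force all LAN DNS (port 53) traffic to a local resolver at
# 192.168.1.2, regardless of what DNS server the client has configured.
iptables -t nat -A PREROUTING -i br-lan -p udp --dport 53 -j DNAT --to-destination 192.168.1.2:53
iptables -t nat -A PREROUTING -i br-lan -p tcp --dport 53 -j DNAT --to-destination 192.168.1.2:53
```

From a client behind a rule like this, nslookup will report answers coming from an address you didn’t ask, which is exactly the tell described above.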




  • Btw I appreciate the fediverse and decentralization as much as the next guy, heck I’m even writing software for the fediverse. But I feel like there’s a handful of people out there that want to try and apply the fediverse concept to everything. Similar to what happened with Blockchain. Everyone and everything had to be implemented via Blockchain even if it didn’t make sense in the end.

    IMO though, GitHub is just one “instance” in an already decentralized system. Sure it may be the largest but it’s already incredibly simple for me to move and host my code anywhere else. GitHub’s instance just happens to provide the best set of tools and features available to me.

    But back to my original concerns. Let’s assume you have an ActivityPub based git hosting system. For the sake of argument let’s assume that there’s two instances in this federation today. Let’s just call them Hub and Lab…

    Say I create an account on Hub and upload my repository there. I then clone it and start working… It gets federated to Lab… But the admin on Lab just decides to push a commit to it directly because reasons… Hub can now do a few things:

    1. They could just de-federate but who knows what will happen to that repo now.
    2. Hub could reject the commit, but now we’re in a similar boat, effectively the repo has been forked and you can’t really reconcile the histories between the two. Anyone on Lab can’t use that repo anymore.
    3. Accept the change. But now I’m stuck with a repo with unauthorized edits.

    Similarly if Hub was to go down for whatever reason. Let’s assume we have a system in place that effectively prevents the above scenario from happening… If I didn’t create an account on Lab prior to Hub going down I now no longer have the authorization to make changes to that repository. I’m now forced to fork my own repository and continue my work from the fork. But all of my users may still be looking for updates to the original repository. Telling everyone about the new location becomes a headache.

    There’s also the issue of how you handle private repositories. This is something that the fediverse can’t solve. So all repos in the fediverse would HAVE to be public.

    And yes, if GitHub went down today, I’d have similar issues, but that’s why you have backups. And git already has a solution for that outside the fediverse. Long story short, the solutions that the fediverse provides aren’t problems that exist for git and it raises additional problems that now have to be solved. Trying to apply the fediverse to git is akin to “a solution in search of a problem”, IMHO.
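On the “git already has a solution” point: a mirror clone copies every ref and can be re-pushed anywhere, which is the backup story outside the fediverse. A minimal sketch, using local directories as stand-ins for a real hosted repo:

```shell
#!/bin/sh
# Demo of `git clone --mirror` as a backup. Local paths stand in for a
# real hosted repo and its backup location.
set -e
rm -rf /tmp/mirror-demo && mkdir /tmp/mirror-demo && cd /tmp/mirror-demo

# A stand-in "origin" repo with one commit.
git init -q origin && cd origin
git config user.email you@example.com
git config user.name You
echo code > app.txt && git add app.txt && git commit -q -m "work"
cd ..

# A mirror clone is bare and carries all branches, tags, and remote refs.
git clone -q --mirror origin origin-backup.git
```

Restoring is just cloning from the backup, or `git push --mirror` to a new host if GitHub ever did disappear.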


  • > I don’t get what benefit hosting your own git brings to be honest

    Just another level of backup. Personally I tend to have:

    1. A copy of my repo on my dev machine
    2. A copy on a self hosted git server. Currently I’m using gitbucket though.
    3. A copy on GitHub.

    This way I should always have two copies of my code that are accessible at all times, so there’s a very slim chance that I’ll lose my code, even temporarily.
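That layered setup is just plain git remotes on one working copy. A sketch of the wiring, with local bare repos standing in for the self-hosted server and GitHub (all paths hypothetical):

```shell
#!/bin/sh
# One working copy pushed to two independent remotes, mimicking the
# self-hosted server + GitHub setup. Local bare repos stand in for both.
set -e
rm -rf /tmp/remotes-demo && mkdir /tmp/remotes-demo && cd /tmp/remotes-demo
git init -q --bare selfhosted.git   # stand-in for the gitbucket server
git init -q --bare github.git       # stand-in for GitHub

git init -q work && cd work
git config user.email you@example.com
git config user.name You
echo hello > README.md && git add README.md && git commit -q -m "initial commit"

git remote add selfhosted ../selfhosted.git
git remote add github ../github.git
git push -q selfhosted HEAD:main    # backup copy 1
git push -q github HEAD:main        # backup copy 2
```

In day-to-day use you’d push to both remotes (or script it) so the copies never drift far apart.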


  • marsara9@lemmy.world to No Stupid Questions@lemmy.world: Why GitHub? (1 year ago)

    IMHO federation doesn’t bring any real benefits to git and introduces a lot of risks.

    The git protocol, if you will, already allows developers to backup and move their repositories as needed. And the primary concern with source control is having a stable and secure place to host it. GitHub already provides that, free of charge.

    Introducing federation, how do you control who can and cannot make changes to your codebase? How do you ensure you maintain access if a server goes down?

    So while it’s nice that you can self host and federate git with GitLab, what value does that provide over the status quo? And how do those benefits outweigh the risks outlined above?








  • Using Threads / ActivityPub does make it easier though.

    If they’re using a traditional crawler you could in theory block them at the user agent level (i.e. Cloudflare). If they’re using the public APIs, they’d have to write an interface for each distinct piece of software (Lemmy, Kbin, Mastodon, etc…) (How my search engine works)

    But with ActivityPub we’re essentially just sending them the data in near real-time, all using the same rough structure. Individual instances may block them, but it wouldn’t be hard to set up proxies/relays that the community as a whole just isn’t aware of. (i.e. a new “Lemmy” instance comes online that just looks like a single-user server, but it’s actually just a relay to Meta). The only real gotcha with ActivityPub is that there’s no real way to get historical data (nothing from the past).

    Now I still have mixed feelings about Meta joining the fediverse, but if we’re just talking about blocking them from getting the content we have here, then things get difficult.