Sorry, noob here. I have been using Linux for a decade at least, but some basic stuff still stumps me. Today, it’s file sharing: The idea is that the server is good at CPU and the NAS is good at storage. My NAS does run Docker, but its services are slow; my server runs a bunch of Docker containers just fine but has limited disk space (SSD).

Want:

  • Share a directory on my NAS, so that my homelab server can use it.
  • Security is not important; the share does not need to be locked down.

Have:

  • Server+NAS are on their own little 1Gb Cisco switch, so network latency should be minimal.
  • Linux NAS and Linux server have separate users/UID/GID.

Whatever I try, it always ends up with errors about ‘access denied’ or read-only or something. I conclude that I am not smart enough to figure it out.

Help?

  • manwichmakesameal@lemmy.world · 1 year ago

    I’m 100% sure that your problem is permissions. You need to make sure the permissions match. Personally, I created a group specifically for my NFS shares; when I export them, they are mapped to that group. You don’t have to do this; you can use your normal users. You just have to make sure the UID/GID numbers match. They can be named differently as long as the numbers match up.
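
    For reference, a quick way to compare the numbers is to run the same two commands on both machines; a minimal sketch:

    ```shell
    # Run this on both the NAS and the server, then compare the numbers.
    # 'id' prints the numeric UID and GID(s) for the current user;
    # 'getent group' resolves a group name to its line in the group database.
    id
    getent group "$(id -gn)"
    ```

    If the numbers differ between the machines, you can either renumber one side (usermod -u / groupmod -g) or create a dedicated share group with the same GID everywhere.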

    • marche_ck@lemmy.world · 1 year ago

      True. Another possibility is the permission settings on the mount point of the NFS volume on the server.

    • PlutoniumAcid@lemmy.world (OP) · 1 year ago

      make sure the UID/GID numbers match

      But how? Can I change the numbers?

      I totally get that Linux is by design a multi-user system, but it is frustrating to deal with when I am the only person who ever works with these machines. I know that my docker user+group is 1038/66544, but most docker commands require sudo, so I am not even sure those values are the right ones. It is so non-transparent which IDs are in effect, for which commands --and on which machine!-- when I am just me.
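
      For what it’s worth, finding out which IDs are in effect is mostly a matter of asking; a small sketch (/etc/passwd is just an example path):

      ```shell
      # Which user and groups am I right now, by number and by name?
      id

      # Which numeric UID:GID owns a given file?
      stat -c '%u:%g (%U:%G)' /etc/passwd

      # Note: under sudo, commands run as root (UID 0), not as your login user,
      # so files created by a sudo'd docker command end up owned by root.
      ```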

      • manwichmakesameal@lemmy.world · 1 year ago

        Basically, when you make a new group or user, make sure that the NUMBER it’s using matches whatever you’re using on your export. So for example: if you use groupadd -g 5000 nfsusers, just make sure that whenever you make your share on your NAS, you use a GID of 5000, no matter what you actually name it. Personally, I make sure the names and GIDs/UIDs are the same across systems for ease of use.
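
        As a concrete sketch of that (the group name, GID 5000, and the user name are placeholders; the mutating commands need root):

        ```shell
        # First check that the GID you want is free on BOTH machines:
        getent group 5000 || echo "GID 5000 is free"

        # Then, as root on both the NAS and the server:
        #   groupadd -g 5000 nfsusers
        #   usermod -aG nfsusers youruser
        # And on the NAS, hand the exported directory to that group:
        #   chgrp -R nfsusers /mnt/zraid_default/media
        #   chmod -R g+rwX   /mnt/zraid_default/media
        ```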

        • manwichmakesameal@lemmy.world · 1 year ago (edited)

          Also, to add to this: your setup sounds almost identical to mine. I have a NAS with multiple TBs of storage and another machine with plenty of CPU and RAM. Using NFS for your docker share is going to be a pain. I “fixed” my pains by also using shares inside my docker-compose files. What I mean by that is: specify your share in a volumes section:

          volumes:
            media:
              driver: local
              driver_opts:
                type: "nfs"
                o: "addr=192.168.0.0,ro"
                device: ":/mnt/zraid_default/media"
          

          Then mount that volume when the container comes up:

          services:
            ...
            volumes:
              - type: volume
                source: media
                target: /data
                volume:
                  nocopy: true
          

          This way, I don’t have to worry as much. I also use local directories for storing all my container info. e.g.: ./container-data:/path/in/container

  • nivenkos@lemmy.world · 1 year ago

    I got it working by following https://wiki.archlinux.org/title/NFS

    The main issue I had IIRC was making sure that the local user owns the share directory itself (e.g. on the NAS in your case) rather than root.

    If you post more details (i.e. the error logs and configuration), I can take a look at my configuration tonight.

    It’s quite a hassle at first (especially using IP addresses), but once you get it working it’s cool as you can even put it in fstab, etc. to make it all automatic.
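
    For reference, the fstab side of that looks like this on the server (the NAS address and both paths are placeholders):

    ```
    # /etc/fstab on the server -- 192.168.1.10 stands in for the NAS address
    192.168.1.10:/mnt/pool/share  /mnt/nas  nfs  defaults,_netdev  0  0
    ```

    The _netdev option delays the mount until the network is up; after adding the line, `mount /mnt/nas` (or a reboot) brings it in.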

  • here2fap@lemmynsfw.com · 1 year ago

    Basically, what you need for NFS is remote storage with what’s called an export (a directory made available from outside the host exporting it), and a client allowed to mount this export. NFS doesn’t really do security; you can add some (whitelisting, limiting users, read-only exports, etc.), but it’s not mandatory. I think this is a good tutorial
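
    For illustration, a wide-open export is a one-liner on the NAS (the path and subnet are placeholders):

    ```
    # /etc/exports on the NAS -- path and subnet are placeholders
    /mnt/pool/share  192.168.1.0/24(rw,sync,no_subtree_check)
    ```

    Run `exportfs -ra` after editing the file to apply it.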

  • sep@lemmy.world · 1 year ago

    Details… What does your exports file look like? What does your fstab entry look like? What error do you get when you try to mount it?

    Normally with NFS you define the directories to share in the /etc/exports file, along with which IP prefixes are allowed to mount them and some flags for features. Your NAS may hide this behind a web interface.

    Have you shared a path to the prefix your server is on?

    The server then mounts the path, normally with an fstab entry.
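
    As a sanity check before touching fstab, you can ask the NAS what it exports and mount it by hand from the server (the IP and paths are placeholders):

    ```
    # on the server -- 192.168.1.10 stands in for the NAS address
    showmount -e 192.168.1.10
    mount -t nfs 192.168.1.10:/mnt/pool/share /mnt/nas
    ```

    If showmount lists nothing, the export itself is the problem; if the mount fails with “access denied”, the exports line probably doesn’t cover the server’s IP prefix.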