Sorry, noob here. I have been using Linux for at least a decade, but some basic stuff still stumps me. Today, it’s file sharing. The idea is that the server is good at CPU and the NAS is good at storage: my NAS does run Docker, but the services are slow, while my server runs a bunch of Docker containers just fine but has limited disk space (SSD).

Want:

  • Share a directory on my NAS, so that my homelab server can use it.
  • Security is not important; the share does not need to be locked down.

Have:

  • Server+NAS are on their own little 1Gb Cisco switch, so network latency should be minimal.
  • The Linux NAS and Linux server have separate users/UIDs/GIDs.

Whatever I try, it always ends up with errors about ‘access denied’ or read-only or something. I conclude that I am not smart enough to figure it out.

Help?

  • manwichmakesameal@lemmy.world · 1 year ago

    I’m 100% sure that your problem is permissions. You need to make sure the permissions match. Personally, I created a group specifically for my NFS shares; when I export the shares, they are mapped to that group. You don’t have to do this, you can use your normal users, you just have to make sure the UID/GID numbers match. They can be named differently, as long as the numbers match up.
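    For reference, the export-side mapping can be done in /etc/exports on the NAS. This is a sketch, not the commenter’s actual config: the GID 5000 and the subnet are made-up example values, and the path is borrowed from later in the thread. `all_squash` maps every client user to the given anonymous UID/GID, so individual client UIDs stop mattering:

    ```
    /mnt/zraid_default/media  192.168.0.0/24(rw,sync,all_squash,anonuid=5000,anongid=5000)
    ```

    After editing /etc/exports, run `exportfs -ra` on the NAS to apply the change.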

    • marche_ck@lemmy.world · 1 year ago

      True. Another possibility is the permission settings on the mount point of the NFS volume on the server.
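      For example, something along these lines on the server. All paths and the IP here are hypothetical placeholders, not values from the thread:

      ```shell
      # Create the mount point and give it the numeric owner the NAS export expects
      # (5000:5000 is an example value, and 192.168.1.10 is a placeholder NAS IP):
      sudo mkdir -p /mnt/nas-media
      sudo chown 5000:5000 /mnt/nas-media
      sudo mount -t nfs 192.168.1.10:/mnt/zraid_default/media /mnt/nas-media
      # Verify what uid:gid and mode the server actually sees on the mount point:
      stat -c '%u:%g %A' /mnt/nas-media
      ```
      
      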

    • PlutoniumAcid@lemmy.world (OP) · 1 year ago

      make sure the UID/GID numbers match

      But how? Can I change the numbers?

      I totally get that Linux is by design a multi-user system, but it is frustrating to deal with when I am the only person who ever works with these machines. I know that my docker user+group is 1038/66544, but most docker commands require sudo, so I am not even sure those values are the right ones. It is so non-transparent which IDs are in effect, for which commands --and on which machine!-- when I am just me.

      • manwichmakesameal@lemmy.world · 1 year ago

        Basically, when you make a new group or user, make sure that the NUMBER it uses matches whatever you’re using on your export. So for example: if you run groupadd -g 5000 nfsusers, just make sure that when you create your share on the NAS, you use a GID of 5000, no matter what you actually name the group. Personally, I make sure the names and GIDs/UIDs are the same across systems for ease of use.
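        A minimal sketch of keeping the numbers aligned, using the example group above (youruser is a placeholder for your actual login):

        ```shell
        # Run the same groupadd on BOTH the NAS and the server, with an explicit GID:
        sudo groupadd -g 5000 nfsusers
        sudo usermod -aG nfsusers youruser    # 'youruser' is a placeholder
        # Verify the numeric IDs on each machine; only the numbers have to match:
        id youruser
        getent group nfsusers
        ```
        
        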

        • manwichmakesameal@lemmy.world · 1 year ago (edited)

          Also, to add to this: your setup sounds almost identical to mine. I have a NAS with multiple TBs of storage and another machine with plenty of CPU and RAM. Using NFS for your docker share is going to be a pain. I “fixed” my pains by also declaring the shares inside my docker-compose files. What I mean by that is: specify your share in a volumes section:

          volumes:
            media:
              driver: local
              driver_opts:
                type: "nfs"
                o: "addr=192.168.0.0,ro"   # addr is the NAS's IP; ro mounts it read-only
                device: ":/mnt/zraid_default/media"
          

          Then mount that volume when the container comes up:

          services:
            ...
            volumes:
              - type: volume
                source: media
                target: /data
                volume:
                  nocopy: true
          

          This way, I don’t have to worry as much. I also use local directories for storing all my container info. e.g.: ./container-data:/path/in/container
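          Put together, a service might look like this. The service name and image here are hypothetical examples, and `media` is the NFS-backed named volume defined earlier:

          ```
          services:
            jellyfin:                        # hypothetical service name/image
              image: jellyfin/jellyfin
              volumes:
                - ./container-data:/config   # small config/state stays on the server's local SSD
                - type: volume
                  source: media              # the NFS-backed named volume
                  target: /data
                  volume:
                    nocopy: true
          ```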