Monday 3 November 2014

Mount HDFS using NFS on a Windows Client

Configure settings for the HDFS NFS gateway:
The NFS gateway reads the same configuration files as the NameNode and DataNode. Configure the following properties based on your application's requirements:
  1. On your NFS gateway machine, add or modify the following property in hdfs-site.xml (hdfs-default.xml contains the read-only defaults and should not be edited directly):
    <property>
      <name>dfs.namenode.accesstime.precision</name>
      <value>3600000</value>
      <description>The access time for an HDFS file is precise up to this value.
                   The default value is 1 hour (3600000 milliseconds). Setting
                   a value of 0 disables access times for HDFS.
      </description>
    </property>
  2. Add the following property to hdfs-site.xml. The NFS gateway uses this directory to temporarily save out-of-order writes before writing them to HDFS, so make sure it has enough free space:
    <property>
        <name>dfs.nfs3.dump.dir</name>
        <value>/tmp/.hdfs-nfs</value>
    </property>
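By default the gateway lets any client mount the export with read-write access. Newer Hadoop releases also honor an export policy property in hdfs-site.xml; the fragment below is a sketch, and the subnet shown is only an example placeholder for your own host pattern:

```xml
<property>
    <name>dfs.nfs.exports.allowed.hosts</name>
    <!-- Format: "host-pattern access-mode". The subnet below is an
         example placeholder; rw grants read-write, ro read-only. -->
    <value>192.168.0.0/22 rw</value>
</property>
```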
  3. Start the NFS gateway service.
    Three daemons are required to provide NFS service: rpcbind (or portmap), mountd, and nfsd. The NFS gateway process includes both nfsd and mountd, and it exports the HDFS root "/" as its only export. We recommend using the portmap included in the NFS gateway package, as shown below:
    1. Stop nfs/rpcbind/portmap services provided by the platform:
      service nfs stop
      service rpcbind stop
    2. Start the portmap included with the NFS gateway (root privileges are required):
      hadoop portmap
      OR
      hadoop-daemon.sh start portmap
    3. Start mountd and nfsd.
      No root privileges are required for this command. However, verify that the user starting the Hadoop cluster and the user starting the NFS gateway are the same.
      hadoop nfs3
      OR
      hadoop-daemon.sh start nfs3
    4. Verify that the HDFS namespace is exported and can be mounted.
      showmount -e $nfs_server_ip
      You should see output similar to the following:
      Export list for $nfs_server_ip:
      / *
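      The verification step above can also be scripted. The helper below only inspects text in the format that showmount -e prints, so the function name and the $nfs_server_ip variable in the usage comment are illustrative, not part of Hadoop:

```shell
#!/bin/sh
# check_root_export: succeed if an export list (the lines printed by
# `showmount -e` after its header line) contains the HDFS root "/" export.
check_root_export() {
    printf '%s\n' "$1" | grep -q '^/[[:space:]]'
}

# Typical use against a live gateway (replace $nfs_server_ip with your host):
#   check_root_export "$(showmount -e "$nfs_server_ip" | tail -n +2)" \
#       && echo "HDFS root is exported"
```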
  4. Mount the export "/" on a Windows client (the Windows NFS client feature must be installed). Note the "!" appended after "/": the Windows mount command does not accept a bare "/" as the remote path.
    mount -o nolock $nfs_server_ip:/! W:
