Configure settings for HDFS NFS gateway:
The NFS gateway uses the same configuration as the NameNode and DataNode. Configure the following properties based on your application's requirements:
- Edit the hdfs-site.xml file on your NFS gateway machine and set the following property (its default is defined in hdfs-default.xml, which should not be edited directly):

    <property>
      <name>dfs.namenode.accesstime.precision</name>
      <value>3600000</value>
      <description>The access time for an HDFS file is precise up to this value.
        The default value is 1 hour. Setting a value of 0 disables access times
        for HDFS.</description>
    </property>
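  As an optional sanity check, you can print the value that the local client-side configuration resolves to for this key:

    hdfs getconf -confKey dfs.namenode.accesstime.precision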
- Add the following property to hdfs-site.xml:

    <property>
      <name>dfs.nfs3.dump.dir</name>
      <value>/tmp/.hdfs-nfs</value>
    </property>
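  The NFS gateway uses this local directory to temporarily save out-of-order writes, so the filesystem backing it needs free space. A minimal preparation sketch, assuming the default /tmp/.hdfs-nfs path and that you run it as the user who will start the gateway:

    # create the dump directory ahead of time and check available space
    mkdir -p /tmp/.hdfs-nfs
    df -h /tmp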
- Start the NFS gateway service. Three daemons are required to provide NFS service: rpcbind (or portmap), mountd, and nfsd. The NFS gateway process has both nfsd and mountd. It shares the HDFS root "/" as the only export. We recommend using the portmap included in the NFS gateway package, as shown below:
  - Stop the nfs/rpcbind/portmap services provided by the platform:

    service nfs stop
    service rpcbind stop
  - Start the included portmap package (needs root privileges):

    hadoop portmap

    OR

    hadoop-daemon.sh start portmap
  - Start mountd and nfsd. No root privileges are required for this command. However, verify that the user starting the Hadoop cluster and the user starting the NFS gateway are the same:

    hadoop nfs3

    OR

    hadoop-daemon.sh start nfs3
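  As a quick check that the daemons came up, you can list the gateway's Java processes. The process names shown below (Portmap and Nfs3) are what recent Hadoop releases report, but they may differ between versions:

    jps | egrep 'Portmap|Nfs3'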
  - Verify that the HDFS namespace is exported and can be mounted:

    showmount -e $nfs_server_ip

    You should see output similar to the following:

    Exports list on $nfs_server_ip :
    / *
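  You can also confirm that the gateway has registered its services with the portmapper. The exact program list varies, but portmapper, mountd, and nfs entries should appear:

    rpcinfo -p $nfs_server_ip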
- Mount the export "/" on a Windows client:

    mount -o nolock $your_ip:/! W:
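  If you are mounting from a Linux client instead, the commonly used form forces NFSv3 over TCP with file locking disabled. Here $server and $mount_point are placeholders for your NFS gateway host and an existing local directory:

    # mount the HDFS root export on a Linux client
    mkdir -p $mount_point
    mount -t nfs -o vers=3,proto=tcp,nolock $server:/ $mount_point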