張 zi 浩

Manually Configuring Kerberos for HDFS and OushuDB

Contents:

1.1 Deploy Kerberos

1.2 Generate keytabs

1.3 Modify the HDFS configuration

1.4 Configure SSL

1.5 Start HDFS

1.6 OushuDB

2.1 Notes on the DataNode in secure mode

2.2 Cannot find key of appropriate type to decrypt AP REP

2.3 OushuDB-related issues

2.4 CI environment hdfs-site.xml

1. Environment Setup

Before setting up Kerberos-authenticated HDFS, you can first use the lava platform to initialize a Zookeeper and HDFS cluster to save manual deployment time.

1.1 Deploy Kerberos

Install the KDC

yum install -y krb5-libs krb5-workstation krb5-server

#About time synchronization:
#Install the ntp service with: yum install ntp -y
#One possible approach:
#Have all other nodes in the cluster (including the OushuDB cluster) sync time against one chosen node:
#ntpdate hostname
#This synchronizes the time immediately.
# Or:
#On the other nodes, edit /etc/ntp.conf:
#comment out the line "restrict default ignore"
#comment out every "server *" line (there may be several)
#add one line:
#server hostname
#After the ntp service is started, time will be kept in sync automatically.

Edit the configuration file /var/kerberos/krb5kdc/kdc.conf

  [kdcdefaults]
  kdc_ports = 88
  kdc_tcp_ports = 88
  [realms]
  OUSHU.COM = {
  acl_file = /var/kerberos/krb5kdc/kadm5.acl
  dict_file = /usr/share/dict/words
  admin_keytab = /var/kerberos/krb5kdc/kadm5.keytab
  max_renewable_life = 7d
  supported_enctypes = aes128-cts:normal des3-hmac-sha1:normal arcfour-hmac:normal camellia256-cts:normal camellia128-cts:normal des-hmac-sha1:normal des-cbc-md5:normal des-cbc-crc:normal
  }

Edit the configuration file /etc/krb5.conf

# Configuration snippets may be placed in this directory as well
  includedir /etc/krb5.conf.d/
  [logging]
  default = FILE:/var/log/krb5libs.log
  kdc = FILE:/var/log/krb5kdc.log
  admin_server = FILE:/var/log/kadmind.log 
  [libdefaults]
  default_realm = OUSHU.COM
  dns_lookup_realm = false
  dns_lookup_kdc = false
  ticket_lifetime = 24h
  forwardable = true
  clockskew = 120
  udp_preference_limit = 1
  [realms]
  OUSHU.COM = {
   kdc = hostname           // replace with the KDC server hostname
   admin_server = hostname  // replace with the KDC server hostname
  }
  [domain_realm]
  .OUSHU.COM = OUSHU.COM
  OUSHU.COM = OUSHU.COM

Create the KDC database (run as root)

  kdb5_util create -s -r OUSHU.COM
  #When prompted for the database master password, enter admin
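
To confirm that the database was created and kadmin.local works, a quick hedged check (the exact list of default principals varies by Kerberos version):

kadmin.local -q "listprincs"
#should print the built-in principals, e.g. K/M@OUSHU.COM and kadmin/admin@OUSHU.COM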

Edit the configuration file /var/kerberos/krb5kdc/kadm5.acl

  */admin@OUSHU.COM *

Start the KDC services

systemctl start kadmin krb5kdc
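
Optionally, enable both services at boot and check that they are running; a minimal sketch using the same systemctl units as above:

systemctl enable kadmin krb5kdc
systemctl status kadmin krb5kdc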

Install the Kerberos client on the other nodes

yum install -y krb5-workstation

#After installation, copy /etc/krb5.conf to these nodes
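
A hedged sketch for distributing the file (host2 and host3 are the example hostnames from section 1.2; assumes root SSH access to the other nodes):

for h in host2 host3; do
    scp /etc/krb5.conf ${h}:/etc/krb5.conf
done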

1.2 Generate keytabs

kadmin.local

addprinc -randkey hdfs/host1@OUSHU.COM
addprinc -randkey hdfs/host2@OUSHU.COM
addprinc -randkey hdfs/host3@OUSHU.COM
addprinc -randkey HTTP/host1@OUSHU.COM
addprinc -randkey HTTP/host2@OUSHU.COM
addprinc -randkey HTTP/host3@OUSHU.COM
 
xst -k /etc/security/keytabs/hdfs.keytab hdfs/host1@OUSHU.COM
xst -k /etc/security/keytabs/hdfs.keytab hdfs/host2@OUSHU.COM
xst -k /etc/security/keytabs/hdfs.keytab hdfs/host3@OUSHU.COM
xst -norandkey -k /etc/security/keytabs/hdfs.keytab HTTP/host1@OUSHU.COM
xst -norandkey -k /etc/security/keytabs/hdfs.keytab HTTP/host2@OUSHU.COM
xst -norandkey -k /etc/security/keytabs/hdfs.keytab HTTP/host3@OUSHU.COM

Remember to change the ownership of the generated keytab file: chown hdfs:hadoop /etc/security/keytabs/hdfs.keytab

**Note: each node gets its own keytab, and on each node the hdfs and HTTP principals share that single keytab.**
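
Before distributing a keytab you can verify it; a hedged check using the principals from above (run the kinit on the node the keytab belongs to):

klist -kt /etc/security/keytabs/hdfs.keytab
#should list the hdfs/host1@OUSHU.COM and HTTP/host1@OUSHU.COM entries

kinit -kt /etc/security/keytabs/hdfs.keytab hdfs/host1@OUSHU.COM
klist      #should now show a valid TGT for hdfs/host1@OUSHU.COM
kdestroy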

1.3 Modify the HDFS configuration

When adding these properties to the HDFS configuration files, always check first whether a property is already present, to avoid conflicting entries.
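
A quick hedged way to check whether a property is already defined (paths match the files edited below):

grep -n -A 1 "hadoop.security.authentication" /etc/hadoop/conf/core-site.xml
grep -n -A 1 "dfs.http.policy" /etc/hadoop/conf/hdfs-site.xml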

vi /etc/hadoop/conf/core-site.xml

<!-- Add the following properties -->

<property>
	<name>hadoop.security.authentication</name>
	<value>kerberos</value>
</property>
<property>
	<name>hadoop.security.authorization</name>
	<value>true</value>
</property>
<property>
	<name>hadoop.rpc.protection</name>
	<value>authentication</value>
</property>

vi /etc/hadoop/conf/hdfs-site.xml

<!-- Add or modify the following properties -->
	<property>
		<name>dfs.block.access.token.enable</name>
		<value>true</value>
	</property>
	<property>
		<name>dfs.namenode.keytab.file</name>
		<value>/etc/security/keytabs/hdfs.keytab</value>
	</property>
	<property>
		<name>dfs.namenode.kerberos.principal</name>
		<value>hdfs/_HOST@OUSHU.COM</value>
	</property>
	<property>
		<name>dfs.web.authentication.kerberos.principal</name>
		<value>HTTP/_HOST@OUSHU.COM</value>
	</property>
	<property>
		<name>dfs.web.authentication.kerberos.keytab</name>
		<value>/etc/security/keytabs/hdfs.keytab</value>
	</property>
	<!-- SSL access to the NameNode Web UI -->
	<property>
		<name>dfs.webhdfs.enabled</name>
		<value>true</value>
	</property>
	<property>
		<name>dfs.http.policy</name>
		<value>HTTPS_ONLY</value>
	</property>
	<!-- Kerberos authentication for DataNodes -->
	<property>
		<name>dfs.datanode.keytab.file</name>
		<value>/etc/security/keytabs/hdfs.keytab</value>
	</property>
	<property>
		<name>dfs.datanode.kerberos.principal</name>
		<value>hdfs/_HOST@OUSHU.COM</value>
	</property>
	
	<!-- DataNode SASL configuration -->
	<property>  
		<name>dfs.datanode.data.dir.perm</name>  
		<value>750</value>  
	</property>
	<property>
		<name>dfs.data.transfer.protection</name>
		<value>authentication</value>
	</property>
	<!-- Kerberos authentication for JournalNodes -->
	<property>
		<name>dfs.journalnode.keytab.file</name>
		<value>/etc/security/keytabs/hdfs.keytab</value>
	</property>
	<property>
		<name>dfs.journalnode.kerberos.principal</name>
		<value>hdfs/_HOST@OUSHU.COM</value>
	</property>
	<property>
		<name>dfs.journalnode.kerberos.internal.spnego.principal</name>
		<value>HTTP/_HOST@OUSHU.COM</value>
	</property>

1.4 Configure SSL (optional)

Before configuring SSL, first read section 2.1, Notes on the DataNode in secure mode.

1. Generate the files required for SSL

Replace $hostname with the active NameNode's hostname

cd /var/lib/hadoop-hdfs/
openssl req -new -x509 -keyout bd_ca_key -out bd_ca_cert -days 9999 -subj '/C=CN/ST=beijing/L=beijing/O=$hostname/OU=$hostname/CN=$hostname'

2. Copy the two generated files to the other nodes in the cluster

scp /var/lib/hadoop-hdfs/bd_ca_key hostname:/var/lib/hadoop-hdfs/
scp /var/lib/hadoop-hdfs/bd_ca_cert hostname:/var/lib/hadoop-hdfs/

3. Run the following commands on each node in turn

All 6 commands below must be run on every node. For convenience, enter password everywhere a password is requested.

keytool -keystore keystore -alias localhost -validity 9999 -genkey -keyalg RSA -keysize 2048 -dname "CN=$hostname, OU=$hostname, O=$hostname, L=beijing, ST=beijing, C=CN"
keytool -keystore truststore -alias CARoot -import -file bd_ca_cert
keytool -certreq -alias localhost -keystore keystore -file cert
openssl x509 -req -CA bd_ca_cert -CAkey bd_ca_key -in cert -out cert_signed -days 9999 -CAcreateserial -passin pass:password 
keytool -keystore keystore -alias CARoot -import -file bd_ca_cert
keytool -keystore keystore -alias localhost -import -file cert_signed
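
To confirm the stores contain what they should, a hedged check (keytool prompts for the password entered above):

keytool -list -keystore keystore      #should show two entries: CARoot and localhost
keytool -list -keystore truststore    #should show one entry: CARoot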

Reference: https://blog.csdn.net/weixin_39971186/article/details/88023066

4. Copy the files from the previous step

 mkdir -p /etc/security/https && chmod 755 /etc/security/https
 cp keystore /etc/security/https/keystore
 cp truststore /etc/security/https/truststore

5. Add ssl-client.xml (/etc/hadoop/hdfs/conf/)

<configuration>
<property>
<name>ssl.client.truststore.location</name>
<value>/etc/security/https/truststore</value>
<description>Truststore to be used by clients like distcp. Must be specified.  </description>
</property>
<property>
<name>ssl.client.truststore.password</name>
<value>password</value>
<description>Optional. Default value is "". </description>
</property>
<property>
<name>ssl.client.truststore.type</name>
<value>jks</value>
<description>Optional. The keystore file format, default value is "jks".</description>
</property>
<property>
<name>ssl.client.truststore.reload.interval</name>
<value>10000</value>
<description>Truststore reload check interval, in milliseconds. Default value is 10000 (10 seconds). </description>
</property>
<property>
<name>ssl.client.keystore.location</name>
<value>/etc/security/https/keystore</value>
<description>Keystore to be used by clients like distcp. Must be   specified.   </description>
</property>
<property>
<name>ssl.client.keystore.password</name>
<value>password</value>
<description>Optional. Default value is "". </description>
</property>
<property>
<name>ssl.client.keystore.keypassword</name>
<value>password</value>
<description>Optional. Default value is "". </description>
</property>
<property>
<name>ssl.client.keystore.type</name>
<value>jks</value>
<description>Optional. The keystore file format, default value is "jks". </description>
</property>
</configuration>

6. Add ssl-server.xml (/etc/hadoop/hdfs/conf/)

<configuration>
<property>
<name>ssl.server.truststore.location</name>
<value>/etc/security/https/truststore</value>
<description>Truststore to be used by NN and DN. Must be specified.</description>
</property>
<property>
<name>ssl.server.truststore.password</name>
<value>password</value>
<description>Optional. Default value is "". </description>
</property>
<property>
<name>ssl.server.truststore.type</name>
<value>jks</value>
<description>Optional. The keystore file format, default value is "jks".</description>
</property>
<property>
<name>ssl.server.truststore.reload.interval</name>
<value>10000</value>
<description>Truststore reload check interval, in milliseconds. Default value is 10000 (10 seconds). </description>
</property>
<property>
<name>ssl.server.keystore.location</name>
<value>/etc/security/https/keystore</value>
<description>Keystore to be used by NN and DN. Must be specified.</description>
</property>
<property>
<name>ssl.server.keystore.password</name>
<value>password</value>
<description>Must be specified.</description>
</property>
<property>
<name>ssl.server.keystore.keypassword</name>
<value>password</value>
<description>Must be specified.</description>
</property>
<property>
<name>ssl.server.keystore.type</name>
<value>jks</value>
<description>Optional. The keystore file format, default value is "jks".</description>
</property>
<property>
<name>ssl.server.exclude.cipher.list</name>
<value>TLS_ECDHE_RSA_WITH_RC4_128_SHA,SSL_DHE_RSA_EXPORT_WITH_DES40_CBC_SHA,
SSL_RSA_WITH_DES_CBC_SHA,SSL_DHE_RSA_WITH_DES_CBC_SHA,
SSL_RSA_EXPORT_WITH_RC4_40_MD5,SSL_RSA_EXPORT_WITH_DES40_CBC_SHA,
SSL_RSA_WITH_RC4_128_MD5</value>
<description>Optional. The weak security cipher suites that you want excluded from SSL communication.</description>
</property>
</configuration>

1.5 Start HDFS

If the HDFS cluster is newly built, just initialize and start it following the normal procedure.

# start
sudo -u hdfs /usr/hdp/current/hadoop-client/sbin/hadoop-daemon.sh start journalnode
sudo -u hdfs /usr/hdp/current/hadoop-client/sbin/hadoop-daemon.sh start namenode
sudo -u hdfs /usr/hdp/current/hadoop-client/sbin/hadoop-daemon.sh start datanode
sudo -u hdfs /usr/hdp/current/hadoop-client/sbin/hadoop-daemon.sh start zkfc
# stop
sudo -u hdfs /usr/hdp/current/hadoop-client/sbin/hadoop-daemon.sh stop zkfc
sudo -u hdfs /usr/hdp/current/hadoop-client/sbin/hadoop-daemon.sh stop datanode
sudo -u hdfs /usr/hdp/current/hadoop-client/sbin/hadoop-daemon.sh stop namenode
sudo -u hdfs /usr/hdp/current/hadoop-client/sbin/hadoop-daemon.sh stop journalnode
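
Once the daemons are up, a hedged smoke test from any HDFS node (uses the keytab from section 1.2; replace host1 with the local hostname):

sudo -u hdfs bash -c '
  kinit -kt /etc/security/keytabs/hdfs.keytab hdfs/host1@OUSHU.COM
  hdfs dfsadmin -report    # every DataNode should be reported as live
  hdfs dfs -ls /
'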

1.6 OushuDB

Only the Kerberos-related steps for installing OushuDB are recorded here; follow the official documentation for the rest.

1. Install the HDFS client on the OushuDB nodes; it makes troubleshooting easier later

yum install hadoop hadoop-hdfs -y

#
#Copy hdfs-site.xml, core-site.xml, and hadoop-env.sh from the HDFS cluster to /etc/hadoop/conf
#Then just update JAVA_HOME in hadoop-env.sh

2. Copy /etc/krb5.conf

Note that it must be copied to every OushuDB node.

scp /etc/krb5.conf hostname:/etc/
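
If the hostfile used in step 8 already lists every OushuDB node, a hedged alternative is to push the file in one command (same hawq scp syntax as step 8; the invoking user needs write access to /etc on every node):

hawq scp -f ~/hostfile /etc/krb5.conf =:/etc/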

3. Create the OushuDB principal

Starting with OushuDB 4.9.0.0, the user does not have to be postgres; it is configurable.

kadmin.local -q "addprinc -randkey postgres@OUSHU.COM"
kadmin.local -q "xst -norandkey -k /etc/security/keytabs/hawq.keytab postgres@OUSHU.COM"

4. Distribute hawq.keytab to the OushuDB master node

scp /etc/security/keytabs/hawq.keytab hostname:/home/gpadmin/hawq.keytab

5. Change the keytab permissions

chown gpadmin:gpadmin /home/gpadmin/hawq.keytab
chmod 400 /home/gpadmin/hawq.keytab

6. Modify hawq-site.xml

<!-- Add or modify the following properties -->
<property>
	<name>enable_secure_filesystem</name>
	<value>ON</value>
</property>
<property>
	<name>krb_server_keyfile</name>
	<value>/home/gpadmin/hawq.keytab</value>
</property>
<property>
	<name>krb_srvname</name>
	<value>postgres</value>
</property>

7. Modify hdfs-client.xml

<property>
	<name>hadoop.security.authentication</name>
	<value>kerberos</value>
</property>
<property>
	<name>hadoop.rpc.protection</name>
	<value>authentication</value>
</property>

8. Sync the configuration files

hawq scp -r -f ~/hostfile  /usr/local/hawq/etc/* =:/usr/local/hawq/etc/

9. Create the HDFS data directory

#Create the directories defined in hawq-site.xml
hdfs dfs -mkdir -p /hawq/default_filespace
hdfs dfs -chown -R postgres:gpadmin /hawq

10. Initialize OushuDB

#Optionally test HDFS connectivity first:
#kinit -kt /home/gpadmin/hawq.keytab postgres
#
#Initialize OushuDB
hawq init cluster
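
Once initialization succeeds, a hedged smoke test that the secure filesystem really accepts writes (assumes psql is on the PATH and the default postgres database; the table name is arbitrary):

psql -d postgres -c "create table kerberos_smoke (i int);"
psql -d postgres -c "insert into kerberos_smoke values (1);"
psql -d postgres -c "select * from kerberos_smoke;"
psql -d postgres -c "drop table kerberos_smoke;"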

2.1 Notes on the DataNode in secure mode

Reference: Hadoop documentation: https://hadoop.apache.org/docs/r2.7.2/hadoop-project-dist/hadoop-common/SecureMode.html#Secure_DataNode

In short, there are two ways to start the DataNode in secure mode:

1. Start the DataNode as root via jsvc and bind it to privileged ports. This approach requires an extra dependency and is deprecated; it is not recommended.

#How to configure:
#Download commons-daemon-1.2.2-bin.tar.gz and commons-daemon-1.2.2-src.tar.gz
#from https://archive.apache.org/dist/commons/daemon/source/
#Copy them to the server and extract them
#Build jsvc and copy it to the target directory
cd commons-daemon-1.2.2-src/src/native/unix/
./configure
make
cp ./jsvc /usr/hdp/2.5.3.0-37/hadoop/libexec

#Copy commons-daemon-1.2.2.jar
cd JSVC_packages/commons-daemon-1.2.2/
cp commons-daemon-1.2.2.jar /usr/hdp/2.5.3.0-37/hadoop-hdfs/lib
chown hdfs:hadoop /usr/hdp/2.5.3.0-37/hadoop-hdfs/lib/commons-daemon-1.2.2.jar

#vi /etc/hadoop/conf/hadoop-env.sh
#Append the following:

export HADOOP_SECURE_DN_USER=hdfs
export JSVC_HOME=/usr/hdp/2.5.3.0-37/hadoop/libexec


#To start the DataNode via jsvc, the cluster must be started as root. In addition:
#1. remove dfs.data.transfer.protection from hdfs-site.xml
#2. set dfs.datanode.address in hdfs-site.xml to a privileged port (0.0.0.0:1004)
#3. set dfs.http.policy in hdfs-site.xml to HTTP_ONLY
#4. set dfs.datanode.http.address in hdfs-site.xml to a privileged port (0.0.0.0:1006)
#
#After the DataNode starts successfully, check it with jps; the process name starts with Secur

2. Starting with Hadoop 2.6.0, set dfs.data.transfer.protection in hdfs-site.xml, set dfs.datanode.address to a non-privileged port, set dfs.http.policy to HTTPS_ONLY, and make sure the HADOOP_SECURE_DN_USER environment variable is not defined. This is exactly what section 1.4, Configure SSL, covers.

You must make sure that:
(1) dfs.data.transfer.protection is configured in hdfs-site.xml

(2) dfs.datanode.address in hdfs-site.xml is set to a non-privileged port (greater than 1024)

(3) dfs.datanode.http.address in hdfs-site.xml is set to a non-privileged port (greater than 1024)

(4) dfs.http.policy in hdfs-site.xml is set to HTTPS_ONLY

(5) HADOOP_SECURE_DN_USER is not set in hadoop-env.sh

In real deployments, testing shows that dfs.datanode.address and dfs.datanode.http.address can be left unconfigured.

Otherwise, starting the HDFS cluster may fail with an error like:

ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: Exception in secureMain java.lang.RuntimeException: Cannot start secure DataNode without configuring either privileged resources or SASL RPC data transfer protection and SSL for HTTP. Using privileged resources in combination with SASL RPC data transfer protection is not supported.

This part is based on my own investigation and may contain mistakes. In several environments' HDFS configurations I have seen SASL configured even though dfs.http.policy was set to HTTP_ONLY, and I do not yet understand exactly how that configuration works. Corrections from anyone who knows the details are welcome.

2.2 Cannot find key of appropriate type to decrypt AP REP

If starting the HDFS cluster fails with:

org.ietf.jgss.GSSException: Failure unspecified at GSS-API level (Mechanism level: Invalid argument (400) - Cannot find key of appropriate type to decrypt AP REP - RC4 with HMAC)

This error looks like it is caused by an unsupported encryption type, but it is most likely due to an incorrect keytab file that cannot be decrypted. Regenerate the keytab with the following command:

kadmin.local -q "xst -k hdfs.keytab hdfs/hostname HTTP/hostname"

Remember to replace hostname with the actual hostname, and push each keytab to its corresponding node as soon as it is generated. Do not mix them up: every node has its own keytab and they cannot be shared.
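
A hedged sketch that regenerates one keytab per node and pushes it immediately (host1..host3 as in section 1.2; adjust paths and ownership to your layout):

for h in host1 host2 host3; do
    kadmin.local -q "xst -k /tmp/hdfs-${h}.keytab hdfs/${h} HTTP/${h}"
    scp /tmp/hdfs-${h}.keytab ${h}:/etc/security/keytabs/hdfs.keytab
    ssh ${h} "chown hdfs:hadoop /etc/security/keytabs/hdfs.keytab && chmod 400 /etc/security/keytabs/hdfs.keytab"
done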

2.3 OushuDB-related issues

If OushuDB initializes successfully and tables can be created but no data can be written, try the following:

1. Make sure /etc/hosts contains entries for all DataNode nodes
2. Make sure the DataNode ports are reachable from the OushuDB nodes (see the sketch below)
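
A hedged way to check both points from an OushuDB node (50010 is the default DataNode data-transfer port in Hadoop 2.x; substitute your configured dfs.datanode.address port and a real DataNode hostname):

getent hosts datanode-hostname        #placeholder hostname; should resolve via /etc/hosts
nc -zv datanode-hostname 50010        #requires nc; checks that the data transfer port is reachable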

Normally these two checks are enough. If the problem persists, check the value of dfs.http.policy in the HDFS cluster's hdfs-site.xml. If it is HTTPS_ONLY and the segment logs report the following error:

could only be replicated to 0 nodes instead of minReplication (=1).  There are 3 datanode(s) running and 3 node(s) are excluded in this operation.

you can try the following:

<!-- Add the following to hdfs-client.xml and restart the cluster -->
<property>
	<name>dfs.block.access.token.enable</name>
	<value>true</value>
</property>
<property>
	<name>dfs.encrypt.data.transfer</name>
	<value>true</value>
</property>
<property>
	<name>dfs.data.transfer.protection</name>
	<value>authentication,privacy</value>
</property>

hadoop.rpc.protection and dfs.data.transfer.protection must match the values in the HDFS cluster's hdfs-site.xml.

2.4 CI environment hdfs-site.xml

<!-- The CI environment's hdfs-site.xml is attached for reference -->
<configuration>
    <property>
        <name>dfs.name.dir</name>
        <value>/data1/hdfs/namenode</value>
        <final>true</final>
    </property>

    <property>
        <name>dfs.data.dir</name>
        <value>/data1/hdfs/datanode,/data2/hdfs/datanode</value>
        <final>true</final>
    </property>

    <property>
        <name>dfs.permissions</name>
        <value>true</value>
    </property>

    <property>
        <name>dfs.support.append</name>
        <value>true</value>
    </property>

    <property>
        <name>dfs.datanode.data.dir.perm</name>
        <value>750</value>
    </property>

    <property>
        <name>dfs.datanode.handler.count</name>
        <value>60</value>
    </property>

    <property>
        <name>dfs.datanode.max.transfer.threads</name>
        <value>40960</value>
    </property>
    <property>
        <name>dfs.replication</name>
        <value>3</value>
    </property>
    <property>
        <name>dfs.namenode.replication.min</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.datanode.socket.write.timeout</name>
        <value>7200000</value>
        <description>
            used for sockets to and from datanodes. It is 8 minutes by default. Some
            users set this to 0, effectively disabling the write timeout.
        </description>
    </property>
    <property>
        <name>dfs.namenode.accesstime.precision</name>
        <value>0</value>
    </property>


    <property>
        <name>dfs.webhdfs.enabled</name>
        <value>true</value>
    </property>

    <property>
        <name>dfs.allow.truncate</name>
        <value>true</value>
    </property>

    <property>
        <name>dfs.namenode.fs-limits.min-block-size</name>
        <value>1024</value>
    </property>

    <property>
        <name>dfs.client.read.shortcircuit</name>
        <value>true</value>
    </property>

    <property>
        <name>dfs.domain.socket.path</name>
        <value>/var/lib/hadoop-hdfs/dn_socket</value>
    </property>

    <property>
        <name>dfs.block.access.token.enable</name>
        <value>true</value>
        <description>
            If "true", access tokens are used as capabilities for accessing
            datanodes.
            If "false", no access tokens are checked on accessing datanodes.
        </description>
    </property>
   
   <property>
        <name>dfs.client.socket-timeout</name>
        <value>300000000</value>
    </property>

   <property>
        <name>dfs.client.use.legacy.blockreader.local</name>
        <value>false</value>
    </property>

   <property>
        <name>dfs.namenode.handler.count</name>
        <value>600</value>
    </property>
   <property>
        <name>dfs.blocksize</name>
        <value>134217728</value>
    </property>
    
   <property>
        <name>dfs.nameservices</name>
        <value>oushu</value>
    </property>
    <property>
        <name>dfs.ha.namenodes.oushu</name>
        <value>nn2,nn1</value>
    </property>
    <property>
        <name>dfs.namenode.rpc-address.oushu.nn1</name>
        <value>lava-b7fba3c2-1:9000</value>
    </property>
    <property>
        <name>dfs.namenode.http-address.oushu.nn1</name>
        <value>lava-b7fba3c2-1:50070</value>
    </property>
    <property>
        <name>dfs.namenode.rpc-address.oushu.nn2</name>
        <value>lava-b7fba3c2-2:9000</value>
    </property>
    <property>
        <name>dfs.namenode.http-address.oushu.nn2</name>
        <value>lava-b7fba3c2-2:50070</value>
    </property>
    <property>
        <name>dfs.namenode.shared.edits.dir</name>
        <value>qjournal://lava-b7fba3c2-1:8485;lava-b7fba3c2-2:8485;lava-b7fba3c2-3:8485/oushu</value>
    </property>
         <property>
        <name>dfs.ha.automatic-failover.enabled.oushu</name>
        <value>true</value>
    </property>
    <property>
        <name>dfs.client.failover.proxy.provider.oushu</name>
        <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
    </property>
    <property>
        <name>dfs.journalnode.edits.dir</name>
        <value>/data1/hdfs/journaldata</value>
    </property>
    <property>
        <name>dfs.ha.fencing.methods</name>
        <value>shell(/bin/true)</value>
    </property>

    <property>
        <name>dfs.permissions.superusergroup</name>
        <value>hadoop</value>
</property>
  <!-- NameNode security config -->
<property>
  <name>dfs.namenode.keytab.file</name>
  <value>/etc/security/keytabs/hdfs.keytab</value> <!-- path to the HDFS keytab -->
</property>
<property>
  <name>dfs.namenode.kerberos.principal</name>
  <value>namenode/_HOST@KDCSERVER.OUSHU.COM</value>
</property>
<property>
  <name>dfs.namenode.kerberos.internal.spnego.principal</name>
  <value>HTTP/_HOST@KDCSERVER.OUSHU.COM</value>
</property>

<!-- Secondary NameNode security config -->
<property>
  <name>dfs.secondary.namenode.keytab.file</name>
  <value>/etc/security/keytabs/hdfs.keytab</value> <!-- path to the HDFS keytab -->
</property>
<property>
  <name>dfs.secondary.namenode.kerberos.principal</name>
  <value>namenode/_HOST@KDCSERVER.OUSHU.COM</value>
</property>
<property>
  <name>dfs.secondary.namenode.kerberos.internal.spnego.principal</name>
  <value>HTTP/_HOST@KDCSERVER.OUSHU.COM</value>
</property>

<!-- DataNode security config and SASL -->
<property>
  <name>dfs.datanode.data.dir.perm</name>
  <value>700</value>
</property>
<property>
  <name>dfs.datanode.keytab.file</name>
  <value>/etc/security/keytabs/hdfs.keytab</value> <!-- path to the HDFS keytab -->
</property>
<property>
  <name>dfs.datanode.kerberos.principal</name>
<value>datanode/_HOST@KDCSERVER.OUSHU.COM</value>
</property>
<property>
  <name>dfs.data.transfer.protection</name>
<value>authentication</value>
</property>
<property>
  <name>dfs.http.policy</name>
<value>HTTPS_ONLY</value>
</property>

<!-- Web Authentication config -->
<property>
  <name>dfs.web.authentication.kerberos.principal</name>
  <value>HTTP/_HOST@KDCSERVER.OUSHU.COM</value>
 </property>
<property>
  <name>dfs.web.authentication.kerberos.keytab</name>
  <value>/etc/security/keytabs/hdfs.keytab</value>
</property>

<!-- JournalNode security config -->

<property>
  <name>dfs.journalnode.kerberos.principal</name>
  <value>journalnode/_HOST@KDCSERVER.OUSHU.COM</value>
</property>

<property>
  <name>dfs.journalnode.keytab.file</name>
  <value>/etc/security/keytabs/hdfs.keytab</value>
</property>

<property>
  <name>dfs.journalnode.kerberos.internal.spnego.principal</name>
  <value>HTTP/_HOST@KDCSERVER.OUSHU.COM</value>
</property>

  <property>
    <name>dfs.qjournal.start-segment.timeout.ms</name>
    <value>90000</value>
  </property>
    <property>
    <name>dfs.qjournal.select-input-streams.timeout.ms</name>
    <value>90000</value>
  </property>

  <property>
    <name>dfs.qjournal.write-txns.timeout.ms</name>
    <value>90000</value>
  </property>

      <property>
        <name>dfs.namenode.fs-limits.max-directory-items</name>
         <value>2097152</value>
     </property>

</configuration>
Comments (5)

  • 罗名岳

    A very detailed and comprehensive write-up, worth sharing.

  • 亚平宁的眼泪 (replying to 張 zi 浩, 2022-04-19 09:58:10)

    OK. Usually only one value is configured, but some HDFS clusters at other sites are shared and reused.

  • 張 zi 浩 (replying to 亚平宁的眼泪, 2022-04-19 09:55:48)

    Yes, authentication,privacy can be configured together, but it affects performance.

  • 亚平宁的眼泪

    Does hadoop.rpc.protection now support configuring multiple values?

  • huor

    A very detailed and comprehensive write-up, worth sharing.
