Enable HTTPS for HDFS

Short Description:

Steps to enable HTTPS (SSL) for WebHDFS.

Article

Here are the complete steps to enable HTTPS for WebHDFS.

Step 1:

First, create the keystore to use in the HDFS configurations.

Follow the steps below if the certificate is to be signed by a CA:

 
  
1. Generate a JKS keystore:

   keytool -genkey -keyalg RSA -alias c6401 -keystore /tmp/keystore.jks -storepass bigdata -validity 360 -keysize 2048

2. Generate a CSR from the above keystore:

   keytool -certreq -alias c6401 -keyalg RSA -file /tmp/c6401.csr -keystore /tmp/keystore.jks -storepass bigdata

3. Obtain the signed certificate from the CA (here the file is /tmp/c6401.crt).

4. Import the CA root certificate into the JKS first (skip if it is already present):

   keytool -import -alias root -file /tmp/ca.crt -keystore /tmp/keystore.jks

   Note: here ca.crt is the CA root certificate.

5. Repeat step 4 for the intermediate certificate, if there is one.

6. Import the signed certificate into the JKS:

   keytool -import -alias c6401 -file /tmp/c6401.crt -keystore /tmp/keystore.jks -storepass bigdata

7. Import the root certificate into the truststore (this creates a new truststore.jks):

   keytool -import -alias root -file /tmp/ca.crt -keystore /tmp/truststore.jks -storepass bigdata

8. Import the intermediate certificate (if there is one) into the truststore, as in step 7.
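The CA-signing sequence above can be collected into one reviewable script. The alias (c6401), the /tmp paths, and the bigdata password are the article's example values; substitute your own. As a safety measure this sketch prints each keytool command instead of executing it, so the sequence can be inspected first:

```shell
#!/bin/sh
# Dry-run sketch of the CA-signing keystore steps above.
# Example values from the article -- replace with your own.
ALIAS="c6401"
STOREPASS="bigdata"
KEYSTORE="/tmp/keystore.jks"
TRUSTSTORE="/tmp/truststore.jks"

# Prints the command instead of running it; change the body to "$@"
# to execute for real once the values are correct.
run() { echo "$@"; }

# 1. Generate the key pair
run keytool -genkey -keyalg RSA -alias "$ALIAS" -keystore "$KEYSTORE" \
    -storepass "$STOREPASS" -validity 360 -keysize 2048
# 2. Generate the CSR to hand to the CA
run keytool -certreq -alias "$ALIAS" -keyalg RSA -file "/tmp/$ALIAS.csr" \
    -keystore "$KEYSTORE" -storepass "$STOREPASS"
# 4. Import the CA root cert (repeat for any intermediate certs)
run keytool -import -alias root -file /tmp/ca.crt -keystore "$KEYSTORE" \
    -storepass "$STOREPASS"
# 6. Import the signed cert returned by the CA
run keytool -import -alias "$ALIAS" -file "/tmp/$ALIAS.crt" \
    -keystore "$KEYSTORE" -storepass "$STOREPASS"
# 7. Import the CA root into a (new) truststore
run keytool -import -alias root -file /tmp/ca.crt -keystore "$TRUSTSTORE" \
    -storepass "$STOREPASS"
```
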
If using a self-signed certificate:

 
  
1. Generate a JKS keystore:

   keytool -genkey -keyalg RSA -alias c6401 -keystore /tmp/keystore.jks -storepass bigdata -validity 360 -keysize 2048

Note: use keystore.jks for the truststore configurations as well.

Repeat Step 1 for every master component/host.

Step 2:

Log in to Ambari and configure/add the properties below in core-site.xml:

 
  
hadoop.ssl.require.client.cert=false
hadoop.ssl.hostname.verifier=DEFAULT
hadoop.ssl.keystores.factory.class=org.apache.hadoop.security.ssl.FileBasedKeyStoresFactory
hadoop.ssl.server.conf=ssl-server.xml
hadoop.ssl.client.conf=ssl-client.xml

Step 3:

Set the following properties (adding them if required) in hdfs-site.xml:

 
  
dfs.http.policy=HTTPS_ONLY
dfs.client.https.need-auth=false
dfs.datanode.https.address=0.0.0.0:50475
dfs.namenode.https-address=NN:50470

Note: replace NN with your NameNode hostname. You can also set dfs.http.policy=HTTP_AND_HTTPS to serve both HTTP and HTTPS.
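For reference, the same Step 3 values in raw hdfs-site.xml form would look like the fragment below (NN remains a placeholder for the NameNode host); in an Ambari-managed cluster you set these through the UI rather than editing the file by hand:

```xml
<!-- hdfs-site.xml: HTTPS policy and addresses from Step 3 -->
<property>
  <name>dfs.http.policy</name>
  <value>HTTPS_ONLY</value>
</property>
<property>
  <name>dfs.client.https.need-auth</name>
  <value>false</value>
</property>
<property>
  <name>dfs.datanode.https.address</name>
  <value>0.0.0.0:50475</value>
</property>
<property>
  <name>dfs.namenode.https-address</name>
  <value>NN:50470</value>
</property>
```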

Step 4:


Update the configurations below under "Advanced ssl-server" (ssl-server.xml):

 
  
ssl.server.truststore.location=/tmp/truststore.jks
ssl.server.truststore.password=bigdata
ssl.server.truststore.type=jks
ssl.server.keystore.location=/tmp/keystore.jks
ssl.server.keystore.password=bigdata
ssl.server.keystore.keypassword=bigdata
ssl.server.keystore.type=jks

Note: create a separate keystore file for each NameNode host, named keystore.jks and placed under /tmp/.
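Outside Ambari, these settings live in ssl-server.xml under the Hadoop configuration directory. A sketch of that file with the Step 4 example values (paths and passwords as above):

```xml
<!-- ssl-server.xml: server-side keystore/truststore from Step 4 -->
<configuration>
  <property>
    <name>ssl.server.truststore.location</name>
    <value>/tmp/truststore.jks</value>
  </property>
  <property>
    <name>ssl.server.truststore.password</name>
    <value>bigdata</value>
  </property>
  <property>
    <name>ssl.server.truststore.type</name>
    <value>jks</value>
  </property>
  <property>
    <name>ssl.server.keystore.location</name>
    <value>/tmp/keystore.jks</value>
  </property>
  <property>
    <name>ssl.server.keystore.password</name>
    <value>bigdata</value>
  </property>
  <property>
    <name>ssl.server.keystore.keypassword</name>
    <value>bigdata</value>
  </property>
  <property>
    <name>ssl.server.keystore.type</name>
    <value>jks</value>
  </property>
</configuration>
```

The ssl-client.xml file follows the same pattern with the ssl.client.* property names.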

Step 5:

Update the configurations below under "Advanced ssl-client" (ssl-client.xml):

ssl.client.truststore.location=/tmp/truststore.jks
ssl.client.truststore.password=bigdata
ssl.client.truststore.type=jks
ssl.client.keystore.location=/tmp/keystore.jks
ssl.client.keystore.password=bigdata
ssl.client.keystore.keypassword=bigdata
ssl.client.keystore.type=jks

Step 6:

Restart the HDFS service.

Step 7:

Make sure you import the CA root certificate into the Ambari server by running "ambari-server setup-security".

Step 8:

You should now be able to access the NameNode UI over HTTPS on port 50470.
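As a quick check, WebHDFS operations should now answer over HTTPS on the dfs.namenode.https-address port. The helper below only builds the URL; the host c6401.example.com and the NN_HOST/NN_PORT variables are hypothetical placeholders, not values from the article:

```shell
#!/bin/sh
# Build a WebHDFS-over-HTTPS URL for a given path and operation.
# NN_HOST/NN_PORT default to placeholder values -- override them with
# your NameNode host and dfs.namenode.https-address port.
webhdfs_url() {
    path="$1"   # HDFS path, e.g. /tmp
    op="$2"     # WebHDFS operation, e.g. LISTSTATUS
    echo "https://${NN_HOST:-c6401.example.com}:${NN_PORT:-50470}/webhdfs/v1${path}?op=${op}"
}

webhdfs_url /tmp LISTSTATUS
# From a cluster node, test with (-k skips certificate verification;
# drop it once the CA root is in your local trust store):
#   curl -k "$(webhdfs_url /tmp LISTSTATUS)"
```
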

Note: when you enable HTTPS for HDFS, the JournalNodes and NameNodes start in HTTPS mode; check the JournalNode and NameNode logs for any errors. Copy the keystore.jks files for all NameNodes and JournalNodes, and the truststore files, to all the HDFS nodes.

More articles

* To enable HTTPS for MAPREDUCE2 and YARN: https://community.hortonworks.com/articles/52876/enable-https-for-yarn-and-mapreduce2.html

* To enable HTTPS for HBASE: https://community.hortonworks.com/articles/51165/enable-httpsssl-for-hbase-master-ui.html


Reprinted from blog.csdn.net/houzhizhen/article/details/80278851