Configuring the HttpFS service on Hadoop 2.5.2

Tags: httpfs hadoop hdfs

 

Test environment

  • Ubuntu 14.04, single machine
  • Hadoop 2.5.2, pseudo-distributed mode
  • JDK 1.7

Purpose

  • Through HttpFS you can manage files on HDFS from a browser; the functionality is similar to the hadoop shell
  • HttpFS also exposes a REST-style API for managing HDFS (a minimal curl sketch follows this list)
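For example, listing a directory is a single curl call against the REST API. This is just a sketch, assuming the service is already running on the default port 14000 with pseudo-authentication as root, which is how it is configured later in this post:

# list /tmp on HDFS through the HttpFS REST API (returns JSON)
curl "http://localhost:14000/webhdfs/v1/tmp?op=LISTSTATUS&user.name=root"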

Drawbacks

  • Network security concerns
  • ACLs need to be configured

Modify core-site.xml

 
    <property>
        <name>hadoop.proxyuser.root.hosts</name>
        <value>localhost</value>
    </property>
    <property>
        <name>hadoop.proxyuser.root.groups</name>
        <value>*</value>
    </property>

Add the two properties above: hadoop.proxyuser.root.hosts specifies the hostnames or domains that are allowed to access HDFS through HttpFS, and hadoop.proxyuser.root.groups specifies the user groups of the clients that are allowed to access it.
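After the cluster restart below, a quick sanity check is hdfs getconf; it reads the configuration files on the client side, so it confirms the edit rather than the running daemons. A sketch, assuming the Hadoop install at /opt/work/hadoop used throughout this post:

# print the effective proxy-user settings from the configuration
/opt/work/hadoop/bin/hdfs getconf -confKey hadoop.proxyuser.root.hosts
/opt/work/hadoop/bin/hdfs getconf -confKey hadoop.proxyuser.root.groups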

Configure the HttpFS environment variable

 
export CATALINA_BASE=/opt/work/hadoop/share/hadoop/httpfs/tomcat

This only sets the variable for the current shell; you can also make it permanent in a configuration file.
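One way to make it permanent is to append the export to the httpfs-env.sh that httpfs.sh sources at startup (see the "Sourcing:" line in the log below). This is only a sketch of that approach; adding the export to your shell profile works as well:

# persist CATALINA_BASE in the file sourced by httpfs.sh
echo 'export CATALINA_BASE=/opt/work/hadoop/share/hadoop/httpfs/tomcat' >> /opt/work/hadoop/etc/hadoop/httpfs-env.sh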

Restart the Hadoop cluster

 
root@localhost:/opt/nfs#/opt/work/hadoop/sbin//stop-all.sh 
root@localhost:/opt/nfs#/opt/work/hadoop/sbin//start-all.sh 
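After the restart it is worth confirming that the HDFS daemons are back up, for example with jps from the JDK. In this pseudo-distributed setup you would expect NameNode, DataNode and SecondaryNameNode:

# confirm the HDFS daemons came back up (jps ships with the JDK)
jps | grep -E 'NameNode|DataNode'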

Start the HttpFS service

 
root@localhost:/opt/git/hadoop_dev/hdfsToInfoBright# /opt/work/hadoop/sbin/httpfs.sh start

Setting HTTPFS_HOME:          /opt/work/hadoop
Setting HTTPFS_CONFIG:        /opt/work/hadoop/etc/hadoop
Sourcing:                    /opt/work/hadoop/etc/hadoop/httpfs-env.sh
Setting HTTPFS_LOG:           /opt/work/hadoop/logs
Setting HTTPFS_TEMP:           /opt/work/hadoop/temp
Setting HTTPFS_HTTP_PORT:     14000
Setting HTTPFS_ADMIN_PORT:     14001
Setting HTTPFS_HTTP_HOSTNAME: localhost
Setting HTTPFS_SSL_ENABLED: false
Setting HTTPFS_SSL_KEYSTORE_FILE:     /root/.keystore
Setting HTTPFS_SSL_KEYSTORE_PASS:     password
Using   CATALINA_BASE:       /opt/work/hadoop/share/hadoop/httpfs/tomcat/
Setting HTTPFS_CATALINA_HOME:       /opt/work/hadoop/share/hadoop/httpfs/tomcat/
Setting CATALINA_OUT:        /opt/work/hadoop/logs/httpfs-catalina.out
Setting CATALINA_PID:        /tmp/httpfs.pid

Using   CATALINA_OPTS:       
Adding to CATALINA_OPTS:     -Dhttpfs.home.dir=/opt/work/hadoop -Dhttpfs.config.dir=/opt/work/hadoop/etc/hadoop -Dhttpfs.log.dir=/opt/work/hadoop/logs -Dhttpfs.temp.dir=/opt/work/hadoop/temp -Dhttpfs.admin.port=14001 -Dhttpfs.http.port=14000 -Dhttpfs.http.hostname=localhost -Dhttpfs.ssl.enabled=false -Dhttpfs.ssl.keystore.file=/root/.keystore -Dhttpfs.ssl.keystore.pass=password
Using CATALINA_BASE:   /opt/work/hadoop/share/hadoop/httpfs/tomcat/
Using CATALINA_HOME:   /opt/work/hadoop/share/hadoop/httpfs/tomcat
Using CATALINA_TMPDIR: /opt/work/hadoop/share/hadoop/httpfs/tomcat//temp
Using JRE_HOME:        /usr/local/jdk1.7.0
Using CLASSPATH:       /opt/work/hadoop/share/hadoop/httpfs/tomcat//bin/tomcat-juli.jar:/opt/work/hadoop/share/hadoop/httpfs/tomcat/bin/bootstrap.jar
Using CATALINA_PID:    /tmp/httpfs.pid
Existing PID file found during start.
Removing/clearing stale PID file.
root@localhost:/opt/git/hadoop_dev/hdfsToInfoBright# 

Check the startup output and make sure CATALINA_BASE resolves to the correct value.
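To double-check that the embedded Tomcat really came up, you can tail the Catalina log and probe the port. The paths and port are taken from the startup output above; the GETHOMEDIRECTORY call is just a harmless read-only operation:

# last lines of the Tomcat log (CATALINA_OUT above)
tail -n 20 /opt/work/hadoop/logs/httpfs-catalina.out
# probe the HttpFS port with a read-only operation
curl -s "http://localhost:14000/webhdfs/v1?op=GETHOMEDIRECTORY&user.name=root"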

Access HttpFS from a browser

Open http://localhost:14000/ in a browser.
The page shows: "HttpFs service, service base URL at /webhdfs/v1.", i.e. the base path of the service is /webhdfs/v1.

Access HttpFS with curl

 
root@localhost:/opt/nfs# curl -i -X PUT -T /opt/test.json "http://localhost:14000/webhdfs/v1/tmp/test.json?op=CREATE&data=true&user.name=root" -H "Content-Type:application/octet-stream"

The command above uploads the file to HDFS.
To fetch the file over HTTP, open:
http://localhost:14000/webhdfs/v1/tmp/test.json?user.name=root&op=open
and test.json is downloaded.
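Other operations follow the same URL pattern. A few more examples as a sketch; the op names are standard WebHDFS operations, the paths are only illustrations:

# create a directory
curl -X PUT "http://localhost:14000/webhdfs/v1/tmp/demo?op=MKDIRS&user.name=root"
# check the uploaded file's status
curl "http://localhost:14000/webhdfs/v1/tmp/test.json?op=GETFILESTATUS&user.name=root"
# delete the uploaded file
curl -X DELETE "http://localhost:14000/webhdfs/v1/tmp/test.json?op=DELETE&user.name=root"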

Considerations

  • HttpFS security
  • HttpFS performance: HttpFS is a web service running inside Tomcat, so with very large data files upload and download throughput drops sharply
  • Concurrent access to HttpFS: this comes down to Tomcat's concurrency limits (see the sketch after this list)
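One crude way to get a feel for the concurrency behaviour is to fire several downloads in parallel and watch how response times degrade. Purely a sketch; the file and the count are only illustrative:

# crude probe: 10 parallel downloads of the same file (illustrative only)
for i in $(seq 1 10); do
  curl -s -o /dev/null "http://localhost:14000/webhdfs/v1/tmp/test.json?op=OPEN&user.name=root" &
done
wait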

Comments
#2 duguyiren3476 2015-08-06
It is indeed not that fast. You can think of an HttpFS upload as an ordinary HTTP upload; many factors affect it, such as network I/O and disk I/O.
#1 风过有声 2015-07-19
Hi, may I ask a question? I also tried HttpFS recently and found that uploading files is really rather slow. Have you run into this problem? Thanks.
