Spark: running the Thrift server
1 Start the Thrift server
cd $SPARK_HOME/
sh sbin/start-thriftserver.sh \
  --hiveconf hive.server2.thrift.port=10000 \
  --hiveconf hive.server2.thrift.bind.host=yf-hive01 \
  --master spark://testnode:7077 \
  --driver-class-path /home/test/hadoop/spark-1.2.0-bin-1.0.3/lib/mysql-connector-java-5.1.32-bin.jar \
  --executor-memory 10g
Alternatively, you can simply run sh sbin/start-thriftserver.sh, which listens on the default port 10000,
or specify a port explicitly:
sh sbin/start-thriftserver.sh --hiveconf hive.server2.thrift.port=8000
2 JDBC connection
Class.forName("org.apache.hive.jdbc.HiveDriver");
Connection conn = DriverManager.getConnection(url, name, pwd);
Statement stat = conn.createStatement();
stat.executeQuery("select * from test");
The following dependencies are required:
<dependency>
  <groupId>org.spark-project.hive</groupId>
  <artifactId>hive-exec</artifactId>
</dependency>
<dependency>
  <groupId>org.spark-project.hive</groupId>
  <artifactId>hive-jdbc</artifactId>
</dependency>
<dependency>
  <groupId>org.spark-project.hive</groupId>
  <artifactId>hive-service</artifactId>
</dependency>
<dependency>
  <groupId>org.spark-project.hive</groupId>
  <artifactId>hive-common</artifactId>
</dependency>
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-common</artifactId>
</dependency>
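Putting the pieces together, here is a minimal sketch of a JDBC client. The class name, the jdbcUrl helper, and the empty user/password are illustrative assumptions; it also assumes the Thrift server from step 1 is reachable at yf-hive01:10000, so the actual connection is only attempted when a --connect flag is passed.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class SparkThriftClient {

    // Build a HiveServer2-style JDBC URL (helper name is illustrative).
    static String jdbcUrl(String host, int port, String db) {
        return "jdbc:hive2://" + host + ":" + port + "/" + db;
    }

    public static void main(String[] args) throws Exception {
        // Host/port match the startup command above; "default" is the Hive database.
        String url = jdbcUrl("yf-hive01", 10000, "default");
        System.out.println(url);

        // Connecting requires a running Thrift server and the hive-jdbc jars
        // on the classpath, so only attempt it when explicitly asked.
        if (args.length > 0 && args[0].equals("--connect")) {
            Class.forName("org.apache.hive.jdbc.HiveDriver");
            try (Connection conn = DriverManager.getConnection(url, "", "");
                 Statement stat = conn.createStatement();
                 ResultSet rs = stat.executeQuery("select * from test")) {
                while (rs.next()) {
                    System.out.println(rs.getString(1));
                }
            }
        }
    }
}
```

Run it without arguments to just print the URL, or with --connect (and the dependencies above on the classpath) to execute the query against the server.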
Reference:
http://blog.csdn.net/wind520/article/details/44061563