I spent some time getting Drill to build against Apache Hadoop 2.2.
I had already written up the build process on the Drill mailing list; here is a cleaned-up copy for my iteye blog.
1. Add a profile section to pom.xml:
<profile>
  <id>apache</id>
  <properties>
    <alt-hadoop>apache</alt-hadoop>
  </properties>
  <dependencies>
    <dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-common</artifactId>
      <version>2.2.0</version>
      <exclusions>
        <exclusion>
          <artifactId>commons-logging</artifactId>
          <groupId>commons-logging</groupId>
        </exclusion>
      </exclusions>
    </dependency>
    <dependency>
      <groupId>jline</groupId>
      <artifactId>jline</artifactId>
      <version>2.10</version>
    </dependency>
  </dependencies>
</profile>
2. Compile the source:
mvn clean install -DskipTests -Papache
3. After the build succeeds, extract the binary distribution:
$ cd distribution/
$ ls
pom.xml  src  target
$ cd target/
$ tar -xf apache-drill-1.0.0-m1-incubating-binary-release.tar.gz
$ ls apache-drill-1.0.0-m1-incubating/lib/protobuf-java-2.4.1.jar
apache-drill-1.0.0-m1-incubating/lib/protobuf-java-2.4.1.jar
You need to replace lib/protobuf-java-2.4.1.jar with protobuf-java-2.5.0.jar, since Hadoop 2.2 depends on protobuf 2.5.
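The swap itself is just deleting the bundled 2.4.1 jar and dropping in a 2.5.0 jar (e.g. copied from your local Maven repository). A self-contained sketch, using a temp directory in place of the real lib/ path and empty files in place of the jars:

```shell
# Stand-in for distribution/target/apache-drill-1.0.0-m1-incubating/lib
LIB=$(mktemp -d)
touch "$LIB/protobuf-java-2.4.1.jar"   # the jar bundled by the build
# In a real setup this would come from ~/.m2/repository/com/google/protobuf/protobuf-java/2.5.0/
touch "$LIB/protobuf-java-2.5.0.jar"
# Remove the old jar so it cannot shadow the 2.5.0 classes on the classpath
rm "$LIB/protobuf-java-2.4.1.jar"
ls "$LIB"
```

After this, only protobuf-java-2.5.0.jar should remain in lib/.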
Test:
4. Add a DFS storage engine in conf/storage-engines.json:
"parquet-dfs" : { "type":"parquet", "dfsName" : "hdfs://hadoop2:8020/drill" }
5. You also need to make some changes in bin/drill-config.sh so that the Hadoop 2.x jars end up on the classpath:
if [ "${HADOOP_HOME}x" != "x" ]
then
  HADOOP_CLASSPATH=""
  for jar in `ls $HADOOP_HOME/share/hadoop/*/*.jar`
  do
    echo $jar | grep -v -f $DRILL_HOME/bin/hadoop-excludes.txt >/dev/null
    if [ "$?" -eq "0" ]
    then
      HADOOP_CLASSPATH=$jar:$HADOOP_CLASSPATH
    fi
  done
  # If you have configured HA or Federation, you also need to add your Hadoop
  # conf directory here; change the following line to:
  #   export HADOOP_CLASSPATH=$HADOOP_HOME/etc/hadoop/:$HADOOP_CLASSPATH
  export HADOOP_CLASSPATH=$HADOOP_HOME/conf:$HADOOP_CLASSPATH
fi
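The loop above keeps only the jars whose names do not match any pattern in bin/hadoop-excludes.txt. A minimal self-contained demo of that grep -v -f filtering (the jar names and the exclude pattern are hypothetical):

```shell
TMP=$(mktemp -d)
# Hypothetical jar listing and exclude-pattern file
printf 'hadoop-common-2.2.0.jar\nhadoop-hdfs-2.2.0.jar\ncommons-logging-1.1.1.jar\n' > "$TMP/jars.txt"
printf 'commons-logging\n' > "$TMP/hadoop-excludes.txt"
# grep -v -f drops every line matching any pattern from the exclude file
KEPT=$(grep -v -f "$TMP/hadoop-excludes.txt" "$TMP/jars.txt")
echo "$KEPT"
```

Only the hadoop-common and hadoop-hdfs jars survive; commons-logging is filtered out, matching the exclusion already declared in the pom profile of step 1.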
6. Start sqlline against the new storage engine:
$ ./bin/sqlline -u jdbc:drill:schema=parquet-dfs
Loaded singnal handler: SunSignalHandler
/home/drill/.sqlline/sqlline.properties (No such file or directory)
scan complete in 25ms
scan complete in 4053ms
Connecting to jdbc:drill:schema=parquet-dfs
Connected to: Drill (version 1.0)
Driver: Apache Drill JDBC Driver (version 1.0)
Autocommit status: true
Transaction isolation: TRANSACTION_REPEATABLE_READ
sqlline version ??? by Marc Prud'hommeaux
0: jdbc:drill:schema=parquet-dfs> select * from "/drill/region.parquet"
. . . . . . . . . . . . . . . . > ;
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
+------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| _MAP |
+------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| {"R_REGIONKEY":0,"R_NAME":"AFRICA","R_COMMENT":"lar deposits. blithely final packages cajole. regular waters are final requests. regular accounts are according |
| {"R_REGIONKEY":1,"R_NAME":"AMERICA","R_COMMENT":"hs use ironic, even requests. s"} |
| {"R_REGIONKEY":2,"R_NAME":"ASIA","R_COMMENT":"ges. thinly even pinto beans ca"} |
| {"R_REGIONKEY":3,"R_NAME":"EUROPE","R_COMMENT":"ly final courts cajole furiously final excuse"} |
| {"R_REGIONKEY":4,"R_NAME":"MIDDLE EAST","R_COMMENT":"uickly special accounts cajole carefully blithely close requests. carefully final asymptotes haggle furious |
+------------------------------------------------------------------------------------------------------------------------------------------------------------------+
5 rows selected (4.928 seconds)
0: jdbc:drill:schema=parquet-dfs>