Phoenix installation, deployment, and transaction support on CDH: notes from deploying Phoenix on a CDH 5.12 cluster and enabling transaction support.
A few key problems came up along the way; hopefully these notes save you some time.
1) Prepare the installation package:
Pre-built package: phoenix-4.9.0-cdh5.9.1.tar.gz
2) Deployment:
Unpack: tar -zxvf phoenix-4.9.0-cdh5.9.1.tar.gz
Copy the server jar into HBase's lib directory, and distribute it to every node in the cluster:
cp phoenix-4.9.0-cdh5.9.1-server.jar /opt/cloudera/parcels/CDH-5.12.1-1.cdh5.12.1.p0.3/lib/hbase/lib/
Then remember to restart the HBase cluster services!
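Copying the jar to every node by hand is error-prone; a loop helps. A minimal sketch — the host names node01–node03 are placeholders, and the echo makes it a dry run (drop the echo and use the printed scp commands against your real region server list):

```shell
# Hypothetical host names; echo makes this a dry run that only prints
# the copy command for each node instead of executing it.
JAR=phoenix-4.9.0-cdh5.9.1-server.jar
HBASE_LIB=/opt/cloudera/parcels/CDH-5.12.1-1.cdh5.12.1.p0.3/lib/hbase/lib
for host in node01 node02 node03; do
  echo "scp $JAR $host:$HBASE_LIB/"
done
```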
Edit the configuration files (this step is critical):
cd /home/test/phoenix-4.9.0-cdh5.9.1/bin
vi hadoop-metrics2-hbase.properties
Append one line: phoenix.schema.isNamespaceMappingEnabled=true
vi hadoop-metrics2-phoenix.properties
Append one line: phoenix.schema.isNamespaceMappingEnabled=true
Replace hbase-site.xml: optional, but recommended.
cp /etc/hbase/conf/hbase-site.xml /home/test/phoenix-4.9.0-cdh5.9.1/bin
Add the transaction-support settings: vi hbase-site.xml
<property>
  <name>phoenix.schema.isNamespaceMappingEnabled</name>
  <value>true</value>
</property>
<property>
  <name>phoenix.transactions.enabled</name>
  <value>true</value>
</property>
<property>
  <name>data.tx.snapshot.dir</name>
  <value>/tmp/tephra/snapshots</value>
</property>
<property>
  <name>data.tx.timeout</name>
  <value>60</value>
</property>
Edit the startup script so it reads the modified configuration file, otherwise startup fails; make sure /etc/hbase/conf is written before $CLASSPATH:
vi tephra
CLASSPATH=/etc/hbase/conf:$CLASSPATH
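Why the ordering matters: Java resolves a resource such as hbase-site.xml from the first classpath entry that contains it, so if /etc/hbase/conf comes after other entries, a stale copy can win. A small sketch simulating that lookup with two throwaway directories (the transactions=true/false file contents are just placeholders):

```shell
# Two stand-in "config dirs" with conflicting copies of the same file.
a=$(mktemp -d); b=$(mktemp -d)
echo "transactions=true"  > "$a/hbase-site.xml"
echo "transactions=false" > "$b/hbase-site.xml"
CLASSPATH="$a:$b"
# Mimic classpath resolution: the first entry holding the file wins.
found=""
for dir in $(echo "$CLASSPATH" | tr ':' ' '); do
  if [ -z "$found" ] && [ -f "$dir/hbase-site.xml" ]; then
    found=$(cat "$dir/hbase-site.xml")
  fi
done
echo "$found"   # -> transactions=true, because $a comes first
```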
Start the transaction-support service: ./bin/tephra restart
Check the logs: tail -f /tmp/tephra-bdmp_test/*.log
Phoenix launch script:
vi startpnew49
cd ~
kinit -kt bdmp_test.keytab bdmp_test  # Kerberos authentication
cd /home/test/phoenix-4.9.0-cdh5.9.1/bin
./sqlline.py zk01,zk02,zk03:2181:/hbase
3) Verification:
Enter the Phoenix command line: ./startpnew49
Turn off autocommit: !autocommit off
Commit manually: !commit
Create a transactional table:
!autocommit off
DROP TABLE pdbname.my_table2;
CREATE TABLE pdbname.my_table2 (k BIGINT PRIMARY KEY, v VARCHAR) TRANSACTIONAL=true;
Insert a row:
UPSERT INTO pdbname.my_table2 VALUES (1,'A');
SELECT count(*) FROM pdbname.my_table2 WHERE k=1; -- the session sees its own uncommitted row
Result: 1
Without committing, open a second command line and query:
SELECT count(*) FROM pdbname.my_table2 WHERE k=1;
Result: 0
Switch back to the original command line and commit:
!commit
Query again from the new command line:
SELECT count(*) FROM pdbname.my_table2 WHERE k=1;
Result: 1
4) Problems encountered and solutions:
Problem 1:
Exception in thread "HDFSTransactionStateStorage STARTING" java.lang.IllegalStateException: Snapshot directory is not configured. Please set data.tx.snapshot.dir in configuration.
at com.google.common.base.Preconditions.checkState(Preconditions.java:149)
at org.apache.tephra.persist.HDFSTransactionStateStorage.startUp(HDFSTransactionStateStorage.java:93)
at com.google.common.util.concurrent.AbstractIdleService$1$1.run(AbstractIdleService.java:43)
at java.lang.Thread.run(Thread.java:745)
0 [ThriftRPCServer] ERROR org.apache.tephra.distributed.TransactionService - Transaction manager aborted, stopping transaction service
Exception in thread "ThriftRPCServer" com.google.common.util.concurrent.UncheckedExecutionException: com.google.common.util.concurrent.UncheckedExecutionException: java.lang.IllegalStateException: Snapshot directory is not configured. Please set data.tx.snapshot.dir in configuration.
at com.google.common.util.concurrent.Futures.wrapAndThrowUnchecked(Futures.java:1015)
at com.google.common.util.concurrent.Futures.getUnchecked(Futures.java:1001)
at com.google.common.util.concurrent.AbstractService.startAndWait(AbstractService.java:220)
at org.apache.tephra.distributed.TransactionServiceThriftHandler.init(TransactionServiceThriftHandler.java:177)
at org.apache.tephra.rpc.ThriftRPCServer.startUp(ThriftRPCServer.java:177)
at com.google.common.util.concurrent.AbstractExecutionThreadService$1$1.run(AbstractExecutionThreadService.java:47)
at java.lang.Thread.run(Thread.java:745)
Caused by: com.google.common.util.concurrent.UncheckedExecutionException: java.lang.IllegalStateException: Snapshot directory is not configured. Please set data.tx.snapshot.dir in configuration.
at com.google.common.util.concurrent.Futures.wrapAndThrowUnchecked(Futures.java:1015)
at com.google.common.util.concurrent.Futures.getUnchecked(Futures.java:1001)
at com.google.common.util.concurrent.AbstractService.startAndWait(AbstractService.java:220)
at com.google.common.util.concurrent.AbstractIdleService.startAndWait(AbstractIdleService.java:106)
at org.apache.tephra.TransactionManager.doStart(TransactionManager.java:216)
at com.google.common.util.concurrent.AbstractService.start(AbstractService.java:170)
... 5 more
Caused by: java.lang.IllegalStateException: Snapshot directory is not configured. Please set data.tx.snapshot.dir in configuration.
at com.google.common.base.Preconditions.checkState(Preconditions.java:149)
at org.apache.tephra.persist.HDFSTransactionStateStorage.startUp(HDFSTransactionStateStorage.java:93)
at com.google.common.util.concurrent.AbstractIdleService$1$1.run(AbstractIdleService.java:43)
... 1 more
Analysis: why does the error still appear when the property is configured? Because the configuration file actually being read was still the original one, without the transaction settings.
Solution: force Tephra to read the modified configuration file, via the tephra script edit described above:
vi tephra
CLASSPATH=/etc/hbase/conf:$CLASSPATH
Problem 2:
Error: java.lang.NoSuchMethodError: org.apache.hadoop.hbase.client.OperationWithAttributes.setAttribute(Ljava/lang/String;[B)Lorg/apache/hadoop/hbase/client/OperationWithAttributes; (state=08000,code=101)
org.apache.phoenix.exception.PhoenixIOException: java.lang.NoSuchMethodError: org.apache.hadoop.hbase.client.OperationWithAttributes.setAttribute(Ljava/lang/String;[B)Lorg/apache/hadoop/hbase/client/OperationWithAttributes;
at org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:111)
at org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:774)
at org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:720)
at org.apache.phoenix.iterate.RoundRobinResultIterator.getIterators(RoundRobinResultIterator.java:176)
at org.apache.phoenix.iterate.RoundRobinResultIterator.next(RoundRobinResultIterator.java:91)
at org.apache.phoenix.jdbc.PhoenixResultSet.next(PhoenixResultSet.java:778)
at sqlline.BufferedRows.<init>(BufferedRows.java:37)
at sqlline.SqlLine.print(SqlLine.java:1649)
at sqlline.Commands.execute(Commands.java:833)
at sqlline.Commands.sql(Commands.java:732)
at sqlline.SqlLine.dispatch(SqlLine.java:807)
at sqlline.SqlLine.begin(SqlLine.java:681)
at sqlline.SqlLine.start(SqlLine.java:398)
at sqlline.SqlLine.main(SqlLine.java:292)
Caused by: java.util.concurrent.ExecutionException: java.lang.NoSuchMethodError: org.apache.hadoop.hbase.client.OperationWithAttributes.setAttribute(Ljava/lang/String;[B)Lorg/apache/hadoop/hbase/client/OperationWithAttributes;
at java.util.concurrent.FutureTask.report(FutureTask.java:122)
at java.util.concurrent.FutureTask.get(FutureTask.java:206)
at org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:769)
... 12 more
Caused by: java.lang.NoSuchMethodError: org.apache.hadoop.hbase.client.OperationWithAttributes.setAttribute(Ljava/lang/String;[B)Lorg/apache/hadoop/hbase/client/OperationWithAttributes;
at org.apache.tephra.hbase.TransactionAwareHTable.addToOperation(TransactionAwareHTable.java:672)
at org.apache.tephra.hbase.TransactionAwareHTable.transactionalizeAction(TransactionAwareHTable.java:561)
at org.apache.tephra.hbase.TransactionAwareHTable.getScanner(TransactionAwareHTable.java:289)
at org.apache.phoenix.iterate.TableResultIterator.initScanner(TableResultIterator.java:170)
at org.apache.phoenix.iterate.TableResultIterator.next(TableResultIterator.java:124)
at org.apache.phoenix.iterate.LookAheadResultIterator$1.advance(LookAheadResultIterator.java:47)
at org.apache.phoenix.iterate.LookAheadResultIterator.init(LookAheadResultIterator.java:59)
at org.apache.phoenix.iterate.LookAheadResultIterator.peek(LookAheadResultIterator.java:73)
at org.apache.phoenix.iterate.ParallelIterators$1.call(ParallelIterators.java:121)
at org.apache.phoenix.iterate.ParallelIterators$1.call(ParallelIterators.java:106)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at org.apache.phoenix.job.JobManager$InstrumentedJobFutureTask.run(JobManager.java:183)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Analysis: this occurred with phoenix-4.8.0-cdh5.8.0 and was caused by a packaging/compilation problem; switching to phoenix-4.9.0-cdh5.9.1 fixed it. Alternatively, fetch the phoenix-4.8.0-cdh5.8.0 source and recompile (not yet tried).
Problem 3:
18/05/29 10:27:35 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Error: org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=36, exceptions:
Tue May 29 10:28:24 CST 2018, null, java.net.SocketTimeoutException: callTimeout=60000, callDuration=68459: row 'SYSTEM:CATALOG,,' on table 'hbase:meta' at region=hbase:meta,,1.1588230740, hostname=redhat214.life.com,60020,1527517854293, seqNum=0 (state=,code=0)
java.sql.SQLException: org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=36, exceptions:
Tue May 29 10:28:24 CST 2018, null, java.net.SocketTimeoutException: callTimeout=60000, callDuration=68459: row 'SYSTEM:CATALOG,,' on table 'hbase:meta' at region=hbase:meta,,1.1588230740, hostname=redhat214.life.com,60020,1527517854293, seqNum=0
at org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:2492)
at org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:2384)
at org.apache.phoenix.util.PhoenixContextExecutor.call(PhoenixContextExecutor.java:76)
at org.apache.phoenix.query.ConnectionQueryServicesImpl.init(ConnectionQueryServicesImpl.java:2384)
at org.apache.phoenix.jdbc.PhoenixDriver.getConnectionQueryServices(PhoenixDriver.java:255)
at org.apache.phoenix.jdbc.PhoenixEmbeddedDriver.createConnection(PhoenixEmbeddedDriver.java:150)
at org.apache.phoenix.jdbc.PhoenixDriver.connect(PhoenixDriver.java:221)
at sqlline.DatabaseConnection.connect(DatabaseConnection.java:157)
at sqlline.DatabaseConnection.getConnection(DatabaseConnection.java:203)
at sqlline.Commands.connect(Commands.java:1064)
at sqlline.Commands.connect(Commands.java:996)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at sqlline.ReflectiveCommandHandler.execute(ReflectiveCommandHandler.java:38)
at sqlline.SqlLine.dispatch(SqlLine.java:809)
at sqlline.SqlLine.initArgs(SqlLine.java:588)
at sqlline.SqlLine.begin(SqlLine.java:661)
at sqlline.SqlLine.start(SqlLine.java:398)
at sqlline.SqlLine.main(SqlLine.java:291)
Caused by: org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=36, exceptions:
Tue May 29 10:28:24 CST 2018, null, java.net.SocketTimeoutException: callTimeout=60000, callDuration=68459: row 'SYSTEM:CATALOG,,' on table 'hbase:meta' at region=hbase:meta,,1.1588230740, hostname=redhat214.life.com,60020,1527517854293, seqNum=0
at org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.throwEnrichedException(RpcRetryingCallerWithReadReplicas.java:286)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:231)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:61)
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:210)
at org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:327)
at org.apache.hadoop.hbase.client.ClientScanner.nextScanner(ClientScanner.java:302)
at org.apache.hadoop.hbase.client.ClientScanner.initializeScannerInConstruction(ClientScanner.java:167)
at org.apache.hadoop.hbase.client.ClientScanner.<init>(ClientScanner.java:162)
at org.apache.hadoop.hbase.client.HTable.getScanner(HTable.java:862)
at org.apache.hadoop.hbase.MetaTableAccessor.fullScan(MetaTableAccessor.java:602)
at org.apache.hadoop.hbase.MetaTableAccessor.tableExists(MetaTableAccessor.java:366)
at org.apache.hadoop.hbase.client.HBaseAdmin.tableExists(HBaseAdmin.java:421)
at org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:2412)
... 20 more
Caused by: java.net.SocketTimeoutException: callTimeout=60000, callDuration=68459: row 'SYSTEM:CATALOG,,' on table 'hbase:meta' at region=hbase:meta,,1.1588230740, hostname=redhat214.life.com,60020,1527517854293, seqNum=0
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:169)
at org.apache.hadoop.hbase.client.ResultBoundedCompletionService$QueueingFuture.run(ResultBoundedCompletionService.java:80)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.exceptions.ConnectionClosingException: Call to redhat214.life.com/10.31.20.214:60020 failed on local exception: org.apache.hadoop.hbase.exceptions.ConnectionClosingException: Connection to redhat214.life.com/10.31.20.214:60020 is closing. Call id=9, waitTime=37
at org.apache.hadoop.hbase.ipc.AbstractRpcClient.wrapException(AbstractRpcClient.java:289)
at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1273)
at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:227)
at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:336)
at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.scan(ClientProtos.java:34094)
at org.apache.hadoop.hbase.client.ScannerCallable.openScanner(ScannerCallable.java:400)
at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:204)
at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:65)
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:210)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:381)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:355)
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:136)
... 4 more
Caused by: org.apache.hadoop.hbase.exceptions.ConnectionClosingException: Connection to redhat214.life.com/10.31.20.214:60020 is closing. Call id=9, waitTime=37
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.cleanupCalls(RpcClientImpl.java:1085)
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.close(RpcClientImpl.java:864)
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.run(RpcClientImpl.java:581)
sqlline version 1.2.0
Solution: SYSTEM:CATALOG is a built-in system table, and tables that already exist in HBase cannot be mapped automatically; the phoenix.schema.isNamespaceMappingEnabled setting must be configured.
Comparing the two configuration directories revealed the problem:
C:\Users\dell\Desktop\wMyWork\bigData\phoenix\phoenix-4.8.0-cdh5.8.0
C:\Users\dell\Desktop\wMyWork\bigData\phoenix\phoenix-4.9.0-cdh5.9.1
The cause was an unmodified configuration file. Follow the steps above to update the relevant Phoenix configuration files, in particular the key setting phoenix.schema.isNamespaceMappingEnabled=true.
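One point worth stressing (my understanding from hitting this error; verify against the docs for your Phoenix version): phoenix.schema.isNamespaceMappingEnabled must have the same value on both the server side (the region servers' hbase-site.xml) and the client side (the hbase-site.xml in the Phoenix bin directory that sqlline reads). A mismatch produces an "Inconsistent namespace mapping properties" error instead of a working connection. The fragment both sides should share:

```xml
<property>
  <name>phoenix.schema.isNamespaceMappingEnabled</name>
  <value>true</value>
</property>
```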
Problem 4:
ERROR: not found class org.apache.tephra.TransactionServiceMain
Solution: copy the server jar into Phoenix's own lib directory:
cp phoenix-4.9.0-cdh5.9.1-server.jar /home/test/phoenix-4.9.0-cdh5.9.1/lib
Alternatively (not yet tried), modify the release script:
phoenix-4.9.0-cdh5.9.1\dev\make_rc.sh
phx_jars=$(find -iwholename "./*/target/phoenix-*.jar")
change to: phx_jars=$(find -iname "phoenix-*.jar")
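A quick sandbox illustration of why the find pattern matters (the directory layout below is made up, not the real Phoenix source tree): -iwholename matches against the full path, so only jars under a */target/ directory are picked up, while -iname matches a phoenix-*.jar anywhere:

```shell
# Fake source tree: one jar under target/, one elsewhere.
tmp=$(mktemp -d)
cd "$tmp"
mkdir -p core/target other/lib
touch core/target/phoenix-core.jar other/lib/phoenix-server.jar
echo "-iwholename matches:"
find . -iwholename "./*/target/phoenix-*.jar"   # only core/target/phoenix-core.jar
echo "-iname matches:"
find . -iname "phoenix-*.jar"                   # both jars
```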