Trying Out TiDB with Docker on a Single Machine

Preparation:
1. Install Docker:
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager \
    --add-repo \
    https://download.docker.com/linux/centos/docker-ce.repo
yum list docker-ce --showduplicates | sort -r
yum -y install docker-ce-17.03.2.ce
# docker --version
Docker version 17.03.2-ce, build f5ec1e2
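After installing, start the Docker daemon and enable it at boot (assuming a systemd-based CentOS 7 host):
systemctl start docker
systemctl enable docker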
2. Install Git:
yum -y install git 
3. Install docker-compose:
sudo curl -L https://github.com/docker/compose/releases/download/1.22.0/docker-compose-$(uname -s)-$(uname -m) -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
docker-compose --version
docker-compose version 1.22.0, build f46880fe
Deployment:
4. Clone tidb-docker-compose:
git clone https://github.com/pingcap/tidb-docker-compose.git
5. Pull the latest Docker images, then create and start the cluster:
cd tidb-docker-compose && docker-compose pull 
Components pulled:
# docker-compose pull
Pulling pd0                 ... 
Pulling pd1                 ... 
Pulling pd2                 ... 
Pulling tikv0               ... 
Pulling tikv1               ... 
Pulling tikv2               ... 
Pulling tidb                ... 
Pulling tispark-master      ... 
Pulling tispark-slave0      ... 
Pulling tidb-vision         ... 
Pulling pushgateway         ... 
Pulling prometheus          ... 
Pulling grafana             ... 
Pulling dashboard-installer ... 

docker-compose up -d
Note: the required ports must be free before starting; if any are already in use, stop the applications holding them first.
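For example, you can list any listeners already bound to the main published ports (the exact port set comes from this compose file; ss is the iproute2 replacement for netstat):
ss -lntp | grep -E ':(3000|4000|8010|9090|10080)\b'
No output means the ports are free.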
# docker-compose up -d                
tidb-docker-compose_tidb-vision_1 is up-to-date
Starting tidb-docker-compose_prometheus_1 ... 
Starting tidb-docker-compose_grafana_1    ... 
tidb-docker-compose_dashboard-installer_1 is up-to-date
tidb-docker-compose_pd0_1 is up-to-date
tidb-docker-compose_pd2_1 is up-to-date
tidb-docker-compose_pushgateway_1 is up-to-date
tidb-docker-compose_pd1_1 is up-to-date
tidb-docker-compose_tikv1_1 is up-to-date
tidb-docker-compose_tikv2_1 is up-to-date
tidb-docker-compose_tikv0_1 is up-to-date
tidb-docker-compose_tidb_1 is up-to-date
Starting tidb-docker-compose_prometheus_1 ... done
Starting tidb-docker-compose_grafana_1    ... done
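
To confirm that every container is running, check the service status; all services should be in the Up state:
docker-compose ps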

The ports in use can be checked:
# netstat -nultp | grep -i docker
tcp6       0      0 :::8080                 :::*                    LISTEN      70593/docker-proxy  
tcp6       0      0 :::3000                 :::*                    LISTEN      73173/docker-proxy  
tcp6       0      0 :::4000                 :::*                    LISTEN      70653/docker-proxy  
tcp6       0      0 :::10080                :::*                    LISTEN      70617/docker-proxy  
tcp6       0      0 :::32768                :::*                    LISTEN      69872/docker-proxy  
tcp6       0      0 :::38081                :::*                    LISTEN      70792/docker-proxy  
tcp6       0      0 :::32769                :::*                    LISTEN      69943/docker-proxy  
tcp6       0      0 :::9090                 :::*                    LISTEN      73140/docker-proxy  
tcp6       0      0 :::32770                :::*                    LISTEN      70048/docker-proxy  
tcp6       0      0 :::7077                 :::*                    LISTEN      70604/docker-proxy  
tcp6       0      0 :::8010                 :::*                    LISTEN      69919/docker-proxy  
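
Port 4000 is TiDB's MySQL protocol port and 10080 is its HTTP status port. As a quick liveness check you can query the status endpoint (assuming the image exposes TiDB's standard /status path):
curl http://127.0.0.1:10080/status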

# docker images | grep -i pingcap
pingcap/tikv                                                 latest              fb94b4f0c5e0        10 days ago         175 MB
pingcap/tidb                                                 latest              d71a917ea387        10 days ago         58 MB
pingcap/pd                                                   latest              2a77c0aad81d        13 days ago         75.4 MB
pingcap/tispark                                              latest              aa044a92789b        3 weeks ago         793 MB
pingcap/tidb-vision                                          latest              e9b25d9f7bdb        3 months ago        47.5 MB
pingcap/tidb-dashboard-installer                             v1.0.0              c4dbc1379ec7        11 months ago       73.9 MB

6. Access the cluster
CLI access:
mysql -h 127.0.0.1 -P 4000 -u root

A first look after logging in:
# mysql -h 127.0.0.1 -P 4000 -u root
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 1
Server version: 5.7.10-TiDB-v2.1.0-beta-179-g5a0fd2d MySQL Community Server (Apache License 2.0)

Copyright (c) 2009-2018 Percona LLC and/or its affiliates
Copyright (c) 2000, 2018, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| INFORMATION_SCHEMA |
| PERFORMANCE_SCHEMA |
| mysql              |
| test               |
+--------------------+
4 rows in set (0.00 sec)

mysql> show variables like '%char%';
+--------------------------------------+--------------------------------------------------------+
| Variable_name                        | Value                                                  |
+--------------------------------------+--------------------------------------------------------+
| character_set_system                 | utf8                                                   |
| character_set_connection             | utf8                                                   |
| character_sets_dir                   | /usr/local/mysql-5.6.25-osx10.8-x86_64/share/charsets/ |
| character_set_client                 | utf8                                                   |
| character_set_results                | utf8                                                   |
| character_set_server                 | latin1                                                 |
| validate_password_special_char_count | 1                                                      |
| character_set_filesystem             | binary                                                 |
| character_set_database               | latin1                                                 |
+--------------------------------------+--------------------------------------------------------+
9 rows in set (0.01 sec)

mysql> show variables like 'coll%';
+----------------------+-------------------+
| Variable_name        | Value             |
+----------------------+-------------------+
| collation_connection | utf8_general_ci   |
| collation_database   | latin1_swedish_ci |
| collation_server     | latin1_swedish_ci |
+----------------------+-------------------+
3 rows in set (0.01 sec)

mysql> select host,user,password from mysql.user;
+------+------+----------+
| host | user | password |
+------+------+----------+
| %    | root |          |
+------+------+----------+
1 row in set (0.00 sec)
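
As a quick smoke test, you can create a table and read it back non-interactively; the table name test.t1 below is just an example:
mysql -h 127.0.0.1 -P 4000 -u root -e "
CREATE TABLE test.t1 (id INT PRIMARY KEY, name VARCHAR(20));
INSERT INTO test.t1 VALUES (1, 'tidb'), (2, 'tikv');
SELECT * FROM test.t1;"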

Web UI access (tidb-vision):
http://localhost:8010
Monitoring:
Grafana monitoring page for the cluster: http://localhost:3000 (the default username and password are both admin).

After this quick deployment, the following components are running by default: 3 PD nodes, 3 TiKV nodes, 1 TiDB node, plus the monitoring components Prometheus, Pushgateway, Grafana, and tidb-vision.
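
On a headless server, the same web endpoints can be probed from the command line (ports as mapped above; -I sends a HEAD request):
curl -I http://localhost:8010    # tidb-vision
curl -I http://localhost:3000    # Grafana
curl -I http://localhost:9090    # Prometheus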

7. Access Spark:
Access the Spark shell and load TiSpark.
First, insert some sample data into the TiDB cluster:
# cd tidb-docker-compose/
# docker-compose exec tispark-master bash
bash-4.4# cd /opt/spark/data/tispark-sample-data
bash-4.4# ls
customer.tbl  dss.ddl  lineitem.tbl  nation.tbl  orders.tbl  part.tbl  partsupp.tbl  region.tbl  sample_data.sh  supplier.tbl
bash-4.4# mysql -h tidb -P 4000 -u root < dss.ddl
bash-4.4# exit
exit

Once the sample data has been loaded into the TiDB cluster, you can access the Spark shell with docker-compose exec tispark-master /opt/spark/bin/spark-shell.

# docker-compose exec tispark-master /opt/spark/bin/spark-shell

Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/spark-2.1.1-bin-hadoop2.7/jars/slf4j-log4j12-1.7.16.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/spark-2.1.1-bin-hadoop2.7/jars/tispark-core-1.0.1-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
18/08/17 08:09:36 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
18/08/17 08:09:41 WARN General: Plugin (Bundle) "org.datanucleus" is already registered. Ensure you dont have multiple JAR versions of the same plugin in the classpath. The URL "file:/opt/spark/jars/datanucleus-core-3.2.10.jar" is already registered, and you are trying to register an identical plugin located at URL "file:/opt/spark-2.1.1-bin-hadoop2.7/jars/datanucleus-core-3.2.10.jar."
18/08/17 08:09:41 WARN General: Plugin (Bundle) "org.datanucleus.api.jdo" is already registered. Ensure you dont have multiple JAR versions of the same plugin in the classpath. The URL "file:/opt/spark/jars/datanucleus-api-jdo-3.2.6.jar" is already registered, and you are trying to register an identical plugin located at URL "file:/opt/spark-2.1.1-bin-hadoop2.7/jars/datanucleus-api-jdo-3.2.6.jar."
18/08/17 08:09:41 WARN General: Plugin (Bundle) "org.datanucleus.store.rdbms" is already registered. Ensure you dont have multiple JAR versions of the same plugin in the classpath. The URL "file:/opt/spark/jars/datanucleus-rdbms-3.2.9.jar" is already registered, and you are trying to register an identical plugin located at URL "file:/opt/spark-2.1.1-bin-hadoop2.7/jars/datanucleus-rdbms-3.2.9.jar."
18/08/17 08:09:47 WARN ObjectStore: Version information not found in metastore. hive.metastore.schema.verification is not enabled so recording the schema version 1.2.0
18/08/17 08:09:47 WARN ObjectStore: Failed to get database default, returning NoSuchObjectException
18/08/17 08:09:48 WARN ObjectStore: Failed to get database global_temp, returning NoSuchObjectException
Spark context Web UI available at http://172.18.0.11:4040
Spark context available as 'sc' (master = local[*], app id = local-1534493378017).
Spark session available as 'spark'.
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 2.1.1
      /_/
         
Using Scala version 2.11.8 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_172)
Type in expressions to have them evaluated.
Type :help for more information.

scala> import org.apache.spark.sql.TiContext
import org.apache.spark.sql.TiContext

scala> val ti = new TiContext(spark)
ti: org.apache.spark.sql.TiContext = org.apache.spark.sql.TiContext@5069a91b

scala> ti.tidbMapDatabase("TPCH_001")

scala> spark.sql("select count(*) from lineitem").show
+--------+
|count(1)|
+--------+
|   60175|
+--------+
scala> :quit
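
The same query can also be run non-interactively by piping the Scala statements into the shell, which is handy for scripting. This is a sketch, assuming your docker-compose supports exec -T (which disables pseudo-TTY allocation):
echo 'import org.apache.spark.sql.TiContext
val ti = new TiContext(spark)
ti.tidbMapDatabase("TPCH_001")
spark.sql("select count(*) from lineitem").show' | docker-compose exec -T tispark-master /opt/spark/bin/spark-shell
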
You can also access Spark via Python or R:

docker-compose exec tispark-master /opt/spark/bin/pyspark
The pyspark session, and how to exit it:
Python 2.7.14 (default, Dec 14 2017, 15:51:29) 
[GCC 6.4.0] on linux2
Type "help", "copyright", "credits" or "license" for more information.
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/spark-2.1.1-bin-hadoop2.7/jars/slf4j-log4j12-1.7.16.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/spark-2.1.1-bin-hadoop2.7/jars/tispark-core-1.0.1-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
18/08/17 08:12:56 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
18/08/17 08:13:04 WARN ObjectStore: Failed to get database global_temp, returning NoSuchObjectException
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /__ / .__/\_,_/_/ /_/\_\   version 2.1.1
      /_/

Using Python version 2.7.14 (default, Dec 14 2017 15:51:29)
SparkSession available as 'spark'.
>>> help
Type help() for interactive help, or help(object) for help about object.
>>> help()
help> quit
>>> quit()

docker-compose exec tispark-master /opt/spark/bin/sparkR
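
When you are done experimenting, the cluster can be stopped and removed with docker-compose down; a sketch, where the optional -v flag also discards the data volumes:
docker-compose down        # stop and remove the containers
docker-compose down -v     # additionally remove the data volumes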
