(Series) A Virtual Oracle Database Project (Part 7) -- Installing Grid

Upload the Oracle software

This installation uses the study copies downloaded from the Internet:

p13390677_112040_Linux-x86-64_1of7.zip

p13390677_112040_Linux-x86-64_2of7.zip

p13390677_112040_Linux-x86-64_3of7.zip

Open an SFTP tab for the session and upload the installation files, following the steps shown in the figure below;

On rac1, unzip the installation packages into the /tmp directory; this produces the database and grid directories;

# unzip -d /tmp/ p13390677_112040_Linux-x86-64_1of7.zip
# unzip -d /tmp/ p13390677_112040_Linux-x86-64_2of7.zip
# unzip -d /tmp/ p13390677_112040_Linux-x86-64_3of7.zip
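
You can quickly confirm that both directories were extracted (a simple sanity check):

# ls -d /tmp/database /tmp/grid
/tmp/database  /tmp/grid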

On rac2, pull over the cvuqdisk-1.0.9-1.rpm package (a cluster verification tool) from rac1;

[root@rac2 ~]# mkdir -p /tmp/grid/rpm
[root@rac2 ~]# scp [email protected]:/tmp/grid/rpm/cvuqdisk-1.0.9-1.rpm /tmp/grid/rpm/
The authenticity of host '192.168.5.111 (192.168.5.111)' can't be established.
RSA key fingerprint is 5d:4b:e5:6a:d6:d2:e2:e3:2d:3a:86:21:1c:41:c1:c2.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.5.111' (RSA) to the list of known hosts.
[email protected]'s password: 
cvuqdisk-1.0.9-1.rpm                          100% 8288     8.1KB/s   00:00

Install the Cluster Verification Utility (CVU)

//Check whether the cvuqdisk package is already installed
[root@rac1 tmp]# rpm -qa cvuqdisk
//Set the environment variable and install the package
[root@rac1 tmp]# export CVUQDISK_GRP=oinstall
[root@rac1 tmp]# cd /tmp/grid/rpm
[root@rac1 rpm]# rpm -ivh cvuqdisk-1.0.9-1.rpm
Preparing...                ########################################### [100%]
   1:cvuqdisk               ########################################### [100%]

//Do the same on rac2
[root@rac2 ~]# export CVUQDISK_GRP=oinstall
[root@rac2 ~]# cd /tmp/grid/rpm
[root@rac2 rpm]# rpm -ivh cvuqdisk-1.0.9-1.rpm
Preparing...                ########################################### [100%]
   1:cvuqdisk               ########################################### [100%]

Configure SSH user equivalence

We use the sshUserSetup.sh script, which is located in the directory we just uploaded and extracted (/tmp/grid/sshsetup). Run the following commands as root on rac1;

[root@rac1 sshsetup]# /tmp/grid/sshsetup/sshUserSetup.sh -user grid -hosts "rac1 rac2" -advanced -exverify -confirm
...enter yes and the grid password when prompted, then just press Enter through the remaining prompts
[root@rac1 sshsetup]# /tmp/grid/sshsetup/sshUserSetup.sh -user oracle -hosts "rac1 rac2" -advanced -exverify -confirm
...enter yes and the oracle password when prompted, then just press Enter through the remaining prompts
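
If you prefer not to use the script, the same equivalence can be set up by hand (a minimal sketch for the grid user; run the same ssh-copy-id commands from rac2 as well, and repeat for the oracle user):

[root@rac1 ~]# su - grid
[grid@rac1 ~]$ ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
[grid@rac1 ~]$ ssh-copy-id grid@rac1
[grid@rac1 ~]$ ssh-copy-id grid@rac2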

Test it: if no password is required (the first connection to each name only asks you to confirm the host key), the SSH equivalence is configured successfully.

[root@rac1 sshsetup]# su - grid
[grid@rac1 ~]$ ssh rac2 date
Tue Aug 13 09:59:14 CST 2019
[grid@rac1 ~]$ ssh rac1 date
Tue Aug 13 09:59:23 CST 2019
[grid@rac1 ~]$ ssh rac1-priv date
The authenticity of host 'rac1-priv (192.168.2.1)' can't be established.
RSA key fingerprint is 5d:4b:e5:6a:d6:d2:e2:e3:2d:3a:86:21:1c:41:c1:c2.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'rac1-priv,192.168.2.1' (RSA) to the list of known hosts.
Tue Aug 13 09:59:36 CST 2019
[grid@rac1 ~]$ ssh rac2-priv date
The authenticity of host 'rac2-priv (192.168.2.2)' can't be established.
RSA key fingerprint is 5d:4b:e5:6a:d6:d2:e2:e3:2d:3a:86:21:1c:41:c1:c2.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'rac2-priv,192.168.2.2' (RSA) to the list of known hosts.
Tue Aug 13 09:59:42 CST 2019
[grid@rac1 ~]$ su - oracle
Password: 
[oracle@rac1 ~]$ ssh rac2 date
Tue Aug 13 10:00:06 CST 2019
[oracle@rac1 ~]$ ssh rac1 date
Tue Aug 13 10:00:10 CST 2019
[oracle@rac1 ~]$ ssh rac1-priv date
The authenticity of host 'rac1-priv (192.168.2.1)' can't be established.
RSA key fingerprint is 5d:4b:e5:6a:d6:d2:e2:e3:2d:3a:86:21:1c:41:c1:c2.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'rac1-priv,192.168.2.1' (RSA) to the list of known hosts.
Tue Aug 13 10:00:18 CST 2019
[oracle@rac1 ~]$ ssh rac2-priv date
The authenticity of host 'rac2-priv (192.168.2.2)' can't be established.
RSA key fingerprint is 5d:4b:e5:6a:d6:d2:e2:e3:2d:3a:86:21:1c:41:c1:c2.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'rac2-priv,192.168.2.2' (RSA) to the list of known hosts.
Tue Aug 13 10:00:24 CST 2019

Pre-installation check

Switch to the grid user and run the following script from the /tmp/grid directory;

You will see a long list of passed checks, with the following three exceptions:

1. The pdksh package is missing; we download and install it from the Internet below, although this can also be ignored;

2. NTP: we have already disabled it, so this can be ignored;

3. DNS is not configured (the names are defined only in the hosts file), so the /etc/resolv.conf check fails; this warning can be safely ignored, just click Ignore, and it does not affect the installation.

./runcluvfy.sh stage -pre crsinst -n rac1,rac2 -fixup -verbose
-------------------------------------1-------------------------------------------
Check: Package existence for "pdksh" 
  Node Name     Available                 Required                  Status    
  ------------  ------------------------  ------------------------  ------------
  rac2          missing                   pdksh-5.2.14              failed    
  rac1          missing                   pdksh-5.2.14              failed    
Result: Package existence check failed for "pdksh"
-------------------------------------2-------------------------------------------
Starting Clock synchronization checks using Network Time Protocol(NTP)...
NTP Configuration file check started...
The NTP configuration file "/etc/ntp.conf" is available on all nodes
NTP Configuration file check passed
No NTP Daemons or Services were found to be running
PRVF-5507 : NTP daemon or service is not running on any node but NTP configuration file exists on the following node(s):
rac2,rac1
Result: Clock synchronization check using Network Time Protocol(NTP) failed

-------------------------------------3-------------------------------------------
Checking the file "/etc/resolv.conf" to make sure only one of domain and search entries is defined

WARNING: 
PRVF-5640 : Both search and domain entries are present in file "/etc/resolv.conf" on the following nodes: rac2
Checking if domain entry in file "/etc/resolv.conf" is consistent across the nodes...
PRVF-5603 : domain entry does not exist in file "/etc/resolv.conf" on nodes: "rac1"
Checking file "/etc/resolv.conf" to make sure that only one domain entry is defined
All nodes have one domain entry defined in file "/etc/resolv.conf"
Checking if search entry in file "/etc/resolv.conf" is consistent across the nodes...
PRVF-5622 : search entry does not exist in file "/etc/resolv.conf" on nodes: "rac1"
Checking file "/etc/resolv.conf" to make sure that only one search entry is defined
All nodes have one search entry defined in file "/etc/resolv.conf"
Checking DNS response time for an unreachable node
  Node Name                             Status                  
  ------------------------------------  ------------------------
  rac2                                  passed                  
  rac1                                  passed                  
The DNS response time for an unreachable node is within acceptable limit on all nodes

File "/etc/resolv.conf" is not consistent across nodes

Install the pdksh package

Do this on both rac1 and rac2 (it is also fine to skip installing this package).

[root@rac1 ~]# mount /dev/cdrom /media/centos
mount: block device /dev/sr0 is write-protected, mounting read-only
[root@rac1 ~]# cd /media/centos/Packages/
[root@rac1 Packages]#  rpm -qa | grep ksh
ksh-20120801-10.el6.x86_64
[root@rac1 Packages]#  rpm -e ksh
//Then use SFTP to upload the pdksh-5.2.14-30.x86_64.rpm we downloaded from the Internet
sftp> lcd e:/install
sftp> cd /tmp
sftp> put pdksh-5.2.14-30.x86_64.rpm
Uploading pdksh-5.2.14-30.x86_64.rpm to /tmp/pdksh-5.2.14-30.x86_64.rpm
  100% 202KB    202KB/s 00:00:00     
e:/install/pdksh-5.2.14-30.x86_64.rpm: 207399 bytes transferred in 0 seconds (202 KB/s)
[root@rac1 Packages]# cd /tmp
[root@rac1 tmp]# rpm -ivh pdksh-5.2.14-30.x86_64.rpm
warning: pdksh-5.2.14-30.x86_64.rpm: Header V3 DSA/SHA1 Signature, key ID 73307de6: NOKEY
Preparing...                ########################################### [100%]
   1:pdksh                  ########################################### [100%]
//Repeat the same steps on rac2, making sure to run them as the root user

Install Grid Infrastructure

On the rac1 virtual machine, open a terminal and run the following commands to bring up the graphical installer;

[root@rac1 ~]#  export DISPLAY=:0.0
[root@rac1 ~]# xhost +
access control disabled, clients can connect from any host
[root@rac1 ~]# su - grid
[grid@rac1 ~]$ cd /tmp/grid
[grid@rac1 grid]$ ls
install      response  runcluvfy.sh  sshsetup  welcome.html
readme.html  rpm       runInstaller  stage
[grid@rac1 grid]$ ./runInstaller
Starting Oracle Universal Installer...

Checking Temp space: must be greater than 120 MB.   Actual 31235 MB    Passed
Checking swap space: must be greater than 150 MB.   Actual 3999 MB    Passed
Checking monitor: must be configured to display at least 256 colors.    Actual 16777216    Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2019-08-13_10-58-58AM. Please wait ...[grid@rac1 grid]$ 

Then follow the steps shown in the figures below, one at a time;

In the language selection box, additionally select Simplified Chinese;

Only rac1 is listed; click Add to add rac2;

On the network interface screen, confirm the interfaces are designated as public and private;

Use ASM;

Enter OCR as the Disk Group Name, click Change Discovery Path, and change the disk discovery path to /dev/asm-disk*;

You can now see the 5 disks we configured; select the three 2048 MB disks as the OCR disks;
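
Before moving on you can verify from a shell that the candidate disks are visible under the discovery path (a quick check; the exact device names come from the udev rules configured earlier in this series):

[grid@rac1 ~]$ ls -l /dev/asm-disk*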

Set the passwords;

Accept the defaults on the next few screens;

The prerequisite check reports the 2 failed items discussed above, which can be ignored, plus 1 warning, which can also be ignored;

Tick the Ignore All checkbox to ignore them;

Click Yes;

Click Install to start the installation.

Run the orainstRoot.sh and root.sh scripts

At around 76%, a dialog box pops up. Do not close it; switch to SecureCRT and run the scripts;

Run the scripts as the root user on both nodes: first run orainstRoot.sh on rac1 and rac2, then run root.sh on rac1 and rac2;

[root@rac1 /]# /u01/app/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.

Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.
-------------------------------------------------------------------------

[root@rac2 /]# /u01/app/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.

Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.


------------------------------------------------------------------------
[root@rac1 /]#  /u01/app/11.2.0/grid/root.sh
Performing root user operation for Oracle 11g 

The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /u01/app/11.2.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]: //just press Enter here
   Copying dbhome to /usr/local/bin ...
   Copying oraenv to /usr/local/bin ...
   Copying coraenv to /usr/local/bin ...


Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params
Creating trace directory
User ignored Prerequisites during installation
Installing Trace File Analyzer
OLR initialization - successful
  root wallet
  root wallet cert
  root cert export
  peer wallet
  profile reader wallet
  pa wallet
  peer wallet keys
  pa wallet keys
  peer cert request
  pa cert request
  peer cert
  pa cert
  peer root cert TP
  profile reader root cert TP
  pa root cert TP
  peer pa cert TP
  pa peer cert TP
  profile reader pa cert TP
  profile reader peer cert TP
  peer user cert
  pa user cert
Adding Clusterware entries to upstart
CRS-2672: Attempting to start 'ora.mdnsd' on 'rac1'
CRS-2676: Start of 'ora.mdnsd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'rac1'
CRS-2676: Start of 'ora.gpnpd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac1'
CRS-2672: Attempting to start 'ora.gipcd' on 'rac1'
CRS-2676: Start of 'ora.cssdmonitor' on 'rac1' succeeded
CRS-2676: Start of 'ora.gipcd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'rac1'
CRS-2672: Attempting to start 'ora.diskmon' on 'rac1'
CRS-2676: Start of 'ora.diskmon' on 'rac1' succeeded
CRS-2676: Start of 'ora.cssd' on 'rac1' succeeded

ASM created and started successfully.

Disk Group OCR created successfully.

clscfg: -install mode specified
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
CRS-4256: Updating the profile
Successful addition of voting disk 96a82a883f4c4f01bfabf95c7cbebe81.
Successful addition of voting disk 0ff5f18bf62c4fd5bfb40f28f304387c.
Successful addition of voting disk 95e99da23b9f4f3bbf2f7aa3dfe05e1a.
Successfully replaced voting disk group with +OCR.
CRS-4256: Updating the profile
CRS-4266: Voting file(s) successfully replaced
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   96a82a883f4c4f01bfabf95c7cbebe81 (/dev/asm-diskb) [OCR]
 2. ONLINE   0ff5f18bf62c4fd5bfb40f28f304387c (/dev/asm-diske) [OCR]
 3. ONLINE   95e99da23b9f4f3bbf2f7aa3dfe05e1a (/dev/asm-diskh) [OCR]
Located 3 voting disk(s).
CRS-2672: Attempting to start 'ora.asm' on 'rac1'
CRS-2676: Start of 'ora.asm' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.OCR.dg' on 'rac1'
CRS-2676: Start of 'ora.OCR.dg' on 'rac1' succeeded
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
//Success: Configure Oracle Grid Infrastructure for a Cluster ... succeeded

--------------------------------------------------------------------
[root@rac2 /]#   /u01/app/11.2.0/grid/root.sh
Performing root user operation for Oracle 11g 

The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /u01/app/11.2.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]: //just press Enter here
   Copying dbhome to /usr/local/bin ...
   Copying oraenv to /usr/local/bin ...
   Copying coraenv to /usr/local/bin ...


Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params
Creating trace directory
User ignored Prerequisites during installation
Installing Trace File Analyzer
OLR initialization - successful
Adding Clusterware entries to upstart
CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node rac1, number 1, and is terminating
An active cluster was found during exclusive startup, restarting to join the cluster
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
//Success: Configure Oracle Grid Infrastructure for a Cluster ... succeeded

Return to the graphical installer and click OK;

At the last step, an INS-20802 error pops up; check the log;

It indicates that the SCAN IP already exists; from our own PC we can ping the SCAN IP (192.168.5.222) successfully, so this error can be ignored.

The cause of this error is that the SCAN address is configured in /etc/hosts rather than resolved through DNS;
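
A quick way to double-check before dismissing the error (a minimal check, assuming the scan-ip entry added to /etc/hosts during the earlier network setup):

[root@rac1 ~]# grep -i scan /etc/hosts
[root@rac1 ~]# ping -c 2 scan-ip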

[root@rac1 ~]# cat /u01/app/oraInventory/logs/installActions2019-08-13_10-58-58AM.log

....(earlier output omitted)
INFO: Checking name resolution setup for "scan-ip"...
INFO: Checking integrity of name service switch configuration file "/etc/nsswitch.conf" ...
INFO: All nodes have same "hosts" entry defined in file "/etc/nsswitch.conf"
INFO: Check for integrity of name service switch configuration file "/etc/nsswitch.conf" passed
INFO: ERROR: 
INFO: PRVG-1101 : SCAN name "scan-ip" failed to resolve
INFO: ERROR: 
INFO: PRVF-4657 : Name resolution setup check for "scan-ip" (IP address: 192.168.5.222) failed
INFO: ERROR: 
INFO: PRVF-4664 : Found inconsistent name resolution entries for SCAN name "scan-ip"
INFO: Verification of SCAN VIP and Listener setup failed

Click Next; an [INS-32091] dialog pops up: "software installation was successful. But some configuration assistants failed, were cancelled or skipped." Choose Yes. The installation is now complete.

Verify that Grid Infrastructure installed successfully

[root@rac1 /]# crsctl stat res -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.OCR.dg
               ONLINE  ONLINE       rac1                                         
ora.asm
               ONLINE  ONLINE       rac1                     Started             
ora.gsd
               OFFLINE OFFLINE      rac1  //gsd being OFFLINE is normal
ora.net1.network
               ONLINE  ONLINE       rac1                                         
ora.ons
               ONLINE  ONLINE       rac1                                         
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       rac1                                         
ora.cvu
      1        ONLINE  ONLINE       rac1                                         
ora.oc4j
      1        ONLINE  ONLINE       rac1                                         
ora.rac1.vip
      1        ONLINE  ONLINE       rac1                                         
ora.scan1.vip
      1        ONLINE  ONLINE       rac1                                         
[root@rac1 /]# crs_stat -t
Name           Type           Target    State     Host        
------------------------------------------------------------
ora....ER.lsnr ora....er.type ONLINE    ONLINE    rac1        
ora....N1.lsnr ora....er.type ONLINE    ONLINE    rac1        
ora.OCR.dg     ora....up.type ONLINE    ONLINE    rac1        
ora.asm        ora.asm.type   ONLINE    ONLINE    rac1        
ora.cvu        ora.cvu.type   ONLINE    ONLINE    rac1        
ora.gsd        ora.gsd.type   OFFLINE   OFFLINE               
ora....network ora....rk.type ONLINE    ONLINE    rac1        
ora.oc4j       ora.oc4j.type  ONLINE    ONLINE    rac1        
ora.ons        ora.ons.type   ONLINE    ONLINE    rac1        
ora....SM1.asm application    ONLINE    ONLINE    rac1        
ora....C1.lsnr application    ONLINE    ONLINE    rac1        
ora.rac1.gsd   application    OFFLINE   OFFLINE               
ora.rac1.ons   application    ONLINE    ONLINE    rac1        
ora.rac1.vip   ora....t1.type ONLINE    ONLINE    rac1        
ora....SM2.asm application    ONLINE    ONLINE    rac2        
ora....C2.lsnr application    ONLINE    ONLINE    rac2        
ora.rac2.gsd   application    OFFLINE   OFFLINE               
ora.rac2.ons   application    ONLINE    ONLINE    rac2        
ora.rac2.vip   ora....t1.type ONLINE    ONLINE    rac2        
ora.scan1.vip  ora....ip.type ONLINE    ONLINE    rac1        
[root@rac1 /]# crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   96a82a883f4c4f01bfabf95c7cbebe81 (/dev/asm-diskb) [OCR]
 2. ONLINE   0ff5f18bf62c4fd5bfb40f28f304387c (/dev/asm-diske) [OCR]
 3. ONLINE   95e99da23b9f4f3bbf2f7aa3dfe05e1a (/dev/asm-diskh) [OCR]
Located 3 voting disk(s).
[root@rac1 /]# ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          3
         Total space (kbytes)     :     262120
         Used space (kbytes)      :       2592
         Available space (kbytes) :     259528
         ID                       : 1951501964
         Device/File Name         :       +OCR
                                    Device/File integrity check succeeded

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

         Cluster registry integrity check succeeded
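
As a couple of extra checks, you can also confirm that the cluster stack is healthy on both nodes (output not shown here; it varies by environment):

[root@rac1 /]# crsctl check cluster -all
[root@rac1 /]# olsnodes -n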
