Install VirtualBox on 64-bit RHEL and create the hosts rac1.ad.com and rac2.ad.com; everything here uses 64-bit versions.
Oracle:11.2.0.3 64bit
0: Set up time synchronization
Server-side setup:
[root@ test]# vim /etc/xinetd.d/time-dgram
disable = no
[root@ test]# vim /etc/xinetd.d/time-stream
disable = no
# /etc/init.d/xinetd restart
--after restarting, TCP and UDP port 37 are both open
# vim sync_time_ss.sh --write the script
#!/bin/bash
while :; do rdate -s 10.13.12.21; sleep 10; done # sync the time every 10 seconds
# sh sync_time_ss.sh & --run it in the background
[root@rac2 ~]# crontab -e
* * * * * rdate -s 10.13.12.21
--sync the time once a minute
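As a quick sanity check (a minimal sketch; 10.13.12.21 is simply the time server used above), confirm the time service is listening and query it without touching the local clock:
# netstat -an | grep ':37 '     --on the server, both tcp and udp port 37 should be listed
# rdate -p 10.13.12.21          --on a client, prints the server's time without setting the clock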
1: Set up graphical (VNC) access
1: [root@master ]# vim /etc/sysconfig/vncservers
Add the following two lines:
VNCSERVERS="89:oracle"
--this assigns display 89 to the oracle user; for multiple users use: VNCSERVERS="89:oracle 90:root"
VNCSERVERARGS[2]="-geometry 800x600 -nolisten tcp -nohttpd -localhost"
--without this change, /etc/init.d/vncserver restart reports: no displays configured
2:[oracle@master ~]$ vncserver :89
perl: warning: Setting locale failed.
perl: warning: Please check that your locale settings:
LANGUAGE = (unset),
LC_ALL = (unset),
LANG = "en"
are supported and installed on your system.
perl: warning: Falling back to the standard locale ("C").
You will require a password to access your desktops.
--set the VNC access password
Passwords don't match - try again
creating new authority file /home/oracle/.Xauthority
New 'master.wonder.com:89 (oracle)' desktop is master.wonder.com:89
Creating default startup script /home/oracle/.vnc/xstartup
Starting applications specified in /home/oracle/.vnc/xstartup
Log file is /home/oracle/.vnc/master.wonder.com:89.log
3: Configure the oracle desktop on display 89
[oracle@master ]$ vim /home/oracle/.vnc/xstartup
# Uncomment the following two lines for normal desktop:
unset SESSION_MANAGER
--uncomment this line
exec /etc/X11/xinit/xinitrc
--uncomment this line
[ -x /etc/vnc/xstartup ] && exec /etc/vnc/xstartup
[ -r $HOME/.Xresources ] && xrdb $HOME/.Xresources
xsetroot -solid grey
vncconfig -iconic &
xterm -geometry 80x24+10+10 -ls -title "$VNCDESKTOP Desktop" &
gnome-session &
--remove the 'twm &' line and add this one
5: [oracle@master ]$ vncpasswd
--set the access password; using the same password as above is fine
6: [root@master ]# /etc/init.d/vncserver restart
--the service must be restarted for the change to take effect
7: Download a VNC client and connect to ip:89 (or ip:90); a public IP works as well.
8: If the desktop shows up as a grey screen after connecting, check /home/oracle/.vnc/master.wonder.com:89.log; if it says the gnome-session command cannot be found, run yum install gnome-session.
After connecting, selecting the Bitstream Vera Sans Mono font (Roman style) in the terminal looks quite nice.
Note: mind which user each of the commands above is run as; permissions matter.
2: Download VirtualBox-4.2-4.2.12_84980_el5-1.x86_64.rpm (the Oracle Linux 5 build) from the official site and install it:
rpm -ivh VirtualBox-4.2-4.2.12_84980_el5-1.x86_64.rpm
# virtualbox
--launches the graphical interface
3: Download and install RHEL 5.8. Note: change the path where the VMs are stored, and the default path used when cloning, under File -- Preferences.
Download link for Redhat_Linux_v5.8.X86_64.iso:
Link: http://pan.baidu.com/share/link?shareid=&uk= Password: yigh
4: Configure the yum repository
# umount /dev/hdc
--the automatically mounted path contains spaces, which breaks the yum configuration, so remount it below
# mkdir /mnt/cdrom
# mount /dev/hdc /mnt/cdrom
# vim /etc/yum.repos.d/local_cdrom.repo
[Cluster]
name=Red Hat Enterprise Cluster
baseurl=file:///mnt/cdrom/Cluster
gpgcheck=0
[ClusterStorage]
name=Red Hat Enterprise ClusterStorage
baseurl=file:///mnt/cdrom/ClusterStorage
gpgcheck=0
[Server]
name=Red Hat Enterprise
baseurl=file:///mnt/cdrom/Server
gpgcheck=0
[VT]
name=Red Hat Enterprise Linux VT
baseurl=file:///mnt/cdrom/VT
gpgcheck=0
5: Install the dependencies:
# yum install libXp* libaio* gcc-* gcc-c++-* make-* setarch-* unixODBC* compat-libstdc* sysstat -y
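To confirm the key packages actually landed, a small check based on the wildcard list above (exact package names can differ by release, so treat this list as an assumption):
# rpm -q gcc gcc-c++ make libaio libaio-devel sysstat unixODBC compat-libstdc++-33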
6: Create users and groups
# groupadd -g 5000 asmadmin
# groupadd -g 5001 asmdba
# groupadd -g 5002 asmoper
# groupadd -g 6000 oinstall
# groupadd -g 6001 dba
# groupadd -g 6002 oper
# useradd -u 1000 -g oinstall -G asmadmin,asmdba,asmoper grid   --use usermod with the same options if the user already exists
# useradd -u 1001 -g oinstall -G dba,asmdba oracle
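A quick way to confirm the IDs and group memberships came out as intended (the expected values in the comments follow from the commands above):
# id grid      --uid=1000(grid) gid=6000(oinstall) groups=oinstall,asmadmin,asmdba,asmoper
# id oracle    --uid=1001(oracle) gid=6000(oinstall) groups=oinstall,dba,asmdba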
7: Create the directories and set the correct ownership and permissions
# mkdir -p /u01/app/grid
# mkdir -p /u01/app/11.2.0.3/grid
# chown -R grid:oinstall /u01
# mkdir /u01/app/oracle/
# chown -R oracle:oinstall /u01/app/oracle
# chmod -R 775 /u01
8: Set environment variables
grid user environment variables:
grid $ vim .bash_profile
export ORACLE_BASE=/u01/app/grid
export ORACLE_HOME=/u01/app/11.2.0.3/grid
export ORACLE_SID=+ASM1
--on node 2 change this to +ASM2
export LANG=en
export LD_LIBRARY_PATH=$ORACLE_HOME/lib
export PATH=$ORACLE_HOME/bin:$PATH
Run $ source .bash_profile to make the settings take effect.
oracle user environment variables:
oracle $ vim .bash_profile
export ORACLE_BASE=/u01/app/oracle/
export ORACLE_HOME=/u01/app/oracle/product/11.2.0.3/dbhome
export ORACLE_SID=chris1
--this must match the global database name/SID entered later in DBCA; on node 2 change it to chris2
export LANG=en
export LD_LIBRARY_PATH=$ORACLE_HOME/lib
export PATH=$ORACLE_HOME/bin:$PATH
Run $ source .bash_profile to make the settings take effect.
8. Edit the following files and set the related values.
1. Edit /etc/security/limits.conf and append the resource-limit lines for grid and oracle:
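Typical 11gR2 values for these limits (given here as a reference assumption, not recorded from this install):
grid   soft nproc  2047
grid   hard nproc  16384
grid   soft nofile 1024
grid   hard nofile 65536
oracle soft nproc  2047
oracle hard nproc  16384
oracle soft nofile 1024
oracle hard nofile 65536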
9. Edit /etc/pam.d/login and append the following line:
session    required     /lib64/security/pam_limits.so
Edit /etc/profile and append the following lines:
if [ $USER = "oracle" ] || [ $USER = "grid" ]; then
    if [ $SHELL = "/bin/ksh" ]; then
        ulimit -p 16384
        ulimit -n 65536
    else
        ulimit -u 16384 -n 65536
    fi
fi
10. Edit /etc/sysctl.conf:
kernel.shmmax =
kernel.shmall =
net.ipv4.tcp_max_syn_backlog = 65536
net.core.netdev_max_backlog = 32768
net.core.somaxconn = 8192
net.ipv4.tcp_fin_timeout = 30
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_sack = 1
net.ipv4.tcp_timestamps = 0
kernel.core_uses_pid = 1
kernel.shmmni = 4096
kernel.sem = 250 8
fs.file-max = 6815744
net.ipv4.ip_local_port_range =
net.core.rmem_default=262144
net.core.rmem_max=4194304
net.core.wmem_default=262144
net.core.wmem_max=1048576
fs.aio-max-nr=1048576
# sysctl -p    --apply the settings
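For reference, the kernel parameters usually recommended for 11gR2 are shown below; kernel.shmmax and kernel.shmall depend on the machine's RAM, so treat these as placeholder values rather than values from this particular setup:
kernel.shmmax = 4294967295
kernel.shmall = 2097152
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500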
11: Add a second NIC in internal network mode; after booting, give it a static IP of 192.168.179.150:
vim /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
BOOTPROTO=static
IPADDR=192.168.179.150
NETMASK=255.255.255.0
NETWORK=192.168.179.0
ONBOOT=yes
HWADDR=00:0c:29:2c:28:35
# /etc/init.d/network restart
12: Configure the hostname and /etc/hosts:
10.13.12.150 rac1 rac1.ad.com
10.13.12.151 rac2 rac2.ad.com
192.168.1.150 rac1-priv
192.168.1.151 rac2-priv
10.13.12.155 rac1-vip
10.13.12.156 rac2-vip
10.13.12.152 ad-cluster ad-cluster-scan
1: Clone a test_rac1 and a test_rac2, and create a share_storage directory dedicated to the shared-disk files. Note: when creating the shared disks, choose fixed size.
On test_rac1 create the shared disks: share_disk1, share_disk2 and share_disk_voting_4 hold the CRS information, and share_disk_data3 holds the data.
2: Make the disks shareable: in File -- Virtual Media Manager, modify the disk files above to 'Shareable'.
3: On test_rac2 add the disks: select 'choose existing disk' and pick the 4 shared disks.
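The same disk setup can also be done from the command line. A sketch assuming VirtualBox 4.2's VBoxManage syntax, a SATA controller named "SATA" and hypothetical file names (none of this is taken from these notes):
# VBoxManage createhd --filename /data1/share_storage/share_disk1.vdi --size 1024 --variant Fixed
# VBoxManage modifyhd /data1/share_storage/share_disk1.vdi --type shareable
# VBoxManage storageattach test_rac1 --storagectl "SATA" --port 1 --device 0 --type hdd --medium /data1/share_storage/share_disk1.vdi
# VBoxManage storageattach test_rac2 --storagectl "SATA" --port 1 --device 0 --type hdd --medium /data1/share_storage/share_disk1.vdi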
13: Configure the system
1: Switch both nodes to static IPs. The same disk gets a different device name on each node, so bind the disks with udev.
[root@rac1 ~]# fdisk -l
Disk /dev/sda: 25.7 GB,
255 heads, 63 sectors/track, 3133 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot
Disk /dev/sdb: 1073 MB,
255 heads, 63 sectors/track, 130 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/sdb doesn't contain a valid partition table
Disk /dev/sdc: 1073 MB,
255 heads, 63 sectors/track, 130 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/sdc doesn't contain a valid partition table
Disk /dev/sdd: 21.4 GB,
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/sdd doesn't contain a valid partition table
Disk /dev/sde: 1073 MB,
255 heads, 63 sectors/track, 130 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/sde doesn't contain a valid partition table
[root@rac2 ~]# fdisk -l
Disk /dev/sda: 25.7 GB,
255 heads, 63 sectors/track, 3133 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot
Disk /dev/sdb: 21.4 GB,
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/sdb doesn't contain a valid partition table
Disk /dev/sdc: 1073 MB,
255 heads, 63 sectors/track, 130 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/sdc doesn't contain a valid partition table
Disk /dev/sdd: 1073 MB,
255 heads, 63 sectors/track, 130 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/sdd doesn't contain a valid partition table
Disk /dev/sde: 1073 MB,
255 heads, 63 sectors/track, 130 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/sde doesn't contain a valid partition table
With the 4 shared disks in place, bind the device names with udev:
# touch /etc/udev/rules.d/99-oracle-asmdevices.rules
# vim /etc/udev/rules.d/99-oracle-asmdevices.rules
for i in b c d e; do echo "KERNEL==\"sd*\", BUS==\"scsi\", PROGRAM==\"/sbin/scsi_id -g -u -s %p\", RESULT==\"`scsi_id -g -u -s /block/sd$i`\", NAME=\"asm-disk$i\", OWNER=\"grid\", GROUP=\"asmadmin\", MODE=\"0660\""; done
--the generated NAME values were then edited by hand (asm-b_crs, asm-c_crs, asm-d_crs, asm-e_data), as the listing below shows
[root@rac1 rules.d]# cat !$
cat 99-oracle-asmdevices.rules
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -s %p", RESULT=="SATA_VBOX_HARDDISK_VB77c6d2b5-_", NAME="asm-b_crs", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -s %p", RESULT=="SATA_VBOX_HARDDISK_VB7d0c7f3b-d289b35f_", NAME="asm-c_crs", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -s %p", RESULT=="SATA_VBOX_HARDDISK_VBd1a2c502-_", NAME="asm-d_crs", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -s %p", RESULT=="SATA_VBOX_HARDDISK_VBb9935999-ff6b34be_", NAME="asm-e_data", OWNER="grid", GROUP="asmadmin", MODE="0660"
Copy the rules file to the other node, then run:
start_udev
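To double-check that each RESULT string matches what the kernel reports (a small sketch using the same scsi_id call the rules use):
# for i in b c d e; do echo -n "sd$i: "; /sbin/scsi_id -g -u -s /block/sd$i; done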
[root@rac1 rules.d]# ls /dev/sd* -l
brw-r----- 1 root disk 8,
2013 /dev/sda
brw-r----- 1 root disk 8,
1 Jun 17 10:25 /dev/sda1
brw-r----- 1 root disk 8,
2013 /dev/sda2
brw-r----- 1 root disk 8, 16 Jun 17
2013 /dev/sdb
brw-r----- 1 root disk 8, 32 Jun 17
2013 /dev/sdc
brw-r----- 1 root disk 8, 48 Jun 17
2013 /dev/sdd
brw-r----- 1 root disk 8, 64 Jun 17
2013 /dev/sde
[root@rac1 rules.d]# ls /dev/asm* -l
brw-rw---- 1 grid root 8, 16 Jun 17 11:06 /dev/asm-b_crs
brw-rw---- 1 grid root 8, 32 Jun 17 11:06 /dev/asm-c_crs
brw-rw---- 1 grid root 8, 64 Jun 17 11:06 /dev/asm-d_crs
brw-rw---- 1 grid root 8, 48 Jun 17 11:06 /dev/asm-e_data
[root@rac2 rules.d]# ls /dev/asm* -l
brw-rw---- 1 grid asmadmin 8, 48 Jun 17 11:16 /dev/asm-b_crs
brw-rw---- 1 grid asmadmin 8, 64 Jun 17 11:16 /dev/asm-c_crs
brw-rw---- 1 grid asmadmin 8, 32 Jun 17 11:16 /dev/asm-d_crs
brw-rw---- 1 grid asmadmin 8, 16 Jun 17 11:16 /dev/asm-e_data
[root@rac2 rules.d]# ls
/dev/sd* -l
brw-r----- 1 root disk 8,
2013 /dev/sda
brw-r----- 1 root disk 8,
1 Jun 17 10:33 /dev/sda1
brw-r----- 1 root disk 8,
2013 /dev/sda2
brw-r----- 1 root disk 8, 16 Jun 17
2013 /dev/sdb
brw-r----- 1 root disk 8, 32 Jun 17
2013 /dev/sdc
brw-r----- 1 root disk 8, 48 Jun 17
2013 /dev/sdd
brw-r----- 1 root disk 8, 64 Jun 17
2013 /dev/sde
14: Sharing files from the Linux host into the VMs is awkward, mainly because the vboxsf module is missing.
Mounting fails with:
[root@rac1 mnt]# mount -t vboxsf /data1/rac_file/grid grid
mount: unknown filesystem type 'vboxsf'
So the grid and oracle installation files were copied into the VMs with scp instead.
15: Install Grid Infrastructure
[root@rac1 grid_inst_file]# chown grid:oinstall ../grid_inst_file -R
[root@rac1 grid_inst_file]# chown oracle:oinstall ../oracle_inst_file -R
The following must be done on both nodes.
=============================================
--Fixing the ntp check failure:
Disable ntpd time synchronization (Oracle will use its internal time synchronization mechanism, CTSS):
# /sbin/service ntpd stop
# chkconfig ntpd off
# mv /etc/ntp.conf /etc/ntp.conf.original
Also remove the following file:
[root@racnode1 ~]# rm /var/run/ntpd.pid
This file stores the pid of the NTP daemon.
=======================================================
--Installing the cvuqdisk-1.0.9-1.rpm prerequisite
[root@rac1 ~]# cd /mnt/grid_inst_file/grid/rpm/
[root@rac1 rpm]# ls
cvuqdisk-1.0.9-1.rpm
[root@rac1 rpm]# rpm -ivh cvuqdisk-1.0.9-1.rpm
Preparing...
########################################### [100%]
Using default group oinstall to install package
1:cvuqdisk
########################################### [100%]
[root@rac1 rpm]# scp cvuqdisk-1.0.9-1.rpm 192.168.1.151:/tmp
The authenticity of host '192.168.1.151 (192.168.1.151)' can't be established.
RSA key fingerprint is f2:8f:81:1e:f3:5d:df:e6:1a:b5:ed:58:1f:af:c5:5e.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.1.151' (RSA) to the list of known hosts. password:
cvuqdisk-1.0.9-1.rpm
[root@rac2 ~]# rpm -ivh /tmp/cvuqdisk-1.0.9-1.rpm
Preparing...
########################################### [100%]
Using default group oinstall to install package
1:cvuqdisk
########################################### [100%]
================================================================================
--Resolving the elfutils-libelf-devel-0.125 dependency
[root@rac1 rpm]# yum install elfutils-libelf-devel -y
===========================================================================
--Prerequisite check failures that can be ignored:
device checks for asm
--ASMLib is not installed; udev binding is used instead, so this can be ignored
task resolv.conf integrity
--this is because the configured DNS IP is unreachable; it does not affect the installation
==========================================
[grid@rac1 grid]$ ./runcluvfy.sh stage -post hwos -n rac1,rac2
Performing post-checks for hardware and operating system setup
Checking node reachability...
Node reachability check passed from node "rac1"
Checking user equivalence...
User equivalence check passed for user "grid"
Checking node connectivity...
Checking hosts config file...
Verification of the hosts config file successful
Node connectivity passed for subnet "10.13.12.0" with node(s) rac2,rac1
TCP connectivity check passed for subnet "10.13.12.0"
Node connectivity passed for subnet "192.168.1.0" with node(s) rac2,rac1
TCP connectivity check passed for subnet "192.168.1.0"
Interfaces found on subnet "10.13.12.0" that are likely candidates for a private interconnect are:
rac2 eth0:10.13.12.151
rac1 eth0:10.13.12.150
Interfaces found on subnet "192.168.1.0" that are likely candidates for a private interconnect are:
rac2 eth1:192.168.1.151
rac1 eth1:192.168.1.150
Could not find a suitable set of interfaces for VIPs
Checking subnet mask consistency...
Subnet mask consistency check passed for subnet "10.13.12.0".
Subnet mask consistency check passed for subnet "192.168.1.0".
Subnet mask consistency check passed.
Node connectivity check passed
Checking multicast communication...
Checking subnet "10.13.12.0" for multicast communication with multicast group "230.0.1.0"...
Check of subnet "10.13.12.0" for multicast communication with multicast group "230.0.1.0" passed.
Checking subnet "192.168.1.0" for multicast communication with multicast group "230.0.1.0"...
Check of subnet "192.168.1.0" for multicast communication with multicast group "230.0.1.0" passed.
Check of multicast communication passed.
Check for multiple users with UID value 0 passed
Time zone consistency check passed
Checking shared storage accessibility...
No shared storage found
Shared storage check failed on nodes "rac2,rac1"
Post-check for hardware and operating system setup was unsuccessful on all the nodes.
[grid@rac1 grid]$
[grid@rac1 grid_inst_file ~]$ export LANG=en
[grid@rac1 grid_inst_file ~]$ ./runInstaller
step 1 of 8 :
install and configure grid infrastructure for a cluster
step 2 of 8 :
advanced installation
step 3 of 8 :
simplified chinese and english
step 4 of 8 :
cluster name : ad-cluster
--fill this in according to the SCAN name defined in /etc/hosts
scan name: ad-cluster-scan
scan port : 1521
Do not tick 'configure GNS'; name resolution uses what is defined in DNS/hosts.
step 5 of 16 :
The screen shows the detected information for the first node; now add the second node:
add -- hostname: rac2 (taken from the public hostnames in /etc/hosts)    virtual ip name : rac2-vip
--this adds the second node's information
ssh connectivity -- os username : grid
os password : grid -- Setup -- Test
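The Setup button configures SSH user equivalence automatically. If you prefer to set it up by hand first, a minimal sketch with standard OpenSSH commands (not taken from these notes), run as grid on each node and repeated for the oracle user:
$ ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
$ ssh-copy-id grid@rac1
$ ssh-copy-id grid@rac2
$ ssh rac2 date    --verify passwordless access in both directions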
step 6 of 16 :
eth0 120.197.95.0 public
eth1 192.168.8.0 private
The values detected in this step are basically correct and need no changes.
step 7 of 15:
automatic storage management(ASM)
step 8 of 15:
change discovery path -- disk discovery path : /dev/asm* --ok
disk group name : CRS_DATA
redundancy : external
If you choose normal redundancy at least 3 disks are required; high needs at least 5.
Select /dev/asm-b_crs, /dev/asm-c_crs and /dev/asm-d_crs to hold the OCR.
step 9 of 15:
use same passwords for these accounts --enter the password twice and avoid special characters
This sets the password for the sys and asmsnmp users.
step 10 of 16:
do not use intelligent platform management interface(IPMI)
step 11 of 16:
OSDBA GROUP : asmdba
OSOPER GROUP: asmoper
OSASM GROUP: asmadmin
step 12 of 16:
oracle base : /u01/app/grid
software location : /u01/app/11.2.0.3/grid
--note that both paths use 11.2.0.3; double-check them
step 13 of 17:
Inventory directory : /data/u01/app/oraInventory
step 14 of 17:
Tick 'ignore all'.
step 15 of 17:
Part-way through, the following warning appeared:
WARNING: Error while copying directory /u01/app/11.2.0.3/grid with exclude file list '/tmp/OraInstall_03-26-31PM/installExcludeFile.lst' to nodes 'rac2'.
[Connection closed by 10.13.12.151 :failed]
Refer to '/u01/app/oraInventory/logs/installActions_03-26-31PM.log' for details. You may fix the errors on the required remote nodes.
Refer to the install guide for error recovery. Click 'Yes' if you want to proceed. Click 'No' to exit the install. Do you want to continue?
[grid@rac1 grid]$ cat /tmp/OraInstall_03-26-31PM/installExcludeFile.lst
/u01/app/11.2.0.3/grid/cfgtoollogs/cfgfw
[grid@rac1 grid]$ ll /u01/app/11.2.0.3/grid/cfgtoollogs/cfgfw
-rw------- 1 grid oinstall 1261 Jun 22 15:34 CfmLogger__03-34-58-PM.log
-rw-r--r-- 1 grid oinstall
0 Jun 22 15:34 CfmLogger__03-34-58-PM.log.lck
-rw------- 1 grid oinstall
0 Jun 22 15:34 OuiConfigVariables__03-34-58-PM.log
-rw-r--r-- 1 grid oinstall
0 Jun 22 15:34 OuiConfigVariables__03-34-58-PM.log.lck
-rw------- 1 grid oinstall
0 Jun 22 15:34 oracle.assistants.asm__03-34-58-PM.log
-rw-r--r-- 1 grid oinstall
0 Jun 22 15:34 oracle.assistants.asm__03-34-58-PM.log.lck
-rw------- 1 grid oinstall
0 Jun 22 15:34 oracle.assistants.netca.client__03-34-58-PM.log
-rw-r--r-- 1 grid oinstall
0 Jun 22 15:34 oracle.assistants.netca.client__03-34-58-PM.log.lck
-rw------- 1 grid oinstall
0 Jun 22 15:34 oracle.crs__03-34-58-PM.log
-rw-r--r-- 1 grid oinstall
0 Jun 22 15:34 oracle.crs__03-34-58-PM.log.lck
Click OK to continue; next it prompts to run the root scripts.
[root@rac1 rpm]# /u01/app/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.
Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.
[root@rac2 app]# /u01/app/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.
Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.
[root@rac2 app]# /u01/app/11.2.0.3/grid/root.sh
--By mistake this was run on node 2 first. Since the mistake was already made, let's see below what goes wrong. On the VM this script ran for roughly ten-odd minutes.
Performing root user operation for Oracle 11g
The following environment variables are set as:
ORACLE_OWNER= grid
ORACLE_HOME=
/u01/app/11.2.0.3/grid
Enter the full pathname of the local bin directory: [/usr/local/bin]:
Copying dbhome to /usr/local/bin ...
Copying oraenv to /usr/local/bin ...
Copying coraenv to /usr/local/bin ...
Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/11.2.0.3/grid/crs/install/crsconfig_params
Creating trace directory
User ignored Prerequisites during installation
OLR initialization - successful
root wallet
root wallet cert
root cert export
peer wallet
profile reader wallet
peer wallet keys
pa wallet keys
peer cert request
pa cert request
peer root cert TP
profile reader root cert TP
pa root cert TP
peer pa cert TP
pa peer cert TP
profile reader pa cert TP
profile reader peer cert TP
peer user cert
pa user cert
Adding Clusterware entries to inittab
CRS-2672: Attempting to start 'ora.mdnsd' on 'rac2'
CRS-2676: Start of 'ora.mdnsd' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'rac2'
CRS-2676: Start of 'ora.gpnpd' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac2'
CRS-2672: Attempting to start 'ora.gipcd' on 'rac2'
CRS-2676: Start of 'ora.cssdmonitor' on 'rac2' succeeded
CRS-2676: Start of 'ora.gipcd' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'rac2'
CRS-2672: Attempting to start 'ora.diskmon' on 'rac2'
CRS-2676: Start of 'ora.diskmon' on 'rac2' succeeded
CRS-2676: Start of 'ora.cssd' on 'rac2' succeeded
ASM created and started successfully.
Disk group CRSDATA created successfully.
clscfg: -install mode specified
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
CRS-4256: Updating the profile
Successful addition of voting disk 9f83bf35d.
Successful addition of voting disk 2ab935b263ee4f9fbf2eb.
Successful addition of voting disk 6cf33bf0e30e8c251deab.
Successfully replaced voting disk group with +CRSDATA.
CRS-4256: Updating the profile
CRS-4266: Voting file(s) successfully replaced
File Universal Id
File Name Disk group
-----------------
--------- ---------
9f83bf35d (/dev/asm-b_crs) [CRSDATA]
2ab935b263ee4f9fbf2eb (/dev/asm-c_crs) [CRSDATA]
6cf33bf0e30e8c251deab (/dev/asm-d_crs) [CRSDATA]
Located 3 voting disk(s).
CRS-2672: Attempting to start 'ora.asm' on 'rac2'
CRS-2676: Start of 'ora.asm' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.CRSDATA.dg' on 'rac2'
CRS-2676: Start of 'ora.CRSDATA.dg' on 'rac2' succeeded
OC4J could not be started
PRCR-1079 : Failed to start resource ora.oc4j
CRS-2674: Start of 'ora.oc4j' on 'rac2' failed
CRS-2632: There are no more servers to try to place resource 'ora.oc4j' on that would satisfy its placement policy
CRS-2672: Attempting to start 'ora.registry.acfs' on 'rac2'
CRS-2676: Start of 'ora.registry.acfs' on 'rac2' succeeded
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
Run on the first node:
[root@rac1 rpm]# /u01/app/11.2.0.3/grid/root.sh
Performing root user operation for Oracle 11g
The following environment variables are set as:
ORACLE_OWNER= grid
ORACLE_HOME=
/u01/app/11.2.0.3/grid
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.
Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/11.2.0.3/grid/crs/install/crsconfig_params
Creating trace directory
User ignored Prerequisites during installation
OLR initialization - successful
root wallet
root wallet cert
root cert export
peer wallet
profile reader wallet
peer wallet keys
pa wallet keys
peer cert request
pa cert request
peer root cert TP
profile reader root cert TP
pa root cert TP
peer pa cert TP
pa peer cert TP
profile reader pa cert TP
profile reader peer cert TP
peer user cert
pa user cert
Adding Clusterware entries to inittab
CRS-2672: Attempting to start 'ora.mdnsd' on 'rac1'
CRS-2676: Start of 'ora.mdnsd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'rac1'
CRS-2676: Start of 'ora.gpnpd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac1'
CRS-2672: Attempting to start 'ora.gipcd' on 'rac1'
CRS-2676: Start of 'ora.cssdmonitor' on 'rac1' succeeded
CRS-2676: Start of 'ora.gipcd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'rac1'
CRS-2672: Attempting to start 'ora.diskmon' on 'rac1'
CRS-2676: Start of 'ora.diskmon' on 'rac1' succeeded
CRS-2676: Start of 'ora.cssd' on 'rac1' succeeded
Mounting Disk Group CRSDATA failed with the following message:
ORA-15032: not all alterations performed
ORA-15017: diskgroup "CRSDATA" cannot be mounted
ORA-15003: diskgroup "CRSDATA" already mounted in another lock name space
Configuration of ASM ... failed
see asmca logs at /u01/app/grid/cfgtoollogs/asmca for details
Did not succssfully configure and start ASM at /u01/app/11.2.0.3/grid/crs/install/crsconfig_lib.pm line 6763.
/u01/app/11.2.0.3/grid/perl/bin/perl -I/u01/app/11.2.0.3/grid/perl/lib -I/u01/app/11.2.0.3/grid/crs/install /u01/app/11.2.0.3/grid/crs/install/rootcrs.pl execution failed
It failed, and the clusterware would not start.
[root@rac1 rpm]# su - grid
[grid@rac1 ~]$
crsctl stat res -t
CRS-4535: Cannot communicate with Cluster Ready Services
CRS-4000: Command Status failed, or completed with errors.
At this point the RAC installation has failed, so the only option is to deinstall GI and install again. Run the deinstall on node 2 and then node 1.
[grid@rac2 deinstall]$ cd /u01/app/11.2.0.3/grid/deinstall
[grid@rac2 deinstall]$ ./deinstall
Checking for required files and bootstrapping ...
Please wait ...
perl: warning: Setting locale failed.
perl: warning: Please check that your locale settings:
LANGUAGE = (unset),
LC_ALL = (unset),
LANG = "en"
are supported and installed on your system.
perl: warning: Falling back to the standard locale ("C").
perl: warning: Setting locale failed.
perl: warning: Please check that your locale settings:
LANGUAGE = (unset),
LC_ALL = (unset),
LANG = "en"
are supported and installed on your system.
perl: warning: Falling back to the standard locale ("C").
Location of logs /tmp/deinstall_05-02-43PM/logs/
############ ORACLE DEINSTALL & DECONFIG TOOL START ############
######################### CHECK OPERATION START #########################
## [START] Install check configuration ##
Checking for existence of the Oracle home location /u01/app/11.2.0.3/grid
Oracle Home type selected for deinstall is: Oracle Grid Infrastructure for a Cluster
Oracle Base selected for deinstall is: /u01/app/grid
Checking for existence of central inventory location /u01/app/oraInventory
Checking for existence of the Oracle Grid Infrastructure home /u01/app/11.2.0.3/grid
The following nodes are part of this cluster: rac1,rac2
Checking for sufficient temp space availability on node(s) : 'rac1,rac2'
## [END] Install check configuration ##
Traces log file: /tmp/deinstall_05-02-43PM/logs//crsdc.log
Network Configuration check config START
Network de-configuration trace file location: /tmp/deinstall_05-02-43PM/logs/netdc_check_05-09-06-PM.log
Network Configuration check config END
Asm Check Configuration START
ASM de-configuration trace file location: /tmp/deinstall_05-02-43PM/logs/asmcadc_check_05-09-07-PM.log
Automatic Storage Management (ASM) instance is detected in this Oracle home /u01/app/11.2.0.3/grid.
ASM Diagnostic Destination : /u01/app/grid
ASM Diskgroups : +CRSDATA
ASM diskstring : /dev/asm*
Diskgroups will be dropped
De-configuring ASM will drop all the diskgroups and it's contents at cleanup time. This will affect all of the databases and ACFS that use this ASM instance(s).
If you want to retain the existing diskgroups or if any of the information detected is incorrect, you can modify by entering 'y'.
want to modify above information (y|n) [n]:
######################### CHECK OPERATION END #########################
####################### CHECK OPERATION SUMMARY #######################
Oracle Grid Infrastructure Home is: /u01/app/11.2.0.3/grid
The cluster node(s) on which the Oracle home deinstallation will be performed are:rac1,rac2
Oracle Home selected for deinstall is: /u01/app/11.2.0.3/grid
Inventory Location where the Oracle home registered is: /u01/app/oraInventory
ASM instance will be de-configured from this Oracle home
Do you want to continue (y - yes, n - no)? [n]: y
---The key point is to answer y here so the ASM configuration is removed; everything else can be answered with Enter, and when prompted below just run the root script as root and press Enter.
A log of this session will be written to: '/tmp/deinstall_05-02-43PM/logs/deinstall_deconfig_05-07-30-PM.out'
Any error messages from this session will be written to: '/tmp/deinstall_05-02-43PM/logs/deinstall_deconfig_05-07-30-PM.err'
######################## CLEAN OPERATION START ########################
ASM de-configuration trace file location: /tmp/deinstall_05-02-43PM/logs/asmcadc_clean_05-09-42-PM.log
ASM Clean Configuration START
ASM Clean Configuration END
Network Configuration clean config START
Network de-configuration trace file location: /tmp/deinstall_05-02-43PM/logs/netdc_clean_05-11-23-PM.log
De-configuring Naming Methods configuration file on all nodes...
Naming Methods configuration file de-configured successfully.
De-configuring Local Net Service Names configuration file on all nodes...
Local Net Service Names configuration file de-configured successfully.
De-configuring Directory Usage configuration file on all nodes...
Directory Usage configuration file de-configured successfully.
De-configuring backup files on all nodes...
Backup files de-configured successfully.
The network configuration has been cleaned up successfully.
Network Configuration clean config END
----------------------------------------&
The deconfig command below can be executed in parallel on all the remote nodes. Execute the command on
the local node after the execution completes on all the remote nodes.
Run the following command as the root user or the administrator on node "rac1".
/tmp/deinstall_05-02-43PM/perl/bin/perl -I/tmp/deinstall_05-02-43PM/perl/lib -I/tmp/deinstall_05-02-43PM/crs/install /tmp/deinstall_05-02-43PM/crs/install/rootcrs.pl -force
-deconfig -paramfile "/tmp/deinstall_05-02-43PM/response/deinstall_Ora11g_gridinfrahome1.rsp"
Run the following command as the root user or the administrator on node "rac2".
/tmp/deinstall_05-02-43PM/perl/bin/perl -I/tmp/deinstall_05-02-43PM/perl/lib -I/tmp/deinstall_05-02-43PM/crs/install /tmp/deinstall_05-02-43PM/crs/install/rootcrs.pl -force
-deconfig -paramfile "/tmp/deinstall_05-02-43PM/response/deinstall_Ora11g_gridinfrahome1.rsp" -lastnode
Press Enter after you finish running the above commands
&----------------------------------------
Setting the force flag to false
Setting the force flag to cleanup the Oracle Base
Oracle Universal Installer clean START
Oracle Universal Installer cleanup completed with errors.
Oracle Universal Installer clean END
## [START] Oracle install clean ##
Clean install operation removing temporary directory '/tmp/deinstall_05-02-43PM' on node 'rac2'
Clean install operation removing temporary directory '/tmp/deinstall_05-02-43PM' on node 'rac1'
XML file /tmp/deinstall_05-02-43PM/deinstall.xml does not exist in the path specified.
Verify the location and restart the application.
## [END] Oracle install clean ##
######################### CLEAN OPERATION END #########################
####################### CLEAN OPERATION SUMMARY #######################
ASM instance was de-configured successfully from the Oracle home
Oracle Clusterware is stopped and successfully de-configured on node "rac1"
Oracle Clusterware is stopped and successfully de-configured on node "rac2"
Oracle Clusterware is stopped and de-configured successfully.
Oracle Universal Installer cleanup completed with errors.
Run 'rm -rf /opt/ORCLfmap' as root on node(s) 'rac2,rac1' at the end of the session.
Oracle deinstall tool successfully cleaned up temporary directories.
#######################################################################
############# ORACLE DEINSTALL & DECONFIG TOOL END #############
As prompted above, run rm -rf /opt/ORCLfmap as root:
[root@rac2 ~]# rm -rf /opt/ORCLfmap
Run as root on node 2:
[root@rac2 ~]# /tmp/deinstall_05-02-43PM/perl/bin/perl -I/tmp/deinstall_05-02-43PM/perl/lib -I/tmp/deinstall_05-02-43PM/crs/install /tmp/deinstall_05-02-43PM/crs/install/rootcrs.pl
-deconfig -paramfile "/tmp/deinstall_05-02-43PM/response/deinstall_Ora11g_gridinfrahome1.rsp"
Using configuration parameter file: /tmp/deinstall_05-02-43PM/response/deinstall_Ora11g_gridinfrahome1.rsp
Network exists: 1/10.13.12.0/255.255.255.0/eth0, type static
VIP exists: /rac2-vip/10.13.12.156/10.13.12.0/255.255.255.0/eth0, hosting node rac2
GSD exists
ONS exists: Local port 6100, remote port 6200, EM port 2016
CRS-2673: Attempting to stop 'ora.registry.acfs' on 'rac2'
CRS-2677: Stop of 'ora.registry.acfs' on 'rac2' succeeded
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac2'
CRS-2673: Attempting to stop 'ora.crsd' on 'rac2'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'rac2'
CRS-2673: Attempting to stop 'ora.asm' on 'rac2'
CRS-2677: Stop of 'ora.asm' on 'rac2' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'rac2' has completed
CRS-2677: Stop of 'ora.crsd' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'rac2'
CRS-2673: Attempting to stop 'ora.ctssd' on 'rac2'
CRS-2673: Attempting to stop 'ora.evmd' on 'rac2'
CRS-2673: Attempting to stop 'ora.asm' on 'rac2'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'rac2'
CRS-2677: Stop of 'ora.mdnsd' on 'rac2' succeeded
CRS-2677: Stop of 'ora.evmd' on 'rac2' succeeded
CRS-2677: Stop of 'ora.asm' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'rac2'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'rac2' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'rac2'
CRS-2677: Stop of 'ora.drivers.acfs' on 'rac2' succeeded
CRS-2677: Stop of 'ora.cssd' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.crf' on 'rac2'
CRS-2677: Stop of 'ora.crf' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'rac2'
CRS-2677: Stop of 'ora.gipcd' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'rac2'
CRS-2677: Stop of 'ora.gpnpd' on 'rac2' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac2' has completed
CRS-4133: Oracle High Availability Services has been stopped.
Successfully deconfigured Oracle clusterware stack on this node
[root@rac2 ~]#
Do the same on node 1.
Good; with everything deinstalled, install again.
[root@rac1 ~]# rm /u01 -rf
[root@rac2 ~]# rm /u01 -rf
Run on both nodes:
mkdir -p /u01/app/grid
mkdir -p /u01/app/11.2.0.3/grid
chown -R grid:oinstall /u01
mkdir /u01/app/oracle
chown -R oracle:oinstall /u01/app/oracle
chmod -R 775 /u01
Run on both nodes:
[root@rac1 ~]# rpm -ivh /mnt/grid_inst_file/grid/rpm/cvuqdisk-1.0.9-1.rpm
Preparing...
########################################### [100%]
1:cvuqdisk
########################################### [100%]
[root@rac2 ~]# rpm -ivh /tmp/cvuqdisk-1.0.9-1.rpm
Preparing...
########################################### [100%]
1:cvuqdisk
########################################### [100%]
Run the installer on node 1:
[root@rac1 ~]# ./runInstaller
Part-way through it reported:
SEVERE: Remote 'AttachHome' failed on nodes: 'rac2'. Refer to '/u01/app/oraInventory/logs/installActions_05-58-40PM.log' for details.
It is recommended that the following command needs to be manually run on the failed nodes:
/u01/app/11.2.0.3/grid/oui/bin/runInstaller -attachHome -noClusterEnabled ORACLE_HOME=/u01/app/11.2.0.3/grid ORACLE_HOME_NAME=Ora11g_gridinfrahome1 CLUSTER_NODES=rac1,rac2 "INVENTORY_LOCATION=/u01/app/oraInventory" LOCAL_NODE=<node on which command is to be run>
Please refer 'AttachHome' logs under central inventory of remote nodes where failure occurred for more details.
So it was run on rac2 (where a session was already open), but it still failed:
[grid@rac2 deinstall]$ /u01/app/11.2.0.3/grid/oui/bin/runInstaller -attachHome -noClusterEnabled ORACLE_HOME=/u01/app/11.2.0.3/grid ORACLE_HOME_NAME=Ora11g_gridinfrahome1 CLUSTER_NODES=rac1,rac2 "INVENTORY_LOCATION=/u01/app/oraInventory"
LOCAL_NODE=rac2
Error in GetCurrentDir(): 2
Error in GetCurrentDir(): 2
Error in GetCurrentDir(): 2
Starting Oracle Universal Installer...
sh: /command_output_5864: Permission denied
Checking swap space: 0 MB available, 500 MB required.
Failed &&&&
Some requirement checks failed. You must fulfill these requirements before
continuing with the installation,
Exiting Oracle Universal Installer, log for this session can be found at /u01/app/oraInventory/logs/AttachHome_09-05-53PM.log
[grid@rac2 deinstall]$ vim /u01/app/oraInventory/logs/AttachHome_09-05-53PM.log
[grid@rac2 deinstall]$ free
-/+ buffers/cache:
But free shows more than 5 GB of swap unused, which is odd.
With no better idea, the command was run again from the grid user's graphical session on the VM, and this time it worked:
[grid@rac2 ~]$ /u01/app/11.2.0.3/grid/oui/bin/runInstaller -attachHome -noClusterEnabled ORACLE_HOME=/u01/app/11.2.0.3/grid ORACLE_HOME_NAME=Ora11g_gridinfrahome1 CLUSTER_NODES=rac1,rac2 "INVENTORY_LOCATION=/u01/app/oraInventory"
LOCAL_NODE=rac2
Starting Oracle Universal Installer...
Checking swap space: must be greater than 500 MB.
Actual 5855 MB
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
Please execute the '/u01/app/oraInventory/orainstRoot.sh' script at the end of the session.
'AttachHome' was successful.
Click OK in the graphical installer on RAC1 to continue; the prompt to run the root scripts appears right away.
[root@rac1 ~]# hostname
rac1.ad.com
[root@rac1 ~]# /u01/app/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.
Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.
[root@rac2 ~]# /u01/app/11.2.0.3/grid/oui/bin/runInstaller -attachHome -noClusterEnabled ORACLE_HOME=/u01/app/11.2.0.3/grid ORACLE_HOME_NAME=Ora11g_gridinfrahome1 CLUSTER_NODES=rac1,rac2 "INVENTORY_LOCATION=/u01/app/oraInventory"
LOCAL_NODE=rac2
The user is root. Oracle Universal Installer cannot continue installation if the user is root.
: No such file or directory
[root@rac2 ~]# hostname
rac2.ad.com
[root@rac2 ~]# /u01/app/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.
Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.
[root@rac1 ~]# /u01/app/11.2.0.3/grid/root.sh
Performing root user operation for Oracle 11g
The following environment variables are set as:
ORACLE_OWNER= grid
ORACLE_HOME=
/u01/app/11.2.0.3/grid
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.
Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/11.2.0.3/grid/crs/install/crsconfig_params
Creating trace directory
User ignored Prerequisites during installation
OLR initialization - successful
root wallet
root wallet cert
root cert export
peer wallet
profile reader wallet
peer wallet keys
pa wallet keys
peer cert request
pa cert request
peer root cert TP
profile reader root cert TP
pa root cert TP
peer pa cert TP
pa peer cert TP
profile reader pa cert TP
profile reader peer cert TP
peer user cert
pa user cert
Adding Clusterware entries to inittab
CRS-2672: Attempting to start 'ora.mdnsd' on 'rac1'
CRS-2676: Start of 'ora.mdnsd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'rac1'
CRS-2676: Start of 'ora.gpnpd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac1'
CRS-2672: Attempting to start 'ora.gipcd' on 'rac1'
CRS-2676: Start of 'ora.cssdmonitor' on 'rac1' succeeded
CRS-2676: Start of 'ora.gipcd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'rac1'
CRS-2672: Attempting to start 'ora.diskmon' on 'rac1'
CRS-2676: Start of 'ora.diskmon' on 'rac1' succeeded
CRS-2676: Start of 'ora.cssd' on 'rac1' succeeded
ASM created and started successfully.
Disk group CRSDATA created successfully.
clscfg: -install mode specified
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
CRS-4256: Updating the profile
Successful addition of voting disk 2b1bd0cabfd26bd.
Successful addition of voting disk 2bc03776cdd94f5cbfbfdb0e.
Successful addition of voting disk 3b43cdbfada89.
Successfully replaced voting disk group with +CRSDATA.
CRS-4256: Updating the profile
CRS-4266: Voting file(s) successfully replaced
File Universal Id
File Name Disk group
-----------------
--------- ---------
2b1bd0cabfd26bd (/dev/asm-b_crs) [CRSDATA]
2bc03776cdd94f5cbfbfdb0e (/dev/asm-c_crs) [CRSDATA]
3b43cdbfada89 (/dev/asm-d_crs) [CRSDATA]
Located 3 voting disk(s).
CRS-2672: Attempting to start 'ora.asm' on 'rac1'
CRS-2676: Start of 'ora.asm' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.CRSDATA.dg' on 'rac1'
CRS-2676: Start of 'ora.CRSDATA.dg' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.registry.acfs' on 'rac1'
CRS-2676: Start of 'ora.registry.acfs' on 'rac1' succeeded
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
[root@rac1 ~]#
[root@rac2 ~]# /u01/app/11.2.0.3/grid/root.sh
Performing root user operation for Oracle 11g
The following environment variables are set as:
ORACLE_OWNER= grid
ORACLE_HOME=
/u01/app/11.2.0.3/grid
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.
Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/11.2.0.3/grid/crs/install/crsconfig_params
Creating trace directory
User ignored Prerequisites during installation
OLR initialization - successful
Adding Clusterware entries to inittab
CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node rac1, number 1, and is terminating
An active cluster was found during exclusive startup, restarting to join the cluster
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
Next, click Next and the installation completes.
[grid@rac2 /]$ crsctl stat res -t
--------------------------------------------------------------------------------
STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.CRSDATA.dg
ora.LISTENER.lsnr
OFFLINE OFFLINE
OFFLINE OFFLINE
ora.net1.network
ora.registry.acfs
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
ora.rac1.vip
ora.rac2.vip
ora.scan1.vip
[grid@rac2 /]$
At this point the clusterware software installation is complete.
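A few extra checks commonly run at this point (standard clusterware commands, listed here as a suggestion rather than something recorded in these notes):
# crsctl check cluster -all     --CRS/CSS/EVM status on every node
# crsctl query css votedisk     --the voting disks placed on +CRSDATA
# ocrcheck                      --OCR integrity
# srvctl status nodeapps        --VIPs, network and ONS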
=========================================================================================
Create the data disk groups with ASMCA
[grid@rac1 /]$ asmca
disk groups --create
disk group name : fra
redundancy : normal
Three 300 GB disks; for fra4 (i.e. /dev/sdm4) the quorum box was ticked.
disk groups --create
disk group name : data
redundancy : external
The final disk group capacities:
sysdg: 3 x 1 GB, normal redundancy; total 2.87 GB, free 1.97 GB, usable 0.5 GB
fra: 3 x 300 GB, normal redundancy, one disk marked quorum; total 558.81 GB, free 558.62 GB, usable 93.04 GB
data: one 1300 GB partition, external redundancy; total 1300.81 GB, free 1300.71 GB, usable 1300.71 GB
See asm_disk_info.jpg in the same directory.
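For reference, disk groups can also be created from SQL*Plus on the ASM instance instead of the asmca GUI; a minimal sketch in which /dev/asm-e_data comes from the udev rules above and the fra disk names are purely hypothetical:
$ sqlplus / as sysasm
SQL> CREATE DISKGROUP data EXTERNAL REDUNDANCY DISK '/dev/asm-e_data';
SQL> CREATE DISKGROUP fra NORMAL REDUNDANCY DISK '/dev/asm-f_fra1', '/dev/asm-g_fra2', '/dev/asm-h_fra3';
--a disk group created this way is mounted only on the local node; mount it on the other node as well (ALTER DISKGROUP ... MOUNT)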
====================================================================
Install the Oracle database software, then create the database with DBCA.
Log in as the oracle user to run the installer.
oracle base : /u01/app/oracle/
software location : /u01/app/oracle/product/11.2.0.3/dbhome
Part-way through, at the point of running root.sh, node 1 was evicted from the cluster, and both stopping and restarting the clusterware failed. Rebooting the OS was not an option because the Oracle software install was one step from finishing, so the fault had to be fixed in place.
[root@rac1 bin]# ./crsctl stat res -t
CRS-4535: Cannot communicate with Cluster Ready Services
CRS-4000: Command Status failed, or completed with errors.
From the above, the clusterware is down.
[root@rac1 bin]# ./crs_start -all
CRS-0184: Cannot communicate with the CRS daemon.
[root@rac1 bin]# ./crs_start -all
CRS-0184: Cannot communicate with the CRS daemon.
[root@rac1 bin]# ./crsctl start crs
CRS-4640: Oracle High Availability Services is already active
CRS-4000: Command Start failed, or completed with errors.
[root@rac1 bin]# ./crsctl stop crs
CRS-2796: The command may not proceed when Cluster Ready Services is not running
CRS-4687: Shutdown command has completed with errors.
CRS-4000: Command Stop failed, or completed with errors.
[root@rac1 bin]# ./crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4535: Cannot communicate with Cluster Ready Services
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
[root@rac1 bin]# ./crs_stop -all
CRS-0184: Cannot communicate with the CRS daemon.
As shown, the commands above can neither stop nor start the clusterware.
...........
Judging from the errors above, the ASM instance was shut down abnormally. A quick df -h also showed the root filesystem at 100% usage, which is probably what broke ASM.
Log in as grid and start the ASM instance:
Connected to an idle instance.
ASMCMD> ls
ASMCMD-8102: no connection to ASM; command requires ASM to run
ASMCMD> startup
ASM instance started
Total System Global Area
Fixed Size
2227664 bytes
Variable Size
ORA-15032: not all alterations performed
ORA-15017: diskgroup "DATA" cannot be mounted
ORA-15003: diskgroup "DATA" already mounted in another lock name space
ORA-15017: diskgroup "CRSDATA" cannot be mounted
ORA-15003: diskgroup "CRSDATA" already mounted in another lock name space
ASMCMD> ls
ASMCMD> cd CRSDATA
ASMCMD-8001: diskgroup 'CRSDATA' does not exist or is not mounted
ASMCMD> cd DATA
ASMCMD-8001: diskgroup 'DATA' does not exist or is not mounted
The above shows things are actually not right: the CRSDATA and DATA disk groups cannot be accessed. RAC2 looks fine, but that does not mean RAC1 should be unable to reach the disk groups.
The log also reports errors:
[root@rac1 grid]# ls /u01/app/11.2.0.3/grid/log/rac1/alertrac1.log
[/u01/app/11.2.0.3/grid/bin/oraagent.bin(8988)]CRS-5019:All OCR locations are on ASM disk groups [CRSDATA],
and none of these disk groups are mounted. Details are at "(:CLSN00100:)" in "/u01/app/11.2.0.3/grid/log/rac1/agent/ohasd/oraagent_grid/oraagent_grid.log".
16:13:35.185
[/u01/app/11.2.0.3/grid/bin/oraagent.bin(8988)]CRS-5019:All OCR locations are on ASM disk groups [CRSDATA],
and none of these disk groups are mounted. Details are at "(:CLSN00100:)" in "/u01/app/11.2.0.3/grid/log/rac1/agent/ohasd/oraagent_grid/oraagent_grid.log".
So I logged in to node 2 and looked at the disk groups in ASMCMD; they could not be queried there either, so the problem is probably here. My guess:
node 1 shut down the ASM1 instance abnormally, which also broke access from the ASM2 instance; ./crsctl stat res -t on RAC2 still looks normal, but that is probably misleading.
So there was no way around stopping the clusterware on RAC2.
(If rebooting the RAC1 host had been possible, that would have resolved the problem by itself without touching RAC2; presumably killing all the cluster processes with kill -9 and then running crs_start -all would also have worked.)
[root@rac2 ~]# cd /u01/app/11.2.0.3/grid/bin/
[root@rac2 bin]$ ./crsctl stop crs
................
It shut down cleanly; start it again:
[root@rac2 bin]$ ./crsctl start crs
Disk group access is back to normal.
But RAC1 still had the same problem, so the plan was to shut ASM down and start it again:
ASMCMD> shutdown immediate
ORA-15100: invalid or missing diskgroup name
ORA-15100: invalid or missing diskgroup name
ASM instance shutdown
Connected to an idle instance.
ASMCMD> startup
ASM instance started
Total System Global Area
Fixed Size
2227664 bytes
Variable Size
ASM diskgroups mounted
ASM diskgroups volume enabled
ASMCMD> ls
ASMCMD> ls CRSDATA
ad-cluster/
Good: the disk groups can be accessed normally again, so the cluster should be recovering. Check the log:
[root@rac1 grid]# tail -f /u01/app/11.2.0.3/grid/log/rac1/alertrac1.log
[/u01/app/11.2.0.3/grid/bin/oraagent.bin(8988)]CRS-5019:All OCR locations are on ASM disk groups [CRSDATA],
and none of these disk groups are mounted. Details are at "(:CLSN00100:)" in "/u01/app/11.2.0.3/grid/log/rac1/agent/ohasd/oraagent_grid/oraagent_grid.log".
16:13:35.185
[/u01/app/11.2.0.3/grid/bin/oraagent.bin(8988)]CRS-5019:All OCR locations are on ASM disk groups [CRSDATA],
and none of these disk groups are mounted. Details are at "(:CLSN00100:)" in "/u01/app/11.2.0.3/grid/log/rac1/agent/ohasd/oraagent_grid/oraagent_grid.log".
16:14:12.158
[/u01/app/11.2.0.3/grid/bin/oraagent.bin(8988)]CRS-5019:All OCR locations are on ASM disk groups [CRSDATA],
and none of these disk groups are mounted. Details are at "(:CLSN00100:)" in "/u01/app/11.2.0.3/grid/log/rac1/agent/ohasd/oraagent_grid/oraagent_grid.log".
16:14:50.848
[crsd(18409)]CRS-1012:The OCR service started on node rac1.
16:14:55.085
[crsd(18409)]CRS-1201:CRSD started on node rac1.
See: CRSD has started successfully. Now run the script the graphical installer is prompting for and finish the Oracle software install.
[root@rac1 ~]# /u01/app/oracle/product/11.2.0.3/dbhome/root.sh
Performing root user operation for Oracle 11g
The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME=
/u01/app/oracle/product/11.2.0.3/dbhome
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Finished product-specific root actions.
[root@rac2 ~]# /u01/app/oracle/product/11.2.0.3/dbhome/root.sh
--second node
Performing root user operation for Oracle 11g
The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME=
/u01/app/oracle/product/11.2.0.3/dbhome
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Finished product-specific root actions.
As the above shows, even with the cluster in a bad state the final script still runs fine: it only changes some files and permissions and needs no communication between the nodes. With that, the painful Oracle software installation is done.
The difficulty was mostly down to poor VM performance; installs on real hardware have always been much smoother and faster.
Afterwards the VM clocks drifted apart again, and although Oracle's internal CTSS was active, time-mismatch errors still appeared:
Removal of this node from cluster in 14.350 seconds
[crsd(14317)]CRS-2772:Server 'rac1' has been assigned to pool 'Free'.
[ctssd(14014)]CRS-2411:The Cluster Time Synchronization Service will take a long time to perform time synchronization as
local time is significantly different from mean cluster time. Details in /u01/app/11.2.0.3/grid/log/rac2/ctssd/octssd.log.
16:16:29.919
[ctssd(14014)]CRS-2411:The Cluster Time Synchronization Service will take a long time to perform time synchronization as
local time is significantly different from mean cluster time. Details in /u01/app/11.2.0.3/grid/log/rac2/ctssd/octssd.log.
16:16:35.956
[ctssd(14014)]CRS-2411:The Cluster Time Synchronization Service will take a long time to perform time synchronization as
local time is significantly different from mean cluster time. Details in /u01/app/11.2.0.3/grid/log/rac2/ctssd/octssd.log.
16:20:02.065
[cssd(13928)]CRS-1612:Network communication with node rac1 (1) missing for 50% of timeout interval.
Removal of this node from cluster in 14.350 seconds
16:20:09.088
[cssd(13928)]CRS-1611:Network communication with node rac1 (1) missing for 75% of timeout interval.
Removal of this node from cluster in 7.330 seconds
16:20:14.104
[cssd(13928)]CRS-1610:Network communication with node rac1 (1) missing for 90% of timeout interval.
Removal of this node from cluster in 2.320 seconds
16:20:16.427
[cssd(13928)]CRS-1632:Node rac1 is being removed from the cluster in cluster incarnation
16:20:16.458
[cssd(13928)]CRS-1601:CSSD Reconfiguration complete. Active nodes are rac2 .
16:20:16.478
[crsd(14317)]CRS-5504:Node down event reported for node 'rac1'.
16:20:16.551
[ctssd(14014)]CRS-2407:The new Cluster Time Synchronization Service reference node is host rac2.
16:20:25.093
[crsd(14317)]CRS-2773:Server 'rac1' has been removed from pool 'Free'.
Check the CTSS status:
[grid@rac1 ~]$ crsctl check ctss
CRS-4701: The Cluster Time Synchronization Service is in Active mode.
CRS-4702: Offset (in msec): 0
[grid@rac2 ~]$ crsctl check ctss
CRS-4701: The Cluster Time Synchronization Service is in Active mode.
CRS-4702: Offset (in msec): 6000
[grid@rac2 ~]$ crsctl check ctss
CRS-4701: The Cluster Time Synchronization Service is in Active mode.
CRS-4702: Offset (in msec): -400
[grid@rac1 ~]$ crsctl stat resource ora.ctssd -t -init
--------------------------------------------------------------------------------
STATE_DETAILS
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
Take CTSS out of active mode on rac2 by restoring the ntp configuration:
[grid@rac2 ~]$ su - root
[root@rac2 ~]# vim /etc/ntp.conf.original
[root@rac2 ~]# mv /etc/ntp.conf.original /etc/ntp.conf
[root@rac2 ~]# exit
[grid@rac2 ~]$ date
[grid@rac2 ~]$ crsctl check ctss
CRS-4701: The Cluster Time Synchronization Service is in Active mode.
CRS-4702: Offset (in msec): 9300
[grid@rac2 ~]$ crsctl check ctss
CRS-4700: The Cluster Time Synchronization Service is in Observer mode.
[grid@rac2 ~]$
=========================================================================
Next, select both nodes.
database area :
fast recovery area : fra
Set the sys/system password: oraclesys
Edit the archiving parameters:
just paste +fra into the field
Select only EM (Enterprise Manager), but leave all of its sub-options unticked.
OEM can only be connected to on one node.
emctl status dbconsole
It prompts you to set oracle_unqname=prod (see the sketch after this list).
Log in as the sys user.
Location where the database creation scripts are saved:
/data/u01/app/oracle/admin/pro/scripts
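If emctl complains about the environment, set ORACLE_UNQNAME first; a small sketch (prod is simply the value from the prompt above, adjust to the actual database unique name):
$ export ORACLE_UNQNAME=prod
$ emctl status dbconsole
$ emctl start dbconsole    --if it is not already running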
Installation finished; check:
[grid@rac1 ~]$ crsctl stat res -t |more
--------------------------------------------------------------------------------
STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.CRSDATA.dg
ora.DATA.dg
ora.LISTENER.lsnr
OFFLINE OFFLINE
OFFLINE OFFLINE
ora.net1.network
ora.registry.acfs
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
ora.chris.db
ora.rac1.vip
ora.rac2.vip
ora.scan1.vip
[grid@rac1 ~]$
[grid@rac1 ~]$ ps -ef |grep ora_
00:00:00 ora_pmon_chris1
00:00:01 ora_psp0_chris1
00:00:45 ora_vktm_chris1
00:00:00 ora_gen0_chris1
00:00:01 ora_diag_chris1
00:00:00 ora_dbrm_chris1
00:00:00 ora_ping_chris1
00:00:00 ora_acms_chris1
00:00:06 ora_dia0_chris1
00:00:02 ora_lmon_chris1
00:00:03 ora_lmd0_chris1
00:00:15 ora_lms0_chris1
00:00:00 ora_rms0_chris1
00:00:00 ora_lmhb_chris1
00:00:00 ora_mman_chris1
00:00:00 ora_dbw0_chris1
00:00:00 ora_lgwr_chris1
00:00:00 ora_ckpt_chris1
00:00:01 ora_smon_chris1
00:00:00 ora_reco_chris1
00:00:00 ora_rbal_chris1
00:00:00 ora_asmb_chris1
00:00:01 ora_mmon_chris1
00:00:00 ora_mmnl_chris1
00:00:00 ora_d000_chris1
00:00:00 ora_mark_chris1
00:00:00 ora_s000_chris1
00:00:05 ora_lck0_chris1
00:00:00 ora_rsmn_chris1
00:00:00 ora_gtx0_chris1
00:00:00 ora_rcbg_chris1
00:00:00 ora_qmnc_chris1
00:00:00 ora_q000_chris1
00:00:00 ora_q001_chris1
00:00:03 ora_cjq0_chris1
00:00:00 ora_smco_chris1
00:00:00 ora_w000_chris1
00:00:01 ora_gcr0_chris1
00:00:00 ora_pz99_chris1
0 23:22 pts/1
00:00:00 grep ora_
[grid@rac1 ~]$
============================================================================
A few notes on VirtualBox settings:
0: The maximum resolution is 800x600, so the Oracle installer window does not fit on the screen. The fix is to modify the X configuration file.
Without the Guest Additions installed, the configuration file looks like this:
[root@rac1 X11]# cat /etc/X11/xorg.conf.bak
# Xorg configuration created by pyxf86config
Section "ServerLayout"
        Identifier     "Default Layout"
        Screen         "Screen0" 0 0
        InputDevice    "Keyboard0" "CoreKeyboard"
EndSection
Section "InputDevice"
        Identifier  "Keyboard0"
        Option      "XkbModel" "pc105"
        Option      "XkbLayout" "us"
EndSection
Section "Device"
        Identifier  "Videocard0"
EndSection
Section "Screen"
        Identifier "Screen0"
        Device     "Videocard0"
        DefaultDepth
        SubSection "Display"
        EndSubSection
EndSection
With the Guest Additions installed, the configuration file looks like this:
[root@rac1 X11]# cat /etc/X11/xorg.conf_2.bak
# VirtualBox generated configuration file
# based on /etc/X11/xorg.conf.
# Xorg configuration created by pyxf86config
# Section "ServerLayout"
Identifier
"Default Layout"
"Screen0" 0 0
InputDevice
"Keyboard0" "CoreKeyboard"
# EndSection
# Section "InputDevice"
Identifier
"Keyboard0"
"XkbModel" "pc105"
"XkbLayout" "us"
# EndSection
# Section "Device"
Identifier
"Videocard0"
# EndSection
# Section "Screen"
Identifier "Screen0"
"Videocard0"
DefaultDepth
SubSection "Display"
EndSubSection
# EndSection
Section "InputDevice"
Identifier
"Keyboard[0]"
"XkbModel" "pc105"
"XkbLayout" "us"
"Protocol" "Standard"
"CoreKeyboard"
EndSection
Section "InputDevice"
Identifier
"Mouse[1]"
"Buttons" "9"
"Device" "/dev/input/mice"
"Name" "VirtualBox Mouse Buttons"
"Protocol" "explorerps/2"
"Vendor" "Oracle Corporation"
"ZAxisMapping" "4 5"
"CorePointer"
EndSection
Section "InputDevice"
"vboxmouse"
Identifier
"Mouse[2]"
"Device" "/dev/vboxguest"
"Name" "VirtualBox Mouse"
"Vendor" "Oracle Corporation"
"SendCoreEvents"
EndSection
Section "ServerLayout"
Identifier
"Layout[all]"
InputDevice
"Keyboard[0]" "CoreKeyboard"
InputDevice
"Mouse[1]" "CorePointer"
InputDevice
"Mouse[2]" "SendCoreEvents"
"Clone" "off"
"Xinerama" "off"
"Screen[0]"
EndSection
Section "Monitor"
Identifier
"Monitor[0]"
"VirtualBox Virtual Output"
VendorName
"Oracle Corporation"
EndSection
Section "Device"
"VirtualBox Graphics"
"vboxvideo"
Identifier
"Device[0]"
VendorName
"Oracle Corporation"
EndSection
Section "Screen"
SubSection "Display"
EndSubSection
"Device[0]"
Identifier
"Screen[0]"
"Monitor[0]"
EndSection
According to information gathered online, adding the following sections should be enough:
Section "Device"
        Identifier "Configured Video Device"
        Driver     "vboxvideo"
EndSection
Section "Monitor"
        Identifier "Configured Monitor"
EndSection
Section "Screen"
        Identifier "Default Screen"
        Monitor    "Configured Monitor"
        Device     "Configured Video Device"
        SubSection "Display"
                Modes "" ""
        EndSubSection
EndSection
Since I installed the Guest Additions, I only modified the last section:
Section "Screen"
SubSection "Display"
EndSubSection
"Device[0]"
Identifier
"Screen[0]"
"Monitor[0]"
EndSection
1: Without the Guest Additions, mouse capture is very annoying and sharing folders with the host does not work, failing with the error below:
[root@rac1 mnt]# mount -t vboxsf /data1/rac_file/grid grid
mount: unknown filesystem type 'vboxsf'
Download VBoxGuestAdditions_3.0.4.iso, upload it to the Linux box, point the VM's CD-ROM at this image, boot the VM, and go to the mount directory:
[root@rac1 cdrom]# ls
autorun.sh
VBoxSolarisAdditions.pkg
VBoxWindowsAdditions-x86.exe
VBoxLinuxAdditions-amd64.run
VBoxWindowsAdditions-amd64.exe
AUTORUN.INF
VBoxLinuxAdditions-x86.run
VBoxWindowsAdditions.exe
Note: you then have to click Devices -- Install Guest Additions in the VM window; look at the mounted directory again and you will find its contents have changed:
[root@rac1 cdrom]# ls
autorun.sh
runasroot.sh
VBoxWindowsAdditions-amd64.exe
VBoxLinuxAdditions.run
VBoxWindowsAdditions.exe
AUTORUN.INF
VBoxSolarisAdditions.pkg
VBoxWindowsAdditions-x86.exe
Only now can the installer be run:
[root@rac1 cdrom]# ./VBoxLinuxAdditions.run
Verifying archive integrity... All good.
Uncompressing VirtualBox 4.2.12 Guest Additions for Linux............
VirtualBox Guest Additions installer
Copying additional installer modules ...
Installing additional modules ...
Removing existing VirtualBox non-DKMS kernel modules
Building the VirtualBox Guest Additions kernel modules
Building the main Guest Additions module
Building the shared folder support module
Not building the VirtualBox advanced graphics driver as this Linux version is
too old to use it.
Doing non-kernel setup of the Guest Additions
Starting the VirtualBox Guest Additions
Installing the Window System drivers
Installing X.Org 7.1 modules
Setting up the Window System to use the Guest Additions
You may need to restart the hal service and the Window System (or just restart
the guest system) to enable the Guest Additions.
Installing graphics libraries and desktop services components [OK]
Restart the VM and the mouse problem is fixed. Shared folders were not tested again; scp was used as the workaround.
=======================================
If a disk intended for ASM was used before and now shows up as already in use and cannot be selected, you can wipe the disk header:
dd if=/dev/zero of=/dev/asm-b_crs bs=512 count=10
dd if=/dev/zero of=/dev/asm-c_crs bs=512 count=10
dd if=/dev/zero of=/dev/asm-d_crs bs=512 count=10