[Ceph] ERROR: error creating empty object store in /var/lib/ceph/osd/ceph-0: (13) Permission denied

Detailed problem log


root@j-2:/etc/ceph# ceph-deploy osd prepare j-2:/var/lib/ceph/osd/ceph-0
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.39): /usr/bin/ceph-deploy osd prepare j-2:/var/lib/ceph/osd/ceph-0
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  block_db                      : None
[ceph_deploy.cli][INFO  ]  disk                          : [('j-2', '/var/lib/ceph/osd/ceph-0', None)]
[ceph_deploy.cli][INFO  ]  dmcrypt                       : False
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  bluestore                     : None
[ceph_deploy.cli][INFO  ]  block_wal                     : None
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : prepare
[ceph_deploy.cli][INFO  ]  dmcrypt_key_dir               : /etc/ceph/dmcrypt-keys
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f26c5556ef0>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  fs_type                       : xfs
[ceph_deploy.cli][INFO  ]  filestore                     : None
[ceph_deploy.cli][INFO  ]  func                          : <function osd at 0x7f26c59bc500>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  zap_disk                      : False
[ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks j-2:/var/lib/ceph/osd/ceph-0:
[j-2][DEBUG ] connected to host: j-2 
[j-2][DEBUG ] detect platform information from remote host
[j-2][DEBUG ] detect machine type
[j-2][DEBUG ] find the location of an executable
[j-2][INFO  ] Running command: /sbin/initctl version
[j-2][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO  ] Distro info: Ubuntu 14.04 trusty
[ceph_deploy.osd][DEBUG ] Deploying osd to j-2
[j-2][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.osd][DEBUG ] Preparing host j-2 disk /var/lib/ceph/osd/ceph-0 journal None activate False
[j-2][DEBUG ] find the location of an executable
[j-2][INFO  ] Running command: /usr/sbin/ceph-disk -v prepare --cluster ceph --fs-type xfs -- /var/lib/ceph/osd/ceph-0
[j-2][WARNIN] command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
[j-2][WARNIN] command: Running command: /usr/bin/ceph-osd --check-allows-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph
[j-2][WARNIN] command: Running command: /usr/bin/ceph-osd --check-wants-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph
[j-2][WARNIN] command: Running command: /usr/bin/ceph-osd --check-needs-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph
[j-2][WARNIN] command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size
[j-2][WARNIN] populate_data_path: Preparing osd data dir /var/lib/ceph/osd/ceph-0
[j-2][WARNIN] command: Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/ceph_fsid.9803.tmp
[j-2][WARNIN] command: Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/fsid.9803.tmp
[j-2][WARNIN] command: Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/magic.9803.tmp
[j-2][INFO  ] checking OSD status...
[j-2][DEBUG ] find the location of an executable
[j-2][INFO  ] Running command: /usr/bin/ceph --cluster=ceph osd stat --format=json
[ceph_deploy.osd][DEBUG ] Host j-2 is now ready for osd use.
root@j-2:/etc/ceph# 
root@j-2:/etc/ceph# ceph-deploy osd activate j-2:/var/lib/ceph/osd/ceph-0
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.39): /usr/bin/ceph-deploy osd activate j-2:/var/lib/ceph/osd/ceph-0
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : activate
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7fc954721ef0>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  func                          : <function osd at 0x7fc954b87500>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  disk                          : [('j-2', '/var/lib/ceph/osd/ceph-0', None)]
[ceph_deploy.osd][DEBUG ] Activating cluster ceph disks j-2:/var/lib/ceph/osd/ceph-0:
[j-2][DEBUG ] connected to host: j-2 
[j-2][DEBUG ] detect platform information from remote host
[j-2][DEBUG ] detect machine type
[j-2][DEBUG ] find the location of an executable
[j-2][INFO  ] Running command: /sbin/initctl version
[j-2][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO  ] Distro info: Ubuntu 14.04 trusty
[ceph_deploy.osd][DEBUG ] activating host j-2 disk /var/lib/ceph/osd/ceph-0
[ceph_deploy.osd][DEBUG ] will use init type: upstart
[j-2][DEBUG ] find the location of an executable
[j-2][INFO  ] Running command: /usr/sbin/ceph-disk -v activate --mark-init upstart --mount /var/lib/ceph/osd/ceph-0
[j-2][WARNIN] main_activate: path = /var/lib/ceph/osd/ceph-0
[j-2][WARNIN] activate: Cluster uuid is ea99362b-48ca-4e73-90b5-1e48cf7930e7
[j-2][WARNIN] command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
[j-2][WARNIN] activate: Cluster name is ceph
[j-2][WARNIN] activate: OSD uuid is 5f99b598-7ab7-47ed-8a3b-61af167b2457
[j-2][WARNIN] allocate_osd_id: Allocating OSD id...
[j-2][WARNIN] command: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring osd create --concise 5f99b598-7ab7-47ed-8a3b-61af167b2457
[j-2][WARNIN] command: Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/whoami.9860.tmp
[j-2][WARNIN] activate: OSD id is 0
[j-2][WARNIN] activate: Initializing OSD...
[j-2][WARNIN] command_check_call: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-0/activate.monmap
[j-2][WARNIN] got monmap epoch 1
[j-2][WARNIN] command: Running command: /usr/bin/timeout 300 ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --osd-data /var/lib/ceph/osd/ceph-0 --osd-journal /var/lib/ceph/osd/ceph-0/journal --osd-uuid 5f99b598-7ab7-47ed-8a3b-61af167b2457 --keyring /var/lib/ceph/osd/ceph-0/keyring --setuser ceph --setgroup ceph
[j-2][WARNIN] Traceback (most recent call last):
[j-2][WARNIN]   File "/usr/sbin/ceph-disk", line 9, in <module>
[j-2][WARNIN]     load_entry_point('ceph-disk==1.0.0', 'console_scripts', 'ceph-disk')()
[j-2][WARNIN]   File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 5371, in run
[j-2][WARNIN]     main(sys.argv[1:])
[j-2][WARNIN]   File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 5322, in main
[j-2][WARNIN]     args.func(args)
[j-2][WARNIN]   File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 3453, in main_activate
[j-2][WARNIN]     init=args.mark_init,
[j-2][WARNIN]   File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 3273, in activate_dir
[j-2][WARNIN]     (osd_id, cluster) = activate(path, activate_key_template, init)
[j-2][WARNIN]   File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 3378, in activate
[j-2][WARNIN]     keyring=keyring,
[j-2][WARNIN]   File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 2853, in mkfs
[j-2][WARNIN]     '--setgroup', get_ceph_group(),
[j-2][WARNIN]   File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 2800, in ceph_osd_mkfs
[j-2][WARNIN]     raise Error('%s failed : %s' % (str(arguments), error))
[j-2][WARNIN] ceph_disk.main.Error: Error: ['ceph-osd', '--cluster', 'ceph', '--mkfs', '--mkkey', '-i', u'0', '--monmap', '/var/lib/ceph/osd/ceph-0/activate.monmap', '--osd-data', '/var/lib/ceph/osd/ceph-0', '--osd-journal', '/var/lib/ceph/osd/ceph-0/journal', '--osd-uuid', u'5f99b598-7ab7-47ed-8a3b-61af167b2457', '--keyring', '/var/lib/ceph/osd/ceph-0/keyring', '--setuser', 'ceph', '--setgroup', 'ceph'] failed : 2018-02-11 13:43:16.470829 7f0a5123a800 -1 filestore(/var/lib/ceph/osd/ceph-0) mkfs: write_version_stamp() failed: (13) Permission denied
[j-2][WARNIN] 2018-02-11 13:43:16.470897 7f0a5123a800 -1 OSD::mkfs: ObjectStore::mkfs failed with error -13
[j-2][WARNIN] 2018-02-11 13:43:16.470981 7f0a5123a800 -1  ** ERROR: error creating empty object store in /var/lib/ceph/osd/ceph-0: (13) Permission denied
[j-2][WARNIN] 
[j-2][ERROR ] RuntimeError: command returned non-zero exit status: 1
[ceph_deploy][ERROR ] RuntimeError: Failed to execute command: /usr/sbin/ceph-disk -v activate --mark-init upstart --mount /var/lib/ceph/osd/ceph-0

Cause


    Starting with the Infernalis (I) release, Ceph daemons run as the ceph user instead of root. Before an OSD can be prepared and activated, its data disk must be mounted at the designated directory. If that directory (e.g. /var/lib/ceph/osd/ceph-0) was created by a user other than ceph, typically root, then running ceph-deploy osd activate <node_name>:/var/lib/ceph/osd/ceph-0 fails with the "(13) Permission denied" error shown above, because ceph-osd --mkfs cannot write into the directory.
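A quick way to confirm this cause is to check who owns the OSD data directory before activating. The sketch below is generic: owner_of is a hypothetical helper (not a Ceph tool), and OSD_DIR defaults to the path from the log above.

```shell
#!/bin/sh
# Default to the data dir from the log; override with OSD_DIR=... if needed.
OSD_DIR=${OSD_DIR:-/var/lib/ceph/osd/ceph-0}

owner_of() {
    # Print the owning user of a path (GNU coreutils stat).
    stat -c %U "$1"
}

# If the directory exists but is not owned by ceph, activation will fail
# with EACCES exactly as in the log above.
if [ -d "$OSD_DIR" ] && [ "$(owner_of "$OSD_DIR")" != "ceph" ]; then
    echo "$OSD_DIR is owned by $(owner_of "$OSD_DIR"), expected ceph"
fi
```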


Solution


    Change the owner and group of the mount path to ceph:ceph, for example:


chown -R ceph:ceph /var/lib/ceph/osd
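The fix can be wrapped up as follows. fix_osd_ownership is a hypothetical helper added here for illustration; on a real node you would run the two commented commands as root, matching the host and path from the log above.

```shell
#!/bin/sh
# Recursively hand an OSD tree to the given user:group
# (ceph:ceph in production; requires root on a real node).
fix_osd_ownership() {
    chown -R "$2" "$1"
}

# On the node from the log (as root):
#   fix_osd_ownership /var/lib/ceph/osd ceph:ceph
#   ceph-deploy osd activate j-2:/var/lib/ceph/osd/ceph-0
```

After the chown, re-running the activate step should let ceph-osd --mkfs write its version stamp and keyring into the data directory.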


Reprinted from blog.csdn.net/u010317005/article/details/79310198