Fixed typo in README's
Change-Id: I92d49df478f2a46631a6c739c2fa5071433daff1
parent c3491543de
commit 232f756543
@@ -47,7 +47,7 @@ information.
 ### MAINTAINERS
 
 This is repository level MAINTAINERS file. One submitting a patch should
-contact the apropriate maintainer or invite her or him for the code review.
+contact the appropriate maintainer or invite her or him for the code review.
 Note, core reviewers are not the maintainers. Normally, cores do reviews
 after maintainers.
 
@@ -84,7 +84,7 @@ fuel-library
 ----------------------
 
 The [top-scope puppet manifests](deployment/puppet/osnailyfacter/modular)
-(sometimes also refered as the composition layer) represent the known
+(sometimes also referred as the composition layer) represent the known
 deploy paths (aka supported deployment scenarios) for the
 [task-based deployment](https://docs.mirantis.com/openstack/fuel/fuel-6.1/reference-architecture.html#task-based-deployment).
 
@@ -169,7 +169,7 @@ bundle exec rake syntax
 ```
 
 This will run syntax checks against all puppet, python, shell and hiera files
-within fuel-libray.
+within fuel-library.
 
 ### Module Lint Checks
 
@@ -196,7 +196,7 @@ util/jenkins/modules.disable_rake-lint file.
 
 ### Puppet module tests
 
-Puppet rspec tests should be provided for an every module's directory included.
+Puppet rspec tests should be provided for every module's directory included.
 All of the discovered tests will be automatically executed by the
 `rake spec` command issued from the repository root path.
 
@@ -30,13 +30,13 @@ Known Issues
 
 There are currently issues with glance 2013.1.2 (grizzly) that cause ``glance
 image-create`` with ``--location`` to not function. see
 https://bugs.launchpad.net/glance/+bug/1215682
 
 **RadosGW, Keystone and Python 2.6**
 
 RadosGW (RGW) will work with Keystone token_formats UUID or PKI. While RGW
-perfers using PKI tokens. Python 2.6 distributions currently may not work
-correctly with the PKI tokens. As such, keystone integration will defalt to
+prefers using PKI tokens. Python 2.6 distributions currently may not work
+correctly with the PKI tokens. As such, keystone integration will default to
 UUID, but you can adjust as desired see ```rgw_use_pki``` option.
 
 Features
@@ -47,7 +47,7 @@ Features
 * Ceph OSDs
 * Ceph MDS (present, but un-supported)
 * Ceph Object Gateway (radosgw)
-* * Openstack Keystone integration
+* * OpenStack Keystone integration
 
 
 Using
@@ -121,13 +121,13 @@ Here are some errors that may be reported.
 [root@controller-13 ~]# ceph -s
 2013-08-22 00:06:19.513437 7f79eedea760 -1 monclient(hunting): ERROR: missing keyring, cannot use cephx for authentication
 2013-08-22 00:06:19.513466 7f79eedea760 -1 ceph_tool_common_init failed.
 
 ```
 
 Check the links in ``/root/ceph\*.keyring``. There should be one for each of
 admin, osd, and mon. If any are missing this could be the cause.
 
-Try to run ``ceph-deploy gatherkeys {mon-server-name}``. If this dosn't work
+Try to run ``ceph-deploy gatherkeys {mon-server-name}``. If this doesn't work
 then there may have been an issue starting the cluster.
 
 Check to see running ceph processes ``ps axu | grep ceph``. If there is a
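The keyring check described in this hunk can be sketched as a small script. The directory and exact filenames here are assumptions (on a deployed node the keyrings gathered by ``ceph-deploy`` live under ``/root``); the sketch runs against a temporary directory with one keyring deliberately absent, so it is self-contained:

```shell
# Hypothetical keyring check: on a real node KEYRING_DIR would be /root and
# the files would come from `ceph-deploy gatherkeys`. Here we fake a node
# where the mon keyring was never gathered.
KEYRING_DIR="$(mktemp -d)"
touch "$KEYRING_DIR/ceph.client.admin.keyring"
touch "$KEYRING_DIR/ceph.bootstrap-osd.keyring"

# Report any expected keyring that is absent (the names are assumptions).
missing=""
for k in ceph.client.admin.keyring ceph.bootstrap-osd.keyring ceph.mon.keyring; do
    [ -e "$KEYRING_DIR/$k" ] || missing="$missing $k"
done

if [ -n "$missing" ]; then
    echo "missing keyrings:$missing"
else
    echo "all keyrings present"
fi
```

On a healthy node the loop finds every file and the script prints ``all keyrings present``; here it reports the missing mon keyring.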
@@ -135,7 +135,7 @@ python process running for ``ceph-create-keys`` then there is likely a problem
 with the MON processes talking to each other.
 * Check each mon's network and firewall. The monitor defaults to a port 6789
 * If public_network is defined in ceph.conf, mon_host and DNS names **MUST**
-be inside the public_network or ceph-deploy wont create mon's
+be inside the public_network or ceph-deploy won't create mon's
 
 Missing OSD instances
 ---------------------
@@ -154,7 +154,7 @@ Check the osd tree:
 
 ```
 #ceph osd tree
 
 # id    weight  type name       up/down reweight
 -1      6       root default
 -2      2       host controller-1
@@ -178,7 +178,7 @@ the current pools:
 # ceph osd lspools
 0 data,1 metadata,2 rbd,3 images,4 volumes,
 
-Testing Openstack
+Testing OpenStack
 -----------------
 
 
@@ -203,7 +203,7 @@ glance image-create --name cirros --container-format bare \
 --disk-format qcow2 --is-public yes < cirros-0.3.0-x86_64-disk.img
 ```
 
-This will return somthing like:
+This will return something like:
 
 ```
 +------------------+--------------------------------------+
@@ -294,7 +294,7 @@ cinder show 78bf2750-e99c-4c52-b5ca-09764af367b5
 | status | available |
 | volume_type | None |
 +------------------------------+--------------------------------------+
 ```
 
 Since the image is ``status`` ``available`` it should have been created in
 ceph. we can check this with ``rbd ls volumes``
@@ -324,11 +324,11 @@ the service script for radosgw might exit 0 and not start the service, the
 easy way to test this is to simply ```service ceph-radosgw restart``` if the
 service script can not stop the service, it wasn't running in the first place.
 
 We can also check to see if the rados service might be running by
 ```ps axu | grep radosgw```, but this might also show the webserver script
 server processes as well.
 
-most commands from ```radosgw-admin``` will work wether or not the ```radosgw```
+most commands from ```radosgw-admin``` will work whether or not the ```radosgw```
 service is running.
 
 #### swift testing
@@ -358,8 +358,8 @@ radosgw-admin user create --uid=test --display-name="bob" --email="bob@mail.ru"
 "caps": []}
 ```
 
-swift auth works with subusers, in that from openstack this would be
-tennant:user so we need to mimic the same
+swift auth works with subusers, in that from OpenStack this would be
+tenant:user so we need to mimic the same
 
 ```shell
 radosgw-admin subuser create --uid=test --subuser=test:swift --access=full
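The tenant:user mapping in this hunk can be made explicit with a tiny sketch. The values are the example ones from the listing, and the admin command is only printed (not run) so the sketch works without a radosgw installation:

```shell
# Mirror an OpenStack tenant:user pair as a radosgw subuser id.
# "test" and "swift" are the example values used in this README.
tenant="test"
user="swift"
subuser="${tenant}:${user}"

# Print rather than execute the corresponding admin command, since this
# sketch is meant to run anywhere.
echo "radosgw-admin subuser create --uid=${tenant} --subuser=${subuser} --access=full"
```

The printed line matches the ``radosgw-admin subuser create`` invocation shown in the hunk above.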
@@ -383,7 +383,7 @@ radosgw-admin subuser create --uid=test --subuser=test:swift --access=full
 "caps": []}
 ```
 
 Generate the secret key.
 ___Note that ```--gen-secred``` is required in (at least) cuttlefish and newer.___
 
 ```shell
@@ -418,10 +418,10 @@ swift -A http://localhost:6780/auth/1.0 -U test:swift -K "eRYvzUr6vubg93dMRMk60R
 swift -A http://localhost:6780/auth/1.0 -U test:swift -K "eRYvzUr6vubg93dMRMk60RWYiGdJGvDk3lnwi4cl" list test
 ```
 
-##### Keystone intergration
+##### Keystone integration
 
 We will start with a simple test, we should be able to use the keystone openrc
-credentials and start using the swift client as if we where actually using
+credentials and start using the swift client as if we were actually using
 swift.
 
 ```shell
@@ -439,9 +439,9 @@ Clean up ceph to re-run
 =======================
 
 some times it is necessary to re-set the ceph-cluster rather than rebuilding
-everything from cratch
+everything from scratch
 
-set ``all`` to contain all monitors, osds, and computes want to re-initalize.
+set ``all`` to contain all monitors, osds, and computes want to re-initialize.
 
 ```shell
 export all="compute-4 controller-1 controller-2 controller-3"
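The ``all`` variable in this hunk drives a per-node cleanup loop; here is a hedged dry-run sketch of that pattern. The echoed text is a placeholder, not the README's actual cleanup steps (which would run over ssh on each node), so the sketch is safe to execute anywhere:

```shell
# Iterate over the $all host list and show which nodes would be touched.
# A real re-set would stop ceph daemons and remove state on each node;
# here we only echo a placeholder per node.
export all="compute-4 controller-1 controller-2 controller-3"

count=0
for node in $all; do
    echo "would re-initialize ceph state on $node"
    count=$((count + 1))
done
echo "nodes: $count"
```

With the four hosts above the loop prints one line per node and then ``nodes: 4``.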
@@ -474,4 +474,3 @@ distributed under the License is distributed on an "AS IS" BASIS,
 WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 See the License for the specific language governing permissions and
 limitations under the License.
-