diff --git a/doc/ceph-guide.rst b/doc/ceph-guide.rst
index 181b290d27..2a7cf9f7c2 100644
--- a/doc/ceph-guide.rst
+++ b/doc/ceph-guide.rst
@@ -18,7 +18,7 @@ Preparation and Deployment
To prepare a disk for use as a
`Ceph OSD `_ you must add a
special partition label to the disk. This partition label is how Kolla detects
-the disks to format and bootstrap. Any disk with a matching parition label will
+the disks to format and bootstrap. Any disk with a matching partition label will
be reformatted so use caution.
To prepare an OSD as a storage drive, execute the following operations:
@@ -72,7 +72,7 @@ Using a Cache Tier
An optional
`cache tier `_
-can be deployed by formating at least one cache device and enabling cache
+can be deployed by formatting at least one cache device and enabling cache
tiering in the globals.yml configuration file.
To prepare an OSD as a cache device, execute the following operations:
diff --git a/doc/cinder-guide.rst b/doc/cinder-guide.rst
index 0f0bc5d092..86cb463401 100644
--- a/doc/cinder-guide.rst
+++ b/doc/cinder-guide.rst
@@ -26,7 +26,7 @@ the devices /dev/sdb and /dev/sdc:
pvcreate /dev/sdb /dev/sdc
vgcreate cinder-volumes /dev/sdb /dev/sdc
-During development, it may be desireable to use file backed block storage. It
+During development, it may be desirable to use file backed block storage. It
is possible to use a file and mount it as a block device via the loopback
system.
@@ -72,6 +72,6 @@ Check the console log added the disk:
openstack console log show steak_server
-A /dev/vdb should appear in the console log, atleast when booting cirros.
+A /dev/vdb should appear in the console log, at least when booting CirrOS.
If the disk stays in the available state, something went wrong during the
iSCSI mounting of the volume to the guest VM.
diff --git a/doc/swift-guide.rst b/doc/swift-guide.rst
index a27bbfdff9..bf50c770bd 100644
--- a/doc/swift-guide.rst
+++ b/doc/swift-guide.rst
@@ -73,7 +73,7 @@ container directories:
/etc/kolla/config/swift/${ring}.builder rebalance;
done
-Similar commands can be used for multinode, you will just need to run the the 'add' step for each IP
+Similar commands can be used for multinode; you will just need to run the 'add' step for each IP
in the cluster.
For more info, see