A quick guide for installing Ceph on a single node for demo purposes. It almost goes without saying that this is for tire-kickers who just want to test out the software. Ceph is a powerful distributed storage platform with a focus on spreading the failure domain across disks, servers, racks, pods, and datacenters. It doesn’t get a chance to shine if limited to a single node. With that said, let’s get on with it.

Hardware

This example uses a VirtualBox VM with 4 disks attached (1 for OS/App, 3 for Storage). Those installing on physical hardware for a more permanent home setup will obviously want more than one OS disk for redundancy.

To get started create a new VM with the following specs:

  • Name: ceph-single-node
  • Type: Linux
  • Version: Ubuntu (64-bit)
  • Memory: 2GB
  • Disk: 8GB (Dynamic)
  • Network Interface1: NAT
  • Network Interface2: Host-only

Notice the name of the VM is “ceph-single-node”. VirtualBox will inject this as the hostname during the Ubuntu install. You can call yours whatever you want, but for copy/paste ease of install via the commands below, I highly recommend keeping it the same.
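
If you prefer driving VirtualBox from the command line, a rough VBoxManage equivalent of the specs above might look like this. The OS type string, the controller name “SATA”, and the host-only adapter name vboxnet0 are my assumptions; the GUI works just as well.

VBoxManage createvm --name ceph-single-node --ostype Ubuntu_64 --register
VBoxManage modifyvm ceph-single-node --memory 2048 --nic1 nat --nic2 hostonly --hostonlyadapter2 vboxnet0
VBoxManage createhd --filename ceph-single-node-os.vdi --size 8192
VBoxManage storagectl ceph-single-node --name "SATA" --add sata
VBoxManage storageattach ceph-single-node --storagectl "SATA" --port 0 --device 0 --type hdd --medium ceph-single-node-os.vdi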

Linux Install

For the OS install we are going to use Ubuntu Server 14.04 (download). The default install is fine, including the default partitioning. The only thing you’ll want to change from the defaults is to select the optional OpenSSH Server.

Once the Linux installation is complete, shut the VM off to add the data disks. You need to add 3 separate 1TB (1024 GB) drives. Since these are dynamically allocated and we won’t actually be putting a bunch of data on here, you don’t need 3TB free on your host machine. Please don’t try to create smaller drives for the demo, or you’ll run into a situation where Ceph refuses to write data because the drives fall below the default free space threshold. For your sanity, just make them 1TB.

Storage Disks
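
If you’d rather attach the data disks from the command line, something along these lines should do it (again assuming a SATA controller named “SATA”; the size is in MB, so 1048576 MB = 1 TB, dynamically allocated by default):

VBoxManage createhd --filename ceph-disk1.vdi --size 1048576
VBoxManage createhd --filename ceph-disk2.vdi --size 1048576
VBoxManage createhd --filename ceph-disk3.vdi --size 1048576
VBoxManage storageattach ceph-single-node --storagectl "SATA" --port 1 --device 0 --type hdd --medium ceph-disk1.vdi
VBoxManage storageattach ceph-single-node --storagectl "SATA" --port 2 --device 0 --type hdd --medium ceph-disk2.vdi
VBoxManage storageattach ceph-single-node --storagectl "SATA" --port 3 --device 0 --type hdd --medium ceph-disk3.vdi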

Ceph Install

For reference, if anything goes south, the complete Ceph install documentation is here. Most of the below is identical to the official documentation. Where necessary I’ve adjusted it so it will work on a single node and made decisions on options to lower the learning curve.

Begin by installing the ceph repo key.

wget -q -O- 'https://download.ceph.com/keys/release.asc' | sudo apt-key add -

Add the Ceph (jewel release) repo to your Ubuntu sources list.

echo deb http://download.ceph.com/debian-jewel/ trusty main | sudo tee /etc/apt/sources.list.d/ceph.list

Install the ceph-deploy utility. This is the admin tool that allows you to centrally manage and create new Ceph nodes. For those that have been around Ceph for a while, this is a VERY welcome upgrade to the old ‘here’s a list of things you’ll need to install and configure on each node, how that happens is up to you… might want to learn Chef/Puppet…’

Thankfully those days are behind us and now we have the awesome ceph-deploy.

sudo apt-get update && sudo apt-get install ceph-deploy

We’ll want a dedicated user to handle ceph configs and installs, so let’s create that user now.

sudo useradd -m -s /bin/bash ceph-deploy
sudo passwd ceph-deploy

This user needs passwordless sudo configured.

echo "ceph-deploy ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/ceph-deploy

Verify permissions are correct on this file.

sudo chmod 0440 /etc/sudoers.d/ceph-deploy

Now let’s switch to this newly created user. The rest of this guide will be commands issued as this user.

sudo su - ceph-deploy

The ceph-deploy utility functions by ssh’ing to other nodes and executing commands. To accomplish this we need to create an RSA key pair to allow passwordless logins to the nodes we will be configuring (in this guide we are of course just talking about the local node we are on). Make sure you are still the ceph-deploy user and use ssh-keygen to generate the key pair. Just hit enter at all the prompts. Defaults are fine.

ssh-keygen

Now let’s install the generated public key on our destination nodes (in this case our only node, which happens to be the same box we are currently logged into).

ssh-copy-id ceph-deploy@ceph-single-node
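
Optionally, you can add a ~/.ssh/config entry so that ssh (and therefore ceph-deploy) defaults to the right user when connecting to the node. It isn’t strictly required for this single-node setup since we generated the key as the ceph-deploy user itself, but it mirrors what the official docs suggest for multi-node deployments:

Host ceph-single-node
    User ceph-deploy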

Make a new subdirectory in the ceph-deploy user’s home directory and change to it.

cd ~
mkdir my-cluster
cd my-cluster

Create an initial cluster config in this directory.

ceph-deploy new ceph-single-node

This created a bunch of files in the current directory, one of which is the global config file (ceph.conf). Edit this newly created initial configuration file.

vim ceph.conf

Add the following two lines:

osd pool default size = 2
osd crush chooseleaf type = 0

Default pool size is how many replicas of our data we want (2). The chooseleaf setting tells Ceph we are only a single node and that it’s OK to store replicas of the same data on the same physical node. Normally, for safety, Ceph distributes the copies across nodes and won’t leave all your eggs in the same basket (server).
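
For reference, after the edit my ceph.conf looked roughly like the sketch below. The fsid and mon host are the values ceph-deploy generated for my VM (they match the ceph -s output later on); yours will differ, so only add the last two lines, don’t copy the whole file.

[global]
fsid = b607c266-751a-4e77-ad09-d8a9d5f6a531
mon_initial_members = ceph-single-node
mon_host = 10.0.2.15
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
osd pool default size = 2
osd crush chooseleaf type = 0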

Time to install ceph. This installs the ceph binaries and copies our initial config file.

ceph-deploy install ceph-single-node

Before we can create storage OSDs we need to create a monitor.

ceph-deploy mon create-initial

Now we can create the OSDs that will hold our data. Remember those 3 x 1TB drives we attached earlier? They should show up as /dev/sdb, /dev/sdc, and /dev/sdd.
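
A quick lsblk (standard util-linux, nothing Ceph-specific) will confirm the device names before we hand them to ceph-deploy; you should see the 8GB OS disk as sda and the three 1TB data disks as sdb, sdc, and sdd.

lsblk

With the names confirmed, let’s configure them.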

ceph-deploy osd prepare ceph-single-node:sdb
ceph-deploy osd prepare ceph-single-node:sdc
ceph-deploy osd prepare ceph-single-node:sdd

And activate them.

ceph-deploy osd activate ceph-single-node:/dev/sdb1
ceph-deploy osd activate ceph-single-node:/dev/sdc1
ceph-deploy osd activate ceph-single-node:/dev/sdd1

Redistribute our config and keys.

ceph-deploy admin ceph-single-node

Depending on umask, you may not be able to read one of the created files as a non-root user. Let’s correct that.

sudo chmod +r /etc/ceph/ceph.client.admin.keyring

At this point your “cluster” should be in working order and completely functional. Check health with:

ceph -s

You should see something like:

    cluster b607c266-751a-4e77-ad09-d8a9d5f6a531
     health HEALTH_OK
     monmap e1: 1 mons at {ceph-single-node=10.0.2.15:6789/0}
            election epoch 3, quorum 0 ceph-single-node
     osdmap e27: 3 osds: 3 up, 3 in
            flags sortbitwise
      pgmap v286: 120 pgs, 8 pools, 1651 bytes data, 172 objects
            110 MB used, 3055 GB / 3055 GB avail
                 120 active+clean

The most important line in that output is the second line from the top:

health HEALTH_OK

This tells us that the cluster is happy and everything is working as expected.
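
If you want a bit more detail than the summary, ceph osd tree lists each OSD, its weight, which host it lives on, and whether it’s up and in:

ceph osd tree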

We’re not gonna stop there with just an installed cluster. We want the rest of the Ceph functionality, such as S3/Swift object storage and CephFS.

Install object storage gateway:

ceph-deploy rgw create ceph-single-node
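
In the jewel release the gateway listens on port 7480 by default (served by civetweb). A quick sanity check is to hit it with curl from the node; an anonymous request should come back with a small chunk of XML (a ListAllMyBucketsResult owned by “anonymous”):

curl http://localhost:7480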

Install cephfs:

ceph-deploy mds create ceph-single-node

That’s it! You now have a fully functional Ceph “cluster” with 1 monitor, 3 OSDs, 1 metadata server, and 1 RADOS gateway.

Usage

Now that we have this installed, how do we use it?

Ceph FS

One of the most closely followed features is the distributed filesystem provided by CephFS.

Before we can create a filesystem, we need to create the OSD pools that will store its data and metadata.

ceph osd pool create cephfs_data 128
ceph osd pool create cephfs_metadata 128

Now create the filesystem.

ceph fs new cephfs cephfs_metadata cephfs_data
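
You can confirm the filesystem exists and that the metadata server has picked it up; ceph fs ls should list cephfs with its two pools, and ceph mds stat should show the MDS we created earlier as active:

ceph fs ls
ceph mds stat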

In order to mount it in Linux, we need to install the Ceph client libraries.

sudo apt-get install ceph-fs-common

Next we need to create a mountpoint for the filesystem.

sudo mkdir /mnt/mycephfs

By default, all access operations require authentication. The Ceph install has created some default credentials for us. To view them:

cat ~/my-cluster/ceph.client.admin.keyring

[client.admin]
    key = AQCv2yRXOVlUMxAAK+e6gehnirXTV0O8PrJYQQ==

The key string is what we are looking for; we’ll use it to mount this newly created filesystem.

sudo mount -t ceph ceph-single-node:6789:/ /mnt/mycephfs -o name=admin,secret=AQCv2yRXOVlUMxAAK+e6gehnirXTV0O8PrJYQQ==

Now that it’s mounted, let’s see what it looks like.

df -h /mnt/mycephfs

Filesystem        Size  Used Avail Use% Mounted on
10.0.2.15:6789:/  3.0T  124M  3.0T   1% /mnt/mycephfs
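
As a side note, if you’d rather not put the admin key on the command line (where it lands in your shell history), one option is to drop it into a file and use the secretfile mount option instead. The path below is just an example I picked:

ceph auth get-key client.admin | sudo tee /etc/ceph/admin.secret
sudo chmod 600 /etc/ceph/admin.secret
sudo mount -t ceph ceph-single-node:6789:/ /mnt/mycephfs -o name=admin,secretfile=/etc/ceph/admin.secret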

Object Storage (Amazon S3 Compatible)

Create a user.

sudo radosgw-admin user create --uid="testuser" --display-name="First User"

The output of this command will contain both the user’s access key and secret key. Make note of them.

"access_key": "ZIEYA2GLI93L2Q7ET4IV",
"secret_key": "FCKtauzPBGAwaPWMDDkL3Ek11nkriB8PNBmTUmwB"

To avoid having to curl our own REST actions, we will use the convenient s3cmd utility.

sudo apt-get install s3cmd

Create the initial configuration file.

s3cmd --configure

Enter your access and secret keys when prompted. Say no to the other encryption options.

Since this utility is designed to work with Amazon’s S3 service, we need to modify the generated config to point to our local server.

vim ~/.s3cfg

We will need to change the following variables. Below I’ve changed them to point to my local server (use your own VM’s IP address here).

host_base = 10.10.10.101:7480
host_bucket = %(bucket)s.10.10.10.101:7480

Once configured to point to our local object store we can test bucket creation.

s3cmd mb s3://TESTING

And test putting an object into our new bucket.

echo "Hello World" > hello.txt
s3cmd put hello.txt s3://TESTING
s3cmd ls s3://TESTING

2016-05-01 03:39        12   s3://TESTING/hello.txt
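
To round-trip it, pull the object back down and check the contents match what we uploaded:

s3cmd get s3://TESTING/hello.txt hello-copy.txt
cat hello-copy.txt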

Admin Web Interface (Calamari)

Coming Soon…