
Rook Ceph

Rook turns storage software into self-managing, self-scaling, and self-healing storage services. It automates deployment, bootstrapping, configuration, provisioning, scaling, upgrading, migration, disaster recovery, monitoring, and resource management. Rook uses the facilities provided by the underlying cloud-native container management, scheduling, and orchestration platform to perform its duties.

The pack provides the following two configurations:

  • A three-node Ceph cluster (recommended).
  • A single node Ceph cluster.
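The two configurations differ mainly in the number of Ceph monitors and storage nodes deployed. As a rough sketch, assuming the pack exposes the standard Rook CephCluster monitor settings, the three-node configuration corresponds to a monitor count of three:

```yaml
# Sketch of the relevant Rook CephCluster settings; not the pack's exact values.
mon:
  count: 3                    # three-node cluster; a single-node cluster would use count: 1
  allowMultiplePerNode: false # a single-node cluster would need to relax this
```

A three-node cluster is recommended because three monitors can maintain quorum when one node fails, whereas a single-node cluster has no redundancy.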

Ensure that your worker node pool size meets the minimum node requirement for your Ceph cluster. Additional raw disks must be attached to the worker pool nodes for Ceph to use as storage. For example, if you are using existing appliances for your Kubernetes cluster (typical for edge clusters), you will need to ensure that additional disks (one or three, depending on your Ceph cluster settings) are attached to the appliance. In such cases, configure the device filter in the pack settings. For example, if the additional disks are sdd, sde, and sdf, the following configuration is required:

Example YAML

useAllNodes: true       # consider all worker nodes as storage nodes
useAllDevices: false    # do not consume every available device
deviceFilter: ^sd[d-f]  # regular expression matched against device names (sdd, sde, sdf)
osdsPerDevice: "1" # this value can be overridden at the node or device level
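As the comment above notes, osdsPerDevice can also be set per node or per device. A minimal sketch of a node-level override, using a hypothetical node name, following Rook's storage selection settings:

```yaml
useAllNodes: false
nodes:
  - name: "worker-node-1"     # hypothetical; must match the Kubernetes node name
    devices:
      - name: "sdd"
        config:
          osdsPerDevice: "2"  # overrides the cluster-wide value for this device only
```

Device-level settings take precedence over node-level settings, which in turn take precedence over the cluster-wide values.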

Versions Supported