Extended storage docs wrt local storage #7

Merged 2 commits on Feb 23, 2018
docs/user/storage.md (57 additions, 0 deletions)
@@ -16,3 +16,60 @@ The amount of storage needed is configured using the
Note that configuring storage is done per group of servers.
It is not possible to configure storage per individual
server.
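As an illustration, a per-group storage request could look like the following minimal sketch of an `ArangoDeployment` resource. The `apiVersion`, resource name, and sizes here are assumptions for illustration, not taken from this PR.

```yaml
apiVersion: "database.arangodb.com/v1alpha"  # assumed CRD version, for illustration
kind: "ArangoDeployment"
metadata:
  name: "example-cluster"  # hypothetical name
spec:
  mode: Cluster
  # Storage is requested per group of servers; it cannot be
  # configured per individual server.
  agents:
    resources:
      requests:
        storage: 8Gi
  dbservers:
    resources:
      requests:
        storage: 80Gi
```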

## Local storage

For optimal performance, ArangoDB should be configured with locally attached
SSD storage.

To accomplish this, you must create a `PersistentVolume` for every server that
needs persistent storage (single servers, agents & dbservers).
For example, for a `cluster` with 3 agents and 5 dbservers, you must create 8 volumes.

Note that each volume must have a capacity equal to or greater than the
storage requested for the server that will use it.

To select the correct node, add a required node-affinity annotation as shown
in the example below.

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: volume-agent-1
  annotations:
    "volume.alpha.kubernetes.io/node-affinity": '{
      "requiredDuringSchedulingIgnoredDuringExecution": {
        "nodeSelectorTerms": [
          { "matchExpressions": [
            { "key": "kubernetes.io/hostname",
              "operator": "In",
              "values": ["node-1"]
            }
          ]}
        ]}
      }'
spec:
  capacity:
    storage: 100Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-ssd
  local:
    path: /mnt/disks/ssd1
```
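The `volume.alpha.kubernetes.io/node-affinity` annotation is the alpha mechanism
available on Kubernetes 1.9. Newer Kubernetes versions (1.10 and later) express
the same constraint with the `spec.nodeAffinity` field instead; a sketch of the
equivalent volume, assuming such a cluster version:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: volume-agent-1
spec:
  capacity:
    storage: 100Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-ssd
  local:
    path: /mnt/disks/ssd1
  # First-class replacement for the alpha annotation shown above.
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - node-1
```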

For Kubernetes 1.9 and up, you should create a `StorageClass` that is configured
to bind volumes on their first use, as shown in the example below.
This ensures that the Kubernetes scheduler takes all constraints on a `Pod`
into consideration before binding the volume to a claim.

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: local-ssd
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
```
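
With `WaitForFirstConsumer`, a `PersistentVolumeClaim` that uses this class
remains `Pending` until a `Pod` consuming it is scheduled, at which point the
scheduler picks a volume on a node that satisfies the `Pod`'s other constraints.
In practice the ArangoDB operator creates such claims for its servers; the
following hand-written claim is only a sketch with a hypothetical name:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: volume-agent-1-claim  # hypothetical; normally created by the operator
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: local-ssd
  resources:
    requests:
      storage: 100Gi  # must not exceed the capacity of an available volume
```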