Initial commit

marsal wang
2021-12-23 13:52:28 +08:00
commit 06000029e8
45 changed files with 3708 additions and 0 deletions

View File

@@ -0,0 +1,21 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*~
# Various IDEs
.project
.idea/
*.tmproj

View File

@@ -0,0 +1,16 @@
apiVersion: v2
appVersion: "v3.1.0-k8s1.11"
version: 0.1.5
type: application
name: nfs-client-provisioner
description: nfs-client-provisioner is an automatic provisioner that creates Persistent Volumes from an existing NFS server
home: https://github.com/rimusz/charts/tree/master/stable/nfs-client-provisioner
sources:
- https://github.com/rimusz/hostpath-provisioner
keywords:
- nfs
- storage
- nfs-client
maintainers:
- email: rmocius@gmail.com
  name: rimusz

View File

@@ -0,0 +1,4 @@
approvers:
- rimusz
reviewers:
- rimusz

View File

@@ -0,0 +1,116 @@
# NFS Client Provisioner
[NFS Client Provisioner](https://github.com/kubernetes-incubator/external-storage/tree/master/nfs-client)
is an automatic provisioner that uses your already configured NFS server to automatically create Persistent Volumes.
## TL;DR;
```console
$ helm install rimusz/nfs-client-provisioner --set nfs.server="1.2.3.4"
```
## Introduction
This chart bootstraps a [nfs-client-provisioner](https://github.com/kubernetes-incubator/external-storage/tree/master/nfs-client)
deployment on a [Kubernetes](http://kubernetes.io) cluster using the [Helm](https://helm.sh)
package manager.
## Installing the Chart
To install the chart with the release name `nfs`:
```console
$ helm install rimusz/nfs-client-provisioner --name nfs --set nfs.server="1.2.3.4"
```
The command deploys nfs-client-provisioner on the Kubernetes cluster in the default
configuration. The [configuration](#configuration) section lists the parameters
that can be configured during installation.
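To verify the release, you can check that the provisioner pod is running and that the default StorageClass exists. This is a minimal sketch: the label selector and the class name `nfs` assume the chart defaults and the release name `nfs` used above.
```console
$ kubectl get pods -l app=nfs-client-provisioner,release=nfs
$ kubectl get storageclass nfs
```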
## Testing the Chart
Now we'll test your NFS provisioner.
Deploy:
```console
$ kubectl create -f test/test-claim.yaml -f test/test-pod.yaml
```
Now check the PVC's folder on your NFS server for the file `SUCCESS`.
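For example, if the exported share (`nfs.path`, `/vol1` by default) is visible on the NFS server host, you can list the folder the provisioner created for the claim. This is a sketch: the mount point and the exact folder name depend on your setup.
```console
# On the NFS server host; the provisioner creates one folder per PVC under the export
$ ls /vol1/*test-claim*/
SUCCESS
```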
Delete:
```console
$ kubectl delete -f test/test-pod.yaml -f test/test-claim.yaml
```
Now check that the PVC's folder was renamed to `archived-???`.
## Deploying your own PersistentVolumeClaim
To deploy your own PVC, make sure that you reference the correct storage class, as set by `storageClass.name` in your `values.yaml` file.
For example:
```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
  annotations:
    volume.beta.kubernetes.io/storage-class: "nfs"
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Mi
```
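Then create the claim with `kubectl` (assuming you saved the manifest above as `my-claim.yaml`, a hypothetical file name):
```console
$ kubectl create -f my-claim.yaml
```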
## Uninstalling the Chart
To uninstall/delete the `nfs` deployment:
```console
$ helm delete nfs
```
The command removes all the Kubernetes components associated with the chart and
deletes the release.
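With Helm 2 (the syntax used throughout this README), the release record is kept after `helm delete`; if you also want to free the release name `nfs`, purge it:
```console
$ helm delete --purge nfs
```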
## Configuration
The following table lists the configurable parameters of the nfs-client-provisioner chart and
their default values.
| Parameter | Description | Default |
|:-------------------------------|:----------------------------------------------------------------------------------|:------------------------------------------------------|
| `image.repository` | The image repository to pull from | `quay.io/external_storage/nfs-client-provisioner` |
| `image.tag` | The image tag to pull from | `v3.1.0-k8s1.11` |
| `image.pullPolicy` | Image pull policy | `IfNotPresent` |
| `nfs.server` | NFS server IP | `` |
| `nfs.path` | NFS server share path | `/vol1` |
| `storageClass.create` | Enable creation of a StorageClass to consume this nfs-client-provisioner instance | `true` |
| `storageClass.name` | The name to assign the created StorageClass | `nfs` |
| `storageClass.reclaimPolicy` | Set the reclaimPolicy for PV within StorageClass | `Delete` |
| `rbac.create` | Enable RBAC | `true` |
| `rbac.serviceAccountName` | Service account name | `default` |
| `resources` | Resource limits for nfs-client-provisioner pod | `{}` |
| `nodeSelector` | Map of node labels for pod assignment | `{}` |
| `tolerations` | List of node taints to tolerate | `[]` |
| `affinity` | Map of node/pod affinities | `{}` |

Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`. For example:
```console
$ helm install rimusz/nfs-client-provisioner --name nfs \
--set nfs.server="1.2.3.4",resources.limits.cpu=200m
```
Alternatively, a YAML file that specifies the values for the above parameters
can be provided while installing the chart. For example,
```console
$ helm install rimusz/nfs-client-provisioner --name nfs -f values.yaml
```
> **Tip**: You can use the default [values.yaml](values.yaml) as an example

View File

@@ -0,0 +1,36 @@
The NFS Client Provisioner deployment has now been installed.
{{- if not .Values.nfs.server }}
##############################################################################
#### ERROR: You did not provide NFS server IP. ####
##############################################################################
No pods will reach the Running state until an NFS server IP is provided.
{{- end }}
{{ if .Values.storageClass.create -}}
A storage class named '{{ .Values.storageClass.name }}' has now been created
and is available to provision dynamic volumes.
You can use this storageclass by creating a `PersistentVolumeClaim` with the
correct storageClassName attribute. For example:
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
  annotations:
    volume.beta.kubernetes.io/storage-class: "{{ .Values.storageClass.name }}"
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Mi
{{ else -}}
A storage class has NOT been created. You may create a custom `StorageClass`
resource with its `provisioner` field set to `{{ template "nfs-client-provisioner.provisionerName" . }}`.
{{ end -}}

View File

@@ -0,0 +1,43 @@
{{/* vim: set filetype=mustache: */}}
{{/*
Expand the name of the chart.
*/}}
{{- define "nfs-client-provisioner.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "nfs-client-provisioner.fullname" -}}
{{- if .Values.fullnameOverride -}}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- if contains $name .Release.Name -}}
{{- .Release.Name | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "nfs-client-provisioner.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create the name of the provisioner.
*/}}
{{- define "nfs-client-provisioner.provisionerName" -}}
{{- if .Values.storageClass.provisionerName -}}
{{- printf .Values.storageClass.provisionerName -}}
{{- else -}}
cluster.local/{{ template "nfs-client-provisioner.fullname" . -}}
{{- end -}}
{{- end -}}

View File

@@ -0,0 +1,24 @@
{{ if .Values.rbac.create -}}
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: {{ template "nfs-client-provisioner.fullname" . }}
  labels:
    app: {{ template "nfs-client-provisioner.name" . }}
    chart: {{ template "nfs-client-provisioner.chart" . }}
    heritage: {{ .Release.Service }}
    release: {{ .Release.Name }}
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["list", "watch", "create", "update", "patch"]
{{- end -}}

View File

@@ -0,0 +1,19 @@
{{- if .Values.rbac.create }}
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    app: {{ template "nfs-client-provisioner.name" . }}
    chart: {{ template "nfs-client-provisioner.chart" . }}
    heritage: {{ .Release.Service }}
    release: {{ .Release.Name }}
  name: {{ template "nfs-client-provisioner.fullname" . }}
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: {{ template "nfs-client-provisioner.fullname" . }}
subjects:
  - kind: ServiceAccount
    name: {{ template "nfs-client-provisioner.fullname" . }}
    namespace: {{ .Release.Namespace }}
{{- end -}}

View File

@@ -0,0 +1,57 @@
{{- if .Values.nfs.server }}
kind: Deployment
apiVersion: apps/v1
metadata:
  name: {{ template "nfs-client-provisioner.fullname" . }}
  labels:
    app: {{ template "nfs-client-provisioner.name" . }}
    chart: {{ template "nfs-client-provisioner.chart" . }}
    heritage: {{ .Release.Service }}
    release: {{ .Release.Name }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: {{ template "nfs-client-provisioner.name" . }}
      release: {{ .Release.Name }}
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: {{ template "nfs-client-provisioner.name" . }}
        release: {{ .Release.Name }}
    spec:
      serviceAccountName: {{ if .Values.rbac.create }}{{ template "nfs-client-provisioner.fullname" . }}{{ else }}{{ .Values.rbac.serviceAccountName | quote }}{{ end }}
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: {{ template "nfs-client-provisioner.provisionerName" . }}
            - name: NFS_SERVER
              value: {{ .Values.nfs.server }}
            - name: NFS_PATH
              value: {{ .Values.nfs.path }}
      volumes:
        - name: nfs-client-root
          nfs:
            server: {{ .Values.nfs.server }}
            path: {{ .Values.nfs.path }}
      {{- with .Values.nodeSelector }}
      nodeSelector:
{{ toYaml . | indent 8 }}
      {{- end }}
      {{- with .Values.affinity }}
      affinity:
{{ toYaml . | indent 8 }}
      {{- end }}
      {{- with .Values.tolerations }}
      tolerations:
{{ toYaml . | indent 8 }}
      {{- end }}
{{- end }}

View File

@@ -0,0 +1,11 @@
{{- if .Values.rbac.create }}
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    app: {{ template "nfs-client-provisioner.name" . }}
    chart: {{ template "nfs-client-provisioner.chart" . }}
    heritage: {{ .Release.Service }}
    release: {{ .Release.Name }}
  name: {{ template "nfs-client-provisioner.fullname" . }}
{{- end -}}

View File

@@ -0,0 +1,13 @@
{{ if .Values.storageClass.create -}}
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: {{ .Values.storageClass.name }}
  labels:
    app: {{ template "nfs-client-provisioner.name" . }}
    chart: {{ template "nfs-client-provisioner.chart" . }}
    heritage: {{ .Release.Service }}
    release: {{ .Release.Name }}
provisioner: {{ template "nfs-client-provisioner.provisionerName" . }}
reclaimPolicy: {{ .Values.storageClass.reclaimPolicy }}
{{ end -}}

View File

@@ -0,0 +1,12 @@
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
  annotations:
    volume.beta.kubernetes.io/storage-class: "managed-nfs-storage"
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi

View File

@@ -0,0 +1,21 @@
kind: Pod
apiVersion: v1
metadata:
  name: test-pod
spec:
  containers:
    - name: test-pod
      image: gcr.io/google_containers/busybox:1.24
      command:
        - "/bin/sh"
      args:
        - "-c"
        - "touch /mnt/SUCCESS && exit 0 || exit 1"
      volumeMounts:
        - name: nfs-pvc
          mountPath: "/mnt"
  restartPolicy: "Never"
  volumes:
    - name: nfs-pvc
      persistentVolumeClaim:
        claimName: test-claim

View File

@@ -0,0 +1,52 @@
# Default values for nfs-client-provisioner.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

## Deployment replica count
replicaCount: 1

## docker image
image:
  repository: quay.io/external_storage/nfs-client-provisioner
  tag: v3.1.0-k8s1.11
  pullPolicy: IfNotPresent

## Cloud Filestore instance
nfs:
  ## Set IP address
  server: ""
  ## Set file share name
  path: "/vol1"

## For creating the StorageClass automatically:
storageClass:
  create: true
  ## Set a StorageClass name
  name: nfs
  ## Set reclaim policy for PV
  reclaimPolicy: Delete

## For RBAC support:
rbac:
  create: true
  ## Ignored if rbac.create is true
  ##
  serviceAccountName: default

##
resources: {}
  # limits:
  #   cpu: 100m
  #   memory: 128Mi
  # requests:
  #   cpu: 100m
  #   memory: 128Mi

nodeSelector: {}

tolerations: []

affinity: {}