# Private Docker Registry in Kubernetes

Kubernetes offers an optional private Docker registry addon, which you can turn
on when you bring up a cluster or install later. This gives you a place to
store truly private Docker images for your cluster.
## How it works

The private registry runs as a `Pod` in your cluster. It does not currently
support SSL or authentication, which triggers Docker's "insecure registry"
logic. To work around this, we run a proxy on each node in the cluster,
exposing a port onto the node (via a hostPort), which Docker accepts as
"secure", since it is accessed by `localhost`.
## Turning it on

Some cluster installs (e.g. GCE) support this as a cluster-birth flag. The
`ENABLE_CLUSTER_REGISTRY` variable in `cluster/gce/config-default.sh` governs
whether the registry is run or not. To set this flag, you can specify
`KUBE_ENABLE_CLUSTER_REGISTRY=true` when running `kube-up.sh`. If your cluster
does not include this flag, the following steps should work. Note that some of
this is cloud-provider specific, so you may have to customize it a bit.
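
For example, on GCE you might bring the cluster up with the registry enabled
like this (a sketch; the exact invocation depends on your checkout and
provider):

```console
$ KUBE_ENABLE_CLUSTER_REGISTRY=true ./cluster/kube-up.sh
```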
### Make some storage

The primary job of the registry is to store data. To do that we have to decide
where to store it. For cloud environments that have networked storage, we can
use Kubernetes's `PersistentVolume` abstraction. The following template is
expanded by `salt` in the GCE cluster turnup, but can easily be adapted to
other situations:
<!-- BEGIN MUNGE: EXAMPLE registry-pv.yaml.in -->

```yaml
kind: PersistentVolume
apiVersion: v1
metadata:
  name: kube-system-kube-registry-pv
spec:
{% if pillar.get('cluster_registry_disk_type', '') == 'gce' %}
  capacity:
    storage: {{ pillar['cluster_registry_disk_size'] }}
  accessModes:
    - ReadWriteOnce
  gcePersistentDisk:
    pdName: "{{ pillar['cluster_registry_disk_name'] }}"
    fsType: "ext4"
{% endif %}
```

<!-- END MUNGE: EXAMPLE registry-pv.yaml.in -->
If, for example, you wanted to use NFS you would just need to change the
`gcePersistentDisk` block to `nfs`. See
[here](https://kubernetes.io/docs/concepts/storage/volumes/) for more details on volumes.
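
As a sketch, an NFS-backed version of the volume might look like this (the
`server` and `path` values are placeholders you would fill in for your own NFS
setup, and the size is illustrative):

```yaml
kind: PersistentVolume
apiVersion: v1
metadata:
  name: kube-system-kube-registry-pv
spec:
  capacity:
    storage: 100Gi             # placeholder size
  accessModes:
    - ReadWriteOnce
  nfs:
    server: nfs.example.com    # placeholder NFS server
    path: /exports/registry    # placeholder export path
```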
Note that in any case, the storage (in the case of GCE, the PersistentDisk) must be
created independently - this is not something Kubernetes manages for you (yet).
### I don't want or don't have persistent storage

If you are running in a place that doesn't have networked storage, or if you
just want to kick the tires on this without committing to it, you can easily
adapt the `ReplicationController` specification below to use a simple
`emptyDir` volume instead of a `persistentVolumeClaim`.
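
For example, assuming the registry's volume is named `image-store`, the
`volumes` section of the `ReplicationController` could be swapped to the
following (note the data is lost when the `Pod` dies):

```yaml
      volumes:
      - name: image-store
        emptyDir: {}
```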
### Claim the storage

Now that the Kubernetes cluster knows that some storage exists, you can put a
claim on that storage. As with the `PersistentVolume` above, you can start
with the `salt` template:
<!-- BEGIN MUNGE: EXAMPLE registry-pvc.yaml.in -->

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: kube-registry-pvc
  namespace: kube-system
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: {{ pillar['cluster_registry_disk_size'] }}
```

<!-- END MUNGE: EXAMPLE registry-pvc.yaml.in -->
This tells Kubernetes that you want to use storage, and the `PersistentVolume`
you created before will be bound to this claim (unless you have other
`PersistentVolumes`, in which case those might get bound instead). This claim
gives you the right to use this storage until you release the claim.
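
You can check that the claim actually bound to a volume:

```console
$ kubectl get pvc kube-registry-pvc --namespace=kube-system
```

The `STATUS` column should report `Bound` once binding succeeds.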
## Run the registry

Now we can run a Docker registry:
<!-- BEGIN MUNGE: EXAMPLE registry-rc.yaml -->

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: kube-registry-v0
  namespace: kube-system
  labels:
    k8s-app: kube-registry
    version: v0
spec:
  replicas: 1
  selector:
    k8s-app: kube-registry
    version: v0
  template:
    metadata:
      labels:
        k8s-app: kube-registry
        version: v0
    spec:
      containers:
      - name: registry
        image: registry:2
        resources:
          limits:
            cpu: 100m
            memory: 100Mi
        env:
        - name: REGISTRY_HTTP_ADDR
          value: :5000
        - name: REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY
          value: /var/lib/registry
        volumeMounts:
        - name: image-store
          mountPath: /var/lib/registry
        ports:
        - containerPort: 5000
          name: registry
          protocol: TCP
      volumes:
      - name: image-store
        persistentVolumeClaim:
          claimName: kube-registry-pvc
```

<!-- END MUNGE: EXAMPLE registry-rc.yaml -->
*Note:* If you have set multiple replicas, make sure your CSI driver supports the `ReadWriteMany` access mode.
## Expose the registry in the cluster

Now that we have a registry `Pod` running, we can expose it as a Service:
<!-- BEGIN MUNGE: EXAMPLE registry-svc.yaml -->

```yaml
apiVersion: v1
kind: Service
metadata:
  name: kube-registry
  namespace: kube-system
  labels:
    k8s-app: kube-registry
    kubernetes.io/name: "KubeRegistry"
spec:
  selector:
    k8s-app: kube-registry
  ports:
  - name: registry
    port: 5000
    protocol: TCP
```

<!-- END MUNGE: EXAMPLE registry-svc.yaml -->
## Expose the registry on each node

Now that we have a running `Service`, we need to expose it onto each Kubernetes
`Node` so that Docker will see it as `localhost`. We can load a `Pod` on every
node by creating the following `DaemonSet`:
<!-- BEGIN MUNGE: EXAMPLE ../../saltbase/salt/kube-registry-proxy/kube-registry-proxy.yaml -->

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-registry-proxy
  namespace: kube-system
  labels:
    k8s-app: kube-registry-proxy
spec:
  selector:
    matchLabels:
      k8s-app: kube-registry-proxy
  template:
    metadata:
      labels:
        k8s-app: kube-registry-proxy
        kubernetes.io/name: "kube-registry-proxy"
    spec:
      containers:
      - name: kube-registry-proxy
        image: gcr.io/google_containers/kube-registry-proxy:0.4
        resources:
          limits:
            cpu: 100m
            memory: 50Mi
        env:
        - name: REGISTRY_HOST
          value: kube-registry.kube-system.svc.cluster.local
        - name: REGISTRY_PORT
          value: "5000"
        ports:
        - name: registry
          containerPort: 80
          hostPort: 5000
```

<!-- END MUNGE: EXAMPLE ../../saltbase/salt/kube-registry-proxy/kube-registry-proxy.yaml -->
When modifying replication-controller, service and daemon-set definitions, take
care to ensure *unique* identifiers for the rc-svc couple and the daemon-set.
Failing to do so will register the localhost proxy daemon-set's pods with the
upstream service. As a result they will then try to proxy themselves, which
will, for obvious reasons, not work.
This ensures that port 5000 on each node is directed to the registry `Service`.
You should be able to verify that it is running by hitting port 5000 with a web
browser and getting a 404 error:

```console
$ curl localhost:5000
404 page not found
```
## Using the registry

To use an image hosted by this registry, simply say this in your `Pod`'s
`spec.containers[].image` field:

```yaml
image: localhost:5000/user/container
```
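
For example, a minimal `Pod` manifest referencing such an image might look like
the following (the pod name `my-app` and the image name `user/container` are
placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app    # hypothetical name
spec:
  containers:
  - name: my-app
    image: localhost:5000/user/container
```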
Before you can use the registry, you have to be able to get images into it,
though. If you are building an image on your Kubernetes `Node`, you can spell
out `localhost:5000` when you build and push. More likely, though, you are
building locally and want to push to your cluster.

You can use `kubectl` to set up a port-forward from your local node to a
running `Pod`:
```console
$ POD=$(kubectl get pods --namespace kube-system -l k8s-app=kube-registry \
        -o template --template '{{range .items}}{{.metadata.name}} {{.status.phase}}{{"\n"}}{{end}}' \
        | grep Running | head -1 | cut -f1 -d' ')

$ kubectl port-forward --namespace kube-system $POD 5000:5000 &
```
Now you can build and push images on your local computer as
`localhost:5000/yourname/container` and those images will be available inside
your Kubernetes cluster with the same name.
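
With the port-forward in place, the build-and-push flow looks like this (the
image name `yourname/container` is a placeholder):

```console
$ docker build -t localhost:5000/yourname/container .
$ docker push localhost:5000/yourname/container
```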