Commit f9bb57a9 authored by Bruce Flynn

initial
FROM debian:buster-slim
RUN apt-get -qq update && apt-get -qq install --no-install-recommends \
dumb-init \
libpq5 \
libpq-dev \
postgresql-client \
python3 \
python3-dev \
python3-pip \
python3-setuptools && \
rm -rf /var/lib/apt/lists/*
COPY app.py /app.py
COPY cron.py /cron.py
COPY requirements.txt /tmp/
RUN pip3 install -r /tmp/requirements.txt
ENTRYPOINT ["dumb-init", "--"]
EXPOSE 6543
CMD ["python3", "/app.py"]
# SSEC Brownbag: Kubernetes Q&A Minikube Example Code
Presented on 04 Nov, 2019
> NOTE: This is not intended as an example of best practices for anything involved
> here; it's simply a dumping ground for what I could think of off the top of my head
> to provide an example of as many K8S moving parts as I could.
# Overview
This is a demo of various types of components provided by Kubernetes. The
demo is deployed as a [Helm](https://helm.sh) chart and consists of a Python
web application to which you can submit a string. Upon submission the App will
create a Kubernetes
[Batch Job](https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/)
that will simply print out a message. There is also a Kubernetes
[Cron Job](https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/)
that will collect results and status from the Job and enter the data in a
PostgreSQL database, the results of which will be available to view in the App.
It makes use of the following Kubernetes components:
* [Ingress](https://kubernetes.io/docs/concepts/services-networking/ingress/)
* [Service](https://kubernetes.io/docs/concepts/services-networking/service/)
* [Pod](https://kubernetes.io/docs/concepts/workloads/pods/pod-overview/)
* [Batch Job](https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/)
* [Cron Job](https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/)
* [ServiceAccount](https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/)
* [RBAC](https://kubernetes.io/docs/reference/access-authn-authz/rbac/)
* [ConfigMap](https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#create-a-configmap)
* [Secret](https://kubernetes.io/docs/concepts/configuration/secret/)
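The Cron Job side of the flow described above (`cron.py`) is not shown in this
snippet. A minimal sketch of what such a collector might look like, assuming it
lists batch Jobs with the Kubernetes Python client and records their status in
the `jobs` table (the function name, status strings, and SQL here are
illustrative, not the actual `cron.py`):
```
import logging
import os

import kubernetes as k8s
import psycopg2


def collect_job_statuses(dburl):
    # Record a simple status string for every batch Job in the default namespace.
    api = k8s.client.BatchV1Api()
    with psycopg2.connect(dburl) as conn, conn.cursor() as cur:
        for job in api.list_namespaced_job("default").items:
            if job.status.succeeded:
                status = "Succeeded"
            elif job.status.failed:
                status = "Failed"
            else:
                status = "Running"
            cur.execute(
                "UPDATE jobs SET status = %s WHERE name = %s;",
                [status, job.metadata.name],
            )


if __name__ == "__main__":
    logging.basicConfig(level=logging.INFO)
    # Same in-cluster / out-of-cluster switch that app.py uses.
    if "MYAPP_INCLUSTER" in os.environ:
        k8s.config.load_incluster_config()
    else:
        k8s.config.load_kube_config()
    collect_job_statuses(os.environ["MYAPP_DBURL"])
```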
# Environment Setup
## Install/Run Kubernetes
This example will make use of [minikube](https://minikube.sigs.k8s.io) for a
Kubernetes cluster. Generally speaking, the configuration/deployment here should
be deployable to any K8S cluster you have available; however, it has only been
tested with minikube.
Follow the instructions [here](https://minikube.sigs.k8s.io) to install minikube
on your system.
After installing the minikube binary, on my Linux box I create a new cluster with
```
minikube start --vm-driver=kvm2
```
## Helm Charts
This demonstration also provides a [Helm](https://helm.sh) chart.
> **NOTE**: Helm v3 is the current version, but this demo uses Helm v2 because
> I have not had the opportunity to check out v3 yet. So if you are looking at
> docs, make sure you are looking at the right version.
Helm requires that the `helm-tiller` addon be enabled:
```
minikube addons enable helm-tiller
```
Also, the chart includes a K8S Ingress, which is enabled by default. Unless you
specifically disable the Ingress you will also need to enable the `ingress` addon:
```
minikube addons enable ingress
```
# Database
The database password is set in a secret that must be created before deploying the
app.
Set `<MYPASSWORD>` to whatever you want the password to be.
```
kubectl create secret generic myapp --from-literal=postgresql-password=<MYPASSWORD>
```
Deploy an empty database `myapp` that will use the password from the secret created
above. The `-f` flag provides configuration values that override the defaults provided
as part of the Helm PostgreSQL [chart](https://github.com/helm/charts/tree/master/stable/postgresql).
```
helm install --name myappdb -f chart/pg-values.yaml stable/postgresql
```
> **NOTE**: If you change the name here you will also have to set `dburl` when you
> deploy the myapp chart.
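For reference, the chart's Deployment and CronJob templates hand the containers the
password from this secret via the `PGPASSWORD` environment variable, alongside a
`MYAPP_DBURL` value, so the connection URL itself presumably does not need to embed
the password; libpq picks `PGPASSWORD` up from the environment automatically. A
hypothetical sketch of the app-side connection (the URL below is illustrative; the
real value comes from the chart's `dburl` setting):
```
import os

import psycopg2

# Illustrative values only; in the cluster these come from the chart templates.
os.environ["PGPASSWORD"] = "<MYPASSWORD>"
dburl = "postgresql://myapp@myappdb-postgresql/myapp"

# libpq reads PGPASSWORD from the environment when the URL omits a password.
conn = psycopg2.connect(dburl)
cur = conn.cursor()
cur.execute("SELECT count(*) FROM jobs")
print(cur.fetchone())
conn.close()
```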
# Install App
You can install the app chart provided in the `./chart/myapp` directory (after Helm is
installed) by doing the following:
```
helm install --name myapp ./chart/myapp
```
After the app is installed you can check the status of all components that were deployed
as part of it by running:
```
helm status myapp
```
You can additionally delete all components deployed as part of this particular Helm release by:
```
helm delete --purge myapp
```
The `--purge` flag removes all trace of the release in Helm. You would probably not use
the `--purge` flag in production.
""" App for submitting K8S batch jobs based on entered name.
"""
import logging
import os
from contextlib import closing, contextmanager
from wsgiref.simple_server import make_server
import kubernetes as k8s
import psycopg2
import yaml
from jinja2 import Template
from paste.deploy.config import PrefixMiddleware
from pyramid.config import Configurator
from pyramid.httpexceptions import HTTPBadRequest, HTTPFound, HTTPInternalServerError
from pyramid.session import SignedCookieSessionFactory
LOG = logging
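# Hard-coded session secret; fine for a demo, not something to use in production.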
session_factory = SignedCookieSessionFactory("foofoo")
@contextmanager
def cursor(dburl):
    # Close the connection when done; psycopg2's ``with conn`` only manages the transaction.
    with closing(psycopg2.connect(dburl)) as conn, conn:
        yield conn.cursor()
def say_hello_view(request):
LOG.info("main page")
resp = request.response
resp.content_type = "text/html"
with cursor(request.registry.settings["dburl"]) as cur:
cur.execute("select * from jobs")
jobs = cur.fetchall()
resp.text = Template(
"""
<!doctype html>
<html>
<head>
<link rel="stylesheet" href="https://stackpath.bootstrapcdn.com/bootstrap/4.3.1/css/bootstrap.min.css" integrity="sha384-ggOyR0iXCbMQv3Xipma34MD+dH/1fQ784/j6cY/iJTQUOhcWr7x9JvoRxT2MZw1T" crossorigin="anonymous">
</head>
<body class="pt-5">
<div class="container">
<div class="row">
<div class="col">
{% for m in messages %}
<div class="alert alert-primary">{{ m }}</div>
{% endfor %}
</div>
</div>
<div class="row">
<div class="col">
<form action="{{ url }}">
<div class="form-group">
<label for="to">Say Hello To</label>
<input id="to" name="to" class="form-control" type="text">
</div>
<button type="submit" class="btn btn-primary">Submit</button>
</form>
</div>
</div>
<div class="row">
<div class="col">
<hr />
<table class="table">
<tr><th>Job</th><th>Status</th></tr>
{% for name, status in jobs %}
<tr><td>{{ name }}</td><td>{{ status }}</td></tr>
{% endfor %}
</table>
</div>
</div>
</div>
</body>
</html>
"""
).render(
messages=request.session.pop_flash(),
url=request.route_path("say_hello_submit"),
jobs=jobs,
)
return resp
def say_hello_submit_view(request):
LOG.info("creating jobs for %s", request.params)
if "to" not in request.params:
raise HTTPBadRequest("Das Whack!")
job = submit_say_hello_job(request.params["to"])
LOG.info("submit result: %s", job)
with cursor(request.registry.settings["dburl"]) as cur:
cur.execute("INSERT INTO jobs VALUES (%s, 'Submitted');", [job.metadata.name])
request.session.flash(f"submitted `Say Hello!` job with name {job.metadata.name}")
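    # Raising HTTPFound issues a redirect back to the main page (Post/Redirect/Get).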
raise HTTPFound(request.route_path("say_hello"))
def submit_say_hello_job(name):
tmpl = os.environ.get("MYAPP_JOB_TEMPLATE")
if not tmpl or not os.path.exists(tmpl):
raise HTTPInternalServerError(f"template '{tmpl}' not found")
job = yaml.safe_load(
Template(
open(tmpl, "rt").read(),
# necessary to not conflict with Helm template syntax
variable_start_string="<{",
variable_end_string="}>",
).render(name=name)
)
LOG.info("submitting: %s", job)
api = k8s.client.BatchV1Api()
return api.create_namespaced_job("default", body=job)
def make_app():
settings = {"dburl": os.environ["MYAPP_DBURL"]}
with Configurator(settings=settings) as config:
config.set_session_factory(session_factory)
config.add_route("say_hello_submit", "/submit")
config.add_view(say_hello_submit_view, route_name="say_hello_submit")
config.add_route("say_hello", "/")
config.add_view(say_hello_view, route_name="say_hello")
app = config.make_wsgi_app()
prefix = os.environ.get("MYAPP_PREFIX", "")
if prefix:
LOG.info("using prefix '%s'", prefix)
app = PrefixMiddleware(app, prefix=prefix)
return app
if __name__ == "__main__":
logging.basicConfig(level=logging.INFO)
# This will only run in a K8s cluster
if "MYAPP_INCLUSTER" in os.environ:
LOG.info("k8s incluster config")
k8s.config.load_incluster_config()
else:
LOG.info("k8s kube config")
k8s.config.load_kube_config()
server = make_server("0.0.0.0", 6543, make_app())
LOG.info(f"serving on :6543 db='{os.environ['MYAPP_DBURL']}'")
server.serve_forever()
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*~
# Various IDEs
.project
.idea/
*.tmproj
.vscode/
apiVersion: v1
appVersion: "1.0"
description: A Helm chart for Kubernetes
name: myapp
version: 0.1.0
1. Get the application URL by running these commands:
{{- if .Values.ingress.enabled }}
{{- range $host := .Values.ingress.hosts }}
{{- range .paths }}
http{{ if $.Values.ingress.tls }}s{{ end }}://{{ $host.host }}{{ . }}
{{- end }}
{{- end }}
{{- else if contains "NodePort" .Values.service.type }}
export NODE_PORT=$(kubectl get --namespace {{ .Release.Namespace }} -o jsonpath="{.spec.ports[0].nodePort}" services {{ include "myapp.fullname" . }})
export NODE_IP=$(kubectl get nodes --namespace {{ .Release.Namespace }} -o jsonpath="{.items[0].status.addresses[0].address}")
echo http://$NODE_IP:$NODE_PORT
{{- else if contains "LoadBalancer" .Values.service.type }}
NOTE: It may take a few minutes for the LoadBalancer IP to be available.
You can watch the status of it by running 'kubectl get --namespace {{ .Release.Namespace }} svc -w {{ include "myapp.fullname" . }}'
export SERVICE_IP=$(kubectl get svc --namespace {{ .Release.Namespace }} {{ include "myapp.fullname" . }} -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
echo http://$SERVICE_IP:{{ .Values.service.port }}
{{- else if contains "ClusterIP" .Values.service.type }}
export POD_NAME=$(kubectl get pods --namespace {{ .Release.Namespace }} -l "app.kubernetes.io/name={{ include "myapp.name" . }},app.kubernetes.io/instance={{ .Release.Name }}" -o jsonpath="{.items[0].metadata.name}")
echo "Visit http://127.0.0.1:8080 to use your application"
kubectl port-forward $POD_NAME 8080:80
{{- end }}
{{/* vim: set filetype=mustache: */}}
{{/*
Expand the name of the chart.
*/}}
{{- define "myapp.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "myapp.fullname" -}}
{{- if .Values.fullnameOverride -}}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- if contains $name .Release.Name -}}
{{- .Release.Name | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "myapp.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}}
{{- end -}}
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ include "myapp.fullname" . }}
labels:
app.kubernetes.io/name: {{ include "myapp.name" . }}
helm.sh/chart: {{ include "myapp.chart" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
data:
job.tmpl.yaml: |
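    # Rendered by app.py with Jinja; the "<{ }>" delimiters avoid clashing with Helm's own templating syntax.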
apiVersion: batch/v1
kind: Job
metadata:
generateName: {{ include "myapp.name" . }}-
spec:
template:
spec:
restartPolicy: Never
containers:
- name: job
image: alpine:latest
imagePullPolicy: Always
command:
- sh
- -c
- 'echo "Hello <{ name }>"'
apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: {{ include "myapp.fullname" . }}
labels:
app.kubernetes.io/name: {{ include "myapp.name" . }}
helm.sh/chart: {{ include "myapp.chart" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
spec:
schedule: "* * * * *"
concurrencyPolicy: Forbid
successfulJobsHistoryLimit: 1
failedJobsHistoryLimit: 2
jobTemplate:
metadata:
labels:
app.kubernetes.io/name: {{ include "myapp.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
spec:
backoffLimit: 0
template:
spec:
serviceAccount: {{ include "myapp.fullname" . }}
restartPolicy: Never
containers:
- name: job
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
command: ["python3", "/cron.py"]
env:
- name: MYAPP_INCLUSTER
value: "1"
- name: MYAPP_DBURL
value: "{{ .Values.dburl }}"
- name: PGPASSWORD
valueFrom:
secretKeyRef:
name: "{{ .Values.secret }}"
key: postgresql-password
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ include "myapp.fullname" . }}
labels:
app.kubernetes.io/name: {{ include "myapp.name" . }}
helm.sh/chart: {{ include "myapp.chart" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
spec:
replicas: {{ .Values.replicaCount }}
selector:
matchLabels:
app.kubernetes.io/name: {{ include "myapp.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
template:
metadata:
labels:
app.kubernetes.io/name: {{ include "myapp.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
spec:
serviceAccount: {{ include "myapp.fullname" . }}
volumes:
- name: jobtmpl
configMap:
name: {{ include "myapp.fullname" . }}
initContainers:
- name: init-db
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
env:
- name: PGPASSWORD
valueFrom:
secretKeyRef:
name: "{{ .Values.secret }}"
key: postgresql-password
command:
- psql
- -h
- myappdb-postgresql
- -U
- myapp
- -c
- "CREATE TABLE IF NOT EXISTS jobs (name text, status text);"
containers:
- name: {{ .Chart.Name }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
ports:
- name: http
containerPort: 6543
protocol: TCP
volumeMounts:
- name: jobtmpl
mountPath: /templates
readOnly: true
env:
- name: MYAPP_JOB_TEMPLATE
value: /templates/job.tmpl.yaml
- name: MYAPP_PREFIX
value: "{{ .Values.appPrefix }}"
- name: MYAPP_INCLUSTER
value: "1"
- name: MYAPP_DBURL
value: "{{ .Values.dburl }}"
- name: PGPASSWORD
valueFrom:
secretKeyRef:
name: "{{ .Values.secret }}"
key: postgresql-password
livenessProbe:
httpGet:
path: "{{ .Values.appPrefix }}"
port: http
readinessProbe:
httpGet:
path: "{{ .Values.appPrefix }}"
port: http
resources:
{{- toYaml .Values.resources | nindent 12 }}
{{- with .Values.nodeSelector }}
nodeSelector:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.affinity }}
affinity:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.tolerations }}
tolerations:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- if .Values.ingress.enabled -}}
{{- $fullName := include "myapp.fullname" . -}}
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: {{ $fullName }}
labels:
app.kubernetes.io/name: {{ include "myapp.name" . }}
helm.sh/chart: {{ include "myapp.chart" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- with .Values.ingress.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
spec:
{{- if .Values.ingress.tls }}
tls:
{{- range .Values.ingress.tls }}
- hosts:
{{- range .hosts }}
- {{ . | quote }}
{{- end }}
secretName: {{ .secretName }}
{{- end }}
{{- end }}
rules:
{{- range .Values.ingress.hosts }}
- host: {{ .host | quote }}
http:
paths:
{{- range .paths }}
- path: {{ . }}
backend:
serviceName: {{ $fullName }}
servicePort: http
{{- end }}
{{- end }}
{{- end }}
apiVersion: v1
kind: ServiceAccount
metadata:
name: {{ include "myapp.fullname" . }}
labels:
app.kubernetes.io/name: {{ include "myapp.name" . }}
helm.sh/chart: {{ include "myapp.chart" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: {{ include "myapp.fullname" . }}-jobmanager
labels:
app.kubernetes.io/name: {{ include "myapp.name" . }}
helm.sh/chart: {{ include "myapp.chart" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
rules:
- apiGroups:
- batch
resources:
- jobs
verbs:
- get
- list
- create
- delete
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: {{ include "myapp.fullname" . }}-managejobs
labels:
app.kubernetes.io/name: {{ include "myapp.name" . }}
helm.sh/chart: {{ include "myapp.chart" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
subjects:
  - kind: ServiceAccount
    name: {{ include "myapp.fullname" . }}
    namespace: {{ .Release.Namespace }}
roleRef:
kind: Role
name: {{ include "myapp.fullname" . }}-jobmanager
apiGroup: rbac.authorization.k8s.io
apiVersion: v1
kind: Service
metadata:
name: {{ include "myapp.fullname" . }}
labels:
app.kubernetes.io/name: {{ include "myapp.name" . }}
helm.sh/chart: {{ include "myapp.chart" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
spec:
type: {{ .Values.service.type }}
ports: