
Kubernetes 1 - Fundamentals

Spring Initializr generates a template app for a containerized Spring project


github.com/javajon/monitoring-kubernetes
Containers and microservices work really well together, wouldn't do one
without the other
Containers give control of the platform on which the container runs by
specifying runtimes, utilities, agents, etc. (e.g. which Linux version)
Containers use opinionated defaults, allowing for "2-line apps"
Java app container in 5 lines
big reduction of "works on my machine" because you must specify the env in
the container (either by accepting opinionated defaults, or specifying)
major performance boost because the container engine doesn't need to be
restarted to restart app
containers can be cached for even more perf boost
Jib - plugin to containerize your java build
Drawback of having more control: more to learn
for Kubernetes, containers (1 or sometimes more) go in a Pod
Kubernetes supports 100s or 1000s of Pods
Google has podcasts about development and an inside look at Kubernetes
Kubernetes Up & Running (O'Reilly)
Kubernetes handles...
networking (connect to "database-x" instead of ipaddr:port)
scheduling
resources
balancing
scaling
health (self-healing)
monitoring (logging, tracing, and metrics)
namespacing (virtual separation within a single cluster)
separate teams, separate environments, resource allocation
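The name-based networking above ("database-x" instead of ipaddr:port) can be sketched as a hypothetical Service manifest; the name, selector labels, and port are all assumptions for illustration:

```yaml
# Hypothetical Service: clients connect to "database-x:5432"
# instead of a Pod IP and port (all names here are illustrative)
apiVersion: v1
kind: Service
metadata:
  name: database-x
spec:
  selector:
    app: database-x        # routes traffic to Pods carrying this label
  ports:
    - port: 5432           # port clients use via the service name
      targetPort: 5432     # port the container actually listens on
```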
Node: a logical (physical or virtual) machine
master nodes manage cluster
worker nodes run your processes
Pods: one or more containers
Services: load balancers across Pods
Kubelet: a Linux process that communicates with master nodes
work sent to kubelet from master node, health and metrics reported back
kubectl: admin tool for interacting with master node(s)
all control done through REST API, so multiple ways to manage (CLI,
GUI, Java API, etc)
etcd: distributed key-value store of truth
uses Raft consensus model
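Tying a few of these together: a minimal, hypothetical Pod manifest (image and names are assumptions) that kubectl would submit through the master's REST API:

```yaml
# Minimal hypothetical Pod: one container (image name is illustrative)
apiVersion: v1
kind: Pod
metadata:
  name: demo-app
  labels:
    app: demo-app
spec:
  containers:
    - name: demo-app
      image: myregistry/demo-app:1.0   # assumed image
      ports:
        - containerPort: 8080
```

Submitting this (e.g. `kubectl apply -f pod.yaml`) records the desired state in etcd; the scheduler picks a worker node, and that node's kubelet runs the container.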
--- MAIN TAKEAWAY ---
Scheduler watches etcd for work needing to be done, and acts on it using
available nodes
reconciliation model: based on real-time needs, make it happen
---------------------
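A toy sketch of the reconciliation idea (not actual Kubernetes code): compare desired state against observed state and emit whatever actions close the gap.

```java
import java.util.*;

// Toy reconciliation: given desired vs. actual replica counts,
// decide how many Pods to create (positive) or delete (negative).
// Illustrative only; real Kubernetes controllers are far richer.
public class Reconciler {
    public static Map<String, Integer> reconcile(Map<String, Integer> desired,
                                                 Map<String, Integer> actual) {
        Map<String, Integer> delta = new HashMap<>();
        for (Map.Entry<String, Integer> e : desired.entrySet()) {
            int have = actual.getOrDefault(e.getKey(), 0);
            int diff = e.getValue() - have;   // >0: create, <0: delete
            if (diff != 0) delta.put(e.getKey(), diff);
        }
        return delta;
    }

    public static void main(String[] args) {
        Map<String, Integer> desired = Map.of("web", 3, "db", 1);
        Map<String, Integer> actual  = Map.of("web", 1, "db", 1);
        System.out.println(reconcile(desired, actual)); // prints {web=2}
    }
}
```

The controller loop just repeats this diff whenever etcd changes, which is why killing a Pod "self-heals": the diff reappears and gets acted on.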
Kubelets are compatible with multiple container runtime engines
container runtime engines, in turn, are compatible with any container
implementing the container spec
first steps:
Katacoda.com does this in the browser (https://www.katacoda.com/javajon)
Minikube runs a small local single-node Kubernetes env in a VM (e.g.
VirtualBox)

Kubernetes 2 - Containers
Public containers exist (nginx, mysql, etc)
Helm charts: prebuilt K8s configs that combine containers into pre-assembled,
opinionated setups (e.g. logging plus log search capabilities)
Pods should be as simple as possible
sometimes, that means single-purpose, one app per container per pod
Jlink (Java 9+)... learn more
Storage can be run as a container
can be paired with a container to sync with enterprise shared db,
making container more of a view or cache
files can be mounted per container, per pod, or on a network location
might not work for us, since we need a single source of truth (except
maybe testing with instanced datasets)
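The mount options above, sketched as a hypothetical Pod volume (names are illustrative; an NFS or PersistentVolumeClaim volume would point at a network location instead of the node-local scratch space shown here):

```yaml
# Hypothetical Pod mounting a volume into its container
apiVersion: v1
kind: Pod
metadata:
  name: data-demo
spec:
  containers:
    - name: app
      image: myregistry/app:1.0      # assumed image
      volumeMounts:
        - name: shared-data
          mountPath: /var/data       # path inside the container
  volumes:
    - name: shared-data
      emptyDir: {}                   # pod-scoped scratch storage
```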
Testing
isolate testing into a container (including data!)
test suite can...
deploy testable unit(s) -- in K8s, using a helm script to stand
up multiple containers
run suite against testable unit with REST API
tear down container afterwards
Pods are always machine specific (all containers in a Pod run on the same node, never split across nodes)
Each pod has a unique IP address
multiple containers within a single pod can talk quickly
localhost (no wire time for network calls)
shared storage (same storage mounted into multiple containers)
IPC (inter-process communication)
multi-container pod patterns:
side-car pattern
example: event listener
ambassador pattern
"find me all products" which are distributed among different
places
adapter pattern
"get me the data" which could live on different kinds of
databases
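The side-car pattern above, as a hypothetical two-container Pod (images invented); both containers share localhost, storage, and the Pod's IP:

```yaml
# Hypothetical side-car: an event-listener container rides along
# with the app container in the same Pod
apiVersion: v1
kind: Pod
metadata:
  name: app-with-sidecar
spec:
  containers:
    - name: app
      image: myregistry/app:1.0             # assumed image
    - name: event-listener
      image: myregistry/event-listener:1.0  # assumed image
```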
initialization containers:
useful for separation of permissions (e.g. modify schema on DB on
startup, but don't allow schema changes when running)
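The permission-separation idea above, sketched as a hypothetical manifest: the init container runs migrations with a privileged DB account, then the app container runs with a weaker one (all names and credentials are invented):

```yaml
# Hypothetical init container for schema migrations at startup
apiVersion: v1
kind: Pod
metadata:
  name: app-with-migrations
spec:
  initContainers:
    - name: migrate-schema
      image: myregistry/migrator:1.0   # assumed image
      env:
        - name: DB_USER
          value: schema_admin          # illustrative privileged user
  containers:
    - name: app
      image: myregistry/app:1.0        # assumed image
      env:
        - name: DB_USER
          value: app_user              # illustrative user with no DDL rights
```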
YAML fragments can help modularize configuration (e.g. a generic "I need
fileshare access" component that could be applied to a webapp definition)
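One way to get reusable fragments in plain YAML is anchors and merge keys (Helm templates are another route); the names below are illustrative:

```yaml
# A generic "I need fileshare access" fragment, defined once...
fileshare-access: &fileshare-access
  volumeMounts:
    - name: fileshare
      mountPath: /mnt/share

# ...and merged into individual container definitions
webapp-container:
  image: myregistry/webapp:1.0   # assumed image
  <<: *fileshare-access

batch-container:
  image: myregistry/batch:1.0    # assumed image
  <<: *fileshare-access
```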
Serverless containers (or functional containers) simply execute functions

Kubernetes 3 - Testing Patterns


Testing challenges:
Missing Data - Often, features aren't tested because they require
a particular setup or data
Data Consistency - Other tests change the data and ruin my test
Mocking errors - Mocks either out of date or just plain wrong
Improper Usage - invalid or unexpected data provided to a call breaks
it
Beginner usage of test containers:
1. Get a data image. THIS IS CRITICAL -- this approach is useless
without this.
2. Deploy a pod with the app and the data image (can run Oracle)
3. Integrate into test suite:
a. spin up container/pod
b. run test suite against container/pod
c. tear down container/pod
...can eventually start to scale test app within test environment (e.g.
3 pods with service load balancing)
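Steps a-c above as a hypothetical CI script; the chart path, release name, service URL, and test runner are all assumptions, and this only runs against a live cluster:

```sh
#!/usr/bin/env sh
# a. spin up the app plus its data image via a Helm chart
helm install test-run ./charts/app-with-data   # assumed chart path

# b. run the test suite against the deployed pod's REST API
./run-tests.sh http://app.test.svc.cluster.local:8080   # assumed script/URL

# c. tear everything down afterwards
helm uninstall test-run
```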
Consumer contracts declare expected in/out conversations with endpoint
helps provide visibility of client usage = fewer surprises
embraces CONVERSATION with clients
Consumer pact tooling:
Spring Cloud Contract
Postman Collections and Newman CLI
*DIUS Pact
the only one with a broker that can work with multiple languages
Brian Ely mentioned this with the Swagger documentation too!
Dius pact-broker Docker container available
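A consumer contract, in roughly the shape of a Pact JSON file (field names from memory; treat this as a sketch, and the service/endpoint names are invented):

```json
{
  "consumer": { "name": "web-storefront" },
  "provider": { "name": "product-service" },
  "interactions": [
    {
      "description": "find all products",
      "request":  { "method": "GET", "path": "/products" },
      "response": {
        "status": 200,
        "body": [ { "id": 1, "name": "widget" } ]
      }
    }
  ]
}
```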
Pact testing is meant for border scenarios (specifically between consumer and
provider); not good for end-to-end with more than two points

QUESTIONS:
How did you stand up oracle in a container? Name of software?
Licensing?
Oracle XE (lightweight) container can be used for local testing
without any licensing concerns

Kubernetes 4 - Serverless
Monolith -> Microservice ~~ Microservice -> Function
Meant to *augment* existing monoliths and microservices, not *replace*
Examples:
business rule engine: Given input X, produce result Y
IoT helpers: store/process this image
Helpful for prototyping and mocking
Good for scaling up and down quickly for bursts of action
Basically, you give it a runtime and code, and it provides access to that
running code through...
HTTP calls
CRON (scheduled)
Other custom triggers
...
...Jon gets *really* lost in the weeds showing commands to download the
images, get them running and exposed, get the function deployed out, run the
function, install the logging database and dashboard, log into the dashboard, etc
etc...
...this goes on for 45 minutes.

Architecture 5 - The Hard Parts


Modularizing architecture, in general, gets hard when you try to stitch the
pieces back together
Companies have started splitting architecture across domains instead of
technical capabilities
but people have stayed organized by technology, causing even small
changes to require a flurry of tickets and (mis)communication
Event driven topology is either...
choreographed (brokered, pub/sub controlled by consumers)
mediated (orchestration layer)
Microservices calling one-to-one are synchronous by nature, which makes
scalability hit a brick wall
Message queues can help turn microservice intercommunication asynchronous
Workflow event pattern
event producer and consumer, plus a workflow processor
workflow processor watches for failures in the consumer, and can
facilitate retrying, recovery, or human intervention (perhaps via a GUI
dashboard, etc)
Don't serialize an internal type of a producer and send to a consumer
send name/value pairs instead, since they are not coupled to any
representation
?? what about using an api artifact that defines the shape of a
message?
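For illustration, such a decoupled message might look like the following (fields invented); nothing in it depends on the producer's class definitions:

```json
{
  "orderId": "A-1001",
  "status": "SHIPPED",
  "shippedAt": "2020-03-01T12:00:00Z"
}
```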
Compensating transactions are a BIG NO-NO. Never set yourself up to have to
undo a transaction!
Data consistency in microservices can be handled through a write-ahead log
works like a journaled filesystem
Data replication can asynchronously replicate a copy from the source of truth
to the consumer that needs it
LRD table copy
CQRS lets you save events synchronously (or enqueue them asynchronously), and
then digest them into a persistent application state database
Event streams as your source of truth (e.g. Kafka)
You could implement CQRS with Kafka if you digested the Kafka messages
into a state database
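A toy sketch of that digest step: fold a stream of events (from Kafka or anywhere else) into a current-state table. The event shape is invented for illustration; a real system would consume from Kafka and persist the result.

```java
import java.util.*;

// Toy CQRS read-model builder: replay account credit/debit events
// into a current-balance map. Illustrative only.
public class BalanceProjection {
    record Event(String account, int amountCents) {}

    // Digest the event log into application state
    public static Map<String, Integer> digest(List<Event> events) {
        Map<String, Integer> balances = new HashMap<>();
        for (Event e : events) {
            balances.merge(e.account(), e.amountCents(), Integer::sum);
        }
        return balances;
    }

    public static void main(String[] args) {
        List<Event> log = List.of(
            new Event("acct-1", 500),
            new Event("acct-1", -200),
            new Event("acct-2", 1000));
        System.out.println(digest(log).get("acct-1")); // prints 300
    }
}
```

Because the event log stays the source of truth, the state database can be rebuilt at any time by replaying from the start.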
Reactive architecture
self healing
Reactive architecture patterns
thread delegate
dequeuing logic is allowed to be more sophisticated than feeding
to any available delegate
preserving message order can be accomplished by making sticky
delegates -- always send aligned messages to the same consumer so they get
processed in order
consumer supervisor
monitor exists to watch consumption of enqueued messages,
provides feedback to producer or spins up additional consumers
workflow event
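The sticky-delegate idea above can be sketched as key-based routing: hash the message key so every message for the same key always lands on the same delegate, preserving per-key order (class and method names are invented):

```java
// Toy sticky routing: messages sharing a key (e.g. an order id)
// always map to the same delegate, so they get processed in order.
public class StickyRouter {
    private final int delegateCount;

    public StickyRouter(int delegateCount) {
        this.delegateCount = delegateCount;
    }

    // Stable key -> delegate mapping; floorMod guards against
    // negative hashCodes.
    public int delegateFor(String messageKey) {
        return Math.floorMod(messageKey.hashCode(), delegateCount);
    }

    public static void main(String[] args) {
        StickyRouter router = new StickyRouter(4);
        // Same key, same delegate, every time:
        System.out.println(router.delegateFor("order-42") ==
                           router.delegateFor("order-42")); // prints true
    }
}
```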
