Authentication and Security in Kubernetes:
Authentication ensures that only authorized users or systems can access the Kubernetes API, identifying the entity making an API request.
Security in Kubernetes is vital due to its role in managing containerized applications across clusters. These containers often hold sensitive data and run critical workloads, making robust security measures essential to protect against unauthorized access and potential threats.
***Securing Kubernetes with TLS Certificates
To secure a Kubernetes cluster, TLS certificates are essential for encrypting communication and authenticating components.
Key Concepts:
Public/Private Keys: A server uses its private key to decrypt messages that were encrypted with its public key.
Certificates: A public key paired with identification, signed by a trusted Certificate Authority (CA).
Types of Certificates:
Server Certificates: Used by servers (API server, etcd, kubelets) for secure communication.
Client Certificates: Used by clients (admins, scheduler, controller) to authenticate themselves.
Root Certificates (CA): The CA signs and verifies other certificates.
Kubernetes Components and Certificates:
- API Server: Requires a server certificate (apiserver.crt and apiserver.key) to secure communication with clients.
- ETCD Server: Requires its own certificate pair (etcdserver.crt and etcdserver.key).
- Kubelets: Worker node components requiring a certificate (kubelet.crt and kubelet.key).
- Clients: Components like the admin user (using admin.crt and admin.key), the scheduler, and kube-proxy need client certificates to authenticate with the API server.
Certificate Authority (CA):
A CA signs the certificates used in the cluster. Kubernetes can use one CA for the entire cluster or separate CAs for different components like etcd. The CA's own certificate and key are referred to as ca.crt and ca.key.
***Generating Certificates for Kubernetes Cluster
1. Tools for Certificate Generation: Certificates can be generated with tools like OpenSSL, Easy-RSA, or CFSSL; this lecture focuses on OpenSSL.
2. Creating CA Certificates:
- Generate a private key with openssl genrsa.
- Create a Certificate Signing Request (CSR) specifying the common name (e.g., Kubernetes-CA).
- Sign the CSR using OpenSSL to create a self-signed certificate for the CA.
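As a rough sketch, these three steps map to the following OpenSSL commands (the key size and file names are illustrative):

openssl genrsa -out ca.key 2048                                      # CA private key
openssl req -new -key ca.key -subj "/CN=Kubernetes-CA" -out ca.csr   # CSR carrying the CA's common name
openssl x509 -req -in ca.csr -signkey ca.key -out ca.crt             # self-signed CA certificate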
3. Generating Client Certificates (e.g., Admin User):
- Create a private key for the admin user using OpenSSL.
- Generate a CSR, specifying the name (e.g., Kube Admin).
- Sign the certificate using the CA key to make it valid within the cluster.
4. User Identification: Specify the user's group (the system:masters group) in the certificate to grant administrative privileges.
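A similar sketch for the admin user; the group is carried in the O= field of the certificate subject (file names are illustrative):

openssl genrsa -out admin.key 2048
openssl req -new -key admin.key -subj "/CN=kube-admin/O=system:masters" -out admin.csr
openssl x509 -req -in admin.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out admin.crt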
5. Generating Certificates for Other Components:
- Follow the same process for components like kube-scheduler, kube-controller-manager, and kube-proxy.
- Use the system: prefix in their names (e.g., system:kube-scheduler) to mark them as system components.
6. Server-Side Certificates:
- ETCD Server: Generate certificates for secure communication, including peer certificates for high-availability clusters.
- Kube API Server: Generate a certificate with all DNS names and IP addresses using an OpenSSL configuration file. This ensures the API server can be reached under its different names (e.g., kubernetes.default.svc).
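For example, the alternate names can be supplied through a config file passed to openssl req with -config when generating the CSR (the IP addresses below are placeholders for your cluster's service and host addresses):

[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name
[v3_req]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names
[alt_names]
DNS.1 = kubernetes
DNS.2 = kubernetes.default
DNS.3 = kubernetes.default.svc
DNS.4 = kubernetes.default.svc.cluster.local
IP.1 = 10.96.0.1
IP.2 = 172.17.0.87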
7. Using Certificates:
For client-server authentication, certificates are specified in the kubeconfig file.
The CA root certificate is used by all components to validate each other’s certificates.
8. Kubelet Certificates:
- Each node in the cluster requires a unique certificate, named after the node (e.g., node01, node02).
- Use the appropriate kubelet config with the node certificates and the CA certificate.
9. Kube API Server Permissions:
- Nodes must be added to the system:nodes group for proper permissions, similar to how the admin user gets administrative privileges through system:masters.
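In practice, the node certificate's subject carries both the node identity and the group, for example for a hypothetical node01:

openssl req -new -key kubelet-node01.key \
  -subj "/CN=system:node:node01/O=system:nodes" -out kubelet-node01.csr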
***Viewing and Checking Certificates in a Kubernetes Cluster
As a new administrator in a Kubernetes environment, you’re tasked with performing a health check on the cluster’s certificates. Here’s a streamlined approach:
1. Understand the Cluster Setup:
- If the cluster was deployed manually, you generated the certificates yourself. If it was deployed with kubeadm, the tool handles certificates automatically.
2. Identify All Certificates:
- Locate the Kubernetes API server definition file, typically found in /etc/kubernetes/manifests (for kubeadm setups).
- List all certificate files, their paths, names, alternate names, organizations, issuers, and expiration dates.
3. Examine Certificate Details:
- Use the openssl x509 command to inspect certificate details.
- Check the subject (e.g., kube-apiserver), alternate names, validity period, and issuer (should be the Kubernetes CA).
- Ensure certificates have the correct names, are issued by the right CA, and have not expired.
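For instance, to dump a certificate in readable form (the path assumes a kubeadm setup):

openssl x509 -in /etc/kubernetes/pki/apiserver.crt -text -noout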
4. Logs for Troubleshooting:
- For a manually configured cluster, check the system service logs.
- For kubeadm-based clusters, check pod logs using kubectl logs <pod-name>.
- If core components (API server, etcd) are down, use Docker commands (docker ps -a and docker logs <container-id>) to access the logs.
***Key Concepts: Kubeconfigs in Kubernetes
1. Purpose of Kubeconfigs:
- A kubeconfig simplifies Kubernetes cluster access and user authentication by storing configuration details in a single file.
2. Structure of Kubeconfig File:
- Clusters: Lists the Kubernetes clusters you access (e.g., dev, test, prod).
- Users: Stores user credentials (e.g., admin or dev user) for cluster access.
- Contexts: Links a user to a cluster for specific operations (e.g., admin@production).
3. Default Location:
- By default, kubectl looks for a file named config in the .kube directory in the user's home folder.
- This eliminates the need to specify the file path in every kubectl command.
4. Config Example:
- YAML format with three sections: clusters, users, and contexts.
- Clusters: Includes the server address and CA certificate.
- Users: Defines the client certificate and key for authentication.
- Contexts: Maps a user to a cluster.
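Putting those sections together, a minimal kubeconfig might look like this (the server address and file paths are placeholders):

apiVersion: v1
kind: Config
current-context: admin@production
clusters:
- name: production
  cluster:
    server: https://my-kube-api:6443
    certificate-authority: /etc/kubernetes/pki/ca.crt
users:
- name: admin
  user:
    client-certificate: /etc/kubernetes/pki/users/admin.crt
    client-key: /etc/kubernetes/pki/users/admin.key
contexts:
- name: admin@production
  context:
    cluster: production
    user: admin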
5. Setting a Default Context:
- Use the current-context field in the kubeconfig file to specify the default context.
- Switch contexts with the kubectl config use-context command.
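For example, to make the production context from the sample above active:

kubectl config use-context admin@production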
6. Customizing Contexts:
- Add a namespace to a context to automatically work within that namespace.
- Example: Configure a context to use a namespace like dev-namespace, as sketched below.
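A context entry with a default namespace might look like this (names reused from the earlier example):

contexts:
- name: admin@production
  context:
    cluster: production
    user: admin
    namespace: dev-namespace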
7. Base64 Encoding for Certificates:
- Instead of file paths, certificate data (e.g., CA or client certificates) can be embedded directly in the kubeconfig in Base64-encoded format.
8. Viewing and Managing Kubeconfigs:
- View the current kubeconfig with kubectl config view.
- Modify the file (e.g., update clusters, users, or contexts) using commands like kubectl config set-context.
9. Using Multiple Kubeconfig Files:
- Specify a custom kubeconfig file using the --kubeconfig flag, as in the example below.
- Useful for managing multiple clusters or environments.
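For example (the file path is a placeholder):

kubectl get pods --kubeconfig /path/to/dev-config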
10. Practical Usage:
- Move frequently used kubeconfig files to the default .kube directory for convenience.
- Update and troubleshoot configurations with kubectl config commands.
Authorization in Kubernetes:
Authorization in Kubernetes determines what actions a user or system is allowed to perform after successful authentication. It ensures that authenticated entities can only access resources and perform operations they are permitted to.
***Role-Based Access Control (RBAC) in Kubernetes
RBAC in Kubernetes provides fine-grained access control by assigning roles and permissions to users, groups, or service accounts. It ensures that users can only perform authorized actions within their scope.
Creating and Managing Roles in RBAC
Defining a Role
A Role specifies permissions for resources within a namespace. Here’s how to create one:
1. Role Object Definition:
- The apiVersion is set to rbac.authorization.k8s.io/v1.
- The kind is Role.
- Rules are defined with three key sections:
  - API Groups: Defines the API group (e.g., "" for the Core group).
  - Resources: Specifies the resources (e.g., Pods, ConfigMaps).
  - Verbs: Lists the allowed actions (get, list, create, delete).
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: developer
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "create", "delete"]
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["create"]
2. Creating the Role: Use the following command:
kubectl create -f role-definition.yaml
Linking Roles to Users
To assign a role to a user, create a RoleBinding.
1. RoleBinding Definition:
- The kind is RoleBinding.
- Subjects specify the user, group, or service account.
- RoleRef links the binding to a specific role.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: default
  name: dev-user-binding
subjects:
- kind: User
  name: dev-user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: developer
  apiGroup: rbac.authorization.k8s.io
2. Creating the RoleBinding:
kubectl create -f rolebinding-definition.yaml
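With the binding in place, one way to verify the resulting permissions is kubectl auth can-i, impersonating the dev-user from the example:

kubectl auth can-i create pods --as dev-user
kubectl auth can-i delete nodes --as dev-user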
***Namespace Scope of Roles and RoleBindings
- Roles and RoleBindings are namespace-specific.
- To apply permissions in a different namespace, specify it in the metadata section.
- Use ClusterRole and ClusterRoleBinding for cluster-wide permissions.
***Cluster Roles and Cluster Role Bindings in Kubernetes
In Kubernetes, ClusterRoles and ClusterRoleBindings extend Role-Based Access Control (RBAC) to cluster-scoped resources, allowing you to manage permissions for resources that are not tied to specific namespaces.
Difference Between Namespaced and Cluster-Scoped Resources
1. Namespaced Resources:
- Created within a specific namespace and limited to that namespace.
- Examples: Pods, ReplicaSets, Deployments, Services, Secrets, and Roles.
2. Cluster-Scoped Resources:
- Exist at the cluster level, not associated with any namespace.
- Examples: Nodes, PersistentVolumes, Namespaces themselves, CertificateSigningRequests, ClusterRoles, and ClusterRoleBindings.
To view a full list of namespaced and non-namespaced resources:
kubectl api-resources --namespaced=true
kubectl api-resources --namespaced=false
Cluster Roles
A ClusterRole defines permissions for cluster-scoped resources. Unlike regular Roles, ClusterRoles can also be applied to namespaced resources across all namespaces.
Creating a ClusterRole and ClusterRoleBinding
1. Define the ClusterRole in a YAML file.
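As an illustration (the name and rules here are assumptions), a ClusterRole granting access to nodes might look like:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: node-administrator   # illustrative name
rules:
- apiGroups: [""]
  resources: ["nodes"]
  verbs: ["get", "list", "create", "delete"]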
2. Define a ClusterRoleBinding to link a user to a ClusterRole (here the built-in cluster-admin role):
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: cluster-admin-binding
subjects:
- kind: User
  name: admin-user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
3. Create the ClusterRoleBinding:
kubectl create -f clusterrolebinding.yaml
Key Concepts
1. Namespace Considerations:
- ClusterRoles are typically used for cluster-scoped resources.
- However, they can also be used for namespaced resources, granting access across all namespaces.
- Example: A ClusterRole granting access to Pods applies to Pods in all namespaces.
2. Default ClusterRoles:
- Kubernetes creates several predefined ClusterRoles (e.g., cluster-admin, view, edit) during cluster setup.
- Use these roles as needed or create custom ClusterRoles.
3. Combining Roles and Bindings:
For namespace-specific access, use Roles and RoleBindings.
For cluster-wide or cross-namespace access, use ClusterRoles and ClusterRoleBindings.
***Service Accounts in Kubernetes
Service accounts in Kubernetes play a key role in enabling secure access to the Kubernetes API for applications and processes. They differ from user accounts and are designed for non-human entities like applications or automation tools that need to interact with the cluster.
What Are Service Accounts?
1. User Accounts vs. Service Accounts:
User Accounts: Designed for human users (e.g., administrators or developers) to access and manage the cluster.
Service Accounts: Designed for applications or automated tools to interact with the Kubernetes API.
2. Examples of Service Account Use Cases:
A monitoring tool like Prometheus accessing cluster metrics via the Kubernetes API.
A CI/CD pipeline like Jenkins deploying applications to the cluster.
Key Concepts: Service Accounts, Tokens, and Recent Enhancements in Kubernetes:
1. Default Service Account Behavior
- Every namespace in Kubernetes includes a default service account.
- By default, when a pod is created:
  - The default service account and its token are automatically mounted into the pod as a volume.
  - The token is stored at /var/run/secrets/kubernetes.io/serviceaccount inside the pod.
  - The token can be used by processes in the pod to authenticate with the Kubernetes API.
- The default service account is highly restricted and suitable for basic Kubernetes API queries only.
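For instance, you can read the mounted token from inside a running pod (the pod name is a placeholder):

kubectl exec my-pod -- cat /var/run/secrets/kubernetes.io/serviceaccount/token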
2. Specifying a Custom Service Account
- To use a custom service account, add the serviceAccountName field to the pod specification:
spec:
  serviceAccountName: custom-service-account
- Pods cannot have their service account changed after creation; you must delete and recreate the pod. For deployments, however, updating the service account in the pod spec triggers a new rollout, recreating the pods with the updated account.
3. Disabling Automatic Mounting
- Automatic mounting of the service account token can be disabled by setting automountServiceAccountToken: false in the pod specification:
spec:
  automountServiceAccountToken: false
4. Changes in Kubernetes Versions 1.22 and 1.24
Version 1.22: Token Request API
Issues with Legacy Tokens:
Tokens from default secrets are not audience-bound, time-bound, or object-bound.
Legacy tokens lack expiry, posing security and scalability risks.
Enhancements Introduced:
Token Request API was implemented via KEP 1205.
Tokens generated by this API are:
Audience-bound: Used only by specific consumers.
Time-bound: Have a defined expiration (e.g., one hour by default).
Object-bound: Valid only for a specific service account.
Tokens are mounted to pods as projected volumes, replacing the older secret-based tokens.
Version 1.24: Reduction of Secret-Based Tokens:
- Removal of Default Secret Creation:
  - Service accounts no longer automatically create associated secret tokens.
  - If a token is required, it must be explicitly generated using kubectl create token <service-account-name>.
  - Tokens created this way are time-limited (default: one hour).
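For example, creating a service account and then requesting a short-lived token for it (the account name is illustrative):

kubectl create serviceaccount dashboard-sa
kubectl create token dashboard-sa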
Creating Non-Expiring Tokens (Not Recommended):
- If necessary, you can manually create a secret with the kubernetes.io/service-account-token type:
apiVersion: v1
kind: Secret
metadata:
  name: custom-token
  annotations:
    kubernetes.io/service-account.name: custom-service-account
type: kubernetes.io/service-account-token
- Use this only if the Token Request API is not viable, as it introduces security risks.
***Image Security:
Understanding Image Names and Using Secure Image Repositories
Let’s dive deeper into how image names work and how they relate to secure image practices.
Image Names and Sources
1. Basic Image Naming Convention:
- Consider a pod definition file where an Nginx container is deployed:
containers:
- name: nginx
  image: nginx
- Here, nginx represents the image name. But what does it mean, and where does Kubernetes pull this image from?
2. Understanding the Structure:
- The name follows Docker's naming convention: nginx is shorthand for library/nginx.
- library is the default account name for Docker's official images, which adhere to best practices and are maintained by a dedicated team.
- If you create custom images, the name might look like: <account-name>/<repository-name>:<tag>
3. Default Registry:
- If no registry is specified, Kubernetes assumes the image is hosted on Docker Hub (DNS: docker.io).
- Example: nginx resolves to docker.io/library/nginx.
4. Alternative Registries:
- Google Container Registry (GCR): gcr.io
- Amazon Elastic Container Registry (ECR): aws_account_id.dkr.ecr.region.amazonaws.com
- Azure Container Registry (ACR): <your_registry_name>.azurecr.io
These registries are used for both public and private images.
Key Concepts: Using Private Docker Registries in Kubernetes:
1. Accessing Private Docker Registries in Docker
- Authentication:
  - Use the docker login command to authenticate with a private Docker registry.
  - Provide your username, password, and optionally an email address.
- Running a Container:
  - Once authenticated, specify the full image path from the private registry to pull and run the container:
docker run <private-registry>/<image-name>:<tag>
2. Accessing Private Registries in Kubernetes
Challenge:
Kubernetes worker nodes need credentials to pull private images when creating pods.
Authentication and credential management must be automated for seamless image pulling.
3. Solution: Kubernetes Secret for Docker Registry Credentials
- Secret Type: Create a Kubernetes Secret of type docker-registry.
- Contents: Store the credentials for the private registry:
  - Registry Server Name: e.g., https://index.docker.io/v1/
  - Username and Password: Credentials for the private registry.
  - Email Address: Associated with the registry account.
4. Steps to Create and Use the Secret
1. Create a Secret: Use the kubectl create secret command:
kubectl create secret docker-registry regcred \
--docker-server=<registry-server> \
--docker-username=<username> \
--docker-password=<password> \
--docker-email=<email>
Replace the placeholders with your actual registry details.
2. Reference the Secret in the Pod Spec: Add the imagePullSecrets field in the pod specification:
spec:
  imagePullSecrets:
  - name: regcred
3. Pod Creation: When a pod is created, the kubelet uses the credentials from the secret to authenticate with the registry and pull the required image.
5. Key Kubernetes Concepts
Docker Registry Secret: A built-in Kubernetes secret type for securely storing Docker credentials.
Image Pull Secrets: Configured in the pod specification to reference the secret for private registry authentication.
6. Example Pod Spec Using a Private Image
apiVersion: v1
kind: Pod
metadata:
  name: private-registry-pod
spec:
  containers:
  - name: my-container
    image: <private-registry>/<image-name>:<tag>
  imagePullSecrets:
  - name: regcred
***Security Context:
Key Concepts: Configuring Security Contexts in Kubernetes
1. Configuring Security Context at the Pod Level
- The securityContext field allows you to define security settings for a Pod.
- Example configuration for a Pod running an Ubuntu image with a sleep command:
apiVersion: v1
kind: Pod
metadata:
  name: secure-pod
spec:
  securityContext:
    runAsUser: 1000   # Set the user ID for all containers in the Pod
  containers:
  - name: ubuntu-container
    image: ubuntu
    command: ["sleep", "3600"]
runAsUser: Defines the user ID under which processes in the Pod run, enhancing security by avoiding root-level privileges.
2. Configuring Security Context at the Container Level
- Security settings can also be applied at the Container level for more granular control:
spec:
  containers:
  - name: ubuntu-container
    image: ubuntu
    command: ["sleep", "3600"]
    securityContext:
      runAsUser: 1000   # Set the user ID for this specific container
3. Adding Linux Capabilities
- Use the capabilities field under securityContext to fine-tune Linux kernel privileges for the Container (capabilities are supported only at the container level, not the Pod level):
securityContext:
  capabilities:
    add:
    - NET_ADMIN   # Add network administration capability
    - SYS_TIME    # Allow modification of system time
Capabilities:
- Add: Grants specific privileges to the container. For example:
  - NET_ADMIN: Manage network settings.
  - SYS_TIME: Adjust system time.
Why Use Security Contexts?
Enforce Least Privilege: Prevent containers from running with unnecessary or excessive permissions.
Granularity: Define security settings for either the entire Pod or individual Containers.
Improve Security: Reduce risks associated with running containers as root or with unrestricted access.
***Network Policies:
1. Networking Basics in Applications
A typical application has multiple components:
Web Server: Serves front-end traffic (e.g., HTTP on port 80).
API Server: Processes backend requests (e.g., port 5000).
Database Server: Stores and serves data (e.g., port 3306).
Traffic Types:
Ingress: Incoming traffic to a component (e.g., HTTP requests to port 80).
Egress: Outgoing traffic from a component (e.g., API server querying the database).
2. Networking in Kubernetes
- Kubernetes provides each node, pod, and service with a unique IP address.
- Pods can communicate across nodes over a virtual network that spans the cluster.
- Default Behavior: All pods can communicate with one another without additional configuration.
3. Introducing Network Policies
- Purpose: Restrict pod-to-pod communication based on security requirements.
Use Case:
By default, all pods (e.g., web server, API server, database) can communicate.
A Network Policy can restrict traffic, e.g., allowing the database to accept traffic only from the API server.
4. Network Policy Configuration
- A Network Policy is a Kubernetes object defined within a namespace.
How It Works:
Policies apply to selected pods using labels.
Rules specify allowed ingress (incoming) or egress (outgoing) traffic.
Once applied, only traffic matching the rules is permitted; all other traffic is blocked.
5. Steps to Configure a Network Policy
1. Create a Policy Object:
- Define the API version, kind, metadata, and spec.
- Use networking.k8s.io/v1 as the API version.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: db-policy
spec:
  podSelector:
    matchLabels:
      app: database   # Apply policy to pods labeled "app=database"
2. Add Policy Types:
- Specify ingress, egress, or both under policyTypes.
policyTypes:
- Ingress   # Isolate ingress traffic
3. Define Rules:
- Add ingress or egress rules for specific traffic:
  - Define allowed traffic sources using labels and selectors.
  - Specify allowed ports.
- Example (allowing the API server access to the database):
ingress:
- from:
  - podSelector:
      matchLabels:
        app: api        # Allow traffic from API server pods
  ports:
  - protocol: TCP
    port: 3306          # Allow traffic on port 3306
***Developing Network Policies:
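Putting the pieces above together, a complete manifest for the database policy might look like this (a sketch assembled from the fragments in the previous section):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: db-policy
spec:
  podSelector:
    matchLabels:
      app: database
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: api
    ports:
    - protocol: TCP
      port: 3306

Apply it with kubectl create -f db-policy.yaml and verify with kubectl get networkpolicies.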
Final Thoughts
As I continue my journey to understand Kubernetes, it’s clear that security, especially in the form of authentication and authorization, is not a static implementation but an ongoing commitment. Each concept I’ve learned has reinforced the importance of embedding security at every stage of the Kubernetes lifecycle — from development to deployment.
Through this process, I’ve gained insights into how Kubernetes empowers DevOps engineers to confidently manage clusters while safeguarding against threats. My efforts to explore these critical aspects have equipped me to implement best practices, adapt to emerging challenges, and contribute to creating secure, resilient applications in Kubernetes environments. This evolving understanding underscores the balance between accessibility and protection, essential for modern application delivery at scale.