Signing and Verifying ECR Images Using Cosign and Kyverno
Prerequisites:
1. An IAM user with Administrator permissions (in real-world scenarios, access should be restricted) and an access key/secret access key, or an IAM role for EC2 with Administrator permissions.
2. An ECR repository in your AWS account (create one if you don't have one). We'll use the us-east-1 region throughout this setup.
Step 1: Launch an EC2 instance with Amazon Linux 2 AMI and configure aws cli/EC2 IAM role.
Connect to the instance and execute the following commands to install the required packages:
sudo yum install -y git docker jq
sudo service docker start
sudo curl --silent --location -o /usr/local/bin/kubectl \
https://s3.us-west-2.amazonaws.com/amazon-eks/1.28.5/2024-01-04/bin/linux/amd64/kubectl
sudo chmod +x /usr/local/bin/kubectl
curl --silent --location "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp
sudo mv -v /tmp/eksctl /usr/local/bin
curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh
sudo usermod -aG docker ec2-user
sudo su - $USER
docker ps
Step 2: Follow https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html#getting-started-install-instructions to upgrade AWS CLI to v2.
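At the time of writing, the Linux x86_64 instructions on that page amount to roughly the following (verify against the linked docs, since the installer URL may change):

```
# Download and install AWS CLI v2 using the official bundled installer
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install
aws --version   # should report aws-cli/2.x
```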
Step 3: Now create a file named eks-cluster-config.yaml with the following content:
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: eks-cluster
  region: us-east-1
  version: '1.28'
iam:
  withOIDC: true
availabilityZones: ['us-east-1a','us-east-1b','us-east-1c']
fargateProfiles:
  - name: defaultfp
    selectors:
      - namespace: default
      - namespace: kube-system
      - namespace: kyverno
cloudWatch:
  clusterLogging:
    enableTypes: ["*"]
Now execute the following command to create the EKS cluster:
eksctl create cluster -f eks-cluster-config.yaml
It will take around 20 minutes for the cluster to become ready.
Step 4: Verify that you can connect to your cluster using the following commands:
aws eks update-kubeconfig --region us-east-1 --name eks-cluster
kubectl get nodes
Step 5: Create a file named index.html with the following content:
<!doctype html>
<html lang="en">
  <head>
    <meta charset="utf-8">
    <title>Docker Nginx</title>
  </head>
  <body>
    <h2>Hello from Nginx container v1</h2>
  </body>
</html>
and a Dockerfile with the following content:
FROM nginx:latest
COPY ./index.html /usr/share/nginx/html/index.html
Build and push the image to the ECR repository:
aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin <AWS_ACCOUNT_ID>.dkr.ecr.us-east-1.amazonaws.com
docker build -t app-api:v1 .
docker tag app-api:v1 <AWS_ACCOUNT_ID>.dkr.ecr.us-east-1.amazonaws.com/app-api:v1
docker push <AWS_ACCOUNT_ID>.dkr.ecr.us-east-1.amazonaws.com/app-api:v1
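The commands above all reference the same fully qualified image URI; as a sanity check, here is how its pieces compose (the account ID below is a made-up placeholder):

```shell
# Compose the ECR image URI from its parts (123456789012 is a fake account ID)
AWS_ACCOUNT_ID=123456789012
AWS_REGION=us-east-1
REPO=app-api
TAG=v1
IMAGE_URI="${AWS_ACCOUNT_ID}.dkr.ecr.${AWS_REGION}.amazonaws.com/${REPO}:${TAG}"
echo "${IMAGE_URI}"
```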
Step 6: Create a file named deployment.yaml with the following content. Replace <ECR_IMAGE_URL> with the image URI you pushed in Step 5.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: <ECR_IMAGE_URL>
          ports:
            - containerPort: 80
Create the deployment by executing the kubectl apply -f deployment.yaml command.
Step 7: Install Kyverno by executing the following commands. The --docker-password value below is a dummy placeholder; the CronJob in Step 8 will overwrite this secret with a real ECR token.
kubectl create namespace kyverno
kubectl create secret docker-registry aws-registry -n kyverno \
  --docker-server=<AWS_ACCOUNT_ID>.dkr.ecr.us-east-1.amazonaws.com \
  --docker-username=AWS \
  --docker-password=demo
helm repo add kyverno https://kyverno.github.io/kyverno/
helm repo update
helm install kyverno --namespace kyverno kyverno/kyverno --create-namespace --set 'extraArgs={--imagePullSecrets=aws-registry}'
eksctl create iamserviceaccount --name kyverno-admission-controller --namespace kyverno --cluster eks-cluster --attach-policy-arn arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryFullAccess --override-existing-serviceaccounts --approve
Then edit the admission controller deployment:
kubectl edit deployment kyverno-admission-controller -n kyverno
and add this line in the args section if it is not already present:
- --imagePullSecrets=aws-registry
At this point, we have deployed both our nginx deployment and Kyverno.
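Before moving on, it's worth confirming that the Kyverno pods came up (pod names will differ in your cluster):

```
# All Kyverno controller pods should reach Running state
kubectl get pods -n kyverno
```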
Step 8: We need to deploy a CronJob that periodically refreshes the ECR registry credentials, since ECR authorization tokens expire after 12 hours.
For this, create a file named cron.yaml with the following content. Replace <AWS_ACCOUNT_ID> with your AWS account ID.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: kyverno
  name: secret-creator-role
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["create", "get", "update", "delete", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: secret-creator-binding
  namespace: kyverno
subjects:
  - kind: ServiceAccount
    name: kyverno-admission-controller
    namespace: kyverno
roleRef:
  kind: Role
  name: secret-creator-role
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: batch/v1
kind: CronJob
metadata:
  name: aws-registry-credential-cron
  namespace: kyverno
spec:
  schedule: "0 */8 * * *" # at minute 0 every 8th hour; "* */8 * * *" would fire every minute of those hours
  successfulJobsHistoryLimit: 2
  suspend: false
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: kyverno-admission-controller
          containers:
            - name: ecr-registry-helper
              image: omarxs/awskctl:v1.0
              imagePullPolicy: IfNotPresent
              command:
                - /bin/bash
                - -c
                - |-
                  DOCKER_REGISTRY_SERVER=https://<AWS_ACCOUNT_ID>.dkr.ecr.us-east-1.amazonaws.com
                  DOCKER_USER=AWS
                  AWS_REGION=us-east-1
                  ECR_TOKEN="$(aws ecr get-login-password --region ${AWS_REGION})"
                  kubectl delete secret --ignore-not-found aws-registry -n kyverno
                  kubectl create secret docker-registry aws-registry -n kyverno --docker-server=${DOCKER_REGISTRY_SERVER} --docker-username=${DOCKER_USER} --docker-password=${ECR_TOKEN}
                  echo "Secret was successfully updated at $(date)"
          restartPolicy: Never
Apply it using the kubectl apply -f cron.yaml command.
We also need to run this job manually once to fetch the initial ECR credentials. For this, execute the following command:
kubectl create job \
  --from=cronjob/aws-registry-credential-cron -n kyverno aws-registry-credential-cron-manual-001
and check the job logs using the following command:
kubectl logs job/aws-registry-credential-cron-manual-001 -n kyverno
Step 9: Now we'll install cosign with the following commands:
curl -O -L "https://github.com/sigstore/cosign/releases/latest/download/cosign-linux-amd64"
sudo mv cosign-linux-amd64 /usr/local/bin/cosign
sudo chmod +x /usr/local/bin/cosign
and generate a key pair with the following command. It writes the private key to cosign.key and the public key to cosign.pub in the current directory.
cosign generate-key-pair # You may choose not to set a password for the private key
Step 10: Next, we'll create a cluster policy that blocks deployment of unsigned images to the EKS cluster. For this, create a file named cluster-policy.yaml with the following content. Replace <AWS_ACCOUNT_ID> with your AWS account ID. For the public key, use the content of the cosign.pub file generated in Step 9.
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: check-signed-images
spec:
  validationFailureAction: Enforce
  background: false
  webhookTimeoutSeconds: 30
  failurePolicy: Fail
  rules:
    - name: check-image-signature
      match:
        any:
          - resources:
              kinds:
                - Pod
      verifyImages:
        - image: "<AWS_ACCOUNT_ID>.dkr.ecr.us-east-1.amazonaws.com/*"
          # Replace with your own public key
          key: |-
            -----BEGIN PUBLIC KEY-----
            MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEM1z+vdFrcLuuxHjQdmLFn3icm0Xq
            Rr4bTktxmpzITojnPDTiMcQBIXZY4o/+hrZ09GJ7rXEcYLns/q/iWWBiGQ==
            -----END PUBLIC KEY-----
Apply it using the kubectl apply -f cluster-policy.yaml command.
Step 11: Now redeploy the nginx deployment using the following command:
kubectl rollout restart deployment/nginx-deployment
The new pods will be blocked by Kyverno, because the ECR image used by the nginx deployment (<AWS_ACCOUNT_ID>.dkr.ecr.us-east-1.amazonaws.com/app-api:v1) is not signed.
Step 12: Now let's sign the ECR image with the following command. Replace the values for <AWS_ACCOUNT_ID> and <ECR_REGISTRY_NAME> accordingly:
cosign sign --key cosign.key <AWS_ACCOUNT_ID>.dkr.ecr.us-east-1.amazonaws.com/<ECR_REGISTRY_NAME>:v1
Press y when asked to confirm uploading to the transparency log at https://rekor.sigstore.dev.
Now if you check your ECR repository, you'll see a signature artifact stored alongside your image, tagged with the image's SHA256 digest.
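You can also check the signature from the command line; assuming the cosign.pub from Step 9 is in the current directory (and you are still logged in to ECR), a verification along these lines should report the signature as valid:

```
# Verify the pushed image against our public key (placeholders as in Step 12)
cosign verify --key cosign.pub <AWS_ACCOUNT_ID>.dkr.ecr.us-east-1.amazonaws.com/<ECR_REGISTRY_NAME>:v1
```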
Step 13: Now redeploy the nginx deployment using the following command:
kubectl rollout restart deployment/nginx-deployment
This time the rollout succeeds, because the image signature verifies against the public key in the cluster policy.