How to debug pods stuck in the Pending state with failed scheduling on Fargate?
Objective
My goal is to be able to deploy to AWS EKS using Fargate. I got the deployment working with a node_group. However, when I switched to Fargate, all the pods seem to be stuck in the Pending state.
What my current code looks like
I am provisioning with Terraform (not necessarily looking for a Terraform answer). This is how I create my EKS cluster:
module "eks_cluster" {
source = "terraform-aws-modules/eks/aws"
version = "13.2.1"
cluster_name = "${var.project_name}-${var.env_name}"
cluster_version = var.cluster_version
vpc_id = var.vpc_id
cluster_enabled_log_types = ["api", "audit", "authenticator", "controllerManager", "scheduler"]
enable_irsa = true
subnets = concat(var.private_subnet_ids, var.public_subnet_ids)
create_fargate_pod_execution_role = false
node_groups = {
my_nodes = {
desired_capacity = 1
max_capacity = 2
min_capacity = 1
instance_type = var.nodes_instance_type
subnets = var.private_subnet_ids
}
}
}
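For reference, the VPC configuration the cluster actually ended up with can be inspected with something like the following (the cluster name here is just a placeholder for mine):

# Inspect the subnets and security groups the cluster was created with (cluster name is a placeholder)
aws eks describe-cluster \
  --name my-project-dev \
  --query 'cluster.resourcesVpcConfig'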
And this is how I provision the Fargate profile:
resource "aws_eks_fargate_profile" "airflow" {
cluster_name = module.eks_cluster.cluster_id
fargate_profile_name = "${var.project_name}-fargate-${var.env_name}"
pod_execution_role_arn = aws_iam_role.fargate_iam_role.arn
subnet_ids = var.private_subnet_ids
selector {
namespace = "airflow"
}
}
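To rule out the profile itself, its status, selectors, and subnets can be checked with something along these lines (cluster and profile names are placeholders):

# Confirm the Fargate profile is ACTIVE and targets the expected namespace and subnets (names are placeholders)
aws eks describe-fargate-profile \
  --cluster-name my-project-dev \
  --fargate-profile-name my-project-fargate-dev \
  --query 'fargateProfile.{Status:status,Selectors:selectors,Subnets:subnets}'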
And this is how I create and attach the required policies:
resource "aws_iam_role" "fargate_iam_role" {
name = "${var.project_name}-fargate-${var.env_name}"
force_detach_policies = true
assume_role_policy = jsonencode({
Statement = [{
Action = "sts:AssumeRole"
Effect = "Allow"
Principal = {
Service = ["eks-fargate-pods.amazonaws.com", "eks.amazonaws.com"]
}
}]
Version = "2012-10-17"
})
}
# Attach IAM Policy for Fargate
resource "aws_iam_role_policy_attachment" "fargate_pod_execution" {
role = aws_iam_role.fargate_iam_role.name
policy_arn = "arn:aws:iam::aws:policy/AmazonEKSFargatePodExecutionRolePolicy"
}
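To double-check the IAM side, the role's trust policy and attached policies can be inspected with something like this (the role name is a placeholder):

# Verify the trust policy and that the Fargate pod execution policy is attached (role name is a placeholder)
aws iam get-role --role-name my-project-fargate-dev \
  --query 'Role.AssumeRolePolicyDocument'
aws iam list-attached-role-policies --role-name my-project-fargate-dev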
What I have tried that doesn't work
I tried deploying the pods (I am using a Helm chart) into the same namespace where the Fargate profile exists. When I run kubectl get pods -n airflow, I see all my pods stuck in Pending:
NAME                                 READY   STATUS    RESTARTS   AGE
airflow-flower-79b5948677-vww5d      0/1     Pending   0          40s
airflow-redis-master-0               0/1     Pending   0          40s
airflow-scheduler-6b6bd4b6f6-j9qzg   0/2     Pending   0          41s
airflow-web-567b55fbbf-z8dsg         0/2     Pending   0          41s
airflow-worker-0                     0/2     Pending   0          40s
airflow-worker-1                     0/2     Pending   0          40s
Then I look at the events via kubectl get events -n airflow, and I get:
LAST SEEN   TYPE     REASON              OBJECT                                    MESSAGE
2m15s       Normal   LoggingEnabled      pod/airflow-flower-79b5948677-vww5d       Successfully enabled logging for pod
2m16s       Normal   SuccessfulCreate    replicaset/airflow-flower-79b5948677      Created pod: airflow-flower-79b5948677-vww5d
2m17s       Normal   ScalingReplicaSet   deployment/airflow-flower                 Scaled up replica set airflow-flower-79b5948677 to 1
2m15s       Normal   LoggingEnabled      pod/airflow-redis-master-0                Successfully enabled logging for pod
2m16s       Normal   SuccessfulCreate    statefulset/airflow-redis-master          create Pod airflow-redis-master-0 in StatefulSet airflow-redis-master successful
2m15s       Normal   LoggingEnabled      pod/airflow-scheduler-6b6bd4b6f6-j9qzg    Successfully enabled logging for pod
2m16s       Normal   SuccessfulCreate    replicaset/airflow-scheduler-6b6bd4b6f6   Created pod: airflow-scheduler-6b6bd4b6f6-j9qzg
2m17s       Normal   NoPods              poddisruptionbudget/airflow-scheduler     No matching pods found
2m17s       Normal   ScalingReplicaSet   deployment/airflow-scheduler              Scaled up replica set airflow-scheduler-6b6bd4b6f6 to 1
2m15s       Normal   LoggingEnabled      pod/airflow-web-567b55fbbf-z8dsg          Successfully enabled logging for pod
2m16s       Normal   SuccessfulCreate    replicaset/airflow-web-567b55fbbf         Created pod: airflow-web-567b55fbbf-z8dsg
2m17s       Normal   ScalingReplicaSet   deployment/airflow-web                    Scaled up replica set airflow-web-567b55fbbf to 1
2m15s       Normal   LoggingEnabled      pod/airflow-worker-0                      Successfully enabled logging for pod
2m15s       Normal   LoggingEnabled      pod/airflow-worker-1                      Successfully enabled logging for pod
2m16s       Normal   SuccessfulCreate    statefulset/airflow-worker                create Pod airflow-worker-0 in StatefulSet airflow-worker successful
2m16s       Normal   SuccessfulCreate    statefulset/airflow-worker                create Pod airflow-worker-1 in StatefulSet airflow-worker successful
Then I try to describe one of the pods (via kubectl describe pod), and I get:
Name: airflow-redis-master-0
Namespace: airflow
Priority: 2000001000
Priority Class Name: system-node-critical
Node: <none>
Labels: app=redis
chart=redis-10.5.7
controller-revision-hash=airflow-redis-master-588d57785d
eks.amazonaws.com/fargate-profile=airflow-fargate-airflow-dev
release=airflow
role=master
statefulset.kubernetes.io/pod-name=airflow-redis-master-0
Annotations: CapacityProvisioned: 0.25vCPU 0.5GB
Logging: LoggingEnabled
checksum/configmap: 2b82c78fd9186045e6e2b44cfbb38460310697cf2f2f175c9d8618dd4d42e1ca
checksum/health: a5073935c8eb985cf8f3128ba7abbc4121cef628a9a1b0924c95cf97d33323bf
checksum/secret: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
cluster-autoscaler.kubernetes.io/safe-to-evict: true
kubernetes.io/psp: eks.privileged
Status: Pending
IP:
IPs: <none>
Controlled By: StatefulSet/airflow-redis-master
NominatedNodeName: 6f344dfd11-000a9c54e4e240a2a8b3dfceb5f8227e
Containers:
airflow-redis:
Image: docker.io/bitnami/redis:5.0.7-debian-10-r32
Port: 6379/TCP
Host Port: 0/TCP
Command:
/bin/bash
-c
if [[ -n $REDIS_PASSWORD_FILE ]]; then password_aux=`cat ${REDIS_PASSWORD_FILE}`
export REDIS_PASSWORD=$password_aux fi if [[ ! -f /opt/bitnami/redis/etc/master.conf ]];then cp /opt/bitnami/redis/mounted-etc/master.conf /opt/bitnami/redis/etc/master.conf fi if [[ ! -f /opt/bitnami/redis/etc/redis.conf ]];then cp /opt/bitnami/redis/mounted-etc/redis.conf /opt/bitnami/redis/etc/redis.conf fi ARGS=("--port" "${REDIS_PORT}")
ARGS+=("--requirepass" "${REDIS_PASSWORD}") ARGS+=("--masterauth" "${REDIS_PASSWORD}")
ARGS+=("--include" "/opt/bitnami/redis/etc/redis.conf")
ARGS+=("--include" "/opt/bitnami/redis/etc/master.conf")
/run.sh ${ARGS[@]}
Liveness: exec [sh -c /health/ping_liveness_local.sh 5] delay=5s timeout=5s period=5s #success=1 #failure=5
Readiness: exec [sh -c /health/ping_readiness_local.sh 5] delay=5s timeout=1s period=5s #success=1 #failure=5
Environment:
REDIS_REPLICATION_MODE: master
REDIS_PASSWORD: <set to the key 'redis-password' in secret 'my-creds'> Optional: false
REDIS_PORT: 6379
Mounts:
/data from redis-data (rw)
/health from health (rw)
/opt/bitnami/redis/etc/ from redis-tmp-conf (rw)
/opt/bitnami/redis/mounted-etc from config (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-dmwvn (ro)
Volumes:
health:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: airflow-redis-health
Optional: false
config:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: airflow-redis
Optional: false
redis-data:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
redis-tmp-conf:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
default-token-dmwvn:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-dmwvn
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason            Age    From               Message
----     ------            ----   ----               -------
Normal   LoggingEnabled    3m12s  fargate-scheduler  Successfully enabled logging for pod
Warning  FailedScheduling  12s    fargate-scheduler  Pod provisioning timed out (will retry) for pod: airflow/airflow-redis-master-0
Other things I have tried
- Tagging my subnets with the appropriate tags (conditional based on public/private subnets):
kubernetes_tags = map(
  "kubernetes.io/role/${var.type == "Public" ? "elb" : "internal-elb"}", 1,
  "kubernetes.io/cluster/${var.kubernetes_cluster_name}", "shared"
)
- Annotating my pods with the Fargate profile (such as infrastructure: fargate)
- Debugging the VPC settings. To my understanding, the following settings must be set for Fargate (source here):
single_nat_gateway = true # needed for fargate (https://docs.aws.amazon.com/eks/latest/userguide/eks-ug.pdf#page=135&zoom=100,96,764)
enable_nat_gateway = true # needed for fargate (https://docs.aws.amazon.com/eks/latest/userguide/eks-ug.pdf#page=135&zoom=100,96,764)
enable_vpn_gateway = false
enable_dns_hostnames = true # needed for fargate (https://docs.aws.amazon.com/eks/latest/userguide/eks-ug.pdf#page=135&zoom=100,96,764)
enable_dns_support = true # needed for fargate (https://docs.aws.amazon.com/eks/latest/userguide/eks-ug.pdf#page=135&zoom=100,96,764)
However, I have been given a pre-created VPC, and I am not sure how to check whether these settings are already turned on or off (see the commands sketched below).
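For reference, one rough way to check those settings from the CLI is something like the following (the VPC ID is a placeholder; this is only a sanity check, not a fix):

# Check whether DNS support and DNS hostnames are enabled on the VPC (vpc-0abc123 is a placeholder)
aws ec2 describe-vpc-attribute --vpc-id vpc-0abc123 --attribute enableDnsSupport
aws ec2 describe-vpc-attribute --vpc-id vpc-0abc123 --attribute enableDnsHostnames

# List NAT gateways in the VPC and the route tables of its subnets
aws ec2 describe-nat-gateways --filter Name=vpc-id,Values=vpc-0abc123
aws ec2 describe-route-tables --filters Name=vpc-id,Values=vpc-0abc123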
What steps should I follow to debug this issue?
Answers
For testing purposes, I think you should enable connectivity from the private VPC subnets to the outside world through a NAT gateway. To do that, you can create a NAT gateway in a public subnet and add an extra entry to the route table associated with the private subnets that looks like this:
0.0.0.0/0 nat-xxxxxxxx
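For illustration only, a rough CLI sketch of that change could look like this (all IDs below are placeholders; the allocation ID must be an existing Elastic IP):

# Create a NAT gateway in a public subnet (subnet and EIP allocation IDs are placeholders)
aws ec2 create-nat-gateway \
  --subnet-id subnet-PUBLIC \
  --allocation-id eipalloc-EXAMPLE

# Add a default route through the NAT gateway to the route table used by the private subnets
aws ec2 create-route \
  --route-table-id rtb-PRIVATE \
  --destination-cidr-block 0.0.0.0/0 \
  --nat-gateway-id nat-EXAMPLE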
If this works and you want to keep your egress restricted through your firewall appliance, which is more secure, I think you should contact the firewall vendor's support and ask how to whitelist the outbound traffic.