Unable to establish an MQTT connection to a VerneMQ cluster in k8s behind the Istio proxy

Aug 21 2020

I'm setting up a local k8s cluster. For testing I use a single-node cluster on a VM provisioned with kubeadm. My requirements include running an MQTT cluster (vernemq) in k8s with external access via Ingress (istio).

Without deploying the ingress, I can connect (mosquitto_sub) through a NodePort or LoadBalancer service.

Istio was installed using istioctl install --set profile=demo

The problem

I'm trying to reach the VerneMQ broker from outside the cluster. An Ingress (Istio Gateway) seems like the perfect fit here, but I can't establish a TCP connection to the broker (neither via the ingress IP, nor directly via the svc/vernemq IP).

So, how do I establish this TCP connection from an external client through the Istio ingress?

What I've tried

I created two namespaces:

  • exposed-with-istio - with istio proxy injection
  • exposed-with-loadbalancer - without istio proxy

Inside the exposed-with-loadbalancer namespace I deployed vernemq with a LoadBalancer Service. It works, which is how I know VerneMQ can be reached (with mosquitto_sub -h <host> -p 1883 -t hello, where the host is the ClusterIP or ExternalIP of svc/vernemq). The dashboard is reachable at host:8888/status, and "Clients online" increments on the dashboard.
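
For reference, the Service in that namespace is the same as the ClusterIP one shown further below, just with the type switched; a sketch of the working variant with the ports abbreviated:

apiVersion: v1
kind: Service
metadata:
  name: vernemq
  namespace: exposed-with-loadbalancer
  labels:
    app: vernemq
spec:
  selector:
    app: vernemq
  type: LoadBalancer   # only difference from the ClusterIP variant in exposed-with-istio
  ports:
    - port: 1883
      name: tcp-mqtt
      targetPort: 1883
    - port: 8888
      name: http-dashboard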

Inside exposed-with-istio I deployed vernemq with a ClusterIP Service, an Istio Gateway and a VirtualService. Immediately after istio-proxy injection, mosquitto_sub cannot subscribe, neither via the svc/vernemq IP nor via the istio ingress (gateway) IP. The command hangs forever, constantly retrying. Meanwhile, the vernemq dashboard endpoint is reachable both through the service IP and through the istio gateway.

I guess the istio proxy needs to be configured for mqtt to work.
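
To illustrate the kind of per-port proxy configuration I have in mind: a hypothetical PeerAuthentication that relaxes inbound mTLS enforcement on the MQTT port only. I have not applied this; the resource name and the PERMISSIVE/DISABLE choices are assumptions, shown only as a sketch:

# Hypothetical sketch, not part of my manifests: keep the workload permissive
# overall, but disable mTLS enforcement specifically on the MQTT port 1883.
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: vernemq-mqtt
  namespace: exposed-with-istio
spec:
  selector:
    matchLabels:
      app: vernemq
  mtls:
    mode: PERMISSIVE
  portLevelMtls:
    1883:
      mode: DISABLE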

Here is the istio-ingressgateway service:

kubectl describe svc/istio-ingressgateway -n istio-system

Name:                     istio-ingressgateway
Namespace:                istio-system
Labels:                   app=istio-ingressgateway
                          install.operator.istio.io/owning-resource=installed-state
                          install.operator.istio.io/owning-resource-namespace=istio-system
                          istio=ingressgateway
                          istio.io/rev=default
                          operator.istio.io/component=IngressGateways
                          operator.istio.io/managed=Reconcile
                          operator.istio.io/version=1.7.0
                          release=istio
Annotations:              
Selector:                 app=istio-ingressgateway,istio=ingressgateway
Type:                     LoadBalancer
IP:                       10.100.213.45
LoadBalancer Ingress:     192.168.100.240
Port:                     status-port  15021/TCP
TargetPort:               15021/TCP
Port:                     http2  80/TCP
TargetPort:               8080/TCP
Port:                     https  443/TCP
TargetPort:               8443/TCP
Port:                     tcp  31400/TCP
TargetPort:               31400/TCP
Port:                     tls  15443/TCP
TargetPort:               15443/TCP
Session Affinity:         None
External Traffic Policy:  Cluster
...

Here are the istio-proxy debug logs: kubectl logs svc/vernemq -n test istio-proxy

2020-08-24T07:57:52.294477Z debug   envoy filter    original_dst: New connection accepted
2020-08-24T07:57:52.294516Z debug   envoy filter    tls inspector: new connection accepted
2020-08-24T07:57:52.294532Z debug   envoy filter    http inspector: new connection accepted
2020-08-24T07:57:52.294580Z debug   envoy filter    [C5645] new tcp proxy session
2020-08-24T07:57:52.294614Z debug   envoy filter    [C5645] Creating connection to cluster inbound|1883|mqtt|vernemq.test.svc.cluster.local
2020-08-24T07:57:52.294638Z debug   envoy pool  creating a new connection
2020-08-24T07:57:52.294671Z debug   envoy pool  [C5646] connecting
2020-08-24T07:57:52.294684Z debug   envoy connection    [C5646] connecting to 127.0.0.1:1883
2020-08-24T07:57:52.294725Z debug   envoy connection    [C5646] connection in progress
2020-08-24T07:57:52.294746Z debug   envoy pool  queueing request due to no available connections
2020-08-24T07:57:52.294750Z debug   envoy conn_handler  [C5645] new connection
2020-08-24T07:57:52.294768Z debug   envoy connection    [C5646] delayed connection error: 111
2020-08-24T07:57:52.294772Z debug   envoy connection    [C5646] closing socket: 0
2020-08-24T07:57:52.294783Z debug   envoy pool  [C5646] client disconnected
2020-08-24T07:57:52.294790Z debug   envoy filter    [C5645] Creating connection to cluster inbound|1883|mqtt|vernemq.test.svc.cluster.local
2020-08-24T07:57:52.294794Z debug   envoy connection    [C5645] closing data_to_write=0 type=1
2020-08-24T07:57:52.294796Z debug   envoy connection    [C5645] closing socket: 1
2020-08-24T07:57:52.294864Z debug   envoy wasm  wasm log: [extensions/stats/plugin.cc:609]::report() metricKey cache hit , stat=12
2020-08-24T07:57:52.294882Z debug   envoy wasm  wasm log: [extensions/stats/plugin.cc:609]::report() metricKey cache hit , stat=16
2020-08-24T07:57:52.294885Z debug   envoy wasm  wasm log: [extensions/stats/plugin.cc:609]::report() metricKey cache hit , stat=20
2020-08-24T07:57:52.294887Z debug   envoy wasm  wasm log: [extensions/stats/plugin.cc:609]::report() metricKey cache hit , stat=24
2020-08-24T07:57:52.294891Z debug   envoy conn_handler  [C5645] adding to cleanup list
2020-08-24T07:57:52.294949Z debug   envoy pool  [C5646] connection destroyed

These are the istio-ingressgateway logs. The IP 10.244.243.205 belongs to the VerneMQ pod, not to the service (that is probably expected).

2020-08-24T08:48:31.536593Z debug   envoy filter    [C13236] new tcp proxy session
2020-08-24T08:48:31.536702Z debug   envoy filter    [C13236] Creating connection to cluster outbound|1883||vernemq.test.svc.cluster.local
2020-08-24T08:48:31.536728Z debug   envoy pool  creating a new connection
2020-08-24T08:48:31.536778Z debug   envoy pool  [C13237] connecting
2020-08-24T08:48:31.536784Z debug   envoy connection    [C13237] connecting to 10.244.243.205:1883
2020-08-24T08:48:31.537074Z debug   envoy connection    [C13237] connection in progress
2020-08-24T08:48:31.537116Z debug   envoy pool  queueing request due to no available connections
2020-08-24T08:48:31.537138Z debug   envoy conn_handler  [C13236] new connection
2020-08-24T08:48:31.537181Z debug   envoy connection    [C13237] connected
2020-08-24T08:48:31.537204Z debug   envoy pool  [C13237] assigning connection
2020-08-24T08:48:31.537221Z debug   envoy filter    TCP:onUpstreamEvent(), requestedServerName: 
2020-08-24T08:48:31.537880Z debug   envoy misc  Unknown error code 104 details Connection reset by peer
2020-08-24T08:48:31.537907Z debug   envoy connection    [C13237] remote close
2020-08-24T08:48:31.537913Z debug   envoy connection    [C13237] closing socket: 0
2020-08-24T08:48:31.537938Z debug   envoy pool  [C13237] client disconnected
2020-08-24T08:48:31.537953Z debug   envoy connection    [C13236] closing data_to_write=0 type=0
2020-08-24T08:48:31.537958Z debug   envoy connection    [C13236] closing socket: 1
2020-08-24T08:48:31.538156Z debug   envoy conn_handler  [C13236] adding to cleanup list
2020-08-24T08:48:31.538191Z debug   envoy pool  [C13237] connection destroyed

My configurations

vernemq-istio-ingress.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: exposed-with-istio
  labels:
    istio-injection: enabled
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: vernemq
  namespace: exposed-with-istio
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: endpoint-reader
  namespace: exposed-with-istio
rules:
  - apiGroups: ["", "extensions", "apps"]
    resources: ["endpoints", "deployments", "replicasets", "pods"]
    verbs: ["get", "list"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: endpoint-reader
  namespace: exposed-with-istio
subjects:
  - kind: ServiceAccount
    name: vernemq
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: endpoint-reader
---
apiVersion: v1
kind: Service
metadata:
  name: vernemq
  namespace: exposed-with-istio
  labels:
    app: vernemq
spec:
  selector:
    app: vernemq
  type: ClusterIP
  ports:
    - port: 4369
      name: empd
    - port: 44053
      name: vmq
    - port: 8888
      name: http-dashboard
    - port: 1883
      name: tcp-mqtt
      targetPort: 1883
    - port: 9001
      name: tcp-mqtt-ws
      targetPort: 9001
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: vernemq
  namespace: exposed-with-istio
spec:
  replicas: 1
  selector:
    matchLabels:
      app: vernemq
  template:
    metadata:
      labels:
        app: vernemq
    spec:
      serviceAccountName: vernemq
      containers:
        - name: vernemq
          image: vernemq/vernemq
          ports:
            - containerPort: 1883
              name: tcp-mqtt
              protocol: TCP
            - containerPort: 8080
              name: tcp-mqtt-ws
            - containerPort: 8888
              name: http-dashboard
            - containerPort: 4369
              name: epmd
            - containerPort: 44053
              name: vmq
            - containerPort: 9100-9109 # shortened
          env:
            - name: DOCKER_VERNEMQ_ACCEPT_EULA
              value: "yes"
            - name: DOCKER_VERNEMQ_ALLOW_ANONYMOUS
              value: "on"
            - name: DOCKER_VERNEMQ_listener__tcp__allowed_protocol_versions
              value: "3,4,5"
            - name: DOCKER_VERNEMQ_allow_register_during_netsplit
              value: "on"
            - name: DOCKER_VERNEMQ_DISCOVERY_KUBERNETES
              value: "1"
            - name: DOCKER_VERNEMQ_KUBERNETES_APP_LABEL
              value: "vernemq"
            - name: DOCKER_VERNEMQ_KUBERNETES_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: MY_POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: DOCKER_VERNEMQ_ERLANG__DISTRIBUTION__PORT_RANGE__MINIMUM
              value: "9100"
            - name: DOCKER_VERNEMQ_ERLANG__DISTRIBUTION__PORT_RANGE__MAXIMUM
              value: "9109"
            - name: DOCKER_VERNEMQ_KUBERNETES_INSECURE
              value: "1"
vernemq-loadbalancer-service.yaml
---
apiVersion: v1
kind: Namespace
metadata:
  name: exposed-with-loadbalancer
---
... the rest is the same except for namespace and service type ...
istio.yaml
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: vernemq-destination
  namespace: exposed-with-istio
spec:
  host: vernemq.exposed-with-istio.svc.cluster.local
  trafficPolicy:
    tls:
      mode: DISABLE
---
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: vernemq-gateway
  namespace: exposed-with-istio
spec:
  selector:
    istio: ingressgateway # use istio default controller
  servers:
  - port:
      number: 31400
      name: tcp
      protocol: TCP
    hosts:
    - "*"
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: vernemq-virtualservice
  namespace: exposed-with-istio
spec:
  hosts:
  - "*"
  gateways:
  - vernemq-gateway
  http:
  - match:
    - uri:
        prefix: /status
    route:
    - destination:
        host: vernemq.exposed-with-istio.svc.cluster.local
        port:
          number: 8888
  tcp:
  - match:
    - port: 31400
    route:
    - destination:
        host: vernemq.exposed-with-istio.svc.cluster.local
        port:
          number: 1883

The Kiali screenshot suggests that the ingressgateway only forwards HTTP traffic to the service and eats all the TCP?

UPD

Following the suggestion, here is the output:

** But the envoy logs reveal a problem: envoy misc Unknown error code 104 details Connection reset by peer and envoy pool [C5648] client disconnected.

istioctl proxy-config listeners vernemq-c945876f-tvvz7.exposed-with-istio

first with | grep 8888 and | grep 1883

0.0.0.0        8888  App: HTTP                                               Route: 8888
0.0.0.0        8888  ALL                                                     PassthroughCluster
10.107.205.214 1883  ALL                                                     Cluster: outbound|1883||vernemq.exposed-with-istio.svc.cluster.local
0.0.0.0        15006 ALL                                                     Cluster: inbound|1883|tcp-mqtt|vernemq.exposed-with-istio.svc.cluster.local
...                                            Cluster: outbound|853||istiod.istio-system.svc.cluster.local
10.107.205.214 1883  ALL                                                     Cluster: outbound|1883||vernemq.exposed-with-istio.svc.cluster.local
10.108.218.134 3000  App: HTTP                                               Route: grafana.istio-system.svc.cluster.local:3000
10.108.218.134 3000  ALL                                                     Cluster: outbound|3000||grafana.istio-system.svc.cluster.local
10.107.205.214 4369  App: HTTP                                               Route: vernemq.exposed-with-istio.svc.cluster.local:4369
10.107.205.214 4369  ALL                                                     Cluster: outbound|4369||vernemq.exposed-with-istio.svc.cluster.local
0.0.0.0        8888  App: HTTP                                               Route: 8888
0.0.0.0        8888  ALL                                                     PassthroughCluster
10.107.205.214 9001  ALL                                                     Cluster: outbound|9001||vernemq.exposed-with-istio.svc.cluster.local
0.0.0.0        9090  App: HTTP                                               Route: 9090
0.0.0.0        9090  ALL                                                     PassthroughCluster
10.96.0.10     9153  App: HTTP                                               Route: kube-dns.kube-system.svc.cluster.local:9153
10.96.0.10     9153  ALL                                                     Cluster: outbound|9153||kube-dns.kube-system.svc.cluster.local
0.0.0.0        9411  App: HTTP                                               ...
0.0.0.0        15006 Trans: tls; App: TCP TLS; Addr: 0.0.0.0/0               InboundPassthroughClusterIpv4
0.0.0.0        15006 Addr: 0.0.0.0/0                                         InboundPassthroughClusterIpv4
0.0.0.0        15006 App: TCP TLS                                            Cluster: inbound|9001|tcp-mqtt-ws|vernemq.exposed-with-istio.svc.cluster.local
0.0.0.0        15006 ALL                                                     Cluster: inbound|9001|tcp-mqtt-ws|vernemq.exposed-with-istio.svc.cluster.local
0.0.0.0        15006 Trans: tls; App: TCP TLS                                Cluster: inbound|44053|vmq|vernemq.exposed-with-istio.svc.cluster.local
0.0.0.0        15006 ALL                                                     Cluster: inbound|44053|vmq|vernemq.exposed-with-istio.svc.cluster.local
0.0.0.0        15006 Trans: tls                                              Cluster: inbound|44053|vmq|vernemq.exposed-with-istio.svc.cluster.local
0.0.0.0        15006 ALL                                                     Cluster: inbound|4369|empd|vernemq.exposed-with-istio.svc.cluster.local
0.0.0.0        15006 Trans: tls; App: TCP TLS                                Cluster: inbound|4369|empd|vernemq.exposed-with-istio.svc.cluster.local
0.0.0.0        15006 Trans: tls                                              Cluster: inbound|4369|empd|vernemq.exposed-with-istio.svc.cluster.local
0.0.0.0        15006 ALL                                                     Cluster: inbound|1883|tcp-mqtt|vernemq.exposed-with-istio.svc.cluster.local
0.0.0.0        15006 App: TCP TLS                                            Cluster: inbound|1883|tcp-mqtt|vernemq.exposed-with-istio.svc.cluster.local
0.0.0.0        15010 App: HTTP                                               Route: 15010
0.0.0.0        15010 ALL                                                     PassthroughCluster
10.106.166.154 15012 ALL                                                     Cluster: outbound|15012||istiod.istio-system.svc.cluster.local
0.0.0.0        15014 App: HTTP                                               Route: 15014
0.0.0.0        15014 ALL                                                     PassthroughCluster
0.0.0.0        15021 ALL                                                     Inline Route: /healthz/ready*
10.100.213.45  15021 App: HTTP                                               Route: istio-ingressgateway.istio-system.svc.cluster.local:15021
10.100.213.45  15021 ALL                                                     Cluster: outbound|15021||istio-ingressgateway.istio-system.svc.cluster.local
0.0.0.0        15090 ALL                                                     Inline Route: /stats/prometheus*
10.100.213.45  15443 ALL                                                     Cluster: outbound|15443||istio-ingressgateway.istio-system.svc.cluster.local
10.105.193.108 15443 ALL                                                     Cluster: outbound|15443||istio-egressgateway.istio-system.svc.cluster.local
0.0.0.0        20001 App: HTTP                                               Route: 20001
0.0.0.0        20001 ALL                                                     PassthroughCluster
10.100.213.45  31400 ALL                                                     Cluster: outbound|31400||istio-ingressgateway.istio-system.svc.cluster.local
10.107.205.214 44053 App: HTTP                                               Route: vernemq.exposed-with-istio.svc.cluster.local:44053
10.107.205.214 44053 ALL                                                     Cluster: outbound|44053||vernemq.exposed-with-istio.svc.cluster.local

** Additionally, running: istioctl proxy-config endpoints and istioctl proxy-config routes.

istioctl proxy-config endpoints vernemq-c945876f-tvvz7.exposed-with-istio | grep 1883

10.244.243.206:1883              HEALTHY     OK                outbound|1883||vernemq.exposed-with-istio.svc.cluster.local
127.0.0.1:1883                   HEALTHY     OK                inbound|1883|tcp-mqtt|vernemq.exposed-with-istio.svc.cluster.local
ENDPOINT                         STATUS      OUTLIER CHECK     CLUSTER
10.101.200.113:9411              HEALTHY     OK                zipkin
10.106.166.154:15012             HEALTHY     OK                xds-grpc
10.211.55.14:6443                HEALTHY     OK                outbound|443||kubernetes.default.svc.cluster.local
10.244.243.193:53                HEALTHY     OK                outbound|53||kube-dns.kube-system.svc.cluster.local
10.244.243.193:9153              HEALTHY     OK                outbound|9153||kube-dns.kube-system.svc.cluster.local
10.244.243.195:53                HEALTHY     OK                outbound|53||kube-dns.kube-system.svc.cluster.local
10.244.243.195:9153              HEALTHY     OK                outbound|9153||kube-dns.kube-system.svc.cluster.local
10.244.243.197:15010             HEALTHY     OK                outbound|15010||istiod.istio-system.svc.cluster.local
10.244.243.197:15012             HEALTHY     OK                outbound|15012||istiod.istio-system.svc.cluster.local
10.244.243.197:15014             HEALTHY     OK                outbound|15014||istiod.istio-system.svc.cluster.local
10.244.243.197:15017             HEALTHY     OK                outbound|443||istiod.istio-system.svc.cluster.local
10.244.243.197:15053             HEALTHY     OK                outbound|853||istiod.istio-system.svc.cluster.local
10.244.243.198:8080              HEALTHY     OK                outbound|80||istio-egressgateway.istio-system.svc.cluster.local
10.244.243.198:8443              HEALTHY     OK                outbound|443||istio-egressgateway.istio-system.svc.cluster.local
10.244.243.198:15443             HEALTHY     OK                outbound|15443||istio-egressgateway.istio-system.svc.cluster.local
10.244.243.199:8080              HEALTHY     OK                outbound|80||istio-ingressgateway.istio-system.svc.cluster.local
10.244.243.199:8443              HEALTHY     OK                outbound|443||istio-ingressgateway.istio-system.svc.cluster.local
10.244.243.199:15021             HEALTHY     OK                outbound|15021||istio-ingressgateway.istio-system.svc.cluster.local
10.244.243.199:15443             HEALTHY     OK                outbound|15443||istio-ingressgateway.istio-system.svc.cluster.local
10.244.243.199:31400             HEALTHY     OK                outbound|31400||istio-ingressgateway.istio-system.svc.cluster.local
10.244.243.201:3000              HEALTHY     OK                outbound|3000||grafana.istio-system.svc.cluster.local
10.244.243.202:9411              HEALTHY     OK                outbound|9411||zipkin.istio-system.svc.cluster.local
10.244.243.202:16686             HEALTHY     OK                outbound|80||tracing.istio-system.svc.cluster.local
10.244.243.203:9090              HEALTHY     OK                outbound|9090||kiali.istio-system.svc.cluster.local
10.244.243.203:20001             HEALTHY     OK                outbound|20001||kiali.istio-system.svc.cluster.local
10.244.243.204:9090              HEALTHY     OK                outbound|9090||prometheus.istio-system.svc.cluster.local
10.244.243.206:1883              HEALTHY     OK                outbound|1883||vernemq.exposed-with-istio.svc.cluster.local
10.244.243.206:4369              HEALTHY     OK                outbound|4369||vernemq.exposed-with-istio.svc.cluster.local
10.244.243.206:8888              HEALTHY     OK                outbound|8888||vernemq.exposed-with-istio.svc.cluster.local
10.244.243.206:9001              HEALTHY     OK                outbound|9001||vernemq.exposed-with-istio.svc.cluster.local
10.244.243.206:44053             HEALTHY     OK                outbound|44053||vernemq.exposed-with-istio.svc.cluster.local
127.0.0.1:1883                   HEALTHY     OK                inbound|1883|tcp-mqtt|vernemq.exposed-with-istio.svc.cluster.local
127.0.0.1:4369                   HEALTHY     OK                inbound|4369|empd|vernemq.exposed-with-istio.svc.cluster.local
127.0.0.1:8888                   HEALTHY     OK                inbound|8888|http-dashboard|vernemq.exposed-with-istio.svc.cluster.local
127.0.0.1:9001                   HEALTHY     OK                inbound|9001|tcp-mqtt-ws|vernemq.exposed-with-istio.svc.cluster.local
127.0.0.1:15000                  HEALTHY     OK                prometheus_stats
127.0.0.1:15020                  HEALTHY     OK                agent
127.0.0.1:44053                  HEALTHY     OK                inbound|44053|vmq|vernemq.exposed-with-istio.svc.cluster.local
unix://./etc/istio/proxy/SDS     HEALTHY     OK                sds-grpc

istioctl proxy-config routes vernemq-c945876f-tvvz7.exposed-with-istio

NOTE: This output only contains routes loaded via RDS.
NAME                                                                         DOMAINS                               MATCH                  VIRTUAL SERVICE
istio-ingressgateway.istio-system.svc.cluster.local:15021                    istio-ingressgateway.istio-system     /*                     
istiod.istio-system.svc.cluster.local:853                                    istiod.istio-system                   /*                     
20001                                                                        kiali.istio-system                    /*                     
15010                                                                        istiod.istio-system                   /*                     
15014                                                                        istiod.istio-system                   /*                     
vernemq.exposed-with-istio.svc.cluster.local:4369                            vernemq                               /*                     
vernemq.exposed-with-istio.svc.cluster.local:44053                           vernemq                               /*                     
kube-dns.kube-system.svc.cluster.local:9153                                  kube-dns.kube-system                  /*                     
8888                                                                         vernemq                               /*                     
80                                                                           istio-egressgateway.istio-system      /*                     
80                                                                           istio-ingressgateway.istio-system     /*                     
80                                                                           tracing.istio-system                  /*                     
grafana.istio-system.svc.cluster.local:3000                                  grafana.istio-system                  /*                     
9411                                                                         zipkin.istio-system                   /*                     
9090                                                                         kiali.istio-system                    /*                     
9090                                                                         prometheus.istio-system               /*                     
inbound|8888|http-dashboard|vernemq.exposed-with-istio.svc.cluster.local     *                                     /*                     
inbound|44053|vmq|vernemq.exposed-with-istio.svc.cluster.local               *                                     /*                     
inbound|8888|http-dashboard|vernemq.exposed-with-istio.svc.cluster.local     *                                     /*                     
inbound|4369|empd|vernemq.exposed-with-istio.svc.cluster.local               *                                     /*                     
inbound|44053|vmq|vernemq.exposed-with-istio.svc.cluster.local               *                                     /*                     
                                                                             *                                     /stats/prometheus*     
InboundPassthroughClusterIpv4                                                *                                     /*                     
inbound|4369|empd|vernemq.exposed-with-istio.svc.cluster.local               *                                     /*                     
InboundPassthroughClusterIpv4                                                *                                     /*                     
                                                                             *                                     /healthz/ready*    

Answers

2 ChristophRaab Aug 21 2020 at 20:44

First, I recommend enabling envoy logging for the pod:

kubectl exec -it <pod-name> -c istio-proxy -- curl -X POST http://localhost:15000/logging?level=trace

Then follow the logs of the istio sidecar:

kubectl logs <pod-name> -c istio-proxy -f

Update

Since the Envoy proxies on both sides log a problem, the network path works, but the connection cannot be established.

Regarding port 15006: in Istio, all traffic is routed through the envoy proxy (istio-sidecar). For that, Istio maps every port to 15006 for inbound (meaning everything coming into the sidecar from somewhere) and to 15001 for outbound (meaning from the sidecar to somewhere). More on this here: https://istio.io/latest/docs/ops/diagnostic-tools/proxy-cmd/#deep-dive-into-envoy-configuration

Your istioctl proxy-config listeners <pod-name> output looks fine so far. Let's try to find the error.

Istio is sometimes very strict about its configuration requirements. To rule that out, you could first change your service to type: ClusterIP and add a targetPort for the mqtt port:

- port: 1883
  name: tcp-mqtt
  targetPort: 1883

Additionally, run: istioctl proxy-config endpoints <pod-name> and istioctl proxy-config routes <pod-name>.

1 Jakub Aug 24 2020 at 09:32

Based on your configuration, I would say you should use a ServiceEntry to enable communication between pods in the mesh and workloads outside the mesh.

ServiceEntry enables adding additional entries into Istio's internal service registry, so that auto-discovered services in the mesh can access/route to these manually specified services. A service entry describes the properties of a service (DNS name, VIPs, ports, protocols, endpoints). These services could be external to the mesh (e.g., web APIs) or mesh-internal services that are not part of the platform's service registry (e.g., a set of VMs talking to services in Kubernetes). In addition, the endpoints of a service entry can also be dynamically selected by using the workloadSelector field. These endpoints can be VM workloads declared using the WorkloadEntry object or Kubernetes pods. The ability to select both pods and VMs under a single service allows for migration of services from VMs to Kubernetes without having to change the existing DNS names associated with the services.

You use a service entry to add an entry to the service registry that Istio maintains internally. After you add the service entry, the Envoy proxies can send traffic to the service as if it were a service in your mesh. Configuring service entries allows you to manage traffic for services running outside of the mesh.
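
As a rough sketch of what such an entry could look like (the host name and port naming below are placeholders to adapt to your environment, not something taken from your manifests):

# Rough illustration only; mqtt.external.example is a placeholder host.
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: external-mqtt
  namespace: exposed-with-istio
spec:
  hosts:
  - mqtt.external.example   # placeholder DNS name for the workload outside the mesh
  location: MESH_EXTERNAL
  ports:
  - number: 1883
    name: tcp-mqtt
    protocol: TCP
  resolution: DNS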

For more information and more examples, see the Istio documentation on service entries here and here.


Let me know if you have any more questions.