
Kubernetes Ingress HTTP and HTTPS Traffic Splitting: A Survey of Approaches

Source: http://dockone.io/article/9913

[Editor's note] Still struggling with canary releases on Kubernetes, traffic control between old and new versions, user-based traffic splitting, and other content-based advanced routing? This article will get you up to speed.

Background

Because our business needs to handle HTTP and HTTPS traffic differently, we surveyed ways to implement HTTP/HTTPS traffic splitting with a Kubernetes Ingress.

Traffic Flow Diagram

The traffic flow is shown in the figure below:

(Figure: HTTP/HTTPS traffic splitting flow diagram)

Survey of Approaches

Kubernetes Service config

http-svc:

apiVersion: v1
kind: Service
metadata:
  labels:
    app: cyh-nginx
  name: http-svc
  namespace: cyh
spec:
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: cyh-nginx
  sessionAffinity: None
  type: ClusterIP

https-svc:

apiVersion: v1
kind: Service
metadata:
  labels:
    app: cyh-nginx
  name: https-svc
  namespace: cyh
spec:
  ports:
  - name: https
    port: 443
    protocol: TCP
    targetPort: 443
  selector:
    app: cyh-nginx
  sessionAffinity: None
  type: ClusterIP

Two Services are prepared so that the Ingress Controller can route to different Services and thereby select different backend ports (80/443).
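For orientation, here is a minimal sketch of the kind of Deployment both Services would select. The Deployment is not shown in the original article, so the name, image, and replica count below are assumptions; only the app: cyh-nginx label and the 80/443 container ports matter.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: cyh-nginx            # hypothetical name, not from the article
  namespace: cyh
spec:
  replicas: 2                # assumed
  selector:
    matchLabels:
      app: cyh-nginx
  template:
    metadata:
      labels:
        app: cyh-nginx       # matched by both http-svc and https-svc
    spec:
      containers:
      - name: nginx
        image: nginx:1.17    # assumed image
        ports:
        - containerPort: 80  # plain HTTP backend
        - containerPort: 443 # TLS backend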

HAProxy Ingress Controller

http-ingress config:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    ingress.kubernetes.io/config-backend: ""
    ingress.kubernetes.io/ssl-redirect: "false"
    kubernetes.io/ingress.class: haproxy
  labels:
    app: cyh-nginx
  name: http-ingress
  namespace: cyh
spec:
  rules:
  - host: cyh.test.com
    http:
      paths:
      - backend:
          serviceName: http-svc
          servicePort: 80
        path: /

https-ingress config:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    ingress.kubernetes.io/config-backend: ""
    ingress.kubernetes.io/ssl-passthrough: "false"
    ingress.kubernetes.io/ssl-redirect: "false"
    kubernetes.io/ingress.class: haproxy
  labels:
    app: cyh-nginx
  name: https-ingress
  namespace: cyh
spec:
  rules:
  - host: cyh.test.com
    http:
      paths:
      - backend:
          serviceName: https-svc
          servicePort: 443
        path: /
  tls:
  - hosts:
    - cyh.test.com
    secretName: cyh-nginx-crt
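The tls section references a Secret named cyh-nginx-crt in the cyh namespace, which the article does not show. A minimal sketch, assuming a standard kubernetes.io/tls Secret with the certificate material elided:

apiVersion: v1
kind: Secret
metadata:
  name: cyh-nginx-crt
  namespace: cyh
type: kubernetes.io/tls       # assumed; the article only gives the Secret name
data:
  tls.crt: <base64-encoded certificate>  # elided
  tls.key: <base64-encoded private key>  # elided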

After applying both Ingresses, the key part of the generated HAProxy Ingress configuration looks like this:

........
acl host-cyh.test.com var(txn.hdr_host) -i cyh.test.com cyh.test.com:80 cyh.test.com:443
........
use_backend cyh-http-svc if host-cyh.test.com

This is the result of creating http-ingress first and https-ingress second. As the configuration shows, splitting HTTP and HTTPS traffic would require acls that determine whether the current request is HTTP or HTTPS, for example:

........
acl host-cyh.test.com var(txn.hdr_host) -i cyh.test.com cyh.test.com:80 cyh.test.com:443
acl https-cyh.test.com var(txn.hdr_host) -i cyh.test.com cyh.test.com:80 cyh.test.com:443
........
use_backend https-ingress-443 if from-https https-cyh.test.com
use_backend cyh-http-svc if host-cyh.test.com

The ideal is appealing, but reality is harsh: HAProxy Ingress currently has no annotation along the lines of:

ingress.kubernetes.io/config-frontend: ""

Could we instead define custom frontend acl and use_backend rules through the HAProxy Ingress ConfigMap? Also no. Testing shows that when frontend acl and use_backend rules are configured in the ConfigMap, the HAProxy reload throws the following WARNING:

rule placed after a 'use_backend' rule will still be processed before. 

In other words, the frontend acl and use_backend rules configured in the ConfigMap conflict with the existing ones and do not take effect. Even though inspecting haproxy.conf shows the rules were written, the configuration HAProxy actually uses has already been loaded into memory; see haproxy-ingress-issue-frontend. The same conclusion follows from the HAProxy Ingress Controller source code:

kubernetes-ingress/controller/frontend-annotations.go:

func (c *HAProxyController) handleRequestCapture(ingress *Ingress) error {
    ........
    // Update rules
    mapFiles := c.cfg.MapFiles
    for _, sample := range strings.Split(annReqCapture.Value, "\n") {
        key := hashStrToUint(fmt.Sprintf("%s-%s-%d", REQUEST_CAPTURE, sample, captureLen))
        if status != EMPTY {
            mapFiles.Modified(key)
            c.cfg.HTTPRequestsStatus = MODIFIED
            c.cfg.TCPRequestsStatus = MODIFIED
            if status == DELETED {
                break
            }
        }
        if sample == "" {
            continue
        }
        for hostname := range ingress.Rules {
            mapFiles.AppendHost(key, hostname)
        }

        mapFile := path.Join(HAProxyMapDir, strconv.FormatUint(key, 10)) + ".lst"
        httpRule := models.HTTPRequestRule{
            ID:            utils.PtrInt64(0),
            Type:          "capture",
            CaptureSample: sample,
            Cond:          "if",
            CaptureLen:    captureLen,
            CondTest:      fmt.Sprintf("{ req.hdr(Host) -f %s }", mapFile),
        }
        tcpRule := models.TCPRequestRule{
            ID:       utils.PtrInt64(0),
            Type:     "content",
            Action:   "capture " + sample + " len " + strconv.FormatInt(captureLen, 10),
            Cond:     "if",
            CondTest: fmt.Sprintf("{ req_ssl_sni -f %s }", mapFile),
        }
        ........
    }

This makes it clear that the frontend configuration is entirely controlled by the code, with no entry point for customization. HAProxy Ingress therefore cannot implement HTTP/HTTPS splitting.

Nginx Ingress Controller

Besides HAProxy, Nginx is another widely used load balancer. A survey of the Nginx Ingress Controller's Ingress options found only two possible entry points:

nginx.org/location-snippets

nginx.org/server-snippets

Combining these two allows customizing the Location and Server blocks, but a Location can only be matched by path, and since both backends use the "/" path this does not fit; the approach was not verified here, and it is also overly complex. Instead, this article introduces a much simpler option: NGINX VirtualServer and VirtualServerRoute. VirtualServer and VirtualServerRoute are supported starting with Nginx Ingress Controller release 1.5, and the concepts will be familiar to F5 users. They were introduced to replace the Nginx Ingress resource and add capabilities that Ingress lacks, namely traffic splitting and content-based advanced routing; they are implemented as custom resources alongside Ingress. To install the Nginx Ingress Controller, see Installation with Manifests; the version used here is 1.6.3. We now create VirtualServer and VirtualServerRoute resources to implement HTTP/HTTPS splitting.

cyh-nginx-virtualserverroute.yaml:

apiVersion: k8s.nginx.org/v1
kind: VirtualServer
metadata:
  name: cyh-nginx-virtualserver
  namespace: cyh
spec:
  host: cyh.test.com
  tls:
    secret: cyh-nginx-crt
  upstreams:
  - name: http-svc
    service: http-svc
    port: 80
  - name: https-svc        # nginx upstream name (not the actual upstream name generated in the nginx config)
    service: https-svc     # kubernetes service name
    tls:                   # must be enabled for an HTTPS backend, otherwise the backend returns 400
      enable: true         # (plain HTTP sent to an HTTPS port); only then does proxy_pass use https
    port: 443
  routes:
  - path: /                # nginx location config
    matches:
    - conditions:          # condition match for content-based advanced routing; use splits for weighted traffic splitting
      - variable: $scheme  # a variable, header, cookie, or argument; $scheme here is http or https
        value: "https"     # when $scheme is https, route to the https-svc backend; all other traffic goes to http-svc
      action:
        pass: https-svc
    action:
      pass: http-svc

After applying it, let's look at the Nginx configuration it generates:

/etc/nginx/conf.d/vs_xxx.conf:

upstream vs_cyh_nginx-virtualserver_http-svc {
    zone vs_cyh_nginx-virtualserver_http-svc 256k;
    random two least_conn;
    server 172.49.40.251:80 max_fails=1 fail_timeout=10s max_conns=0;
}

upstream vs_cyh_nginx-virtualserver_https-svc {
    zone vs_cyh_nginx-virtualserver_https-svc 256k;
    random two least_conn;
    server 172.x.40.y:443 max_fails=1 fail_timeout=10s max_conns=0;
}

map $scheme $vs_cyh_nginx_virtualserver_matches_0_match_0_cond_0 {
    "https" 1;
    default 0;
} # https route match

map $vs_cyh_nginx_virtualserver_matches_0_match_0_cond_0 $vs_cyh_nginx_virtualserver_matches_0 {
    ~^1 @matches_0_match_0;
    default @matches_0_default;
} # default route match

server {
    listen 80;
    server_name cyh.test.com;
    listen 443 ssl;

    ssl_certificate /etc/nginx/secrets/cyh-cyh-nginx-crt;
    ssl_certificate_key /etc/nginx/secrets/cyh-cyh-nginx-crt;

    server_tokens "on";

    location / {
        error_page 418 = $vs_cyh_nginx_virtualserver_matches_0;
        return 418;
    }

    location @matches_0_match_0 {
        proxy_connect_timeout 60s;
        proxy_read_timeout 60s;
        proxy_send_timeout 60s;
        client_max_body_size 1m;
        proxy_buffering on;
        proxy_http_version 1.1;
        set $default_connection_header close;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $vs_connection_header;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Port $server_port;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_pass https://vs_cyh_nginx-virtualserver_https-svc;
        proxy_next_upstream error timeout;
        proxy_next_upstream_timeout 0s;
        proxy_next_upstream_tries 0;
    }

    location @matches_0_default {
        proxy_connect_timeout 60s;
        proxy_read_timeout 60s;
        proxy_send_timeout 60s;
        client_max_body_size 1m;
        proxy_buffering on;
        proxy_http_version 1.1;
        set $default_connection_header close;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $vs_connection_header;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Port $server_port;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_pass http://vs_cyh_nginx-virtualserver_http-svc;
        proxy_next_upstream error timeout;
        proxy_next_upstream_timeout 0s;
        proxy_next_upstream_tries 0;
    }
}

In the generated configuration, the first map turns $scheme into a 0/1 flag, the second map uses that flag to select a named location, and the error_page 418 trick in location / performs an internal redirect to it, so HTTPS requests are proxied to the https upstream and everything else to the http upstream. With the Nginx Ingress Controller's VirtualServer and VirtualServerRoute, content-based advanced routing and traffic splitting become this simple. Beyond HTTP/HTTPS splitting, the same mechanism covers other content-based routing or traffic-splitting scenarios, for example testing a new feature in a pre-release environment alongside production and splitting traffic by user: Zhang San's traffic goes to the pre-release environment with the new feature while everyone else stays on production (see the sketch below). For more on VirtualServer and VirtualServerRoute, see: VirtualServer and VirtualServerRoute Resources.
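As a rough sketch of that user-based scenario, the VirtualServer below routes requests whose user cookie equals zhangsan to a pre-release backend and everyone else to production. The resource name, the prod-svc and canary-svc upstreams/Services, and the cookie name are hypothetical and not from the article; the field layout follows the same k8s.nginx.org/v1 VirtualServer resource used above.

apiVersion: k8s.nginx.org/v1
kind: VirtualServer
metadata:
  name: feature-split          # hypothetical
  namespace: cyh
spec:
  host: cyh.test.com
  upstreams:
  - name: prod-svc             # hypothetical production Service
    service: prod-svc
    port: 80
  - name: canary-svc           # hypothetical pre-release Service with the new feature
    service: canary-svc
    port: 80
  routes:
  - path: /
    matches:
    - conditions:
      - cookie: user           # content-based routing on a cookie; headers and arguments work the same way
        value: "zhangsan"
      action:
        pass: canary-svc       # Zhang San's traffic goes to the pre-release backend
    action:
      pass: prod-svc           # everyone else stays on production
    # Alternative: replace matches with a weighted split, e.g.
    # splits:
    # - weight: 90
    #   action:
    #     pass: prod-svc
    # - weight: 10
    #   action:
    #     pass: canary-svc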

Conclusion

Of course, besides the Nginx Ingress Controller VirtualServer and VirtualServerRoute approach, traditional HAProxy TCP forwarding or LVS TCP forwarding can also be used, simply by binding their VIPs to the Kubernetes Endpoints; those two approaches are not covered in this article.


