
KubeVPN - a cloud-native development tool / network VPN that connects your local machine to the k8s cluster network

naison · 2024-01-18 17:31:07 +08:00 · 2323 clicks

KubeVPN

KubeVPN is a cloud-native development tool. By connecting to the cloud-side Kubernetes cluster network, you can access services in the remote cluster from your local machine using k8s DNS names or Pod IP / Service IP directly. It can also intercept the inbound traffic of a workload in the remote cluster and redirect it to your local machine, which, combined with a service mesh, makes debugging and development easy. There is also a dev mode that uses local Docker to emulate the k8s pod runtime and run the container locally (with the same environment variables, volumes, and network).

Quick start

Download the pre-built binary from GitHub releases

Link

Install from the custom Krew index

    (
      kubectl krew index add kubevpn https://github.com/kubenetworks/kubevpn.git && \
      kubectl krew install kubevpn/kubevpn && kubectl kubevpn
    )

Build the binary yourself

    (
      git clone https://github.com/kubenetworks/kubevpn.git && \
      cd kubevpn && make kubevpn && ./bin/kubevpn
    )

Install bookinfo as a demo application

    kubectl apply -f https://raw.githubusercontent.com/kubenetworks/kubevpn/master/samples/bookinfo.yaml 

Features

Connect to the cluster network

    ~ kubevpn connect
    Password:
    start to connect
    get cidr from cluster info...
    get cidr from cluster info ok
    get cidr from cni...
    wait pod cni-net-dir-kubevpn to be running timeout, reason , ignore
    get cidr from svc...
    get cidr from svc ok
    get cidr successfully
    traffic manager not exist, try to create it...
    label namespace default
    create serviceAccount kubevpn-traffic-manager
    create roles kubevpn-traffic-manager
    create roleBinding kubevpn-traffic-manager
    create service kubevpn-traffic-manager
    create deployment kubevpn-traffic-manager
    pod kubevpn-traffic-manager-66d969fd45-9zlbp is Pending
    Container       Reason              Message
    control-plane   ContainerCreating
    vpn             ContainerCreating
    webhook         ContainerCreating
    pod kubevpn-traffic-manager-66d969fd45-9zlbp is Running
    Container       Reason              Message
    control-plane   ContainerRunning
    vpn             ContainerRunning
    webhook         ContainerRunning
    Creating mutatingWebhook_configuration for kubevpn-traffic-manager
    update ref count successfully
    port forward ready
    tunnel connected
    dns service ok
    +---------------------------------------------------------------------------+
    | Now you can access resources in the kubernetes cluster, enjoy it :)       |
    +---------------------------------------------------------------------------+
    ~
    ~ kubectl get pods -o wide
    NAME                                       READY   STATUS    RESTARTS   AGE     IP             NODE              NOMINATED NODE   READINESS GATES
    authors-dbb57d856-mbgqk                    3/3     Running   0          7d23h   172.29.2.132   192.168.0.5       <none>           <none>
    details-7d8b5f6bcf-hcl4t                   1/1     Running   0          61d     172.29.0.77    192.168.104.255   <none>           <none>
    kubevpn-traffic-manager-66d969fd45-9zlbp   3/3     Running   0          74s     172.29.2.136   192.168.0.5       <none>           <none>
    productpage-788df7ff7f-jpkcs               1/1     Running   0          61d     172.29.2.134   192.168.0.5       <none>           <none>
    ratings-77b6cd4499-zvl6c                   1/1     Running   0          61d     172.29.0.86    192.168.104.255   <none>           <none>
    reviews-85c88894d9-vgkxd                   1/1     Running   0          24d     172.29.2.249   192.168.0.5       <none>           <none>
    ~ ping 172.29.2.134
    PING 172.29.2.134 (172.29.2.134): 56 data bytes
    64 bytes from 172.29.2.134: icmp_seq=0 ttl=63 time=55.727 ms
    64 bytes from 172.29.2.134: icmp_seq=1 ttl=63 time=56.270 ms
    64 bytes from 172.29.2.134: icmp_seq=2 ttl=63 time=55.228 ms
    64 bytes from 172.29.2.134: icmp_seq=3 ttl=63 time=54.293 ms
    ^C
    --- 172.29.2.134 ping statistics ---
    4 packets transmitted, 4 packets received, 0.0% packet loss
    round-trip min/avg/max/stddev = 54.293/55.380/56.270/0.728 ms
    ~ kubectl get services -o wide
    NAME                      TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)                              AGE     SELECTOR
    authors                   ClusterIP   172.21.5.160   <none>        9080/TCP                             114d    app=authors
    details                   ClusterIP   172.21.6.183   <none>        9080/TCP                             114d    app=details
    kubernetes                ClusterIP   172.21.0.1     <none>        443/TCP                              319d    <none>
    kubevpn-traffic-manager   ClusterIP   172.21.2.86    <none>        8422/UDP,10800/TCP,9002/TCP,80/TCP   2m28s   app=kubevpn-traffic-manager
    productpage               ClusterIP   172.21.10.49   <none>        9080/TCP                             114d    app=productpage
    ratings                   ClusterIP   172.21.3.247   <none>        9080/TCP                             114d    app=ratings
    reviews                   ClusterIP   172.21.8.24    <none>        9080/TCP                             114d    app=reviews
    ~ curl 172.21.10.49:9080
    <!DOCTYPE html>
    <html>
      <head>
        <title>Simple Bookstore App</title>
        <meta charset="utf-8">
        <meta http-equiv="X-UA-Compatible" content="IE=edge">
        <meta name="viewport" content="width=device-width, initial-scale=1">
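This access is not limited to curl: once kubevpn is connected, any local process can dial in-cluster addresses directly. A minimal Go sketch, assuming the productpage Service ClusterIP 172.21.10.49 from the listing above (substitute your own):

    package main

    import (
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        // Plain HTTP GET against the in-cluster Service IP; this works only
        // while the kubevpn tunnel is connected.
        client := &http.Client{Timeout: 5 * time.Second}
        resp, err := client.Get("http://172.21.10.49:9080") // assumption: your ClusterIP differs
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Println(resp.Status, len(body), "bytes")
    }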

Domain resolution

    ~ curl productpage.default.svc.cluster.local:9080
    <!DOCTYPE html>
    <html>
      <head>
        <title>Simple Bookstore App</title>
        <meta charset="utf-8">
        <meta http-equiv="X-UA-Compatible" content="IE=edge">
        <meta name="viewport" content="width=device-width, initial-scale=1">

Short domain resolution

    ~ curl productpage:9080
    <!DOCTYPE html>
    <html>
      <head>
        <title>Simple Bookstore App</title>
        <meta charset="utf-8">
        <meta http-equiv="X-UA-Compatible" content="IE=edge">
    ...
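Because kubevpn wires local DNS to the cluster, ordinary resolver calls work too; a small sketch using the bookinfo service names shown above:

    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        // Both the fully qualified and the short service name should resolve
        // to the ClusterIP while kubevpn is connected.
        for _, host := range []string{
            "productpage.default.svc.cluster.local",
            "productpage",
        } {
            addrs, err := net.LookupHost(host)
            if err != nil {
                fmt.Println(host, "->", err)
                continue
            }
            fmt.Println(host, "->", addrs) // e.g. [172.21.10.49]
        }
    }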

Connect to multiple cluster networks

    ~ kubevpn status
    ID   Mode   Cluster                 Kubeconfig                   Namespace   Status
    0    full   ccijorbccotmqodvr189g   /Users/naison/.kube/config   default     Connected
    ~ kubevpn connect -n default --kubeconfig ~/.kube/dev_config --lite
    start to connect
    got cidr from cache
    get cidr successfully
    update ref count successfully
    traffic manager already exist, reuse it
    port forward ready
    tunnel connected
    adding route...
    dns service ok
    +---------------------------------------------------------------------------+
    | Now you can access resources in the kubernetes cluster, enjoy it :)       |
    +---------------------------------------------------------------------------+
    ~ kubevpn status
    ID   Mode   Cluster                 Kubeconfig                       Namespace   Status
    0    full   ccijorbccotmqodvr189g   /Users/naison/.kube/config       default     Connected
    1    lite   ccidd77aam2dtnc3qnddg   /Users/naison/.kube/dev_config   default     Connected
    ~

Reverse proxy

    ~ kubevpn proxy deployment/productpage
    already connect to cluster
    start to create remote inbound pod for deployment/productpage
    workload default/deployment/productpage is controlled by a controller
    rollout status for deployment/productpage
    Waiting for deployment "productpage" rollout to finish: 1 old replicas are pending termination...
    Waiting for deployment "productpage" rollout to finish: 1 old replicas are pending termination...
    deployment "productpage" successfully rolled out
    rollout status for deployment/productpage successfully
    create remote inbound pod for deployment/productpage successfully
    +---------------------------------------------------------------------------+
    | Now you can access resources in the kubernetes cluster, enjoy it :)       |
    +---------------------------------------------------------------------------+
    ~
    package main

    import (
        "io"
        "net/http"
    )

    func main() {
        // Local stand-in for productpage: all proxied traffic lands here on :9080.
        http.HandleFunc("/", func(writer http.ResponseWriter, request *http.Request) {
            _, _ = io.WriteString(writer, "Hello world!")
        })
        _ = http.ListenAndServe(":9080", nil)
    }
    ~ curl productpage:9080
    Hello world!%
    ~ curl productpage.default.svc.cluster.local:9080
    Hello world!%

Reverse proxy with service mesh support

HTTP, GRPC, WebSocket and more are supported; traffic that carries the specified header "a: 1" will be routed to the local machine.

    ~ kubevpn proxy deployment/productpage --headers a=1
    already connect to cluster
    start to create remote inbound pod for deployment/productpage
    patch workload default/deployment/productpage with sidecar
    rollout status for deployment/productpage
    Waiting for deployment "productpage" rollout to finish: 1 old replicas are pending termination...
    Waiting for deployment "productpage" rollout to finish: 1 old replicas are pending termination...
    deployment "productpage" successfully rolled out
    rollout status for deployment/productpage successfully
    create remote inbound pod for deployment/productpage successfully
    +---------------------------------------------------------------------------+
    | Now you can access resources in the kubernetes cluster, enjoy it :)       |
    +---------------------------------------------------------------------------+
    ~
    ~ curl productpage:9080
    <!DOCTYPE html>
    <html>
      <head>
        <title>Simple Bookstore App</title>
        <meta charset="utf-8">
        <meta http-equiv="X-UA-Compatible" content="IE=edge">
        <meta name="viewport" content="width=device-width, initial-scale=1">
    ...
    ~ curl productpage:9080 -H "a: 1"
    Hello world!%
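The same header-based routing applies to traffic generated from code, not just curl. A minimal Go client sketch, assuming the header "a: 1" configured above:

    package main

    import (
        "fmt"
        "io"
        "net/http"
    )

    func main() {
        req, err := http.NewRequest(http.MethodGet, "http://productpage:9080", nil)
        if err != nil {
            panic(err)
        }
        // With this header the request is routed to the local replica started
        // via `kubevpn proxy`; without it, the pod in the cluster answers.
        req.Header.Set("a", "1")

        resp, err := http.DefaultClient.Do(req)
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Println(string(body)) // expect "Hello world!" from the local server
    }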

If you need to stop proxying the traffic, run the following command:

    ~ kubevpn leave deployments/productpage
    leave workload deployments/productpage
    workload default/deployments/productpage is controlled by a controller
    leave workload deployments/productpage successfully

Enter dev mode locally

Run the Kubernetes pod in a local Docker container and, combined with a service mesh, intercept traffic carrying the specified header (or all traffic) to the local machine. This dev mode depends on Docker being installed locally.

    ~ kubevpn dev deployment/authors --headers a=1 -it --rm --entrypoint sh
    connectting to cluster
    start to connect
    got cidr from cache
    get cidr successfully
    update ref count successfully
    traffic manager already exist, reuse it
    port forward ready
    tunnel connected
    dns service ok
    start to create remote inbound pod for Deployment.apps/authors
    patch workload default/Deployment.apps/authors with sidecar
    rollout status for Deployment.apps/authors
    Waiting for deployment "authors" rollout to finish: 1 old replicas are pending termination...
    Waiting for deployment "authors" rollout to finish: 1 old replicas are pending termination...
    deployment "authors" successfully rolled out
    rollout status for Deployment.apps/authors successfully
    create remote inbound pod for Deployment.apps/authors successfully
    /var/folders/30/cmv9c_5j3mq_kthx63sb1t5c0000gn/T/4044542168121221027:/var/run/secrets/kubernetes.io/serviceaccount
    create docker network 56c25058d4b7498d02c2c2386ccd1b2b127cb02e8a1918d6d24bffd18570200e
    Created container: nginx_default_kubevpn_a9a22
    Wait container nginx_default_kubevpn_a9a22 to be running...
    Container nginx_default_kubevpn_a9a22 is running on port 80/tcp:80 8888/tcp:8888 9080/tcp:9080 now
    WARNING: The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
    Created main container: authors_default_kubevpn_a9a22
    /opt/microservices # ls
    app
    /opt/microservices # ps -ef
    PID   USER     TIME  COMMAND
        1 root      0:00 nginx: master process nginx -g daemon off;
       29 101       0:00 nginx: worker process
       30 101       0:00 nginx: worker process
       31 101       0:00 nginx: worker process
       32 101       0:00 nginx: worker process
       33 101       0:00 nginx: worker process
       34 root      0:00 {sh} /usr/bin/qemu-x86_64 /bin/sh sh
       44 root      0:00 ps -ef
    /opt/microservices # apk add curl
    fetch https://dl-cdn.alpinelinux.org/alpine/v3.14/main/x86_64/APKINDEX.tar.gz
    fetch https://dl-cdn.alpinelinux.org/alpine/v3.14/community/x86_64/APKINDEX.tar.gz
    (1/4) Installing brotli-libs (1.0.9-r5)
    (2/4) Installing nghttp2-libs (1.43.0-r0)
    (3/4) Installing libcurl (8.0.1-r0)
    (4/4) Installing curl (8.0.1-r0)
    Executing busybox-1.33.1-r3.trigger
    OK: 8 MiB in 19 packages
    /opt/microservices # ./app &
    /opt/microservices # 2023/09/30 13:41:58 Start listening http port 9080 ...
    /opt/microservices # curl localhost:9080/health
    {"status":"Authors is healthy"}/opt/microservices # exit
    prepare to exit, cleaning up
    update ref count successfully
    tun device closed
    leave resource: deployments.apps/authors
    workload default/deployments.apps/authors is controlled by a controller
    leave resource: deployments.apps/authors successfully
    clean up successfully
    prepare to exit, cleaning up
    update ref count successfully
    clean up successfully
    ~
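A quick way to see the "same environment variables, volumes and network" claim in action is to probe the standard pod environment from inside the locally started container. A hypothetical check, not part of kubevpn itself:

    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        // KUBERNETES_SERVICE_HOST is injected into every pod; kubevpn dev
        // reproduces the pod environment, so it should be set locally too.
        fmt.Println("KUBERNETES_SERVICE_HOST =", os.Getenv("KUBERNETES_SERVICE_HOST"))

        // The service account volume is also mounted, as the transcript's
        // /var/run/secrets/kubernetes.io/serviceaccount lines show.
        token, err := os.ReadFile("/var/run/secrets/kubernetes.io/serviceaccount/token")
        if err != nil {
            fmt.Println("service account token not mounted:", err)
            return
        }
        fmt.Println("service account token mounted,", len(token), "bytes")
    }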

If you just want to run the image locally, there is a simpler way:

    kubevpn dev deployment/authors --no-proxy -it --rm

For example:

    ~ kubevpn dev deployment/authors --no-proxy -it --rm
    connectting to cluster
    start to connect
    got cidr from cache
    get cidr successfully
    update ref count successfully
    traffic manager already exist, reuse it
    port forward ready
    tunnel connected
    dns service ok
    tar: removing leading '/' from member names
    /var/folders/30/cmv9c_5j3mq_kthx63sb1t5c0000gn/T/5631078868924498209:/var/run/secrets/kubernetes.io/serviceaccount
    tar: Removing leading `/' from member names
    tar: Removing leading `/' from hard link targets
    /var/folders/30/cmv9c_5j3mq_kthx63sb1t5c0000gn/T/1548572512863475037:/var/run/secrets/kubernetes.io/serviceaccount
    create docker network 56c25058d4b7498d02c2c2386ccd1b2b127cb02e8a1918d6d24bffd18570200e
    Created container: nginx_default_kubevpn_ff34b
    Wait container nginx_default_kubevpn_ff34b to be running...
    Container nginx_default_kubevpn_ff34b is running on port 80/tcp:80 8888/tcp:8888 9080/tcp:9080 now
    WARNING: The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
    Created main container: authors_default_kubevpn_ff34b
    2023/09/30 14:02:31 Start listening http port 9080 ...

At this point the program stays in the foreground and shows the container logs by default.

DinD (Docker in Docker): run kubevpn inside Docker

If you want to start dev mode locally using Docker in Docker (DinD), there are two things to note: because the program reads and writes the /tmp directory, you need to add the option -v /tmp:/tmp manually; and in DinD mode, to share the container's network and pid namespaces, you also need to specify the --network option.

For example:

    ~ docker run -it --privileged --sysctl net.ipv6.conf.all.disable_ipv6=0 -v /var/run/docker.sock:/var/run/docker.sock -v /tmp:/tmp -v ~/.kube/vke:/root/.kube/config --platform linux/amd64 naison/kubevpn:v2.0.0
    root@d0b3dab8912a:/app# kubevpn dev deployment/authors --headers user=naison -it --entrypoint sh
    hostname is d0b3dab8912a
    connectting to cluster
    start to connect
    got cidr from cache
    get cidr successfully
    update ref count successfully
    traffic manager already exist, reuse it
    port forward ready
    tunnel connected
    dns service ok
    start to create remote inbound pod for Deployment.apps/authors
    patch workload default/Deployment.apps/authors with sidecar
    rollout status for Deployment.apps/authors
    Waiting for deployment "authors" rollout to finish: 1 old replicas are pending termination...
    Waiting for deployment "authors" rollout to finish: 1 old replicas are pending termination...
    deployment "authors" successfully rolled out
    rollout status for Deployment.apps/authors successfully
    create remote inbound pod for Deployment.apps/authors successfully
    tar: removing leading '/' from member names
    /tmp/6460902982794789917:/var/run/secrets/kubernetes.io/serviceaccount
    tar: Removing leading `/' from member names
    tar: Removing leading `/' from hard link targets
    /tmp/5028895788722532426:/var/run/secrets/kubernetes.io/serviceaccount
    network mode is container:d0b3dab8912a
    Created container: nginx_default_kubevpn_6df63
    Wait container nginx_default_kubevpn_6df63 to be running...
    Container nginx_default_kubevpn_6df63 is running now
    WARNING: The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
    Created main container: authors_default_kubevpn_6df5f
    /opt/microservices # ps -ef
    PID   USER     TIME  COMMAND
        1 root      0:00 {bash} /usr/bin/qemu-x86_64 /bin/bash /bin/bash
       14 root      0:02 {kubevpn} /usr/bin/qemu-x86_64 /usr/local/bin/kubevpn kubevpn dev deployment/authors --headers
       25 root      0:01 {kubevpn} /usr/bin/qemu-x86_64 /usr/local/bin/kubevpn /usr/local/bin/kubevpn daemon
       37 root      0:04 {kubevpn} /usr/bin/qemu-x86_64 /usr/local/bin/kubevpn /usr/local/bin/kubevpn daemon --sudo
       53 root      0:00 nginx: master process nginx -g daemon off;
    /opt/microservices # apk add curl
    (4/4) Installing curl (8.0.1-r0)
    Executing busybox-1.33.1-r3.trigger
    OK: 8 MiB in 19 packages
    /opt/microservices # curl localhost:80
    <!DOCTYPE html>
    <html>
    <head>
    <title>Welcome to nginx!</title>
    <style>
    html { color-scheme: light dark; }
    body { width: 35em; margin: 0 auto; font-family: Tahoma, Verdana, Arial, sans-serif; }
    </style>
    </head>
    <body>
    <h1>Welcome to nginx!</h1>
    <p>If you see this page, the nginx web server is successfully installed and working. Further configuration is required.</p>
    <p>For online documentation and support please refer to <a href="http://nginx.org/">nginx.org</a>.<br/> Commercial support is available at <a href="http://nginx.com/">nginx.com</a>.</p>
    <p><em>Thank you for using nginx.</em></p>
    </body>
    </html>
    /opt/microservices # ls
    app
    /opt/microservices # ls -alh
    total 6M
    drwxr-xr-x    2 root     root        4.0K Oct 18  2021 .
    drwxr-xr-x    1 root     root        4.0K Oct 18  2021 ..
    -rwxr-xr-x    1 root     root        6.3M Oct 18  2021 app
    /opt/microservices # ./app &
    /opt/microservices # 2023/09/30 14:27:32 Start listening http port 9080 ...
    /opt/microservices # curl authors:9080/health
    {"status":"Authors is healthy"}/opt/microservices # curl localhost:9080/health
    {"status":"Authors is healthy"}/opt/microservices # exit
    prepare to exit, cleaning up
    update ref count successfully
    tun device closed
    leave resource: deployments.apps/authors
    workload default/deployments.apps/authors is controlled by a controller
    leave resource: deployments.apps/authors successfully
    clean up successfully
    prepare to exit, cleaning up
    update ref count successfully
    clean up successfully
    root@d0b3dab8912a:/app# exit
    exit
    ~

Multiple protocols supported (see the sketch after this list)

    • TCP
    • UDP
    • ICMP
    • GRPC
    • WebSocket
    • HTTP
    • ...
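Because the tunnel operates below HTTP, a raw TCP dial to an in-cluster address is enough to verify connectivity for non-HTTP protocols; a minimal sketch, assuming the bookinfo productpage Service:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // A raw TCP connection (no HTTP involved) to an in-cluster Service,
        // possible from the local machine only while kubevpn is connected.
        conn, err := net.DialTimeout("tcp", "productpage.default.svc.cluster.local:9080", 3*time.Second)
        if err != nil {
            fmt.Println("not reachable:", err)
            return
        }
        defer conn.Close()
        fmt.Println("TCP reachable through the tunnel:", conn.RemoteAddr())
    }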

Three major platforms supported

    • macOS
    • Linux
    • Windows
1 reply    2024-01-18 17:47:09 +08:00
xiaooloong    2024-01-18 17:47:09 +08:00
Let me share mine too:
    https://www.jianshu.com/p/b8a4c5ccc92d
Setting up a cluster with direct network access using the calico CNI