Overview: this post walks through deploying highly available Kubernetes masters. It is based on articles written by more experienced people, and I have personally run through the steps several times.
1. Environment information
OS: CentOS 7.3 (minimal install)
Kernel: 3.10.0-514.el7.x86_64
Kubernetes: v1.13.3
Docker-ce: 18.06
Keepalived provides a highly available VIP for the apiserver
HAProxy load-balances the apiserver
VIP:    192.168.1.65
Node 1: 192.168.1.60
Node 2: 192.168.1.61
Node 3: 192.168.1.62
2. Preparing the environment
2.1 Disable SELinux and the firewall
sed -ri 's#(SELINUX=).*#\1disabled#' /etc/selinux/config
setenforce 0
systemctl disable firewalld
systemctl stop firewalld
2.2 Disable swap
swapoff -a
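Note that swapoff -a only disables swap until the next reboot. Since kubelet refuses to run with swap enabled by default, you may also want to comment out the swap entry in /etc/fstab (an extra step not in the original write-up), for example:

sed -ri 's/.*swap.*/#&/' /etc/fstab   # comment out any swap entries so swap stays off after a reboot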
2.3 Add host resolution entries on every server
cat >>/etc/hosts<<EOF
192.168.1.60 host60
192.168.1.61 host61
192.168.1.62 host62
EOF
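The worker node host63, which joins the cluster in section 5, is not listed here. If you also want to resolve it by name, append it on every server (its address, 192.168.1.63, is taken from the output later in this post):

echo "192.168.1.63 host63" >> /etc/hosts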
2.4 Configure kernel parameters
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_nonlocal_bind = 1
net.ipv4.ip_forward = 1
vm.swappiness=0
EOF
sysctl --system
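One thing worth knowing that the original does not mention: the two net.bridge.bridge-nf-call-* keys only exist once the br_netfilter kernel module is loaded, so on a fresh minimal install sysctl --system may report them as unknown keys. If that happens, load the module first and make it persistent:

modprobe br_netfilter
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf   # load the module automatically at boot
sysctl --system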
2.5 Load the IPVS modules
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4
2.6 Add yum repositories
cat << EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
wget http://mirrors.aliyun.com/repo/Centos-7.repo -O /etc/yum.repos.d/CentOS-Base.repo
wget http://mirrors.aliyun.com/repo/epel-7.repo -O /etc/yum.repos.d/epel.repo
wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
3. Deploy keepalived and haproxy
3.1 Install keepalived and haproxy
yum install -y keepalived haproxy
3.2 Configure keepalived
The priority values on the three servers are 100, 90, and 80 respectively.
cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
   notification_email {
     *****@163.com
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id LVS_1
}

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    lvs_sync_daemon_interface eth0
    virtual_router_id 88
    advert_int 1
    priority 100
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.1.65/24
    }
}
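The file above is the one for the node with priority 100. The original does not show the configs for the other two servers; based on the priorities listed above, one way to derive them from the same file (whether you also switch state to BACKUP is a design choice, since keepalived elects by priority either way) is:

sed -i 's/state MASTER/state BACKUP/; s/priority 100/priority 90/' /etc/keepalived/keepalived.conf   # use priority 80 on the third node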
3.3 Configure haproxy
cat /etc/haproxy/haproxy.cfg
global
    chroot /var/lib/haproxy
    daemon
    group haproxy
    user haproxy
    log 127.0.0.1:514 local0 warning
    pidfile /var/lib/haproxy.pid
    maxconn 20000
    spread-checks 3
    nbproc 8

defaults
    log global
    mode tcp
    retries 3
    option redispatch

listen https-apiserver
    bind 192.168.1.65:8443
    mode tcp
    balance roundrobin
    timeout server 15s
    timeout connect 15s
    server apiserver01 192.168.1.60:6443 check port 6443 inter 5000 fall 5
    server apiserver02 192.168.1.61:6443 check port 6443 inter 5000 fall 5
    server apiserver03 192.168.1.62:6443 check port 6443 inter 5000 fall 5
3.4 Start the services
systemctl enable keepalived && systemctl start keepalived
systemctl enable haproxy && systemctl start haproxy
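A quick sanity check (not part of the original steps) before continuing. Note that haproxy can bind to 192.168.1.65:8443 even on the nodes that do not currently hold the VIP because net.ipv4.ip_nonlocal_bind=1 was set in section 2.4; the backends will show as down until the apiservers exist, which is expected at this point.

ip addr show eth0 | grep 192.168.1.65   # the VIP should appear on exactly one node (the keepalived master)
ss -lnt | grep 8443                     # haproxy should be listening on every node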
4. Deploy Kubernetes
4.1 Install the required packages
yum install -y kubelet-1.13.3 kubeadm-1.13.3 kubectl-1.13.3 ipvsadm ipset docker-ce-18.06.1.ce

# start docker
systemctl enable docker && systemctl start docker

# enable kubelet at boot
systemctl enable kubelet
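A note on behaviour here: kubelet is only enabled, not started, and even if you start it now it will keep failing until kubeadm init writes /var/lib/kubelet/config.yaml (see the [kubelet-start] step below), so an error state at this stage is expected. You can check it with:

systemctl status kubelet --no-pager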
4.2 Create the kubeadm init configuration file
[root@host60 ~]# cat kubeadm-init.yaml
apiVersion: kubeadm.k8s.io/v1beta1
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.1.60
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: host60
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
apiServer:
  timeoutForControlPlane: 4m0s
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: "192.168.1.65:8443"
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
kubernetesVersion: v1.13.3
networking:
  dnsDomain: cluster.local
  podSubnet: ""
  serviceSubnet: "10.245.0.0/16"
scheduler: {}
controllerManager: {}
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
4.3 Pre-pull the images
[root@host60 ~]# kubeadm config images pull --config kubeadm-init.yaml
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.13.3
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.13.3
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.13.3
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.13.3
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.2.24
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.2.6
4.4 Initialize the cluster
[root@host60 ~]# kubeadm init --config kubeadm-init.yaml
[init] Using Kubernetes version: v1.13.3
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [host60 localhost] and IPs [192.168.1.60 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [host60 localhost] and IPs [192.168.1.60 127.0.0.1 ::1]
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [host60 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.245.0.1 192.168.1.60 192.168.1.65]
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "admin.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[apiclient] All control plane components are healthy after 41.510432 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.13" in namespace kube-system with the configuration for the kubelets in the cluster
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "host60" as an annotation
[mark-control-plane] Marking the node host60 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node host60 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node as root:

  kubeadm join 192.168.1.65:8443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:e02b46c1f697709552018f706f96a03922b159ecc2c3d82140365e4a8d0a83d4
kubeadm init performs the following major steps:
- [init]: initialize with the specified version
- [preflight]: run pre-init checks and pull the required Docker images
- [kubelet-start]: generate the kubelet configuration file "/var/lib/kubelet/config.yaml"; kubelet cannot start without this file, which is why kubelet fails if started before initialization
- [certificates]: generate the certificates Kubernetes uses, stored in /etc/kubernetes/pki
- [kubeconfig]: generate the kubeconfig files, stored in /etc/kubernetes; the components use them to communicate with each other
- [control-plane]: install the master components from the YAML files in the /etc/kubernetes/manifests directory
- [etcd]: install the etcd service from /etc/kubernetes/manifests/etcd.yaml
- [wait-control-plane]: wait for the master components deployed by control-plane to start
- [apiclient]: check the health of the master components
- [uploadconfig]: upload the configuration that was used into the kubeadm-config ConfigMap
- [kubelet]: configure kubelet via a ConfigMap
- [patchnode]: record the CRI socket information on the Node object as an annotation
- [mark-control-plane]: label the current node with the master role and taint it NoSchedule, so by default ordinary Pods are not scheduled on the master
- [bootstrap-token]: generate the bootstrap token; note it down, it is needed later when adding nodes to the cluster with kubeadm join
- [addons]: install the CoreDNS and kube-proxy add-ons
4.5 Prepare the kubeconfig file for kubectl
By default kubectl looks for a config file in the .kube directory under the home directory of the user running it. Here we copy the admin.conf generated during the [kubeconfig] step of initialization to .kube/config.
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
4.6 Check the cluster status
[root@host60 ~]# kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health": "true"}
[root@host60 ~]# kubectl get node
NAME     STATUS     ROLES    AGE   VERSION
host60   NotReady   master   16h   v1.13.3
4.7 Copy the certificates to the other master nodes
USER=root
CONTROL_PLANE_IPS="host61 host62"
for host in ${CONTROL_PLANE_IPS}; do
    ssh "${USER}"@$host "mkdir -p /etc/kubernetes/pki/etcd"
    scp /etc/kubernetes/pki/ca.* "${USER}"@$host:/etc/kubernetes/pki/
    scp /etc/kubernetes/pki/sa.* "${USER}"@$host:/etc/kubernetes/pki/
    scp /etc/kubernetes/pki/front-proxy-ca.* "${USER}"@$host:/etc/kubernetes/pki/
    scp /etc/kubernetes/pki/etcd/ca.* "${USER}"@$host:/etc/kubernetes/pki/etcd/
    scp /etc/kubernetes/admin.conf "${USER}"@$host:/etc/kubernetes/
done
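The loop above assumes root can ssh and scp to host61 and host62 without a password prompt. If that is not already in place, one way to set it up (assuming root logins over SSH are allowed) is:

ssh-keygen -t rsa -b 2048 -N '' -f ~/.ssh/id_rsa   # generate a key pair without a passphrase
ssh-copy-id root@host61
ssh-copy-id root@host62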
4.8 Join the other master nodes to the cluster
kubeadm join 192.168.1.65:8443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:e02b46c1f697709552018f706f96a03922b159ecc2c3d82140365e4a8d0a83d4 --experimental-control-plane
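The --experimental-control-plane flag (kubeadm 1.13 syntax; later releases renamed it to --control-plane) is what makes host61 and host62 join as additional masters rather than workers. If you also want to run kubectl on those nodes, you can reuse the admin.conf distributed in step 4.7:

mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config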
4.9 Check the cluster status again
Because the pod network is not up yet, all nodes are in the NotReady state.
[root@host60 ~]# kubectl get node
NAME     STATUS     ROLES    AGE   VERSION
host60   NotReady   master   16h   v1.13.3
host61   NotReady   master   81s   v1.13.3
host62   NotReady   master   43s   v1.13.3
4.10 Configure the cluster network
Before the network is configured, the DNS pods cannot start successfully:
[root@host60 ~]# kubectl get pod -n kube-system
NAME                             READY   STATUS    RESTARTS   AGE
coredns-89cc84847-lg9gr          0/1     Pending   0          16h
coredns-89cc84847-zvsn8          0/1     Pending   0          16h
etcd-host60                      1/1     Running   0          16h
etcd-host61                      1/1     Running   0          10m
etcd-host62                      1/1     Running   0          9m20s
kube-apiserver-host60            1/1     Running   0          16h
kube-apiserver-host61            1/1     Running   0          9m55s
kube-apiserver-host62            1/1     Running   0          9m12s
kube-controller-manager-host60   1/1     Running   1          16h
kube-controller-manager-host61   1/1     Running   0          9m55s
kube-controller-manager-host62   1/1     Running   0          9m9s
kube-proxy-64pwl                 1/1     Running   0          16h
kube-proxy-78bm9                 1/1     Running   0          10m
kube-proxy-xwghb                 1/1     Running   0          9m23s
kube-scheduler-host60            1/1     Running   1          16h
kube-scheduler-host61            1/1     Running   0          10m
kube-scheduler-host62            1/1     Running   0          9m23s
There are many network plugins to choose from, but most of them need extra parameters at init time (a podSubnet / --pod-network-cidr) and will not work without them. Weave does not, because it brings its own default pod address range (10.32.0.0/12, which matches the 10.32.x.x and 10.37.x.x pod IPs seen below); this is why it is used here and why podSubnet was left empty in kubeadm-init.yaml.

export kubever=$(kubectl version | base64 | tr -d '\n')
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$kubever"
After waiting a while, the network plugin finishes deploying. Checking the pod status again, the DNS pods have now been scheduled. In my case one of the two CoreDNS pods failed; it is related to my network configuration and I have not tracked down the cause yet, but the one on the other node is running normally.
[root@host60 ~]# kubectl get pod -n kube-system -o wide
NAME                             READY   STATUS              RESTARTS   AGE     IP             NODE     NOMINATED NODE   READINESS GATES
coredns-89cc84847-9hpqm          1/1     Running             1          19m     10.32.0.4      host61   <none>           <none>
coredns-89cc84847-jfgmx          0/1     ContainerCreating   0          9m49s   <none>         host60   <none>           <none>
etcd-host60                      1/1     Running             2          17h     192.168.1.60   host60   <none>           <none>
etcd-host61                      1/1     Running             2          73m     192.168.1.61   host61   <none>           <none>
etcd-host62                      1/1     Running             2          73m     192.168.1.62   host62   <none>           <none>
kube-apiserver-host60            1/1     Running             2          17h     192.168.1.60   host60   <none>           <none>
kube-apiserver-host61            1/1     Running             1          73m     192.168.1.61   host61   <none>           <none>
kube-apiserver-host62            1/1     Running             2          73m     192.168.1.62   host62   <none>           <none>
kube-controller-manager-host60   1/1     Running             3          17h     192.168.1.60   host60   <none>           <none>
kube-controller-manager-host61   1/1     Running             3          73m     192.168.1.61   host61   <none>           <none>
kube-controller-manager-host62   1/1     Running             3          73m     192.168.1.62   host62   <none>           <none>
kube-proxy-64pwl                 1/1     Running             2          17h     192.168.1.60   host60   <none>           <none>
kube-proxy-78bm9                 1/1     Running             1          73m     192.168.1.61   host61   <none>           <none>
kube-proxy-xwghb                 1/1     Running             2          73m     192.168.1.62   host62   <none>           <none>
kube-scheduler-host60            1/1     Running             3          17h     192.168.1.60   host60   <none>           <none>
kube-scheduler-host61            1/1     Running             2          73m     192.168.1.61   host61   <none>           <none>
kube-scheduler-host62            1/1     Running             2          73m     192.168.1.62   host62   <none>           <none>
weave-net-57xhp                  2/2     Running             4          54m     192.168.1.60   host60   <none>           <none>
weave-net-d9l29                  2/2     Running             2          54m     192.168.1.61   host61   <none>           <none>
weave-net-h8lbk                  2/2     Running             4          54m     192.168.1.62   host62   <none>           <none>
The cluster status is back to normal as well:
[root@host60 ~]# kubectl get node
NAME     STATUS   ROLES    AGE   VERSION
host60   Ready    master   17h   v1.13.3
host61   Ready    master   76m   v1.13.3
host62   Ready    master   75m   v1.13.3
5. Add worker nodes
5.1 Initialize the system
Follow the steps described above.
5.2 Install the required software
Follow the steps described above.
5.3 Join the cluster
kubeadm join 192.168.1.65:8443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:e02b46c1f697709552018f706f96a03922b159ecc2c3d82140365e4a8d0a83d4
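Note that the token in kubeadm-init.yaml has a ttl of 24h0m0s, so if a node is added later the original join command will stop working. In that case a fresh join command can be generated on any master, for example:

kubeadm token create --print-join-command   # prints a new kubeadm join command with a valid token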
5.4 Check the cluster status
[root@host60 ~]# kubectl get node
NAME     STATUS   ROLES    AGE     VERSION
host60   Ready    master   17h     v1.13.3
host61   Ready    master   95m     v1.13.3
host62   Ready    master   95m     v1.13.3
host63   Ready    <none>   2m51s   v1.13.3
PS: after deleting the DNS pod that had problems earlier, it was rescheduled onto the newly joined node and is now running normally.
[root@host60 ~]# kubectl get pod -n kube-system -o wide
NAME                             READY   STATUS    RESTARTS   AGE     IP             NODE     NOMINATED NODE   READINESS GATES
coredns-89cc84847-9hpqm          1/1     Running   1          45m     10.32.0.4      host61   <none>           <none>
coredns-89cc84847-sglw7          1/1     Running   0          103s    10.37.0.1      host63   <none>           <none>
etcd-host60                      1/1     Running   2          17h     192.168.1.60   host60   <none>           <none>
etcd-host61                      1/1     Running   2          100m    192.168.1.61   host61   <none>           <none>
etcd-host62                      1/1     Running   2          99m     192.168.1.62   host62   <none>           <none>
kube-apiserver-host60            1/1     Running   2          17h     192.168.1.60   host60   <none>           <none>
kube-apiserver-host61            1/1     Running   1          100m    192.168.1.61   host61   <none>           <none>
kube-apiserver-host62            1/1     Running   2          99m     192.168.1.62   host62   <none>           <none>
kube-controller-manager-host60   1/1     Running   3          17h     192.168.1.60   host60   <none>           <none>
kube-controller-manager-host61   1/1     Running   3          100m    192.168.1.61   host61   <none>           <none>
kube-controller-manager-host62   1/1     Running   3          99m     192.168.1.62   host62   <none>           <none>
kube-proxy-64pwl                 1/1     Running   2          17h     192.168.1.60   host60   <none>           <none>
kube-proxy-78bm9                 1/1     Running   1          100m    192.168.1.61   host61   <none>           <none>
kube-proxy-v28fs                 1/1     Running   0          6m59s   192.168.1.63   host63   <none>           <none>
kube-proxy-xwghb                 1/1     Running   2          99m     192.168.1.62   host62   <none>           <none>
kube-scheduler-host60            1/1     Running   3          17h     192.168.1.60   host60   <none>           <none>
kube-scheduler-host61            1/1     Running   2          100m    192.168.1.61   host61   <none>           <none>
kube-scheduler-host62            1/1     Running   2          99m     192.168.1.62   host62   <none>           <none>
weave-net-57xhp                  2/2     Running   4          80m     192.168.1.60   host60   <none>           <none>
weave-net-d9l29                  2/2     Running   2          80m     192.168.1.61   host61   <none>           <none>
weave-net-h8lbk                  2/2     Running   4          80m     192.168.1.62   host62   <none>           <none>
weave-net-mhbpr                  2/2     Running   1          6m59s   192.168.1.63   host63   <none>           <none>
6. Review the whole cluster
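The output below also includes an nginx Deployment and an nginx-server Service used to exercise the cluster. The original does not show how they were created, but commands along these lines (names taken from the output) would produce something equivalent:

kubectl create deployment nginx-deployment --image=nginx          # create the test deployment
kubectl scale deployment nginx-deployment --replicas=3            # scale it to the three replicas seen below
kubectl expose deployment nginx-deployment --name=nginx-server --port=80   # expose it as a ClusterIP service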
[root@host60 ~]# kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health": "true"}
[root@host60 ~]# kubectl get node
NAME     STATUS   ROLES    AGE    VERSION
host60   Ready    master   18h    v1.13.3
host61   Ready    master   114m   v1.13.3
host62   Ready    master   113m   v1.13.3
host63   Ready    <none>   21m    v1.13.3
[root@host60 ~]# kubectl get pod
NAME                                READY   STATUS    RESTARTS   AGE
nginx-deployment-67d4b848b4-qpmbz   1/1     Running   0          8m9s
nginx-deployment-67d4b848b4-zdn4f   1/1     Running   0          8m9s
nginx-deployment-67d4b848b4-zxd7l   1/1     Running   0          8m9s
[root@host60 ~]# kubectl get service
NAME           TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
kubernetes     ClusterIP   10.245.0.1      <none>        443/TCP   18h
nginx-server   ClusterIP   10.245.117.70   <none>        80/TCP    68s
[root@host60 ~]# ipvsadm -L -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.245.0.1:443 rr
  -> 192.168.1.60:6443            Masq    1      1          0
  -> 192.168.1.61:6443            Masq    1      0          0
  -> 192.168.1.62:6443            Masq    1      1          0
TCP  10.245.0.10:53 rr
  -> 10.32.0.4:53                 Masq    1      0          0
  -> 10.37.0.1:53                 Masq    1      0          0
TCP  10.245.117.70:80 rr
  -> 10.37.0.2:80                 Masq    1      0          0
  -> 10.37.0.3:80                 Masq    1      0          1
  -> 10.37.0.4:80                 Masq    1      0          0
UDP  10.245.0.10:53 rr
  -> 10.32.0.4:53                 Masq    1      0          0
  -> 10.37.0.1:53                 Masq    1      0          0