Thursday, April 30, 2026

Cilium and Kubernetes - Installing Cilium Within Kubernetes

This post is installment #2 in a series of posts providing directions on installing and using Cilium for load balancing and SSL processing. Links to all of the posts in the series are provided below for convenience.

Cilium and Kubernetes - Caveats / Concepts
Cilium and Kubernetes - Installing Cilium Within Kubernetes
Cilium and Kubernetes - Configuring SSL and Load Balancing
Cilium and Kubernetes - Externally Accessing Services via ARP
Cilium and Kubernetes - Externally Accessing Services via BGP



As mentioned in the first installment on concepts, Cilium supports both an ARP-based approach and a BGP-routing-based approach for making internal virtual IPs reachable from clients outside the Kubernetes cluster. The two approaches are not mutually exclusive, but each requires careful consideration of connectivity needs and available IP space, both when Cilium is installed and during routine administration.

Because this tutorial series is intended to explain BOTH the ARP and BGP approaches, the illustrations and instructions reflect BOTH approaches deployed within a single cluster, with different services simultaneously active using each approach. At first, this may make the naming conventions for files and Kubernetes objects appear more verbose / obtuse than necessary. However, the added complexity better illustrates where the two approaches differ and how those differences need to be accommodated outside the cluster.

Selecting and Devising an IP Scheme

If load balancer virtual IPs are going to be advertised for external access via ARP, the IP range must be selected in light of these needs:

  1. The IP range must be the same subnet used for the physical Ethernet ports of the Kubernetes nodes in the cluster.
  2. The IP range must be large enough to contain the number of physical Kubernetes nodes in the cluster AND twice the number of externally accessible VIPs (one for the service VIP and one for the Gateway VIP).

As an example, if the Kubernetes cluster will consist of five nodes and a total of twenty services requiring SSL and load balancing will be deployed in the cluster, the ARP approach requires a subnet of at least 45 IPs to be allocated. Given subnetting logistics, the smallest block providing 45 IPs would be a /26 which contains 64 consecutive IPs.
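
A quick back-of-the-envelope check of that arithmetic in shell form (a sketch; the node and service counts are the ones from the example above):

# Smallest power-of-two block covering N nodes plus 2 VIPs per service
nodes=5
services=20
needed=$(( nodes + 2 * services ))   # 5 + 40 = 45 IPs
size=1; prefix=32
while [ "$size" -lt "$needed" ]; do
  size=$(( size * 2 ))
  prefix=$(( prefix - 1 ))
done
echo "Need ${needed} IPs -> a /${prefix} block (${size} addresses)"   # /26 (64 addresses)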

If using BGP advertising, the IP range used for load balancer VIPs does not have to reside within the physical subnet used by the Kubernetes hosts. It can be any range not already in use in the larger network.

For home hobby labs, it usually isn't necessary to further subdivide the IP ranges used for load balancer VIPs. However, for larger clusters serving a mix of production, stage, QA and dev workloads, it may be advisable to subdivide the ranges into environment-specific blocks so firewalls can block non-production traffic from reaching production endpoints. Since this series is intended to explain some of the more complicated subjects, the following overall plan dedicates specific ranges for ARP and BGP in both a production and a development environment.

In the examples to follow, the ARP advertisement functionality will be demonstrated using IPs in the 192.168.99.0/24 subnet already in use for the physical Kubernetes nodes. The resulting setup is reflected in this visual:

In the examples to follow, the BGP advertisement functionality will be demonstrated using IPs in a standalone range of 192.168.77.0/24. The resulting setup is reflected in this visual:

Of course, one extra complexity of the BGP approach in a home hobbyist setting is that it requires at least one additional BGP-capable router outside the cluster to propagate routes into the local host network. It is possible to use FRR (Free Range Routing) on Linux as a BGP-capable router, but this involves more configuration work in technical areas many software developers avoid -- network design, subnetwork configurations and routing.
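
For quick reference, here is the complete VIP allocation plan used throughout the rest of this series (it matches the IP pool definitions applied later in this installment):

Environment    ARP VIP range (192.168.99.0/24)     BGP VIP range (192.168.77.0/24)
development    192.168.99.64 - 192.168.99.95       192.168.77.64 - 192.168.77.95
production     192.168.99.128 - 192.168.99.159     192.168.77.128 - 192.168.77.159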


Kubernetes Host Preparation Steps

If using the ARP approach, many virtualization platforms, including VMware and Proxmox, may require modifications to the network configuration of virtual machines to DISABLE protections that, by default, block "gratuitous ARP" traffic originating from virtual hosts.

On Proxmox systems, this ARP filtering can be disabled by unchecking the Firewall attribute on the virtual machine's Network Device configuration. Here, the function is ENABLED and needs to be unchecked.

Also, for each VM, the MAC Filtering attribute should be disabled. The screen below shows the attribute still enabled, which can interfere with the ARP behavior required for load balancing.
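
The same settings can also be confirmed from the Proxmox shell. The snippet below is only a sketch: VM id 101 is hypothetical and the MAC address is a placeholder.

# The VM's NIC definition; firewall=0 means the Firewall checkbox
# on the Network Device is unchecked (hypothetical VM id 101)
grep ^net0 /etc/pve/qemu-server/101.conf
#   net0: virtio=DE:AD:BE:EF:12:34,bridge=vmbr0,firewall=0

# MAC filtering is a per-VM firewall option; macfilter: 0 disables it
grep macfilter /etc/pve/firewall/101.fw
#   macfilter: 0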

Disabling Firewall Software

Attempting to run any CNI or cluster networking solution (Istio, MetalLB, Cilium) on nodes built upon virtual machines tends to produce numerous errors stemming from the multiple layers of virtualization applied to physical Ethernet port processing, especially related to MAC discovery via the ARP protocol. To avoid as many of these headaches as possible, it is wise to completely disable any firewall functionality running as part of the operating system. In Fedora, the firewalld daemon is enabled by default but should be disabled via these commands.

[root@kube1 ~]# systemctl status firewalld
○ firewalld.service - firewalld - dynamic firewall daemon
     Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; preset: enabled)
    Drop-In: /usr/lib/systemd/system/service.d
             └─10-timeout-abort.conf
     Active: inactive (dead)
       Docs: man:firewalld(1)
[root@kube1 ~]# 

If the firewalld daemon is still running, it can be stopped and disabled for future reboots via this command.

[root@kube1 ~]# systemctl disable --now firewalld
Removed '/etc/systemd/system/multi-user.target.wants/firewalld.service'.
Removed '/etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service'.
[root@kube1 ~]#

This step should be performed on ANY virtual machine acting as a node in a Kubernetes cluster.
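
With several nodes, a quick loop from the administrative host saves repetition. This is a sketch assuming root SSH access and the two node names used in this lab (kube1 and kube2):

for host in kube1 kube2; do
  ssh root@${host} "systemctl disable --now firewalld"
done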

Enabling ARP Traffic via Kernel Parameters

In addition to disabling hypervisor functions on Proxmox or VMware that might block required ARP traffic, there are also operating system network settings which may default to inhibiting ARP processing and need to be explicitly enabled. These requirements are described in section 10.1.3 and are summarized here as well.

[root@kube1 ~]# sysctl --all | grep "ipv4.conf" | grep arp | grep default
net.ipv4.conf.default.arp_accept = 1
net.ipv4.conf.default.arp_announce = 1
net.ipv4.conf.default.arp_evict_nocarrier = 1
net.ipv4.conf.default.arp_filter = 0
net.ipv4.conf.default.arp_ignore = 0
net.ipv4.conf.default.arp_notify = 1
net.ipv4.conf.default.drop_gratuitous_arp = 0
net.ipv4.conf.default.proxy_arp = 0
net.ipv4.conf.default.proxy_arp_pvlan = 0
[root@kube1 ~]# sysctl --all | grep "ipv4.conf" | grep arp | grep ens18
net.ipv4.conf.ens18.arp_accept = 1
net.ipv4.conf.ens18.arp_announce = 1
net.ipv4.conf.ens18.arp_evict_nocarrier = 1
net.ipv4.conf.ens18.arp_filter = 0
net.ipv4.conf.ens18.arp_ignore = 0
net.ipv4.conf.ens18.arp_notify = 1
net.ipv4.conf.ens18.drop_gratuitous_arp = 0
net.ipv4.conf.ens18.proxy_arp = 0
net.ipv4.conf.ens18.proxy_arp_pvlan = 0
[root@kube1 ~]#
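
On this lab's Fedora hosts those values were already correct. If any of them differ on your systems, they can be persisted with a sysctl drop-in along these lines (a sketch; the file name is arbitrary and ens18 is this lab's interface name):

# /etc/sysctl.d/90-arp-for-cilium.conf
# Persist the ARP-related values shown above across reboots
net.ipv4.conf.default.arp_accept = 1
net.ipv4.conf.default.arp_announce = 1
net.ipv4.conf.default.arp_notify = 1
net.ipv4.conf.default.drop_gratuitous_arp = 0
net.ipv4.conf.ens18.arp_accept = 1
net.ipv4.conf.ens18.arp_announce = 1
net.ipv4.conf.ens18.arp_notify = 1
net.ipv4.conf.ens18.drop_gratuitous_arp = 0

The new file can then be loaded without a reboot via sysctl --system.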

Installing bpftool (Berkeley Packet Filter) and Helm

CNI functions within Kubernetes rely on a lower-level kernel framework named eBPF (extended Berkeley Packet Filter) that allows traffic filtering, up through layer 7 application traffic, to be performed inside the kernel's networking stack for more efficient processing. Most modern kernels enable BPF by default. However, many operating systems do NOT automatically include the companion diagnostic tool named bpftool with the core kernel function. It can be added by running this command as root.

dnf install bpftool

With that tool available, the required modules can be verified with the following command:

[root@kube1 ~]# bpftool feature probe kernel | grep -E "CONFIG_BPF|CONFIG_CGROUP_BPF|CONFIG_XDP"
CONFIG_BPF is set to y
CONFIG_BPF_SYSCALL is set to y
CONFIG_BPF_JIT is set to y
CONFIG_BPF_JIT_ALWAYS_ON is set to y
CONFIG_CGROUP_BPF is set to y
CONFIG_BPF_EVENTS is set to y
CONFIG_BPF_KPROBE_OVERRIDE is not set
CONFIG_XDP_SOCKETS is set to y
CONFIG_BPF_LIRC_MODE2 is set to y
CONFIG_BPF_STREAM_PARSER is set to y
[root@kube1 ~]#
The eBPF functionality is required on hosts acting as nodes in the Kubernetes cluster (like kube1 and kube2). It is NOT needed on hosts merely used for administration and source code development (like fedora1).

Use of Cilium within a cluster also requires the Helm CLI (the de facto package manager for Kubernetes) to be available on machines used to administer the cluster. Here, fedora1 is used for development and administration, so Helm is installed by pulling a shell script from the Helm GitHub site, making the script executable and running it as shown here.
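
At the time of writing, the commands from the Helm documentation are:

curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh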

Installing Cilium CLI and the Cilium Deployment Within a Cluster

There are a few key tasks for installing Cilium with the parameters appropriate for load balancing with the other components in use:

  1. Installing the Cilium CLI on an administrative host like fedora1.
  2. Removing the default Kubernetes kube-proxy function from the cluster, which Cilium will replace.
  3. Using helm to deploy Cilium to the cluster with the appropriate eBPF-related settings.
  4. Verifying status.

Installing Cilium CLI

Cilium includes its own command line interface (CLI) tool that is used for installing the software on a cluster and configuring and inspecting Cilium functions within that cluster on an ongoing basis.

CILIUM_CLI_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/cilium-cli/main/stable.txt)
CLI_ARCH=amd64
if [ "$(uname -m)" = "aarch64" ]; then CLI_ARCH=arm64; fi
curl -L --fail --remote-name-all https://github.com/cilium/cilium-cli/releases/download/${CILIUM_CLI_VERSION}/cilium-linux-${CLI_ARCH}.tar.gz{,.sha256sum}
sha256sum --check cilium-linux-${CLI_ARCH}.tar.gz.sha256sum
sudo tar xzvfC cilium-linux-${CLI_ARCH}.tar.gz /usr/local/bin
rm cilium-linux-${CLI_ARCH}.tar.gz{,.sha256sum}
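
A quick check confirms the CLI landed on the PATH:

cilium version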

Disabling kube-proxy Daemon Set from Kubernetes

Default Kubernetes functionality for proxying traffic is provided by the kube-proxy daemon set. Cilium replaces this functionality, so it does not need to be active in the cluster. Prior to installing Cilium, it can be removed with the following commands:

mdh@fedora1:~ $ kubectl -n kube-system get daemonset
NAME           DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
kube-proxy     2         2         2       2            2           kubernetes.io/os=linux   17d
mdh@fedora1:~ $
mdh@fedora1:~ $ kubectl -n kube-system delete daemonset kube-proxy
daemonset.apps "kube-proxy" deleted from kube-system namespace
mdh@fedora1:~ $ kubectl -n kube-system get daemonset
NAME           DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
mdh@fedora1:~ $
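
As an aside, on clusters bootstrapped with kubeadm, the daemon set can instead be omitted at cluster creation time (per the kubeadm documentation):

kubeadm init --skip-phases=addon/kube-proxy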

Deploying Cilium to the Cluster

Cilium can be deployed to the cluster either with the CLI just installed, via a cilium install command, or with helm. For the most part, the two approaches are equivalent, but two crucial caveats apply:

  1. EVERY OPTION shown below must be included EXACTLY as typed for load balancing and external routing into the load balancer virtual IPs to function correctly.
  2. If installed via a cilium install command, adding settings later generally requires a cilium uninstall followed by a fresh cilium install. Online examples of updating a Cilium deployment with helm upgrade to change individual settings generally DO NOT WORK if the original install was performed via cilium install.

The default installation of Cilium executed by a simple cilium install command will NOT enable the options required for load balancing and proxying (dumb!). In particular, the kubeProxyReplacement parameter must be set to true. The command below must also supply the host IP and TCP port of the API server on the master node of the target Kubernetes cluster, which can be confirmed with this command:

mdh@fedora1:~ $ kubectl cluster-info
Kubernetes control plane is running at https://192.168.99.12:6443
CoreDNS is running at https://192.168.99.12:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
mdh@fedora1:~ $

With that information confirmed, the host of 192.168.99.12 (kube1) and TCP port 6443 can be referenced in the command below.
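
If the Cilium Helm chart repository has not already been registered on the administrative host, add it first (the repository URL comes from Cilium's documentation):

helm repo add cilium https://helm.cilium.io/
helm repo update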


helm upgrade --install cilium cilium/cilium --version 1.19.3 \
 --namespace kube-system \
 --set k8sServiceHost=192.168.99.12 \
 --set k8sServicePort=6443 \
 --set ipam.mode=multi-pool \
 --set ipam.operator.autoCreateCiliumPodIPPools.default.ipv4.cidrs='{10.99.0.0/16}' \
 --set ipam.operator.autoCreateCiliumPodIPPools.default.ipv4.maskSize=27 \
 --set ipam.operator.clusterPoolIPv4PodCIDRList=10.0.0.0/8 \
 --set ipam.operator.clusterPoolIPv4MaskSize=27 \
 --set kubeProxyReplacement=true \
 --set enableLoadBalancer=true \
 --set hostServices.enabled=true \
 --set externalIPs.enabled=true \
 --set nodePort.enabled=true \
 --set hostPort.enabled=true \
 --set gatewayAPI.enabled=true \
 --set gatewayAPI.enableAlpn=true \
 --set devices=ens18 \
 --set bpf.masquerade=true \
 --set ipv4.masquerade=true \
 --set l2announcements.enabled=true \
 --set l2announcements.leaseDuration=10s \
 --set l2announcements.leaseRenewDeadline=5s \
 --set l2announcements.leaseRetryPeriod=1s \
 --set l2NeighDiscovery.enabled=true \
 --set k8sClientRateLimit.qps=20 \
 --set k8sClientRateLimit.burst=30 \
 --set l7proxy=true \
 --set nodeIPAM.enabled=true \
 --set bgpControlPlane.enabled=true \
 --set bgp-secrets-namespace=kube-system \
 --set hubble.enabled=true \
 --set hubble.relay.enabled=true \
 --set hubble.ui.enabled=true
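
Once the Helm release is deployed, overall health can be verified with the Cilium CLI; this covers the "Verifying status" task from the list above. The --wait flag blocks until all components report ready (or a timeout expires):

cilium status --wait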

At any point, if particular configuration settings of a running Cilium deployment need to be confirmed, all current settings can be summarized using the cilium config view command as shown below.

Given the criticality of the parameters specified during installation, documenting both the installation command above and this configuration view output for future reference is highly recommended. If Cilium needs to be re-installed in the future, knowing the EXACT installation command can avoid HOURS of troubleshooting subtle issues stemming from a single omitted parameter.

mdh@fedora1:~ $ cilium config view
agent-not-ready-taint-key                         node.cilium.io/agent-not-ready
auto-create-cilium-pod-ip-pools                   default=ipv4-cidrs:10.99.0.0/16;ipv4-mask-size:27
auto-direct-node-routes                           false
bgp-router-id-allocation-ip-pool
bgp-router-id-allocation-mode                     default
bgp-secrets-namespace                             kube-system
bpf-distributed-lru                               false
bpf-events-drop-enabled                           true
bpf-events-policy-verdict-enabled                 true
bpf-events-trace-enabled                          true
bpf-lb-acceleration                               disabled
bpf-lb-algorithm-annotation                       false
bpf-lb-external-clusterip                         false
bpf-lb-map-max                                    65536
bpf-lb-mode-annotation                            false
bpf-lb-sock                                       false
bpf-lb-source-range-all-types                     false
bpf-map-dynamic-size-ratio                        0.0025
bpf-policy-map-max                                16384
bpf-policy-stats-map-max                          65536
bpf-root                                          /sys/fs/bpf
cgroup-root                                       /run/cilium/cgroupv2
cilium-endpoint-gc-interval                       5m0s
cluster-id                                        0
cluster-name                                      default
clustermesh-cache-ttl                             0s
clustermesh-enable-endpoint-sync                  false
clustermesh-enable-mcs-api                        false
clustermesh-mcs-api-install-crds                  true
cni-exclusive                                     true
cni-log-file                                      /var/run/cilium/cilium-cni.log
custom-cni-conf                                   false
datapath-mode                                     veth
debug                                             false
debug-verbose
default-lb-service-ipam                           lbipam
devices                                           ens18
direct-routing-skip-unreachable                   false
dnsproxy-enable-transparent-mode                  true
dnsproxy-socket-linger-timeout                    10
egress-gateway-reconciliation-trigger-interval    1s
enable-auto-protect-node-port-range               true
enable-bgp-control-plane                          true
enable-bgp-control-plane-status-report            true
enable-bgp-legacy-origin-attribute                false
enable-bpf-clock-probe                            false
enable-bpf-masquerade                             true
enable-drift-checker                              true
enable-dynamic-config                             true
enable-endpoint-health-checking                   true
enable-endpoint-lockdown-on-policy-overflow       false
enable-envoy-config                               true
enable-gateway-api                                true
enable-gateway-api-alpn                           true
enable-gateway-api-app-protocol                   true
enable-gateway-api-proxy-protocol                 false
enable-gateway-api-secrets-sync                   true
enable-health-check-loadbalancer-ip               false
enable-health-check-nodeport                      true
enable-health-checking                            true
enable-hubble                                     true
enable-ipv4                                       true
enable-ipv4-big-tcp                               false
enable-ipv4-masquerade                            true
enable-ipv6                                       false
enable-ipv6-big-tcp                               false
enable-ipv6-masquerade                            true
enable-k8s-networkpolicy                          true
enable-l2-announcements                           true
enable-l2-neigh-discovery                         true
enable-l7-proxy                                   true
enable-lb-ipam                                    true
enable-masquerade-to-route-source                 false
enable-metrics                                    true
enable-no-service-endpoints-routable              true
enable-node-ipam                                  true
enable-node-selector-labels                       false
enable-non-default-deny-policies                  true
enable-policy                                     default
enable-policy-secrets-sync                        true
enable-sctp                                       false
enable-service-topology                           false
enable-source-ip-verification                     true
enable-tcx                                        true
enable-vtep                                       false
enable-well-known-identities                      false
enable-xt-socket-fallback                         true
envoy-access-log-buffer-size                      4096
envoy-base-id                                     0
envoy-config-retry-interval                       15s
envoy-keep-cap-netbindservice                     false
external-envoy-proxy                              true
gateway-api-hostnetwork-enabled                   false
gateway-api-hostnetwork-nodelabelselector
gateway-api-secrets-namespace                     cilium-secrets
gateway-api-service-externaltrafficpolicy         Cluster
gateway-api-xff-num-trusted-hops                  0
health-check-icmp-failure-threshold               3
http-retry-count                                  3
http-stream-idle-timeout                          300
hubble-disable-tls                                false
hubble-listen-address                             :4244
hubble-network-policy-correlation-enabled         true
hubble-socket-path                                /var/run/cilium/hubble.sock
hubble-tls-cert-file                              /var/lib/cilium/tls/hubble/server.crt
hubble-tls-client-ca-files                        /var/lib/cilium/tls/hubble/client-ca.crt
hubble-tls-key-file                               /var/lib/cilium/tls/hubble/server.key
identity-allocation-mode                          crd
identity-gc-interval                              15m0s
identity-heartbeat-timeout                        30m0s
identity-management-mode                          agent
install-no-conntrack-iptables-rules               false
ipam                                              multi-pool
ipam-cilium-node-update-rate                      15s
iptables-random-fully                             false
k8s-client-burst                                  30
k8s-client-qps                                    20
k8s-require-ipv4-pod-cidr                         false
k8s-require-ipv6-pod-cidr                         false
kube-proxy-replacement                            true
kube-proxy-replacement-healthz-bind-address
l2-announcements-lease-duration                   10s
l2-announcements-renew-deadline                   5s
l2-announcements-retry-period                     1s
max-connected-clusters                            255
mesh-auth-enabled                                 false
mesh-auth-gc-interval                             5m0s
mesh-auth-queue-size                              1024
mesh-auth-rotated-identities-queue-size           1024
metrics-sampling-interval                         5m
monitor-aggregation                               medium
monitor-aggregation-flags                         all
monitor-aggregation-interval                      5s
nat-map-stats-entries                             32
nat-map-stats-interval                            30s
node-port-bind-protection                         true
nodeport-addresses
nodes-gc-interval                                 5m0s
operator-api-serve-addr                           127.0.0.1:9234
operator-prometheus-serve-addr                    :9963
packetization-layer-pmtud-mode                    blackhole
policy-cidr-match-mode
policy-default-local-cluster                      true
policy-deny-response                              none
policy-secrets-namespace                          cilium-secrets
policy-secrets-only-from-secrets-namespace        true
preallocate-bpf-maps                              false
procfs                                            /host/proc
proxy-cluster-max-connections                     1024
proxy-cluster-max-requests                        1024
proxy-connect-timeout                             2
proxy-idle-timeout-seconds                        60
proxy-initial-fetch-timeout                       30
proxy-max-active-downstream-connections           50000
proxy-max-concurrent-retries                      128
proxy-max-connection-duration-seconds             0
proxy-max-requests-per-connection                 0
proxy-use-original-source-address                 true
proxy-xff-num-trusted-hops-egress                 0
proxy-xff-num-trusted-hops-ingress                0
remove-cilium-node-taints                         true
routing-mode                                      tunnel
service-no-backend-response                       reject
set-cilium-is-up-condition                        true
set-cilium-node-taints                            true
synchronize-k8s-nodes                             true
tofqdns-dns-reject-response-code                  refused
tofqdns-enable-dns-compression                    true
tofqdns-endpoint-max-ip-per-hostname              1000
tofqdns-idle-connection-grace-period              0s
tofqdns-max-deferred-connection-deletes           10000
tofqdns-preallocate-identities                    true
tofqdns-proxy-response-max-delay                  100ms
tunnel-protocol                                   vxlan
tunnel-source-port-range                          0-0
unmanaged-pod-watcher-interval                    15s
vtep-cidr
vtep-endpoint
vtep-mac
vtep-mask
write-cni-conf-when-ready                         /host/etc/cni/net.d/05-cilium.conflist
mdh@fedora1:~ $

Installing Custom Resource Definitions (CRDs) for Cilium

Cilium functionality involves a handful of Custom Resource Definitions, the Kubernetes equivalent of an XML schema definition, which describe configuration data structures passed to Kubernetes. Cilium's own cilium.io CRDs are created by the deployment itself, but the Kubernetes Gateway API CRDs it depends on must be applied to the cluster separately, before attempting to build any Gateway related objects. The commands below, as listed in Cilium's documentation, apply those CRDs from the Gateway API project's GitHub site.


kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/gateway-api/v1.4.1/config/crd/standard/gateway.networking.k8s.io_gatewayclasses.yaml

kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/gateway-api/v1.4.1/config/crd/standard/gateway.networking.k8s.io_gateways.yaml

kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/gateway-api/v1.4.1/config/crd/standard/gateway.networking.k8s.io_httproutes.yaml

kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/gateway-api/v1.4.1/config/crd/standard/gateway.networking.k8s.io_referencegrants.yaml

kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/gateway-api/v1.4.1/config/crd/standard/gateway.networking.k8s.io_grpcroutes.yaml

Any installed CRDs can be verified as shown below.

[mdh@fedora1 cdtrackerapi]$ kubectl get crds
NAME                                         CREATED AT
ciliumcidrgroups.cilium.io                   2026-03-13T03:38:55Z
ciliumclusterwideenvoyconfigs.cilium.io      2026-04-01T02:52:04Z
ciliumclusterwidenetworkpolicies.cilium.io   2026-03-13T03:38:54Z
ciliumendpoints.cilium.io                    2026-03-13T03:38:51Z
ciliumenvoyconfigs.cilium.io                 2026-04-01T02:52:05Z
ciliumgatewayclassconfigs.cilium.io          2026-04-01T02:52:07Z
ciliumidentities.cilium.io                   2026-03-13T03:38:49Z
ciliuml2announcementpolicies.cilium.io       2026-03-13T03:38:58Z
ciliumloadbalancerippools.cilium.io          2026-03-13T03:38:57Z
ciliumnetworkpolicies.cilium.io              2026-03-13T03:38:53Z
ciliumnodeconfigs.cilium.io                  2026-03-13T03:38:59Z
ciliumnodes.cilium.io                        2026-03-13T03:38:52Z
ciliumpodippools.cilium.io                   2026-03-13T03:38:50Z
gatewayclasses.gateway.networking.k8s.io     2026-03-28T22:13:51Z
gateways.gateway.networking.k8s.io           2026-03-28T22:13:51Z
grpcroutes.gateway.networking.k8s.io         2026-03-28T22:13:52Z
httproutes.gateway.networking.k8s.io         2026-03-28T22:13:52Z
referencegrants.gateway.networking.k8s.io    2026-03-28T22:13:52Z
[mdh@fedora1 cdtrackerapi]$
CRDs are left on a cluster even if Cilium is uninstalled. If Cilium has to be repeatedly installed and uninstalled, these CRDs do NOT need to be re-applied each time. They will remain.

Deleting / Re-Installing Cilium

Due to the complexity of Cilium, it is possible (likely) that installation will require SEVERAL attempts due to missing settings, special override steps required for some operating systems, etc. Once installed, a given installation can be removed from a Kubernetes cluster with the cilium uninstall command.

[mdh@fedora1 cdtrackerapi]$ cilium uninstall
🔥 Deleting pods in cilium-test namespace...
🔥 Deleting cilium-test namespace...
⌛ Uninstalling Cilium
🔥 Deleting pods in cilium-test namespace...
🔥 Deleting cilium-test namespace...
🔥 Cleaning up Cilium node annotations...
[mdh@fedora1 cdtrackerapi]$

Once deleted, it can be re-installed via the larger command in the prior section.


Defining IP Pools for Virtual IP Assignments

In addition to logically implementing SSL processing and load balancing functions for applications, Cilium also implements IP address management functionality to automatically assign virtual IPs for load balancers based upon criteria in the Service and Gateway objects defined for SSL access and load balancing. Earlier in this installment, an IP addressing scheme was described that distinguishes development from production environments and ARP-advertised from BGP-advertised load balancer VIPs. With Cilium running in the cluster and its Custom Resource Definitions loaded, those IP pools can now be configured. Here are the four YAML files, along with the commands to apply them to the Kubernetes cluster.

Here is the mdhlabs-arp-vip-pool-dev.yaml file defining the development ARP block:

apiVersion: cilium.io/v2
kind: CiliumLoadBalancerIPPool
metadata:
  name: mdhlabs-arp-vip-pool-dev
spec:
  blocks:
  - start: "192.168.99.64"
    stop: "192.168.99.95"
  # allowFirstLastIPs: "No" excludes the first and last
  # IPs of the block (the gateway and broadcast by convention)
  allowFirstLastIPs: "No"
  serviceSelector:
    matchLabels:
      "io.kubernetes.service.namespace": "development"
      "mdhlabs-arp": "enable"

Here is the mdhlabs-arp-vip-pool-prod.yaml file defining the production ARP block:

apiVersion: cilium.io/v2
kind: CiliumLoadBalancerIPPool
metadata:
  name: mdhlabs-arp-vip-pool-prod
spec:
  blocks:
  - start: "192.168.99.128"
    stop: "192.168.99.159"
  # allowFirstLastIPs: "No" excludes the first and last
  # IPs of the block (the gateway and broadcast by convention)
  allowFirstLastIPs: "No"
  serviceSelector:
    matchLabels:
      "io.kubernetes.service.namespace": "prod"
      "mdhlabs-arp": "enable"

Here is the mdhlabs-bgp-vip-pool-dev.yaml file defining the development BGP block:

apiVersion: cilium.io/v2
kind: CiliumLoadBalancerIPPool
metadata:
  name: mdhlabs-bgp-vip-pool-dev
spec:
  blocks:
  - start: "192.168.77.64"
    stop: "192.168.77.95"
  # allowFirstLastIPs: "No" excludes the first and last
  # IPs of the block (the gateway and broadcast by convention)
  allowFirstLastIPs: "No"
  serviceSelector:
    matchLabels:
      "io.kubernetes.service.namespace": "development"
      "mdhlabs-bgp": "enable"

Here is the mdhlabs-bgp-vip-pool-prod.yaml file defining the production BGP block:

apiVersion: cilium.io/v2
kind: CiliumLoadBalancerIPPool
metadata:
  name: mdhlabs-bgp-vip-pool-prod
spec:
  blocks:
  - start: "192.168.77.128"
    stop: "192.168.77.159"
  # allowFirstLastIPs: "No" excludes the first and last
  # IPs of the block (the gateway and broadcast by convention)
  allowFirstLastIPs: "No"
  serviceSelector:
    matchLabels:
      "io.kubernetes.service.namespace": "prod"
      "mdhlabs-bgp": "enable"

With all of those configuration files created, they can be applied against the cluster with the following commands:

kubectl apply -f mdhlabs-arp-vip-pool-dev.yaml
kubectl apply -f mdhlabs-arp-vip-pool-prod.yaml
kubectl apply -f mdhlabs-bgp-vip-pool-dev.yaml
kubectl apply -f mdhlabs-bgp-vip-pool-prod.yaml

The status and names of all configured IP pool resources can be viewed as shown below.

mdh@fedora1:~/gitwork/webservices/cdtrackerapi $ kubectl get ciliumloadbalancerippool
NAME                        DISABLED   CONFLICTING   IPS AVAILABLE   AGE
mdhlabs-arp-vip-pool-dev    false      False         32              3s
mdhlabs-arp-vip-pool-prod   false      False         32              2d2h
mdhlabs-bgp-vip-pool-dev    false      False         32              19d
mdhlabs-bgp-vip-pool-prod   false      False         32              19d
mdh@fedora1:~/gitwork/webservices/cdtrackerapi $

The command kubectl describe ciliumloadbalancerippool mdhlabs-bgp-vip-pool-prod can be used to see the actual IP blocks in each pool, as illustrated here.

mdh@fedora1:~/gitwork/webservices/cdtrackerapi $ kubectl describe ciliumloadbalancerippool mdhlabs-bgp-vip-pool-prod
Name:         mdhlabs-bgp-vip-pool-prod
Namespace:
Labels:       <none>
Annotations:  <none>
API Version:  cilium.io/v2
Kind:         CiliumLoadBalancerIPPool
Metadata:
  Creation Timestamp:  2026-04-11T22:54:39Z
  Generation:          1
  Resource Version:    12016097
  UID:                 87c8d61d-667c-49e4-b053-ae3bee76142b
Spec:
  Allow First Last I Ps:  No
  Blocks:
    Start:   192.168.77.128
    Stop:    192.168.77.159
  Disabled:  false
  Service Selector:
    Match Labels:
      io.kubernetes.service.namespace:  prod

(other details omitted)
mdh@fedora1:~/gitwork/webservices/cdtrackerapi $ 

The next installment in this series will explain how to configure SSL processing and load balancing for an application using Service, Gateway and HTTPRoute resources provided by Cilium.


More information on using Cilium within Kubernetes is provided in other posts in this series:

Cilium and Kubernetes - Caveats / Concepts
Cilium and Kubernetes - Installing Cilium Within Kubernetes
Cilium and Kubernetes - Configuring SSL and Load Balancing
Cilium and Kubernetes - Externally Accessing Services via ARP
Cilium and Kubernetes - Externally Accessing Services via BGP