This post is installment #3 in a series of posts providing directions on installing and using Cilium for load balancing and SSL processing. Links to all of the posts in the series are provided below for convenience.
Cilium and Kubernetes - Caveats / Concepts
Cilium and Kubernetes - Installing Cilium Within Kubernetes
Cilium and Kubernetes - Configuring SSL and Load Balancing
Cilium and Kubernetes - Externally Accessing Services via ARP
Cilium and Kubernetes - Externally Accessing Services via BGP
This installment focuses on configuration elements that make virtual IP addresses assigned for load balancing within a Kubernetes cluster reachable to clients outside the cluster using ARP protocol. The processes for allocating IP addresses for auto-assignment and tagging Service objects with attributes to trigger ARP advertisements are described along with workarounds required for current defects in the Cilium implementation for ARP.
With the configuration elements built up to now, the pods are live with private IPs assigned within the cluster, they have been mapped to a service of type LoadBalancer which obtained an IP from a different reserved pool of IPs in the 192.168.99.128/25 range and a layer 7 gateway has been defined with hostname and SSL key information to terminate incoming SSL requests and forward them through an HTTPRoute to the LoadBalancer onto the pods.
However, the VIP address itself is not yet reachable from outside the cluster nodes. In order to expand reachability to services beyond the cluster via ARP, the following tasks must be performed:
- A CiliumL2AnnouncementPolicy must be activated to define the label to match on to determine which IPs will be advertised via ARP -- here the label mdhlabs-arp=enable will be used
- The Service object defining the unencrypted LoadBalancer VIP for the pod Deployment must be altered to specify that label, triggering ARP advertisements
- The Gateway object defining the SSL processing for incoming traffic must be altered to specify that same label, triggering ARP advertisement
- After the Gateway object is activated, the auto-generated Service object tied to the Gateway must be MANUALLY tagged with the same mdhlabs-arp=enable label
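The four tasks above can be sketched as a short command sequence. This is illustrative only, using the file names from the listing later in this post; your file names, namespace, and generated Service name may differ.

```shell
# 1. Deploy the CiliumL2AnnouncementPolicy (cluster-wide, no namespace)
kubectl apply -f mdhlabs-arp-l2-policy.yaml

# 2. Apply the labeled Service providing the inner LoadBalancer VIP
kubectl apply -f cd-arp-svc.yaml

# 3. Apply the labeled Gateway that terminates SSL
kubectl apply -f cd-arp-gw.prod.yaml

# 4. Work around the Cilium label-copy bug: manually label the
#    Service that Cilium auto-generates for the Gateway
kubectl -n prod label service cilium-gateway-cd-arp-gw mdhlabs-arp=enable
```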
Visualized Configuration Flow
Given the number of discrete configuration elements required for a complete service deployment using Cilium and the possibility of additional environment-specific deployments, it is useful to depict ALL of the primary configuration objects in a single view that relates the references between the elements. When something isn't working, it is likely due to one of these components being overlooked or having a typo in it.
It is also useful to show all of the YAML configuration files for the overall cluster and specific application or service. Doing so illustrates the value in adopting a naming scheme for these files that reflects their content and object type and allows them to be used as a reminder to ensure all bases have been covered. Here, the files associated with using ARP advertisements for the "cd" service are:
mdh@fedora1:~/gitwork/webservices/cdtrackerapi $ ls -l cd-arp*
-rw-r--r--. 1 mdh mdh 1514 Apr 13 22:02 cd-arp-deploy.yaml
-rw-r--r--. 1 mdh mdh  423 Apr 13 22:05 cd-arp-gw.dev.yaml
-rw-r--r--. 1 mdh mdh 1137 Apr 13 22:06 cd-arp-gw.prod.yaml
-rw-r--r--. 1 mdh mdh  371 Apr 13 22:03 cd-arp-httproute.dev.yaml
-rw-r--r--. 1 mdh mdh  361 Apr 13 22:00 cd-arp-httproute.prod.yaml
-rw-r--r--. 1 mdh mdh  469 Apr 13 21:58 cd-arp-svc.yaml
mdh@fedora1:~/gitwork/webservices/cdtrackerapi $ ls -l mdhlabs-arp*
-rw-r--r--. 1 mdh mdh 882 Apr  9 18:47 mdhlabs-arp-l2-policy.yaml
-rw-r--r--. 1 mdh mdh 382 Apr 27 19:31 mdhlabs-arp-vip-pool-dev.yaml
-rw-r--r--. 1 mdh mdh 378 Apr 27 19:32 mdhlabs-arp-vip-pool-prod.yaml
mdh@fedora1:~/gitwork/webservices/cdtrackerapi $
Creating the CiliumL2AnnouncementPolicy
The CiliumL2AnnouncementPolicy object defines the criteria Cilium uses to decide which Kubernetes nodes are given responsibility for handling ARP traffic, which physical interfaces should be used to listen for and answer ARP requests, and which types of virtual IPs should trigger this process. In this example, the nodeSelector criteria allow any node not responsible for the Kubernetes control plane to handle ARP advertisement work.
apiVersion: "cilium.io/v2alpha1"
kind: CiliumL2AnnouncementPolicy
metadata:
name: mdhlabs-layer2-policy
spec:
# 2. Select which nodes can announce these services (optional)
# Excludes control-plane nodes in this example
nodeSelector:
matchExpressions:
- key: node-role.kubernetes.io/control-plane
operator: DoesNotExist
# 3. Specify network interfaces for ARP/NDP responses (optional)
# Supports regular expressions (e.g., matching eth0, eth1)
interfaces:
- ens18
# 4. Choose which IP types to announce
externalIPs: true
loadBalancerIPs: true
This policy should be deployed on the cluster with the following command.
NOTE: These policies are global and are not restricted to any particular namespace.
[mdh@fedora1 cdtrackerapi]$ kubectl apply -f kb.mdhlabs.layer2policy.yaml
ciliuml2announcementpolicy.cilium.io/mdhlabs-layer2-policy created
[mdh@fedora1 cdtrackerapi]$ kubectl get CiliumL2AnnouncementPolicy
NAME                    AGE
mdhlabs-layer2-policy   32s
[mdh@fedora1 cdtrackerapi]$
Adding the Advertising Label to the Service (cd-arp-svc)
With the policy deployed, the required label now needs to be added to the inner Service definition providing the LoadBalancer VIP into the pods of the Deployment. The YAML is shown below; the key addition is the mdhlabs-arp: enable label.
apiVersion: v1
kind: Service
metadata:
labels:
app: cd-arp
mdhlabs-arp: enable
name: cd-arp-svc
spec:
selector:
app: cd-arp
ports:
- protocol: TCP
port: 6680
targetPort: 6680
type: LoadBalancer
# externalTrafficPolicy controls how requests from OUTSIDE the
# cluster are distributed WITHIN the cluster
# Local = traffic is processed by first node / pod that attracted the request
# but does not undergo source NAT
# Cluster (default) = requests are balanced across all nodes/pods but source IPs are NATed
externalTrafficPolicy: Cluster
# internalTrafficPolicy controls how requests originating from WITHIN the
# cluster are distributed:
# Local - requests stay within pods on same node
# Cluster (default) - requests are balanced across all nodes and pods
internalTrafficPolicy: Cluster
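Once this Service is applied, Cilium should assign it a VIP from the reserved 192.168.99.128/25 pool described earlier. A quick sanity check (the output columns are illustrative):

```shell
kubectl apply -f cd-arp-svc.yaml
kubectl get svc cd-arp-svc
# The EXTERNAL-IP column should show an address from 192.168.99.128/25;
# a value of <pending> usually means no matching CiliumLoadBalancerIPPool
# was found for this Service.
```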
Adding the Advertising Label to the Gateway (cd-arp-gw) and Service (cilium-service-cd-arp-gw)
The same label also needs to be included in the configuration of the Gateway that defines the SSL processing performed on arriving external traffic before it is distributed to the inner pod load balancer. Here is the updated Gateway YAML; again, the key addition is the mdhlabs-arp: enable label.
# cd-arp-gw.prod.yaml
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
labels:
app: cd-arp
service-name: cd-arp-svc
mdhlabs-arp: enable
  # NOTE: The mdhlabs-arp=enable label above must match the label
  # specified by a CiliumL2AnnouncementPolicy object to trigger
# advertisement via ARP. HOWEVER, this Gateway object generates
# a second Service object of type LoadBalancer that must also
# have this mdhlabs-arp=enable label to trigger the actual ARP advertisement.
#
# Cilium up to version 1.19.3 has a bug that fails to copy this
# label to that auto-generated Service which prevents the ARP advertisement
# from being generated. That label must be manually added after
# this gateway is created via this command:
#
# kubectl -n prod label service cilium-gateway-cd-arp-gw mdhlabs-arp=enable
#
# Once added, Cilium will attempt to generate the ARP for the VIP
# NOTE: This tag must be reapplied each time Cilium auto-generates
# the Service.
name: cd-arp-gw
namespace: prod
spec:
gatewayClassName: cilium
listeners:
- name: https
protocol: HTTPS
port: 443
hostname: "api.mdhlabs.com"
tls:
mode: Terminate
certificateRefs:
- kind: Secret
name: api-secrettls
After this Gateway object is applied to the cluster, Cilium will auto-generate a second Service object of type LoadBalancer to provide a virtual IP for accessing this outer SSL processing layer. HOWEVER, a bug in Cilium that still exists in version 1.19.3 as of April 30, 2026 fails to copy the labels from the Gateway YAML definition to that Service. As a result, the VIP assigned for the Gateway will NOT trigger ARP advertisements via Cilium's automated behavior because it lacks the mdhlabs-arp=enable label.
To correct this problem, the auto-generated Service object for the Gateway needs to have the mdhlabs-arp: enable label manually added to it, and the label must be re-added any time Cilium regenerates that Service.
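A before-and-after check can be sketched as follows. This assumes the auto-generated Service is named cilium-gateway-cd-arp-gw in the prod namespace, per the naming convention shown in the Gateway comments above.

```shell
# Before: the auto-generated Service lacks the required label
kubectl -n prod get svc cilium-gateway-cd-arp-gw --show-labels

# Add the label manually (repeat whenever Cilium regenerates the Service)
kubectl -n prod label service cilium-gateway-cd-arp-gw mdhlabs-arp=enable

# After: the LABELS column should now include mdhlabs-arp=enable
kubectl -n prod get svc cilium-gateway-cd-arp-gw --show-labels
```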
Troubleshooting Layer 2 ARP Advertisements
The following checklist covers some of the more common problems likely to occur when first implementing ARP advertisements for external access to services.
- Cilium MUST be installed with the devices option to specify which interfaces will be used to originate ARP messages to attract layer 2 traffic destined for a VIP. If a node is not showing an active lease for a VIP that has been assigned, the VIP will work from that host but will NOT be reachable from other hosts on the same subnet.
- Gateway objects must be configured with an annotation specifying devices that match the devices included with the Cilium install command.
- In order for a node to generate an ARP message advertising the IP of a Gateway or Service, it has to acquire a "lease" on that VIP, and the service must reference a device that is enabled for use in broadcasting ARP messages to the rest of the world.
- Cilium developers have zero plans to support the ability to PING a virtual IP of any kind, making it harder to identify which layer of configuration is preventing actual traffic from reaching a service from external sources.
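Since the VIP cannot be pinged, the ARP advertisement itself is the thing to verify. From a machine outside the cluster on the same subnet, standard Linux tools can confirm whether a node is answering for the VIP; the interface name and VIP below are examples, not values from this cluster.

```shell
# Ask who holds the VIP; the replying MAC address should belong to the
# node currently holding the Cilium L2 announcement lease for the service
arping -I ens18 -c 3 192.168.99.130

# Inspect the local neighbor cache for the VIP entry after sending traffic
ip neigh show | grep 192.168.99.130
```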
All Layer2 related settings within Cilium can be summarized with the following command:
[mdh@fedora1 cdtrackerapi]$ cilium config view | grep l2
enable-l2-announcements            true
enable-l2-neigh-discovery          false
l2-announcements-lease-duration    10s
l2-announcements-renew-deadline    5s
l2-announcements-retry-period      1s
[mdh@fedora1 cdtrackerapi]$
Information about leases and the associated load balancer services that control them can be verified like this:
[mdh@fedora1 cdtrackerapi]$ kubectl -n kube-system get lease
NAME                                          HOLDER                                                                      AGE
apiserver-issnhmndiijorifh2ntjne5c2q          apiserver-issnhmndiijorifh2ntjne5c2q_258531d0-1190-4dfd-91c9-51a5ca3928b4   122m
cilium-l2announce-prod-cdtrackerapi-service   kube2                                                                       14h
cilium-operator-resource-lock                 kube2-86p7gzk7wb                                                            21d
kube-controller-manager                       kube1_201a8e2a-5280-4fa5-8952-cd81cc2074f3                                  21d
kube-scheduler                                kube1_90c94d72-92a4-46e6-a052-457259e40caa                                  21d
[mdh@fedora1 cdtrackerapi]$ kubectl -n kube-system describe lease cilium-l2announce-prod-cdtrackerapi-service
Name:         cilium-l2announce-prod-cdtrackerapi-service
Namespace:    kube-system
Labels:       <none>
Annotations:  <none>
API Version:  coordination.k8s.io/v1
Kind:         Lease
Metadata:
  Creation Timestamp:  2026-04-03T02:45:24Z
  Resource Version:    6676812
  UID:                 4e0ff733-9278-453e-bcc8-09b3660eab6a
Spec:
  Acquire Time:            2026-04-03T14:46:06.313167Z
  Holder Identity:         kube2
  Lease Duration Seconds:  15
  Lease Transitions:       1
  Renew Time:              2026-04-03T16:48:51.528994Z
Events:
[mdh@fedora1 cdtrackerapi]$
More information on using Cilium within Kubernetes is provided in other posts in this series:
Cilium and Kubernetes - Caveats / Concepts
Cilium and Kubernetes - Installing Cilium Within Kubernetes
Cilium and Kubernetes - Configuring SSL and Load Balancing
Cilium and Kubernetes - Externally Accessing Services via ARP
Cilium and Kubernetes - Externally Accessing Services via BGP
