Monday, April 13, 2026
Monday, April 6, 2026
LLMs can typically be used in three ways.
AI agents are semi-autonomous systems that interact with their environment, make decisions, and perform tasks on behalf of users.
Saturday, April 4, 2026
K8S is a dynamic network. Pods are ephemeral; their IPs change on every restart.
Containers within a pod share a single network namespace.
K8S networking model:
1) Every Pod receives a unique, cluster-wide IP address.
2) All pods on the same node can communicate directly without NAT.
3) All pods on different nodes can communicate directly without NAT.
4) The IP a Pod sees for itself is the same IP other pods use to reach it [flat network].
Kubernetes specifies what is required; CNI plugins decide how to implement it.
Communication patterns in K8S:
Container to Container - within the same pod via loopback [127.0.0.1]
Pod to Pod - direct IP communication across nodes without address translation
Pod to Service - kube-proxy intercepts traffic and load-balances it to healthy endpoints
External to Service - exposed via NodePort, a LoadBalancer-type Service, or an Ingress controller
Node to Pod - kubelet and monitoring agents
Kube-proxy runs on every node as a DaemonSet (a node component, not part of the control plane). It watches the API server for changes to Services and their endpoints; the endpoint controller creates Endpoints objects when a Service's selector matches pods. In iptables mode, kube-proxy maintains chains of iptables rules for local forwarding and routing. In IPVS mode it uses the kernel-level IP Virtual Server, a virtual load balancer that can handle thousands of Services and route requests concurrently.
Pod-to-Service traffic is handled by kube-proxy; pod-to-pod communication is handled by the CNI plugin.
The infra [pause] container creates and owns the network namespace for the pod. All application containers in the Pod join the infra container's namespace at startup.
Virtual Ethernet [veth] pair: two virtual NICs connecting the Pod and Node sides. One end lives inside the Pod's network namespace [eth0]; the other end attaches to the Node, e.g. a Linux bridge [cbr0].
Traffic flow: Pod [eth0] -> veth pair -> host bridge -> node routing table -> destination
Node-to-node communication uses either an overlay or an underlay approach.
The overlay approach encapsulates traffic at the source node and decapsulates it at the destination node.
The underlay approach routes pod traffic directly over the physical network.
Modern CNIs like Calico and Cilium support both approaches.
Overlay (VXLAN/Geneve) - universally compatible and cloud-friendly. Encapsulation adds up to 50 bytes of overhead per packet, which is why the pod MTU is often set to 1450 on a 1500-byte network.
Underlay - requires the physical network to accept and route pod prefixes, typically via BGP.
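As a hedged sketch of how a CNI picks between these approaches: flannel selects its backend in net-conf.json (delivered via the kube-flannel-cfg ConfigMap). The subnet below is flannel's documented default; treat the exact values as illustrative.

```json
{
  "Network": "10.244.0.0/16",
  "Backend": {
    "Type": "vxlan"
  }
}
```

"vxlan" selects the overlay backend; "host-gw" is a simple underlay alternative that programs direct routes on each node instead of encapsulating packets.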
Analyzing packet flow under the flannel CNI:
controlplane:~$ kubectl get pods -n kube-flannel -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-flannel-ds-5sv5v 1/1 Running 0 15m controlplane <none> <none>
kube-flannel-ds-n7dxx 1/1 Running 0 15m node01 <none> <none>
controlplane:~$
node01:~$ tcpdump -i flannel.1 -n 'tcp' -vvv
tcpdump: listening on flannel.1, link-type EN10MB (Ethernet), snapshot length 262144 bytes
^C
0 packets captured
0 packets received by filter
0 packets dropped by kernel
node01:~$
Service:
Because pod IPs change constantly, addressing pods directly is difficult. A Service fixes this by providing a stable virtual IP (ClusterIP). It acts as a load balancer across the backing pods and enables loose coupling within an application.
Service components:
Cluster IP: Virtual IP assigned by K8S
Port: The port the service listens on
TargetPort: The port on the container that the service forwards traffic to
Endpoints: The actual pod IPs and ports maintained by the endpoint controller
Metadata: Name, namespace, and labels for identification and discovery
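The components above map directly onto a Service manifest. A minimal sketch (the name `web` and the port numbers are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web            # metadata: name, namespace, labels
spec:
  selector:
    app: web           # endpoint controller tracks pods matching this label
  ports:
    - port: 80         # port the Service listens on (at the ClusterIP)
      targetPort: 8080 # container port traffic is forwarded to
```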
Ingress:
* ingress-nginx - the most widely adopted Ingress controller in production, maintained by the community and NGINX.
TLS block structure: specify hosts, secretName, and optional paths.
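A minimal sketch of the TLS block in an Ingress manifest (hostnames, the secret name, and the backend service are illustrative; the secret must be a kubernetes.io/tls Secret holding the cert and key):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-tls
spec:
  tls:
    - hosts:
        - example.com        # hosts covered by the certificate
      secretName: web-cert   # kubernetes.io/tls Secret with cert and key
  rules:
    - host: example.com
      http:
        paths:
          - path: /          # optional path-based routing
            pathType: Prefix
            backend:
              service:
                name: web
                port:
                  number: 80
```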
K8S Ingress is limited by design. The Gateway API overcomes those limits. It has three layers: 1) GatewayClass (controller implementation), 2) Gateway (listener and security config), 3) HTTPRoute (routing rules).
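The three layers can be sketched as manifests. The class name, route name, and backend below are illustrative, not from any specific controller:

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: web-gateway
spec:
  gatewayClassName: example-class   # GatewayClass: controller implementation
  listeners:                        # Gateway: listener and security config
    - name: http
      protocol: HTTP
      port: 80
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute                     # HTTPRoute: routing rules
metadata:
  name: web-route
spec:
  parentRefs:
    - name: web-gateway             # attaches the route to the Gateway
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /
      backendRefs:
        - name: web
          port: 80
```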
NetworkPolicy troubleshooting:
- Connection timeout / refused: check whether a policy is selecting the pod, and list the policies in the namespace.
- List policies: kubectl get networkpolicy -n namespace | grep pod-label
- Describe a policy: kubectl describe networkpolicy name -n namespace
- Check pod labels: kubectl get pods -n namespace --show-labels
- Test connectivity: kubectl exec source-pod -- curl destination-pod:port -v
- Check for DNS issues: if the application cannot resolve hostnames, the policy may be missing an egress rule for DNS.
- CNI logs: kubectl logs -n kube-system calico-node/cilium-agent | grep DENIED
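For the DNS case above, a hedged sketch of an egress rule that allows all pods in a namespace to reach cluster DNS in kube-system (label selectors vary by cluster; these follow common defaults):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns-egress
spec:
  podSelector: {}          # applies to all pods in the namespace
  policyTypes:
    - Egress
  egress:
    - to:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: kube-system
      ports:
        - protocol: UDP    # DNS uses UDP 53
          port: 53
        - protocol: TCP    # and TCP 53 for large responses
          port: 53
```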