Portal:Toolforge/Admin/Kubernetes/Pod tracing

This page is currently a draft.
Material may not yet be complete, information may presently be omitted, and certain parts of the content may be subject to radical, rapid alteration. More information pertaining to this may be available on the talk page.

This article describes some procedures for tracing a pod when it is misbehaving. A typical case is an API being hammered by an unknown tool in Toolforge, and the need to shut down the corresponding pod.

In all cases, after you have identified the offending tool/pod, you can disable it as described at Help:Toolforge/Kubernetes#Monitoring_your_job (i.e., become the tool on a bastion, and then delete the deployment).
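A minimal sketch of that procedure on a Toolforge bastion, assuming the offending tool is named some-tool (a placeholder) and runs as a Kubernetes deployment:

  become some-tool                  # become the tool account on the bastion
  kubectl get deployments           # find the name of the offending deployment
  kubectl delete deployment <name>  # delete it so its pods are not recreated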

Case 1: Unknown tool hammering an API

This case has happened before; see T204267 for an example.

In this example, the Wikidata API was being hammered by an IP belonging to a k8s worker node in Toolforge. The tool was using a non-meaningful User-Agent, so there was no quick way to identify which tool was causing it.

First, get an overview of the pods running on the offending k8s node (you should know which k8s worker node it is because it shows up in the API server logs):
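A minimal sketch of the listing, assuming you have cluster-admin access from the control plane or a bastion; tools-k8s-worker-XX is a placeholder for the worker node seen in the logs:

  # list every pod scheduled on the suspect worker, including its internal IP (-o wide)
  kubectl get pods --all-namespaces -o wide --field-selector spec.nodeName=tools-k8s-worker-XX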

Try to see at first glance whether one tool is the obvious suspect for the high traffic. If not, run tcpdump on the k8s worker node and look for a traffic pattern that lets you match the traffic to a given internal k8s IP address (i.e., 192.168.x.x). You can also inspect other resources, such as conntrack -L, which lists the current NAT connections, and iptables-save, whose rule comments contain the mapping between internal k8s IP addresses and tool names.
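A minimal sketch of those inspections, run as root on the worker node; the tcp port 443 filter and the 192.168.x.x address are placeholders to adjust to the traffic you actually observe:

  # capture pod traffic towards HTTPS endpoints and look for patterns; internal pod IPs are shown
  tcpdump -n -i any 'net 192.168.0.0/16 and tcp port 443'

  # list current NAT connections involving the suspect pod IP
  conntrack -L | grep 192.168.x.x

  # find the tool name: the NAT rule comments map internal pod IPs to tools
  iptables-save | grep 192.168.x.x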