I’ve recently started working quite extensively with Kubernetes on Google Container Engine. One thing that bugged me was that I couldn’t get logs from multiple pods simultaneously using the Google Cloud SDK (`kubectl`). This is something you often want to do, at least if you run multiple instances of a pod behind a replication controller. It’s especially true when experimenting with Kubernetes and you don’t have Kibana or a similar setup (with Kubernetes 1.0 it’s not easy to get logs from multiple pods using the Google Cloud Logging interface either, although this should improve in version 1.1, see here). So before the tooling catches up I decided to write a little bash script that lets you tail multiple pods simultaneously in an easy manner. The script is called `kubetail` and is available on GitHub.
Usage
First find the names of all your pods:
$ kubectl get pods
This will return a list looking something like this:
NAME               READY     STATUS    RESTARTS   AGE
app1-v1-aba8y      1/1       Running   0          1d
app1-v1-gc4st      1/1       Running   0          1d
app1-v1-m8acl      1/1       Running   0          6d
app1-v1-s20d0      1/1       Running   0          1d
app2-v31-9pbpn     1/1       Running   0          1d
app2-v31-q74wg     1/1       Running   0          1d
my-demo-v5-0fa8o   1/1       Running   0          3h
my-demo-v5-yhren   1/1       Running   0          2h
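If you just want to see which pods match a given name before tailing them, you can of course filter this listing yourself with a plain grep (the output below is simply the matching subset of the list above):

$ kubectl get pods | grep app2
app2-v31-9pbpn     1/1       Running   0          1d
app2-v31-q74wg     1/1       Running   0          1d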
To tail the logs of the two “app2” pods in one go, simply do:
$ kubetail app2
You’ll now get the logs streamed for all pods whose names contain “app2”.
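Under the hood the idea is very simple. Roughly speaking it boils down to listing the pods that match the pattern, starting a `kubectl logs -f` for each of them in the background, prefixing every line with the pod name and then waiting. The following is just a stripped-down sketch of that approach, not the actual kubetail script (which handles more details, such as argument parsing):

#!/bin/bash
# Minimal sketch of the idea behind kubetail (not the real script):
# find pods whose names match the pattern, follow the logs of each
# one in the background and prefix every line with the pod name.
pattern="$1"

pods=$(kubectl get pods | awk -v p="$pattern" 'NR > 1 && $1 ~ p { print $1 }')

for pod in $pods; do
  kubectl logs -f "$pod" | sed "s/^/[$pod] /" &
done

wait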
Known issues
When you press “ctrl+c” to end the log session you may end up with errors like this:
error: write /dev/stdout: broken pipe
I’m not quite sure why this happens; pull requests are welcome 🙂
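One idea for working around it (untested on my part, and not something the script currently does) is to silence the stderr of the background `kubectl` processes and kill them explicitly when the script exits, for example with a trap:

# Hypothetical mitigation, not part of kubetail itself:
# kill the background kubectl processes on exit/ctrl+c and hide
# their stderr so the broken pipe messages don't show up.
trap 'kill $(jobs -p) 2>/dev/null' EXIT INT

for pod in $pods; do
  kubectl logs -f "$pod" 2>/dev/null | sed "s/^/[$pod] /" &
done

wait

The obvious downside is that this also hides legitimate kubectl errors, so it’s more of a band-aid than a real fix.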