
ktunnel does not reconnect to cluster after interrupted internet connection #114

Open
advancingu opened this issue Jul 5, 2023 · 3 comments

@advancingu

I would like to run ktunnel for an extended period of time during which the active internet connection may fail and be switched over to a secondary line by the network router. This failover causes TCP connections to be terminated.

Is it possible to update the ktunnel client so that connections to the cluster that have terminated are automatically re-established?

If not, could the client executable terminate fully upon connection loss so that the underlying Docker container terminates and can then be automatically restarted? Right now it appears that the client only prints out the notice below but does not terminate after connection loss.

lost connection to pod
closing listener on 55232                     error="context canceled"
closing listener on 8080                      error="context canceled"
@it3xl

it3xl commented Jul 20, 2023

Many things break ktunnel's connection in this way, and it is a pain. Every failure forces manual work to restore the connection.

For example:

  • closing and reopening a laptop lid;
  • OS sleep;
  • any prolonged network interruption.

We also can't use the workaround we use for SSH: when an SSH (or kubectl port-forward) process loses its connection, it simply exits, so a while loop can relaunch it automatically.
ktunnel never stops by itself in this situation.
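The relaunch loop mentioned above can be sketched roughly like this. The `forward` function is only a stand-in for the real `ssh -R ...` or `kubectl port-forward ...` invocation, and the attempt cap is there just to keep the sketch finite; in practice it would be `while true` with a short `sleep` between attempts:

```shell
#!/bin/sh
# Sketch of the SSH-style relaunch loop; `forward` is a placeholder
# for the real `ssh -R ...` or `kubectl port-forward ...` command.
forward() {
  echo "forwarding (attempt $1)"
  return 1  # simulate connection loss: the process exits with an error
}

attempt=1
max=3  # in real use this would be an endless loop; capped for the sketch
while [ "$attempt" -le "$max" ]; do
  forward "$attempt"
  attempt=$((attempt + 1))
done
echo "gave up after $max attempts"
```

This pattern only works because the forwarding process actually exits on failure, which is exactly what ktunnel does not do.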


A workaround is to abandon ktunnel and go back to SSH remote port forwarding (ssh -R).
But if your Docker images don't have SSH installed and configured, you have to write a script that installs SSH after every pod creation or recreation, and that doesn't work well if your pods are restarted often.

Worse, if ktunnel is stopped (Ctrl+C) while the network connection is down, it deadlocks itself on the next launch, hanging on the k8s service and deployment that ktunnel created during the previous run.
This forces me to always start ktunnel like this:
apoint=my-service; kubectl -n $ns delete deployment $apoint; kubectl -n $ns delete service $apoint; ktunnel -n $ns expose $apoint 55555:5443

@it3xl

it3xl commented Jul 24, 2023

This is another non-self-healing condition: the pod comes back after some time, but ktunnel never does.
Relaunching ktunnel is yet another manual operation.

@advancingu
Author

FYI, in case others run into this as well: I worked around it with my own launcher shell script, which I bake into my own Docker image. It kills the container whenever specific strings appear in the console output.

run-ktunnel.sh

#!/bin/sh

/ktunnel expose -n remote-access --force --reuse myname "$@" 2>&1 | \
while read -r line
do
  echo "ktunnel: $line"
  if echo "$line" | grep -Eq "lost connection|error upgrading connection"; then
    echo "Exiting"
    # Kills all container processes, which causes the container to exit;
    # this is a workaround because killing PID 1 is too hard :p
    ps x | awk 'NR > 1 {print $1}' | xargs kill
    exit
  fi
done
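For reference, the `ps | awk | xargs` chain above just drops the header row of `ps` output and collects the PID column. A self-contained illustration against canned input (the process table below is made up):

```shell
#!/bin/sh
# Illustration of the PID-extraction pipeline from run-ktunnel.sh,
# run against canned `ps x` output instead of a live process table.
ps_output='  PID TTY      STAT   TIME COMMAND
    1 ?        Ss     0:00 /bin/sh
   42 ?        Sl     0:01 /ktunnel'

# NR > 1 skips the header line; $1 is the PID column.
pids=$(printf '%s\n' "$ps_output" | awk 'NR > 1 {print $1}')
echo "$pids"   # in the real script these are piped to `xargs kill`
```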

Dockerfile

FROM omrieival/ktunnel:v1.5.3 AS ktunnel

FROM alpine:3.18

COPY --from=ktunnel /ktunnel /
COPY run-ktunnel.sh /
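For the restart-on-exit idea to work end to end, the container needs a restart policy, and the Dockerfile as shown would still need the script made executable and set as the entrypoint (e.g. `RUN chmod +x /run-ktunnel.sh` and `ENTRYPOINT ["/run-ktunnel.sh"]`). A rough usage sketch, with the image name as an assumption:

```shell
# Assumed image name; --restart=always relaunches the container
# whenever run-ktunnel.sh kills its processes after a failure message.
docker build -t my-ktunnel .
docker run -d --restart=always my-ktunnel
```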
