LEAP Docs

Accessing your cluster

Using access_cluster.sh

This method lets you access your remote clusters from your local machine. To access your backend cluster, change into clusters/backend (for the gateway cluster, into clusters/gateway) and run:

eval $(./scripts/access_cluster.sh --start)

Calling the script as described pulls the k3s.yml from your controller node, adapts it for use on localhost, starts an SSH tunnel in a background process that forwards port 6443 to your controller node, and exports the KUBECONFIG environment variable into your current shell.
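The "adapt it for use on localhost" step amounts to rewriting the server address in the pulled kubeconfig so that it points at the local end of the SSH tunnel. A minimal sketch of that rewrite, assuming the script simply substitutes the controller's address with 127.0.0.1 (the sample line and address below are illustrative, not taken from a real cluster):

```shell
# Illustrative server line as it might appear in the kubeconfig pulled
# from the controller node (the address is made up):
sample='    server: https://10.0.0.5:6443'

# Point the server entry at the local end of the port-forwarding tunnel:
echo "$sample" | sed 's#https://[^:]*:6443#https://127.0.0.1:6443#'
# → "    server: https://127.0.0.1:6443"
```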

You can also use the script to access a cluster that you have SSH access to but have not provisioned yourself. If there is no terraform.tfstate file in the sibling directory, you will be prompted for the public IP address of your controller node.

If you have provisioned your cluster with an SSH key other than your default one, you can append --ssh-key <path/to/your/ssh-key> to the command:

eval $(./scripts/access_cluster.sh --start --ssh-key <path/to/your/ssh-key>)

Test your kubectl access with the following command (if you haven’t installed kubectl on your local machine yet, follow this guide: https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/). It should return a table listing your deployed nodes.

kubectl get nodes -o wide

If you want to run kubectl in a shell other than the one you used to start port forwarding, just export the KUBECONFIG variable there as well:

export KUBECONFIG=<path-to-your-work-dir>/k3s-local.yaml
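For example, assuming your working directory is ~/leap/clusters/backend (a hypothetical path chosen for illustration; use the directory you ran the script from):

```shell
# Hypothetical working directory; replace with your own.
export KUBECONFIG="$HOME/leap/clusters/backend/k3s-local.yaml"

# Confirm the variable is set before running kubectl:
echo "$KUBECONFIG"
```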

Stopping port forwarding

Once you’re done with your work, you should close your SSH session to your controller node and disable port forwarding again. This can be done by running:

./scripts/portforwarding.sh --stop
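If you want to verify that the background tunnel is really gone, you can check for a leftover ssh process that is still forwarding port 6443. This is a sketch under the assumption that the tunnel was started as a plain `ssh -L` process; the bracket in the pattern is a common trick to keep pgrep from matching its own command line:

```shell
# Prints "no tunnel running" when no forwarding process is left over.
pgrep -f 'ssh.*-L.*644[3]' >/dev/null && echo "tunnel still running" || echo "no tunnel running"
```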