In this guide, we'll walk you through how to use the Kore CLI to set up a sandbox team environment locally and deploy a sample application.
We'll showcase how Kore can give you a head start with setting up clusters, team members and environments.
The installation of Kore created by kore alpha local in this quick start is suitable for testing and proof-of-concept work. Bootstrapping a production installation of Kore is discussed in the install guide.
Please ensure you have the following installed on your machine:
- Docker: see Docker installation instructions
- Kubectl: see Kubectl installation instructions
- Kore CLI: see Getting the Kore CLI
Running the command below should provision a local Kubernetes installation and deploy the official Helm release:
```
$ kore local up
✅ Performing preflight checks for installation
 ◉ Checking for kubectl binary requirement
✅ Passed preflight checks for kore installation
 ◉ Single-sign on is disabled, using kore managed users
 ◉ Local admin not set, generating admin user password
✅ Persisting the values to local file: "/home/jest/.kore/values.yaml"
✅ Performing preflight checks for local cluster provider
 ◉ Kind binary requirement found in $PATH
 ◉ Docker binary requirement found in $PATH
✅ Passed preflight checks for cluster provider
✅ Attempting to build the local kubernetes cluster
 ◉ Checking if kind cluster: "kore" already exists
 ◉ Using Kind image: "kindest/node:v1.16.9"
 ◉ Provisioning a kind cluster: "kore" (usually takes 1-3mins)
 ◉ Still building the kind cluster "kore": 20s
 ◉ Still building the kind cluster "kore": 40s
 ◉ Built local kind cluster in 61s
 ◉ Exporting kubeconfig from kind cluster: "kore"
✅ Exported the kubeconfig from provisioned cluster
✅ Provisioned a local kubernetes cluster
✅ Switched the kubectl context: "kind-kore"
✅ Attempting to deploy the Kore release
 ◉ Using the official Helm chart for deployment
 ◉ Kore Helm chart has been installed at /home/jest/.kore/charts
 ◉ Waiting for kubernetes controlplane to become available
 ◉ Creating the kore namespace in cluster
 ◉ Deploying the kore installation to cluster
✅ Deployed the Kore release into the cluster
✅ Waiting for deployment to rollout successfully (5m0s timeout)
 ◉ Deployed Kore installation to cluster in 104s
✅ Successfully deployed the kore release to cluster

Access the Kore portal via http://localhost:3000 [ admin | VssJHJVQ ]
Configure your CLI via $ kore login -a http://localhost:10080
```
You can now view the UI at http://localhost:3000 using the credentials output above, or use the CLI as described below.
You now need to log in before you can create teams and provision environments.
As you're the only user, you'll be assigned Admin privileges.
```
$ kore login
✔ Please enter the Kore API (e.g https://api.example.com) : http://localhost:10080
Please enter your username : admin
Please confirm password for : ********

$ kore whoami
```
You can also enable single-sign-on for the UI and all clusters; see how to configure Auth0 as an IDP provider. To enable this feature on the local demo, run kore local up --enable-sso, which will prompt for your OpenID settings (you can do this at any point).
Let's create a team with the CLI. In local mode, you'll be assigned as a member of this team.
As a team member, you'll be able to provision environments on behalf of the team.
```
$ kore create team --description 'The Appvia product team, working on project Q.' team-appvia
# "team-appvia" team was successfully created
```
To confirm the team was created:

```
$ kore get teams team-appvia
# Name          Description
# team-appvia   The Appvia product team, working on project Q.
```
We now need to tell Kore about the GCP project in which it should build clusters on our behalf. The commands below import a set of credentials into Kore and allow your new team to use them to create clusters.
We'll then use these to create a cluster to host our sandbox environment. See the guide to configuring a token for GCP - we'll need the service account JSON key, which must have owner privileges in the project.
```
$ kore create cloudcredential gcp-admin-service-account -c gcp --secret-files service_account_key=/path/to/kore-gcp-admin.json
# Loading value for field service_account_key from file /path/to/kore-gcp-admin.json
# Waiting for resource "cloudaccess.kore.appvia.io/v1alpha1/cloudcredentials/gcp-admin-service-account" to provision (background with ctrl-c)
# Successfully provisioned the resource: "gcp-admin-service-account"
```
Next, create the cloud account so Kore understands the GCP project to use:
```
$ kore create cloudaccount gcp-shared -c gcp --type shared -i project-id --default-region europe-west2 --cred gcp-admin-service-account --allocate team-appvia
# Waiting for resource "cloudaccess.kore.appvia.io/v1alpha1/cloudaccount/gcp-shared" to provision (background with ctrl-c)
# Successfully provisioned the resource: "cloudaccount-gcp-gcp-shared"
```
It's time to use the Kore CLI to provision our sandbox environment:
```
$ kore create cluster appvia-trial -t team-appvia --plan gke-development -a cloudaccount-gcp-gcp-shared --namespaces sandbox
# Attempting to create cluster: "appvia-trial", plan: gke-development
# Waiting for "appvia-trial" to provision (usually takes around 5 minutes, ctrl-c to background)
# Cluster appvia-trial has been successfully provisioned
# --> Attempting to create namespace: sandbox
# Next, update your kubeconfig via: $ kore kubeconfig -t team-appvia
# Then use 'kubectl' to interact with your team's cluster
```
There's a lot to unpack here, so let's walk through it:
- create cluster: creates a cluster to host our sandbox environment.
- appvia-trial: the name of the cluster.
- -t team-appvia: the team for which we are creating the sandbox environment.
- --plan gke-development: uses the Kore predefined plan gke-development, which creates a cluster suited to non-production use.
- -a cloudaccount-gcp-gcp-shared: the allocated cloud account to use for creating this cluster.
- --namespaces sandbox: creates a namespace called sandbox where we can deploy our apps, servers, etc.
You now have a sandbox environment locally provisioned for your team. 🎉
We'll be using kubectl, the Kubernetes CLI, to make the deployment. If you don't have it already, please install and set up kubectl.
Now we have to configure our kubeconfig in ~/.kube/config with our new GKE cluster:
```
$ kore kubeconfig -t team-appvia
# Successfully added team [team-appvia] provisioned clusters to your kubeconfig
# Context        Cluster
# appvia-trial   appvia-trial
```
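Under the hood, this merges entries into ~/.kube/config following the standard kubeconfig schema. A rough sketch of the kind of fragment that ends up there (the server endpoint and user name below are illustrative placeholders, not actual Kore output):

```yaml
# Illustrative kubeconfig fragment - field names follow the standard
# kubeconfig schema; the endpoint and user values are placeholders.
apiVersion: v1
kind: Config
clusters:
- name: appvia-trial
  cluster:
    server: https://<auth-proxy-endpoint>   # cluster access goes via Kore's auth proxy
contexts:
- name: appvia-trial
  context:
    cluster: appvia-trial
    user: <your-kore-user>
current-context: appvia-trial
```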
Switch the current kubectl context to the newly added appvia-trial context:

```
kubectl config use-context appvia-trial --namespace=sandbox
# + kubectl config use-context appvia-trial --namespace=sandbox
# Switched to context "appvia-trial".
```
Deploy the GKE example web application container, available from the Google Container Registry:
```
kubectl create deployment hello-server --image=gcr.io/google-samples/hello-app:1.0
# + kubectl create deployment hello-server --image=gcr.io/google-samples/hello-app:1.0
# deployment.apps/hello-server created

kubectl expose deployment hello-server --type LoadBalancer --port 80 --target-port 8080
# + kubectl expose deployment hello-server --type LoadBalancer --port 80 --target-port 8080
# service/hello-server exposed
```
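As an aside, the two imperative commands above correspond roughly to the following declarative manifest, which could be saved to a file and applied with kubectl apply -f instead (a sketch; the labels are our own choice):

```yaml
# Sketch of the Deployment and Service the imperative commands create.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-server
  template:
    metadata:
      labels:
        app: hello-server
    spec:
      containers:
      - name: hello-app
        image: gcr.io/google-samples/hello-app:1.0
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: hello-server
spec:
  type: LoadBalancer
  selector:
    app: hello-server
  ports:
  - port: 80            # external port on the load balancer
    targetPort: 8080    # port the container listens on
```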
Wait for the service to be assigned an external IP:

```
kubectl get service hello-server
# + kubectl get services
# NAME           TYPE           CLUSTER-IP     EXTERNAL-IP       PORT(S)        AGE
# hello-server   LoadBalancer   10.70.10.119   <18.104.22.168>   80:31319/TCP   23s
```
Now navigate to the EXTERNAL-IP in your browser. You should see the following on the page:

```
Hello, world!
Version: 1.0.0
Hostname: hello-server-7f8fd4d44b-hpxls
```
To avoid incurring charges to your Google Cloud account for the resources used in this quickstart, follow these steps. First, delete the service; this also releases the cloud load balancer that was provisioned for it:

```
kubectl delete service hello-server
```
You can now use kore to destroy the cluster:
```
$ kore delete --team team-appvia cluster appvia-trial
# "appvia-trial" was successfully deleted
```
You can check for the cluster deletion completing by retrieving the cluster:
```
$ kore get cluster appvia-trial --team team-appvia
# Name           Kind   API Endpoint             Auth Proxy Endpoint   Status
# appvia-trial   GKE    https://22.214.171.124   126.96.36.199         Deleting
```
Once the deletion is complete, the cluster will disappear from Kore:
```
$ kore get cluster appvia-trial --team team-appvia
# Error: "appvia-trial" does not exist
```
Finally, once the cluster has been deleted, you can stop your local Kore environment:
```
$ kore alpha local destroy
✅ Performing preflight checks
 ◉ Kind binary requirement found in $PATH
 ◉ Docker binary requirement found in $PATH
✅ Passed preflight checks for local cluster provider
✅ Attempting to delete the local kubernetes cluster
 ◉ Checking if kind cluster: "kore" already exists
✅ Removed the local kubernetes cluster
```
If you don't have a Google Cloud account, grab a credit card and go to https://cloud.google.com/. Then, click the “Get started for free” button. Finally, choose whether you want a business account or an individual one.
Next step: On GCP, select an existing project or create a new one.
(You can skip this step if the GKE API is already enabled for this project.)
With a GCP Project selected or created,
- Head to the Google Developer Console.
- Enable the 'Kubernetes Engine API'.
- Enable the 'Cloud Resource Manager API'.
- Enable the 'Compute Engine API'.
- Enable the 'IAM Service Account Credentials API'.
Alternatively you can enable these from the gcloud command line:
```
# Setup if required
gcloud auth login                        # assuming you've not authenticated
gcloud config set project <project_id>

# Enable the APIs
gcloud services enable cloudresourcemanager.googleapis.com
gcloud services enable iam.googleapis.com
gcloud services enable compute.googleapis.com
gcloud services enable container.googleapis.com
```
(You can skip this step if you already have a Service Account setup)
With a GCP Project selected or created,
- Head to the IAM Console.
- Click 'Create service account'.
- Fill in the form with your team's service account details.
(You can skip this step if your Service Account already has the Owner role.)
- Assign the Owner role to your Service Account.
(You can skip this step if you already have your Service Account key downloaded in JSON format)
Kore will use this key to access the Service Account.
This is the last step: create a key and download it in JSON format.