Version: 0.10

Kore Architecture

This topic provides a high-level overview of Kore Operate's architecture.

Kore components#

Kore Operate is hosted by the customer, and can run on Kubernetes in AWS, GCP, and Azure public clouds. At a high level, Kore Operate consists of three elements:

  • API server

    The API server sits in front of Kubernetes and handles API access to Kore Operate functionality, as well as single sign-on (SSO).

  • Kubernetes

    Kore is deployed into and runs on a Kubernetes cluster, and extends the Kubernetes API with Custom Resource Definitions (CRDs). Kore makes use of several open source Kubernetes projects, including controller-runtime, cert-manager, flux, ingress-nginx, external-dns, kaniko, gatekeeper, fluentd, and elgo-oidc. A sketch of reading a CRD-backed Kore resource follows this list.

  • MySQL or Postgres database

    The database stores information about users and teams, security events, and audit and cost information.
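
To make the CRD point above concrete, the sketch below reads a CRD-backed Kore resource with client-go's dynamic client. It is a minimal illustration only: the API group, resource name, and namespace are assumptions, not the documented Kore schema.

```go
// List CRD-backed Kore resources with the Kubernetes dynamic client.
// The group/version/resource and namespace below are illustrative assumptions.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client config from the local kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}

	// The dynamic client can read any CRD-backed resource without generated types.
	dyn, err := dynamic.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Hypothetical Kore "teams" resource registered by a CRD.
	teams := schema.GroupVersionResource{
		Group:    "kore.appvia.io", // assumed API group
		Version:  "v1",
		Resource: "teams",
	}

	list, err := dyn.Resource(teams).Namespace("kore").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, item := range list.Items {
		fmt.Println(item.GetName())
	}
}
```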

The following diagram shows the high-level Kore stack.

[Diagram: high-level Kore stack]

Once configured, the Kore platform handles:

  • Cloud account management
  • Kubernetes cluster creation
  • Cluster security and resilience
  • Container builds
  • DNS and HTTPS
  • Cost tracking

Kore automation allows a single admin to support many teams. Behind the scenes, the platform:

  • Creates isolated cloud accounts following least privilege best practices
  • Sets up role-based access control (RBAC) for both Kore itself and the Kubernetes clusters created using Kore (see the sketch after this list)
  • Turns off insecure options when creating Kubernetes clusters, adds network and pod security policies, and turns on auto-scaling, using Appvia's best practices. This ensures that the public cloud Kubernetes services are configured correctly for enterprise security needs.
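
The RBAC point above can be pictured as namespace-scoped bindings created on the team clusters. The following is a minimal sketch using client-go; the team, namespace, and SSO group names are placeholders, and the objects Kore actually creates may differ.

```go
// Grant a (hypothetical) team group edit rights in its own namespace only,
// the kind of RBAC object this sort of automation would create.
package main

import (
	"context"

	rbacv1 "k8s.io/api/rbac/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Bind the built-in "edit" ClusterRole to an identity-provider group,
	// scoped to the team's namespace.
	binding := &rbacv1.RoleBinding{
		ObjectMeta: metav1.ObjectMeta{Name: "team-alpha-edit", Namespace: "team-alpha"},
		Subjects: []rbacv1.Subject{{
			Kind:     rbacv1.GroupKind,
			APIGroup: rbacv1.GroupName,
			Name:     "team-alpha", // SSO group name (placeholder)
		}},
		RoleRef: rbacv1.RoleRef{
			APIGroup: rbacv1.GroupName,
			Kind:     "ClusterRole",
			Name:     "edit",
		},
	}

	_, err = clientset.RbacV1().RoleBindings("team-alpha").Create(context.TODO(), binding, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
}
```

Scoping the binding to the team's namespace, rather than cluster-wide, is what keeps the setup aligned with least privilege.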

Component detail#

The following diagram shows a more detailed view of how Kore components interact with each other and with Kubernetes.

  • Kore is deployed into a Kubernetes cluster. The installation creates Kore's Management Cluster, shown on the left side of this diagram. This cluster uses a MySQL or Postgres database to store data such as Kore users, teams, and events.

  • The Management Cluster acts as the control plane, and interacts with the Kubernetes (k8s) clusters created by your development teams using Kore, shown on the right side of the diagram.

  • Kore is organized around development teams, the cloud infrastructure available to them, and the access policies and permissions they have. In addition to its other components, Kore's Management Cluster therefore has a namespace for each team created in Kore, as shown in the bottom centre of the diagram. A sketch of per-team namespace provisioning follows the diagram.

[Diagram: lower-level Kore architecture]
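
As the list above notes, the Management Cluster holds a namespace for each team. The sketch below shows how such a namespace could be provisioned with client-go; the team name and label key are placeholders, not a documented Kore convention.

```go
// Create a per-team namespace, as Kore's Management Cluster does for each team.
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Label the namespace so team resources are easy to find later.
	// The label key is illustrative only.
	ns := &corev1.Namespace{
		ObjectMeta: metav1.ObjectMeta{
			Name:   "team-alpha",
			Labels: map[string]string{"kore.example.com/team": "alpha"},
		},
	}

	_, err = clientset.CoreV1().Namespaces().Create(context.TODO(), ns, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
}
```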

Recommended cloud configuration#

You can use Kore on multiple clouds. For example, you can install Kore Operate (the Management Cluster) on one public cloud and have teams provision their Kubernetes clusters in a different public cloud.

Regardless of this choice, we recommend the following cloud configuration:

  • Use a dedicated cluster to host Kore because it creates and manages namespaces as teams are created.
  • Install Kore into a cluster that is not running other workloads.
  • Install only a single instance of Kore into a cluster.
  • On EKS, set up Kore to run using credentials managed entirely by AWS (see the sketch after this list).
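
The EKS recommendation above is commonly met with IAM Roles for Service Accounts (IRSA); the mechanism is not named here, so treat that as an assumption. The sketch below annotates a service account with an IAM role so that pods using it receive short-lived, AWS-managed credentials instead of static keys; the names and role ARN are placeholders.

```go
// Create a service account annotated for EKS IAM Roles for Service Accounts (IRSA),
// so workloads receive AWS-managed credentials rather than long-lived keys.
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	sa := &corev1.ServiceAccount{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "kore", // placeholder name
			Namespace: "kore", // placeholder namespace
			Annotations: map[string]string{
				// EKS injects temporary credentials for this IAM role into pods
				// that use the service account. The ARN is a placeholder.
				"eks.amazonaws.com/role-arn": "arn:aws:iam::123456789012:role/kore",
			},
		},
	}

	_, err = clientset.CoreV1().ServiceAccounts("kore").Create(context.TODO(), sa, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
}
```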

For more information, see the installation prerequisites.

Using Kore Operate#

Kore Operate has an API, a CLI, and a UI that serve two primary personas: the platform administrator and the developer.

The platform administrator interface#

Using the administrator interface, the Kore administrator:

  • Sets up cloud credentials and cloud account automation so that teams can have isolated development environments, following least privilege best practices for security
  • Sets up default cluster plans that comply with enterprise policy, and specifies which cluster parameters development teams are allowed to change (see the sketch after this list)
  • Makes DNS available to development team clusters so that they have default domains for their apps
  • Configures cost integrations with the cloud provider, so that estimated and actual cloud running costs can be viewed in the Kore UI
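
One way to picture the cluster plan point above is as a set of enforced defaults plus a short list of parameters teams may override. The sketch below is purely illustrative: the field names, values, and JSON shape are assumptions, not Kore's actual plan schema.

```go
// A purely illustrative model of a cluster plan: enterprise defaults plus
// the parameters a development team is allowed to override.
package main

import (
	"encoding/json"
	"fmt"
)

type ClusterPlan struct {
	Name           string                 `json:"name"`
	Description    string                 `json:"description"`
	Defaults       map[string]interface{} `json:"defaults"`       // enforced by the admin
	EditableParams []string               `json:"editableParams"` // what teams may change
}

func main() {
	plan := ClusterPlan{
		Name:        "gke-development",
		Description: "Development cluster defaults that comply with enterprise policy",
		Defaults: map[string]interface{}{
			"region":              "europe-west2",
			"privateNetwork":      true,
			"autoScaling":         true,
			"maxNodes":            6,
			"podSecurityPolicies": true,
		},
		// Teams may only tune capacity, not security settings.
		EditableParams: []string{"maxNodes"},
	}

	out, _ := json.MarshalIndent(plan, "", "  ")
	fmt.Println(string(out))
}
```

Keeping security-relevant settings out of the editable list is what lets a single admin enforce enterprise policy across many teams.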

For more information, see Get Started as a Kore Administrator.

The developer interface#

With the infrastructure and cluster plans put in place by the Kore administrator, development teams can easily provision Kubernetes clusters using a self-service model.

Using the developer interface, the development team:

  • Provisions Kubernetes clusters and namespaces, choosing from the available cloud providers and cluster plans
  • Uses default domains set up by the Kore administrator, or sets up custom domains for their workloads
  • Configures container builds, which lets their existing CI pipeline request that Kore build their software as a container image from their git repository and make it available in their cluster
  • Configures robots (service accounts) to run builds or deployments manually or using CI
  • Views actual and projected cloud running costs
  • Manages team members and roles, and views an audit log of actions taken by team members
  • Accesses the Kubernetes cluster directly with the Kubernetes CLI, kubectl, to deploy their apps to the infrastructure managed by Kore, either manually or via a Kore robot (see the sketch after this list)
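
To illustrate the direct-access point above, the sketch below creates a simple Deployment in a team namespace with client-go, the programmatic equivalent of applying a manifest with kubectl. The namespace, image, and app names are placeholders, and a robot would authenticate with its own credentials rather than a local kubeconfig.

```go
// Deploy an app into a team namespace on a Kore-managed cluster.
package main

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	replicas := int32(2)
	labels := map[string]string{"app": "hello"} // placeholder app name

	deployment := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "hello", Namespace: "team-alpha"},
		Spec: appsv1.DeploymentSpec{
			Replicas: &replicas,
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "hello",
						Image: "ghcr.io/example/hello:1.0", // placeholder image
						Ports: []corev1.ContainerPort{{ContainerPort: 8080}},
					}},
				},
			},
		},
	}

	_, err = clientset.AppsV1().Deployments("team-alpha").Create(context.TODO(), deployment, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
}
```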

For more information, see Get Started Using Kore.
