diff --git a/calico-cloud/networking/index.mdx b/calico-cloud/networking/index.mdx index faffb6f673..3f961efbaa 100644 --- a/calico-cloud/networking/index.mdx +++ b/calico-cloud/networking/index.mdx @@ -28,7 +28,6 @@ The $[prodname] network plugins provide a range of networking options to fit you - ## IP address management diff --git a/calico-enterprise/_includes/components/InstallAKS.js b/calico-enterprise/_includes/components/InstallAKS.js index e2ce3c2ee3..e7f1421f5d 100644 --- a/calico-enterprise/_includes/components/InstallAKS.js +++ b/calico-enterprise/_includes/components/InstallAKS.js @@ -259,8 +259,8 @@ spec:

The following example of a NodePort service may not be suitable for production and high availability. For options, see{' '} - - Fine-tune multi-cluster management for production + + Port and service requirements .

diff --git a/calico-enterprise/_includes/components/InstallEKS.js b/calico-enterprise/_includes/components/InstallEKS.js index 5486fe984f..2e018d6f22 100644 --- a/calico-enterprise/_includes/components/InstallEKS.js +++ b/calico-enterprise/_includes/components/InstallEKS.js @@ -330,8 +330,8 @@ spec:

The following example of a NodePort service may not be suitable for production and high availability. For options, see{' '} - - Fine-tune multi-cluster management for production + + Port and service requirements .

diff --git a/calico-enterprise/_includes/components/InstallGKE.js b/calico-enterprise/_includes/components/InstallGKE.js index 694a6c48e1..068fb8efbd 100644 --- a/calico-enterprise/_includes/components/InstallGKE.js +++ b/calico-enterprise/_includes/components/InstallGKE.js @@ -146,8 +146,8 @@ spec:

The following example of a NodePort service may not be suitable for production and high availability. For options, see{' '} - - Fine-tune multi-cluster management for production + + Port and service requirements .

diff --git a/calico-enterprise/_includes/components/InstallGeneric.js b/calico-enterprise/_includes/components/InstallGeneric.js index 5266d4ca60..389007315f 100644 --- a/calico-enterprise/_includes/components/InstallGeneric.js +++ b/calico-enterprise/_includes/components/InstallGeneric.js @@ -156,8 +156,8 @@ spec:

Create a service to expose the management cluster. The following example of a NodePort service may not be suitable for production and high availability. For options, see{' '} - - Fine-tune multi-cluster management for production + + Port and service requirements . Apply the following service manifest.

diff --git a/calico-enterprise/_includes/components/InstallOpenShift.js b/calico-enterprise/_includes/components/InstallOpenShift.js index c6920050b2..3e4b4189cd 100644 --- a/calico-enterprise/_includes/components/InstallOpenShift.js +++ b/calico-enterprise/_includes/components/InstallOpenShift.js @@ -295,8 +295,8 @@ spec:

Create a service to expose the management cluster. The following example of a NodePort service may not be suitable for production and high availability. For options, see{' '} - - Fine-tune multi-cluster management for production + + Port and service requirements . Apply the following service manifest.

diff --git a/calico-enterprise/about/index.mdx b/calico-enterprise/about/index.mdx index 294223bf1e..c732e93639 100644 --- a/calico-enterprise/about/index.mdx +++ b/calico-enterprise/about/index.mdx @@ -40,7 +40,7 @@ All of this is built on Calico Open Source, the most widely used container netwo description='Secure outbound traffic with fixed, routable IP assignment' /> @@ -127,7 +127,7 @@ All of this is built on Calico Open Source, the most widely used container netwo /> diff --git a/calico-enterprise/getting-started/install-on-clusters/kubernetes/helm.mdx b/calico-enterprise/getting-started/install-on-clusters/kubernetes/helm.mdx index 9075e005ae..6a6dda40d6 100644 --- a/calico-enterprise/getting-started/install-on-clusters/kubernetes/helm.mdx +++ b/calico-enterprise/getting-started/install-on-clusters/kubernetes/helm.mdx @@ -127,8 +127,8 @@ To install a standard $[prodname] cluster with Helm: **Multicluster Management** -- [Create a $[prodname] management cluster](../../../multicluster/set-up-multi-cluster-management/standard-install/create-a-management-cluster.mdx) -- [Create a $[prodname] managed cluster](../../../multicluster/set-up-multi-cluster-management/standard-install/create-a-managed-cluster.mdx) +- [Create a $[prodname] management cluster](../../../multicluster/how-to/create-a-management-cluster.mdx) +- [Create a $[prodname] managed cluster](../../../multicluster/how-to/create-a-managed-cluster.mdx) **Recommended** diff --git a/calico-enterprise/multicluster/explanation/architecture.mdx b/calico-enterprise/multicluster/explanation/architecture.mdx new file mode 100644 index 0000000000..4af82b4b6b --- /dev/null +++ b/calico-enterprise/multicluster/explanation/architecture.mdx @@ -0,0 +1,71 @@ +--- +description: Understand the architecture of Calico Enterprise multi-cluster management, including management and managed cluster topology, the guardian component, and communication model. 
+--- + +# Multi-cluster management architecture + +$[prodname] multi-cluster management lets you centralize control of multiple Kubernetes clusters in a single management plane. This page explains the architectural components and how they interact. + +## Management and managed cluster topology + +A multi-cluster management deployment consists of two cluster roles: + +- **Management cluster**: The central cluster that hosts the $[prodname] web console (Manager), centralized log storage (Elasticsearch), and the control plane for all connected clusters. +- **Managed cluster**: A cluster that connects to the management cluster and forwards its log data, telemetry, and resource information to the central management plane. + +A management cluster can manage many managed clusters. Each managed cluster maintains a persistent connection to the management cluster. + +### What the management cluster provides + +- Centralized web console for visibility and control across all clusters +- Centralized Elasticsearch for log storage (flow logs, audit logs, DNS logs, L7 logs, events) +- Aggregated Prometheus metrics +- Cross-cluster RBAC enforcement — users authenticate on the management cluster and their identity is passed to managed clusters for authorization + +### What managed clusters handle locally + +- Local policy enforcement and data plane operations +- Local $[prodname] components (calico-node, Typha, kube-controllers) +- Connection to the management cluster via a `ManagementClusterConnection` resource + +## Communication model + +Managed clusters connect to the management cluster over **port 9449** (TCP). The management cluster exposes this port through a Kubernetes Service in the `calico-system` namespace that targets the Manager pod using the label selector `k8s-app: calico-manager`. + +The service type can be: + +- **NodePort** — simplest for getting started; maps an external port (e.g. 30449) to 9449 on the Manager pod. 
+- **LoadBalancer** — recommended for production and high availability. + +A security rule or firewall rule is required to allow inbound connections from managed clusters to the management cluster on this port. + +## Key custom resources + +| Resource | Cluster | Purpose | +|---|---|---| +| `ManagementCluster` | Management | Declares the cluster as a management cluster and specifies the address managed clusters use to connect. | +| `ManagedCluster` | Management | Registers a managed cluster, triggers generation of an installation manifest. | +| `ManagementClusterConnection` | Managed | Connects the managed cluster to the management cluster. Applied from the manifest generated by the `ManagedCluster` resource. | + +## Guardian + +Guardian is the component on managed clusters that maintains the connection to the management cluster. It is deployed automatically when the `ManagementClusterConnection` is applied to a managed cluster. Guardian: + +- Establishes and maintains a secure tunnel to the management cluster +- Forwards log data to centralized storage +- Proxies API requests from the management cluster to local cluster resources + +## Log data flow + +All log data from managed clusters flows through the Guardian tunnel to the management cluster's Elasticsearch instance. Indexes are namespaced by cluster name using the pattern: + +``` +.. +``` + +The management cluster itself uses the cluster name `cluster`. Managed clusters use the name chosen during registration. 
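+To make the communication model described above concrete, a NodePort Service on the management cluster might look like the following sketch. This is illustrative only: the Service name is an assumption, while the namespace, port 9449, the example external port 30449, and the `k8s-app: calico-manager` selector come from this page.

```yaml
# Hedged sketch of the Service from the communication model above.
# metadata.name is hypothetical; port, namespace, and selector values
# are the ones described on this page.
apiVersion: v1
kind: Service
metadata:
  name: tigera-manager-mcm   # hypothetical name
  namespace: calico-system
spec:
  type: NodePort
  ports:
    - port: 9449
      targetPort: 9449
      nodePort: 30449        # example external port from this page
      protocol: TCP
  selector:
    k8s-app: calico-manager
```

For production, the same manifest with `type: LoadBalancer` (and no `nodePort`) is the more typical shape.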
+ +## Next steps + +- [Cluster mesh and federation](cluster-mesh.mdx) +- [Security model](security-model.mdx) diff --git a/calico-enterprise/multicluster/explanation/cluster-mesh.mdx b/calico-enterprise/multicluster/explanation/cluster-mesh.mdx new file mode 100644 index 0000000000..8f361fc66b --- /dev/null +++ b/calico-enterprise/multicluster/explanation/cluster-mesh.mdx @@ -0,0 +1,94 @@ +--- +description: Understand how Calico Enterprise cluster mesh enables cross-cluster endpoint identity, federated services, and multi-cluster networking. +--- + +# Cluster mesh + +## Overview + +$[prodname] cluster mesh secures cross-cluster connections with identity-aware network policy and federates services for cross-cluster service discovery. It also provides multi-cluster networking to establish cross-cluster connectivity. + +## Why cluster mesh + +By default, pods can only communicate with pods within the same cluster. Services and network policy only select pods from the local cluster. $[prodname] cluster mesh overcomes these barriers with three features: + +- **Federated endpoint identity** — A local cluster includes the workload and host endpoints of remote clusters in the calculation of local network policies on each node. +- **Federated services** — A local Kubernetes Service populates with Endpoints selected from both local and remote cluster Services. +- **Multi-cluster networking** — An overlay network between clusters provides cross-cluster connectivity. + +## Pod IP routability + +$[prodname] cluster mesh operates at the network layer based on pod IPs. + +Federated endpoint identity and federated services both require that pod IPs are routable between clusters. Identity-aware network policy needs source and destination pod IPs preserved to establish pod identity. Federated Service endpoints must also be routable to be useful. 
+ +You can establish pod IP routability in two ways: + +- **$[prodname] multi-cluster networking** — extends the overlay network (VXLAN and/or WireGuard) between clusters. +- **Network-provided routing** — manually set up routing without encapsulation (e.g. VPC routing, BGP routing). + +## Federated endpoint identity + +Federated endpoint identity allows a local cluster to include remote cluster endpoints in local policy calculations. For example, Cluster A can write a network policy that allows its application pods to talk to database pods in Cluster B. + +Key points: + +- Network policies are **not** federated — policies from a remote cluster are not applied to local endpoints. +- Only the local cluster's policies are rendered and applied locally. +- Rule selectors can reference both local and remote endpoints based on labels. + +This works because each cluster synchronizes endpoint data from remote clusters via `RemoteClusterConfiguration` resources. Traffic from remote clusters preserves pod IPs, and the local cluster associates those IPs with the pod specifications it synchronized. + +## Federated services + +Federated services provide cross-cluster service discovery. A federated service consolidates endpoints from services across all clusters in the mesh into a single local Service. + +The Tigera Federated Services Controller manages federated services: + +- It monitors services across all clusters in the mesh. +- It populates a local federated service with endpoints matching a `federation.tigera.io/serviceSelector` annotation. +- It does not change configuration on remote clusters. 
+ +A federated service uses an annotation instead of a pod selector: + +```yaml +apiVersion: v1 +kind: Service +metadata: + name: my-app-federated + namespace: default + annotations: + federation.tigera.io/serviceSelector: run == "my-app" +spec: + ports: + - name: my-app-ui + port: 8080 + protocol: TCP + type: ClusterIP +``` + +Endpoints are selected only when the service port name and protocol in the federated service match the port name and protocol in the backing service. If you have an existing service discovery mechanism, federated services are optional. + +## Multi-cluster networking + +$[prodname] multi-cluster networking extends the overlay network of a cluster to include nodes from remote clusters. This is made possible by each cluster having a view into the datastore that includes remote pods and nodes. + +Multi-cluster networking uses the `overlayRoutingMode` field in `RemoteClusterConfiguration`. When set to `Enabled`, the cluster establishes cross-cluster overlay routes. + +Both VXLAN and WireGuard are supported for cross-cluster routing. If both are enabled and a WireGuard peer is not ready, communication falls back to VXLAN. + +## How the mesh is formed + +The cluster mesh is formed by a set of bidirectional `RemoteClusterConfiguration` connections: + +1. Each cluster creates a `kubeconfig` with limited credentials for remote clusters to use. +2. Each cluster creates a `RemoteClusterConfiguration` for every other cluster, referencing the remote cluster's `kubeconfig` stored in a Secret. +3. Typha connects to each remote cluster's API server using the stored credentials and synchronizes endpoint, node, and IP pool data. + +For a mesh of clusters A, B, and C, you need `RemoteClusterConfiguration` resources for each pair in both directions: \{A→B, B→A, A→C, C→A, B→C, C→B\}. 
+ +## Next steps + +- [Security model](security-model.mdx) +- [Set up cluster mesh](../how-to/set-up-cluster-mesh.mdx) +- [Configure federated services](../how-to/configure-federated-services.mdx) diff --git a/calico-enterprise/multicluster/federation/index.mdx b/calico-enterprise/multicluster/explanation/index.mdx similarity index 70% rename from calico-enterprise/multicluster/federation/index.mdx rename to calico-enterprise/multicluster/explanation/index.mdx index a505a53f50..4f495d8538 100644 --- a/calico-enterprise/multicluster/federation/index.mdx +++ b/calico-enterprise/multicluster/explanation/index.mdx @@ -1,9 +1,9 @@ --- -description: Steps to configure cluster mesh. +description: Understand multi-cluster management and cluster mesh concepts. hide_table_of_contents: true --- -# Federation and multi-cluster networking +# Concepts import DocCardList from '@theme/DocCardList'; import { useCurrentSidebarCategory } from '@docusaurus/theme-common'; diff --git a/calico-enterprise/multicluster/explanation/security-model.mdx b/calico-enterprise/multicluster/explanation/security-model.mdx new file mode 100644 index 0000000000..bdc1f68bd5 --- /dev/null +++ b/calico-enterprise/multicluster/explanation/security-model.mdx @@ -0,0 +1,81 @@ +--- +description: Understand the security model for Calico Enterprise multi-cluster management, including credential generation, certificate management, and cross-cluster RBAC. +--- + +# Security model + +This page explains how $[prodname] secures communication and enforces access control across management and managed clusters, as well as within a cluster mesh. + +## Management cluster connection security + +### Certificate-based authentication + +When a managed cluster is registered on the management cluster (via a `ManagedCluster` resource), an installation manifest is generated containing a `ManagementClusterConnection` with the credentials the managed cluster needs to connect. 
The Guardian component on the managed cluster uses these credentials to establish and maintain a secure tunnel to the management cluster over port 9449. + +For Helm-based installations, certificates are generated explicitly: + +1. A self-signed certificate and key pair is generated for each managed cluster. +2. The managed cluster's certificate is registered on the management cluster. +3. The management cluster's TLS certificate is provided to the managed cluster for verification. + +This mutual TLS model ensures that both sides of the connection are authenticated. + +### User identity propagation + +When a user accesses a managed cluster's resources through the management cluster web console: + +1. The user authenticates against the management cluster using a service account or user account. +2. The management cluster passes the user's identity to the managed cluster for authorization. +3. The managed cluster evaluates the request against its own Kubernetes RBAC rules. + +This means: +- All users must have a valid account on the management cluster to log in. +- Users must have the **same username** defined on both the management cluster and any managed clusters they access. +- A user can have different permissions on each managed cluster, defined by Kubernetes Role and ClusterRole objects, but the username in the corresponding RoleBinding/ClusterRoleBinding must match. + +## Cluster mesh security + +### Federation credentials + +Each cluster in a cluster mesh creates a dedicated ServiceAccount (`tigera-federation-remote-cluster`) with limited permissions for remote clusters to use. The credentials flow is: + +1. A ServiceAccount is created in the `calico-system` namespace. +2. A ClusterRole and ClusterRoleBinding grant the account read-only access to the resources needed for federation (endpoints, nodes, IP pools, profiles). +3. A `kubeconfig` is generated from the ServiceAccount token and the cluster's API server certificate. +4. 
The `kubeconfig` is shared with remote clusters and stored as a Kubernetes Secret. + +### Secret access control + +When a cluster stores a remote cluster's `kubeconfig` as a Secret, Typha needs access to read it. An RBAC Role and RoleBinding in the secret's namespace grants the `calico-typha` ServiceAccount permission to watch, list, and get secrets in that namespace. + +### Credential verification + +You can verify that stored credentials work by extracting the kubeconfig from the secret and testing it: + +```bash +kubectl get secret -n remote-cluster-secret-name \ + -o=jsonpath="{.data.kubeconfig}" | base64 -d > verify_kubeconfig +kubectl --kubeconfig=verify_kubeconfig get nodes +``` + +## Log data access control + +Log data from all managed clusters is stored in centralized Elasticsearch on the management cluster. Access to log data is controlled using Kubernetes RBAC: + +- The API group is `lma.tigera.io`. +- `resources` controls access by cluster name. +- `resourceNames` controls access by log type (`flows`, `audit`, `audit_ee`, `audit_kube`, `events`, `dns`, `l7`). + +For example, to allow access to flow and DNS logs for a specific cluster: + +```yaml +- apiGroups: ['lma.tigera.io'] + resources: ['app-cluster'] + resourceNames: ['flows', 'dns'] + verbs: ['get'] +``` + +## Next steps + +- [Configure cross-cluster RBAC](../how-to/configure-cross-cluster-rbac.mdx) +- [Architecture](architecture.mdx) diff --git a/calico-enterprise/multicluster/federation/aws.mdx b/calico-enterprise/multicluster/federation/aws.mdx deleted file mode 100644 index 631b562f86..0000000000 --- a/calico-enterprise/multicluster/federation/aws.mdx +++ /dev/null @@ -1,68 +0,0 @@ ---- -description: A sample configuration of Calico Enterprise federated endpoint identity and federated services for an AWS cluster. ---- - -# Cluster mesh example for clusters in AWS - -## Big picture - -A sample configuration for cluster mesh using AWS clusters. 
- -## Tutorial - -**Set up** - -The cluster is installed on real hardware where node and pod IPs are routable, using an edge VPN router to peer with the AWS cluster. - -![A diagram showing the key configuration requirements setting up an AWS cluster (using AWS VPN CNI) peering with an on-premise cluster.](/img/calico-enterprise/federation/aws-rcc.svg) - -**Calico Enterprise configuration** - -- IP pool resource is configured for the on-premise IP assignment with IPIP is disabled -- BGP peering to the VPN router -- A Remote Cluster Configuration resource references the AWS cluster -- Service discovery of the AWS cluster services uses the Calico Enterprise Federated Services Controller - -**Notes** - -- If VPN Router is configured as a route reflector for the on-premise cluster, you would: - - Configure the default BGP Configuration resource to disable node-to-node mesh - - Configure a global BGP Peer resource to peer with the VPN router -- If the IP Pool has `Outgoing NAT` enabled, then you must add an IP Pool covering the AWS cluster VPC with disabled set to `true`. When set to `true` the pool is not used for IP allocations, and SNAT is not performed for traffic to the AWS cluster. - -**AWS configuration** - -- A VPC CIDR is chosen that does not overlap with the on-premise IP ranges. -- There are 4 subnets within the VPC, split across two AZs (for availability) such that each AZ has a public and private subnet. In this particular example, the split of responsibility is: - - The private subnet is used for node and pod IP allocation - - The public subnet is used to home a NAT gateway for pod-to-internet traffic. -- The VPC is peered to an on-premise network using a VPN. This is configured as a VPN gateway for the AWS side, and a classic VPN for the customer side. BGP is used for route distribution. 
-- Routing table for private subnet has: - - "propagate" set to "true" to ensure BGP-learned routes are distributed - - Default route to the NAT gateway for public internet traffic - - Local VPC traffic -- Routing table for public subnet has default route to the internet gateway. -- Security group for the worker nodes has: - - Rule to allow traffic from the peered networks - - Other rules required for settings up VPN peering (refer to the AWS docs for details). - -To automatically create a Network Load Balancer (NLB) for the AWS deployment, we apply a service with the correct annotation. - -```yaml -apiVersion: v1 -kind: Service -metadata: - annotations: - service.beta.kubernetes.io/aws-load-balancer-type: nlb - name: nginx-external -spec: - externalTrafficPolicy: Local - ports: - - name: http - port: 80 - protocol: TCP - targetPort: 80 - selector: - run: nginx - type: LoadBalancer -``` diff --git a/calico-enterprise/multicluster/federation/kubeconfig.mdx b/calico-enterprise/multicluster/federation/kubeconfig.mdx deleted file mode 100644 index 3082114d2d..0000000000 --- a/calico-enterprise/multicluster/federation/kubeconfig.mdx +++ /dev/null @@ -1,419 +0,0 @@ ---- -description: Configure a local cluster to pull endpoint data from a remote cluster. ---- - -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; - -# Creating the cluster mesh -In this page, we will create a $[prodname] cluster mesh by connecting clusters together. Once created, $[prodname] cluster mesh enables multi-cluster networking, network policy for cross-cluster connections, cross-cluster services, and encryption via WireGuard. - -## Requirements -$[prodname] multi-cluster networking provides routing between clusters that preserves pod IPs. This section outlines the requirements for this routing to be established. If your network already provides routing between clusters that preserves pod IPs, you can skip this section. 
- -### Prerequisites for $[prodname] multi-cluster networking: -- All nodes participating in the cluster mesh must be able to establish connections to each other via their private IP. -- All nodes participating in the cluster mesh must have unique node names. -- Pod CIDRs between clusters must not overlap. -- All clusters must have at least one overlay network in common (VXLAN and/or WireGuard). -- All clusters must have the same `routeSource` setting on `FelixConfiguration`. - -If using VXLAN: -- The `vxlan*` settings on `FelixConfiguration` must be the same across clusters participating in the mesh. -- The underlying network must allow traffic on `vxlanPort` between clusters participating in the mesh. -- All clusters must use Calico CNI. - -If using WireGuard: -- The `wireguard*` settings on `FelixConfiguration` must be the same across clusters participating in the mesh. -- The underlying network must allow traffic on `wireguardListeningPort` between clusters participating in the mesh. -- All clusters must use Calico CNI OR All clusters must use non-Calico CNI (mixing non-Calico CNI types is supported). - -Note: much like intra-cluster routing in $[prodname], cross-cluster routing can utilize both VXLAN and WireGuard at the same time. If both are enabled and a WireGuard peer is not ready, communication with that peer will fall back to VXLAN. - -## Setup - -### Generate credentials for cross-cluster resource synchronization -:::tip[mental model] -The basis of cluster mesh is the ability for a cluster connect to a remote cluster and sync data from it. This enables each $[prodname] cluster to have a view into the datastore that includes both local and remote cluster pods. -::: - -In this section, we will create a `kubeconfig` for each cluster. This `kubeconfig` is what other clusters will use to connect to a given cluster and synchronize data from it. 
- -**For each** cluster in the cluster mesh, utilizing an existing `kubeconfig` with administrative privileges, follow these steps: - -1. Create the ServiceAccount used by remote clusters for authentication: - - ```bash - kubectl apply -f $[filesUrl]/manifests/federation-remote-sa.yaml - ``` - -1. Create the ClusterRole and ClusterRoleBinding used by remote clusters for authorization: - - ```bash - kubectl apply -f $[filesUrl]/manifests/federation-rem-rbac-kdd.yaml - ``` -1. Create the ServiceAccount token that will be used in the `kubeconfig`: - - ```yaml - kubectl apply -f - < $KUBECONFIG_NAME - ``` - -1. Verify that the `kubeconfig` file works: - - Issue the following command to validate the `kubeconfig` file can be used to connect to the current cluster and access resources: - ```bash - kubectl --kubeconfig=$KUBECONFIG_NAME get nodes - ``` - -Once you've created a `kubeconfig` for **each** cluster, you can proceed to the next section to establish the cluster connections that form the mesh. - -### Establish cross-cluster resource synchronization -:::tip[mental model] -The cluster mesh is formed when each cluster connects to every other cluster to synchronize data. A cluster connects to another cluster using a RemoteClusterConfiguration, which references a kubeconfig created for the remote cluster. -::: - -In this section, within each cluster, we will create a RemoteClusterConfiguration for each other cluster in the mesh. This RemoteClusterConfiguration instructs the cluster to connect to a cluster using a kubeconfig. - -With each cluster being connected to each other cluster, a full cluster mesh will be formed. - - - -:::tip[mental model] -$[prodname] achieves cross-cluster routing by extending the overlay network of a cluster to include nodes from remote clusters. This is made possible by each cluster having a view into the datastore that now includes remote pods and nodes. -::: -**For each pair** of clusters in the cluster mesh (e.g. 
\{A,B\}, \{A,C\}, \{B,C\} for clusters A,B,C): - -1. In cluster 1, create a secret that contains the `kubeconfig` for cluster 2: - - Determine the namespace (``) for the secret to replace in all steps. - The simplest method to create a secret for a remote cluster is to use the `kubectl` command because it correctly encodes the data and formats the file. - ```bash - kubectl create secret generic remote-cluster-secret-name -n \ - --from-literal=datastoreType=kubernetes \ - --from-file=kubeconfig= - ``` - -1. If RBAC is enabled in cluster 1, create a Role and RoleBinding for $[prodname] to use to access the secret that contains the `kubeconfig` for cluster 2: - ```bash - kubectl create -f - < - rules: - - apiGroups: [""] - resources: ["secrets"] - verbs: ["watch", "list", "get"] - --- - apiVersion: rbac.authorization.k8s.io/v1 - kind: RoleBinding - metadata: - name: remote-cluster-secret-access - namespace: - roleRef: - apiGroup: rbac.authorization.k8s.io - kind: Role - name: remote-cluster-secret-access - subjects: - - kind: ServiceAccount - name: calico-typha - namespace: calico-system - EOF - ``` - -1. Create the RemoteClusterConfiguration in cluster 1: - - Within the RemoteClusterConfiguration, we specify the secret used to access cluster 2, and the overlay routing mode which toggles the establishment of cross-cluster overlay routes. - ```bash - kubectl create -f - < - kind: Secret - syncOptions: - overlayRoutingMode: Enabled - EOF - ``` - -1. [Validate](#check-remote-cluster-connection) the that the remote cluster connection can be established. - -1. Repeat the above steps, switching cluster 1 and cluster 2. - - - -In this setup, the cluster mesh will rely on the underlying network to provides cross-cluster routing that preserves pod IPs. - -**For each pair** of clusters in the cluster mesh (e.g. \{A,B\}, \{A,C\}, \{B,C\} for clusters A,B,C): - -1. 
In cluster 1, create a secret that contains the `kubeconfig` for cluster 2: - - Determine the namespace (``) for the secret to replace in all steps. - The simplest method to create a secret for a remote cluster is to use the `kubectl` command because it correctly encodes the data and formats the file. - ```bash - kubectl create secret generic remote-cluster-secret-name -n \ - --from-literal=datastoreType=kubernetes \ - --from-file=kubeconfig= - ``` - -1. If RBAC is enabled in cluster 1, create a Role and RoleBinding for $[prodname] to use to access the secret that contains the `kubeconfig` for cluster 2: - ```bash - kubectl create -f - < - rules: - - apiGroups: [""] - resources: ["secrets"] - verbs: ["watch", "list", "get"] - --- - apiVersion: rbac.authorization.k8s.io/v1 - kind: RoleBinding - metadata: - name: remote-cluster-secret-access - namespace: - roleRef: - apiGroup: rbac.authorization.k8s.io - kind: Role - name: remote-cluster-secret-access - subjects: - - kind: ServiceAccount - name: calico-typha - namespace: calico-system - EOF - ``` - -1. Create the RemoteClusterConfiguration in cluster 1: - - Within the RemoteClusterConfiguration, we specify the secret used to access cluster 2, and the overlay routing mode which toggles the establishment of cross-cluster overlay routes. - ```bash - kubectl create -f - < - kind: Secret - syncOptions: - overlayRoutingMode: Disabled - EOF - ``` - -1. If you have no IP pools in cluster 1 with NAT-outgoing enabled, skip this step. - - Otherwise, if you have IP pools in cluster 1 with NAT-outgoing enabled, and workloads in that pool will egress to workloads in cluster 2, you need to instruct $[prodname] to not perform NAT on traffic destined for IP pools in cluster 2. - - You can achieve this by creating a disabled IP pool in cluster 1 for each CIDR in cluster 2. This IP pool should have NAT-outgoing disabled. 
For example: - - ```yaml - apiVersion: projectcalico.org/v3 - kind: IPPool - metadata: - name: cluster2-main-pool - spec: - cidr: - disabled: true - ``` - -1. [Validate](#check-remote-cluster-connection) the that the remote cluster connection can be established. - -1. Repeat the above steps, switching cluster 1 and cluster 2. - - - - -#### 🎉 Done! -After completing the above steps for all cluster pairs in the cluster mesh, your clusters should now be forming a cluster mesh! You should now be able to route traffic between clusters, and write policy that can select remote workloads. - -:::tip[mental model] -A cluster in the mesh can write policy rules that select pods from other clusters in the mesh. This is because traffic from remote clusters has pod IPs preserved, and the local cluster can associate remote pod IPs with the pod specs it synchronized from remote clusters. -::: - -## How to - -### Switch to multi-cluster networking -The steps above assume that you are configuring both federated endpoint identity and multi-cluster networking for the first time. If you already have federated endpoint identity, and want to use multi-cluster networking, follow these steps: - -1. Validate that all [requirements](#calico-enterprise-multi-cluster-networking) for multi-cluster networking have been met. -2. Update the ClusterRole in each cluster in the cluster mesh using the RBAC manifest found in [Generate credentials for cross-cluster authentication](#generate-credentials-for-cross-cluster-resource-synchronization) -3. In all RemoteClusterConfigurations, set `Spec.OverlayRoutingMode` to `Enabled`. -4. Verify that all RemoteClusterConfigurations are bidirectional (in both directions for each cluster pair) using these [instructions](#establish-cross-cluster-resource-synchronization). -5. 
If you had previously created disabled IP pools to prevent NAT outgoing from applying to remote cluster destinations, those disabled IP pools are no longer needed when using multi-cluster networking and must be deleted. - -### Validate federated endpoint identity & multi-cluster networking -#### Validate RemoteClusterConfiguration and federated endpoint identity -##### Check remote cluster connection -You can validate in a local cluster that Typha has synced to the remote cluster through the [Prometheus metrics for Typha](../../reference/component-resources/typha/prometheus#metric-reference). - -Alternatively, you can check the Typha logs for remote cluster connection status. Run the following command: -```bash -kubectl logs deployment/calico-typha -n calico-system | grep "Sending in-sync update" -``` -You should see an entry for each RemoteClusterConfiguration in the local cluster. - -If either output contains unexpected results, proceed to the [troubleshooting](#troubleshoot) section. - -#### Validate multi-cluster networking -If all requirements were met for $[prodname] to establish multi-cluster networking, you can test the functionality by establishing a connection from a pod in a local cluster to the IP of a pod in a remote cluster. Ensure that there is no policy in either cluster that may block this connection. - -If the connection fails, proceed to the [troubleshooting](#troubleshoot) section. - -### Create remote-identity-aware network policy -With federated endpoint identity and routing between clusters established, you can now use labels to reference endpoints on a remote cluster in local policy rules, rather than referencing them by IP address. - -The main policy selector still refers only to local endpoints; that selector determines which local endpoints the policy applies to. -However, rule selectors can now refer to both local and remote endpoints.
- -In the following example, cluster A (an application cluster) has a network policy that governs outbound connections to cluster B (a database cluster). -```yaml -apiVersion: projectcalico.org/v3 -kind: NetworkPolicy -metadata: - name: default.app-to-db - namespace: myapp -spec: - # The main policy selector selects endpoints from the local cluster only. - selector: app == 'backend-app' - tier: default - egress: - - destination: - # Rule selectors can select endpoints from local AND remote clusters. - selector: app == 'postgres' - protocol: TCP - ports: [5432] - action: Allow -``` - -### Troubleshoot -#### Troubleshoot RemoteClusterConfiguration and federated endpoint identity - -##### Verify configuration -For each impacted remote cluster pair (between cluster A and cluster B): -1. Retrieve the `kubeconfig` from the secret stored in cluster A. Manually verify that it can be used to connect to cluster B. - ```bash - kubectl get secret -n remote-cluster-secret-name -o=jsonpath="{.data.kubeconfig}" | base64 -d > verify_kubeconfig_b - kubectl --kubeconfig=verify_kubeconfig_b get nodes - ``` - This validates that the credentials used by Typha to connect to cluster B's API server are stored in the correct location and provide sufficient access. - - The command above should yield a result like the following: - ``` - NAME STATUS ROLES AGE VERSION - clusterB-master Ready master 7d v1.27.0 - clusterB-worker-1 Ready worker 7d v1.27.0 - clusterB-worker-2 Ready worker 7d v1.27.0 - ``` - - If you do not see the nodes of cluster B listed in response to the command above, verify that you [created](#generate-credentials-for-cross-cluster-resource-synchronization) the `kubeconfig` for cluster B correctly, and that you [stored](#establish-cross-cluster-resource-synchronization) it in cluster A correctly. 
- - If you do see the nodes of cluster B listed in response to the command above, you can run this test (or a similar test) on a node in cluster A to verify that cluster A nodes can connect to the API server of cluster B. - -2. Validate that the Typha service account in cluster A is authorized to retrieve the `kubeconfig` secret for cluster B. - ```bash - kubectl auth can-i list secrets --namespace --as=system:serviceaccount:calico-system:calico-typha - ``` - - This command should yield the following output: - ``` - yes - ``` - - If the command does not return this output, verify that you correctly [configured RBAC](#establish-cross-cluster-resource-synchronization) in cluster A. - -3. Repeat the above, switching cluster A and cluster B. - -##### Check logs -Validate that querying the Typha logs yields the expected result outlined in the [validation](#validate-federated-endpoint-identity--multi-cluster-networking) section. - -If the Typha logs do not yield the expected result, review the warning or error-related logs in `typha` or `calico-node` for insights. - -###### calicoq -[calicoq](../../operations/clis/calicoq/installing) can be used to emulate the connection that Typha will make to remote clusters. Use the following command: -```bash -calicoq eval "all()" -``` -If all remote clusters are accessible, calicoq returns something like the following. Note the remote cluster prefixes: there should be endpoints prefixed with the name of each RemoteClusterConfiguration in the local cluster.
-``` -Endpoints matching selector all(): - Workload endpoint remote-cluster-1/host-1/k8s/kube-system.kube-dns-5fbcb4d67b-h6686/eth0 - Workload endpoint remote-cluster-1/host-2/k8s/kube-system.cnx-manager-66c4dbc5b7-6d9xv/eth0 - Workload endpoint host-a/k8s/kube-system.kube-dns-5fbcb4d67b-7wbhv/eth0 - Workload endpoint host-b/k8s/kube-system.cnx-manager-66c4dbc5b7-6ghsm/eth0 -``` - -If this command fails, the error messages returned by the command may provide insight into where issues are occurring. - -#### Troubleshoot multi-cluster networking -##### Basic validation -* Ensure that RemoteClusterConfiguration and federated endpoint identity are [functioning correctly](#validate-federated-endpoint-identity--multi-cluster-networking) -* Verify that you have met the [prerequisites](#calico-enterprise-multi-cluster-networking) for multi-cluster networking -* If you had previously set up RemoteClusterConfigurations without multi-cluster networking, and are upgrading to use the feature, review the [switching considerations](#switch-to-multi-cluster-networking) -* Verify that traffic between clusters is not being denied by network policy - -##### Check overlayRoutingMode -Ensure that `overlayRoutingMode` is set to `"Enabled"` on all RemoteClusterConfigurations. 
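You can spot-check the setting across all RemoteClusterConfigurations at once with a command like the following (a sketch; it assumes `kubectl` can query $[prodname] API resources in the local cluster):

```bash
kubectl get remoteclusterconfigurations \
  -o custom-columns='NAME:.metadata.name,OVERLAY_ROUTING:.spec.syncOptions.overlayRoutingMode'
```

Any resource that prints `Disabled` or `<none>` in the second column needs to be updated before cross-cluster overlay routes will be established.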
- -If overlay routing is successfully enabled, you can view the logs of a Typha instance using: -```bash -kubectl logs deployment/calico-typha -n calico-system -``` - -You should see output for each connected remote cluster that looks like this: -``` -18:49:35.394 [INFO][14] wrappedcallbacks.go 443: Creating syncer for RemoteClusterConfiguration(my-cluster) -18:49:35.394 [INFO][14] watchercache.go 186: Full resync is required ListRoot="/calico/ipam/v2/assignment/" -18:49:35.395 [INFO][14] watchercache.go 186: Full resync is required ListRoot="/calico/resources/v3/projectcalico.org/workloadendpoints" -18:49:35.396 [INFO][14] watchercache.go 186: Full resync is required ListRoot="/calico/resources/v3/projectcalico.org/hostendpoints" -18:49:35.396 [INFO][14] watchercache.go 186: Full resync is required ListRoot="/calico/resources/v3/projectcalico.org/profiles" -18:49:35.396 [INFO][14] watchercache.go 186: Full resync is required ListRoot="/calico/resources/v3/projectcalico.org/nodes" -18:49:35.397 [INFO][14] watchercache.go 186: Full resync is required ListRoot="/calico/resources/v3/projectcalico.org/ippools" -``` - -If you do not see each of the resource types above, overlay routing was not successfully enabled in your cluster. Verify that you followed the [setup](#establish-cross-cluster-resource-synchronization) correctly for overlay routing, and that the cluster is using a version of $[prodname] that supports multi-cluster networking. - -###### Check logs -Warning or error logs in `typha` or `calico-node` may provide insight into where issues are occurring.
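One way to surface just those entries is to filter the component logs (a sketch; it assumes the default `calico-system` namespace used elsewhere in this guide):

```bash
# Warnings and errors from Typha
kubectl logs deployment/calico-typha -n calico-system | grep -iE 'warn|error'
# Warnings and errors from the calico-node daemonset
kubectl logs daemonset/calico-node -n calico-system -c calico-node | grep -iE 'warn|error'
```

Note that `grep` exits non-zero when there are no matches, which here simply means no warnings or errors were logged.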
- -## Next steps - -[Configure federated services](services-controller.mdx) diff --git a/calico-enterprise/multicluster/federation/overview.mdx b/calico-enterprise/multicluster/federation/overview.mdx deleted file mode 100644 index 777537a648..0000000000 --- a/calico-enterprise/multicluster/federation/overview.mdx +++ /dev/null @@ -1,54 +0,0 @@ ---- -description: Configure a cluster mesh for cross-cluster endpoints sharing, cross-cluster connectivity, and cross-cluster service discovery. ---- - -# Overview - -## Big picture - -Secure cross-cluster connections with identity-aware network policy, and federate services for cross-cluster service discovery. - -Utilize $[prodname] to establish cross-cluster connectivity. - -## Value - -At some point in your Kubernetes journey, you may have applications that need to access services and workloads running in another cluster. - -By default, pods can only communicate with pods within the same cluster. Additionally, services and network policy only select pods from within the same cluster. $[prodname] can help overcome these barriers by forming a cluster mesh the following features: -- **Federated endpoint identity** - - Allow a local Kubernetes cluster to include the workload endpoints (pods) and host endpoints of a remote cluster in the calculation of local network policies applied on each node of the local cluster. - -- **Federated services** - - Enable a local Kubernetes Service to populate with Endpoints selected from both local cluster and remote cluster Services. - -- **Multi-cluster networking** - - Establish an overlay network between clusters to provide cross-cluster connectivity with $[prodname]. - -## Concepts - -### Pod IP routability - -$[prodname] cluster mesh is implemented at Kubernetes at the network layer, based on pod IPs. - -Taking advantage of federated workload endpoint identity and federated services requires that pod IPs are routable between clusters. 
This is because identity-aware network policy requires source and destination pod IPs to be preserved to establish pod identity. Additionally, the Endpoint IPs of pods selected by a federated Service must be routable in order for that Service to be of value. - -You can utilize $[prodname] multi-cluster networking to establish pod IP routability between clusters via overlay. Alternatively, you can manually set up pod IP routability between clusters without encapsulation (e.g. VPC routing, BGP routing). - -### Federated endpoint identity - -Federated endpoint identity in a cluster mesh allows a local Kubernetes cluster to include the workload endpoints (pods) and host endpoints of a remote cluster in the calculation of the local policies for each node, e.g. Cluster A network policy allows its application pods to talk to database pods in Cluster B. - -This feature does not _federate network policies_; policies from a remote cluster are not applied to the endpoints on the local cluster, and the policy from the local cluster is rendered only locally and applied to the local endpoints. - -### Federated services - -Federated services in a cluster mesh works with federated endpoint identity, providing cross-cluster service discovery for a local cluster. If you have an existing service discovery mechanism, this feature is optional. - -Federated services use the Tigera Federated Services Controller to federate all Kubernetes endpoints (workload and host endpoints) across all of the clusters. The Federated Services Controller accesses service and endpoints data in the remote clusters directly through the Kubernetes API. 
- -## Next steps - -[Configure remote-aware policy and multi-cluster networking](kubeconfig.mdx) diff --git a/calico-enterprise/multicluster/federation/services-controller.mdx b/calico-enterprise/multicluster/federation/services-controller.mdx deleted file mode 100644 index 31206db2c2..0000000000 --- a/calico-enterprise/multicluster/federation/services-controller.mdx +++ /dev/null @@ -1,209 +0,0 @@ ---- -description: Configure a federated service for cross-cluster service discovery for local clusters. ---- - -# Configure federated services - -## Big picture - -Configure local clusters to discover services across multiple clusters. - -## Value - -Use cluster mesh and federated services discovery along with federated endpoint identity to extend and automate endpoints sharing. (Optional if you have your own service discovery mechanism.) - -## Concepts - -### Federated services - -A federated service (also called a backing service), is a set of services with consolidated endpoints. $[prodname] discovers services across a cluster mesh (both local cluster and remote clusters) and creates a "federated service" on the local cluster that encompasses all of the individual services. - -Federated services are managed by the Tigera Federated Service Controller, which monitors and maintains endpoints for each locally-federated service. The controller does not change configuration on remote clusters. - -A federated service looks similar to a regular Kubernetes service, but instead of using a pod selector, it uses an annotation. 
For example: - -```yaml -apiVersion: v1 -kind: Service -metadata: - name: my-app-federated - namespace: default - annotations: - federation.tigera.io/serviceSelector: run == "my-app" -spec: - ports: - - name: my-app-ui - port: 8080 - protocol: TCP - type: ClusterIP -``` - -| Annotation | Description | -| -------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| `federation.tigera.io/serviceSelector` | Required field that specifies the services used in the federated service. Format is a standard $[prodname] selector (i.e. the same as $[prodname] policy resources) and selects services based on their labels. The selector annotation selects services, not pods.

Only services in the same namespace as the federated service are included. This implies namespace names across clusters are linked (this is a basic premise of federated endpoint identity).

If the value is incorrectly specified, the service is not federated and endpoint data is removed from the service. View the warning logs in the controller for any issues processing this value. | - -**Syntax and rules** - -- Services that you specify in the federated service must be in the same namespace or they are ignored. A basic assumption of federated endpoint identity is that namespace names are linked across clusters. -- If you specify a `spec.Selector` in a federated service, the service is not federated. -- You cannot federate another federated service. If a service has a federated services annotation, it is not included as a backing service of another federated service. -- The target port number in the federated service ports is not used. - -**Match services using a label** - -You can also match services using a label. The label is implicitly added to each service, but it does not appear in `kubectl` when viewing the service. - -| Label | Description | -| ---------------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| `federation.tigera.io/remoteClusterName` | Label added to all remote services that correspond to the Remote Cluster Configuration name for the remote cluster. Use this label to restrict the clusters selected by the services. **Note**: The label is not added for services in the local cluster. | - -**About endpoints** - -- Do not manually create or manage endpoints resources; let the Tigera controller do all of the work. User updates to endpoint resources are ignored. -- Endpoints are selected only when the service port name and protocol in the federated service matches the port name and protocol in the backing service. 
-- Endpoint data configured in the federated service is slightly modified from the original data of the backing service. For backing services on remote clusters, the `targetRef.name` field in the federated service adds the ``. For example, `/`. - -## Before you begin - -**Required** - -- [Configure federated endpoint identity](kubeconfig.mdx) - -## How to - -- [Create service resources](#create-service-resources) -- [Create a federated service](#create-a-federated-service) -- [Access a federated service](#access-a-federated-service) - -### Create service resources - -On each cluster in the mesh that is providing a particular service, create your service resources as you normal would with the following requirements: - -- Services must all be in the same namespace. -- Configure each service with a common label key and value to identify the common set of services across your clusters (for example, `run=my-app`). - -Kubernetes manages the service by populating the service endpoints from the pods that match the selector configured in the service spec. - -### Configure a federated service - -1. On a cluster that needs to access the federated set of pods that are running an application, create a - service on that cluster leaving the `spec selector` blank. -1. Set the `federation.tigera.io/serviceSelector` annotation to be a $[prodname] selector that selects the previously-configured services using the matching label match (for example, `run == "my-app"`). - -The Federated Services Controller manages this service, populating the service endpoints from all of the services that match the service selector configured in the annotation. - -### Access a federated service - -Any application can access the federated service using the local DNS name for that service. The simplest way to access a federated service is through its corresponding DNS name. - -By default, Kubernetes adds DNS entries to access a service locally. 
For a service called `my-svc` in the namespace -`my-namespace`, the following DNS entry would be added to access the service within the local cluster: - -``` -my-svc.my-namespace.svc.cluster.local -``` - -DNS lookup for this name returns the fixed ClusterIP address assigned for the federated service. The ClusterIP is translated in iptables to one of the federated service endpoint IPs, and is load balanced across all of the endpoints. - -## Tutorial - -### Create a service - -In the following example, the remote cluster defines the following service. - -```yaml -apiVersion: v1 -kind: Service -metadata: - labels: - run: my-app - name: my-app - namespace: default -spec: - selector: - run: my-app - ports: - - name: my-app-ui - port: 80 - protocol: TCP - targetPort: 9000 - - name: my-app-console - port: 81 - protocol: TCP - targetPort: 9001 - type: ClusterIP -``` - -This service definition exposes two ports for the application `my-app`. One port for accessing a UI, and the other for accessing a management console. The service specifies a Kubernetes selector, which means the endpoints for this service are automatically populated by Kubernetes from matching pods within the services own cluster. - -### Create a federated service - -To create a federated service on your local cluster that federates the web access port for both the local and remote service, you would create a service resource on your local cluster as follows: - -```yaml -apiVersion: v1 -kind: Service -metadata: - name: my-app-federated - namespace: default - annotations: - federation.tigera.io/serviceSelector: run == "my-app" -spec: - ports: - - name: my-app-ui - port: 8080 - protocol: TCP - type: ClusterIP -``` - -The `spec.selector` is not specified so it will not be managed by Kubernetes. Instead, we use a `federation.tigera.io/selector` annotation, indicating that this is a federated service managed by the Federated Services Controller. 
- -The controller matches the `my-app` services (matching the run label) on both the local and remote clusters, and consolidates endpoints from the `my-app-ui` TCP port for both of those services. Because the federated service does not specify the `my-app-console` port, the controller does not include these endpoints in the federated service. - -The endpoints data for the federated service is similar to the following. Note that the name of the remote cluster is included in `targetRef.name`. - -```yaml -apiVersion: v1 -kind: Endpoints -metadata: - creationTimestamp: 2018-07-03T19:41:38Z - annotations: - federation.tigera.io/serviceSelector: run == "my-app" - name: my-app-federated - namespace: default - resourceVersion: '701812' - selfLink: /api/v1/namespaces/default/endpoints/my-app-federated - uid: 1a0427e8-7ef9-11e8-a24c-0259d75c6290 -subsets: - - addresses: - - ip: 192.168.93.12 - nodeName: node1.localcluster.tigera.io - targetRef: - kind: Pod - name: my-app-59cf48cdc7-frf2t - namespace: default - resourceVersion: '701655' - uid: 19f5e914-7ef9-11e8-a24c-0259d75c6290 - ports: - - name: my-app-ui - port: 80 - protocol: TCP - - addresses: - - ip: 192.168.0.28 - nodeName: node1.remotecluster.tigera.io - targetRef: - kind: Pod - name: remotecluster/my-app-7b6f758bd5-ctgbh - namespace: default - resourceVersion: '701648' - uid: 19e2c841-7ef9-11e8-a24c-0259d75c6290 - ports: - - name: my-app-ui - port: 80 - protocol: TCP -``` - -## Additional resources - -- [Cluster mesh example for AWS](aws.mdx) -- [Federated service controller](../../reference/component-resources/kube-controllers/configuration.mdx) diff --git a/calico-enterprise/multicluster/fine-tune-deployment.mdx b/calico-enterprise/multicluster/fine-tune-deployment.mdx deleted file mode 100644 index 6c5f44327e..0000000000 --- a/calico-enterprise/multicluster/fine-tune-deployment.mdx +++ /dev/null @@ -1,154 +0,0 @@ ---- -description: Review your multi-cluster management deployment to ensure it is ready for 
production. ---- - -# Fine-tune multi-cluster management - -## Big picture - -Fine-tune your multi-cluster management deployment for production. - -## How to - -- [Review log storage collection and retention](#review-log-storage-collection-and-retention) -- [Review service type for the management cluster](#review-service-type-for-the-management-cluster) -- [Review user permissions](#review-user-permissions) -- [Review user permissions for managed cluster log data](#review-user-permissions-for-managed-cluster-log-data) -- [Filter log data for a managed cluster in Kibana](#filter-log-data-for-a-managed-cluster-in-kibana) - -### Review log storage collection and retention - -Because the management cluster stores all log data across your managed clusters, choose a size to accommodate your anticipated data volume. See [Adjust log storage size](../operations/logstorage/adjust-log-storage-size.mdx). - -### Review service type for the management cluster - -In the [Install multi-cluster management guide](./set-up-multi-cluster-management/standard-install/create-a-management-cluster.mdx), we used a `NodePort` service because it was the quickest way to expose the management cluster. But, there are drawbacks to using `NodePort` services, described in [Defining a Service in Kubernetes](https://kubernetes.io/docs/concepts/services-networking/service/#defining-a-service). For production and high availability, choose a type of service that is scalable. We have tested both `NodePort` and `LoadBalancer` services. For both, a security rule/firewall rule is needed to allow connections to the management cluster. 
- -The configuration for your service (regardless of type) should obey the following requirements: - -- Uses TCP protocol -- Maps to port 9449 on the Manager (web console) pod -- Exists within the `calico-system` namespace -- Uses label selector `k8s-app: calico-manager` - -The following is an example of a valid `LoadBalancer` service: - -```yaml -apiVersion: v1 -kind: Service -metadata: - name: calico-manager-mcm - namespace: calico-system -spec: - type: LoadBalancer - ports: - - port: 9449 - protocol: TCP - targetPort: 9449 - selector: - k8s-app: calico-manager -``` - -:::note - -Using a LoadBalancer may require additional steps, depending on how you provisioned your Kubernetes cluster. - -::: - -:::note - -If you previously set up a management cluster with a service, don’t forget to update the IP address in each managed clusters, by editing the `ManagementClusterConnection` [manifest that you downloaded](./set-up-multi-cluster-management/standard-install/create-a-managed-cluster.mdx) and apply it, or use `kubectl edit managementclusterconnection tigera-secure`. - -::: - -### Review user permissions - -In the [Install multi-cluster management guide](./set-up-multi-cluster-management/standard-install/create-a-management-cluster.mdx), we created a user with full admin-level permissions in both the management and managed cluster. In a production environment you will want to define narrow permissions for your users. - -When defining roles and permissions across your clusters, make note of the following: - -- All users that log in to the $[prodname] web console must use a valid service account or user account in the management cluster. -- When the management cluster performs actions on a managed cluster, it passes the user ID of the current logged in user to the managed cluster for authorization. As a requirement, the user must have the same username defined across the management cluster and managed clusters. 
A user can have different permissions for accessing resources in each managed cluster, as defined by Kubernetes Role and ClusterRole objects, but the username used in the corresponding RoleBinding and ClusterRoleBinding objects must always match what is in the management cluster. - -### Review user permissions for managed cluster log data - -Log data across all managed clusters is stored in a centralized Elasticsearch within the management cluster. You can use [Kubernetes RBAC roles and cluster roles](https://kubernetes.io/rbac/) to define granular access to cluster log data. For example, using the RBAC rule syntax, you can define rules to control access to specific log types or specific clusters by using the resources and resourceNames list fields. - -$[prodname] log data is stored within Elasticsearch indexes. The indexes have the following naming scheme: - -```bash -.. -``` - -A standalone cluster uses the cluster name cluster for Elasticsearch indexes. This is also the name used by a management cluster. For a managed cluster, its cluster name is the value chosen by the user at the time of registration, through the $[prodname] web console. - -To restrict to a specific cluster or subset of clusters use, resources. To restrict to a specific log type use, resourceNames. The following are valid cluster types: - -- “flows” -- “audit” -- “audit_ee” -- “audit_kube” -- “events” -- “dns” -- "l7" - -Let’s look at some examples for defining RBAC rules within a role or cluster role to restrict access to log data by log type and cluster name. - -The rule below allows access to log types flow and DNS for a single cluster with the name app-cluster. - -```yaml -- apiGroups: ['lma.tigera.io'] - resources: ['app-cluster'] - resourceNames: ['flows', 'dns'] - verbs: ['get'] -``` - -:::note - -The apiGroups will always be `lma.tigera.io`. The verbs will always be get. -The rule below allows access to any cluster for log types flow, DNS and audit. 
- -::: - -```yaml -- apiGroups: ['lma.tigera.io'] - resources: ['*'] - resourceNames: ['flows', 'dns', 'audit'] - verbs: ['get'] -``` - -The rule below allows access to any cluster for all log types. - -```yaml -- apiGroups: ['lma.tigera.io'] - resources: ['*'] - resourceNames: ['*'] - verbs: ['get'] -``` - -### Filter log data for a managed cluster in Kibana - -1. Log in to the $[prodname] web console. -1. In the left navigation, click Kibana and log in to the Kibana dashboard. -1. Navigate to the Discovery view and filter logs by managed cluster indexes. -1. Select a type of log (audit, dns, events, flow). -1. From the Available Fields section in the side panel, select the `_index` field. - - ![Kibana Cluster Indexes](/img/calico-enterprise/mcm/mcm-kibana.png) - -In the example above, the selected log type (flow logs) has the index prefix, `tigera_secure_ee_flows` and two cluster indexes available: - -- Index: tigera_secure_ee_flows.cluster.20200207 -- Index: tigera_secure_ee_flows.app-cluster-1.20200207 - -:::note - -The management cluster has a default cluster name to identify indexes. When filtering logs for the management cluster, use the cluster name: `cluster`. - -::: - -To filter log data by a given managed cluster you can include the filter criteria `_index: ..*` to your query when executing a search through the Kibana UI. 
- -# Additional resources - -- [ManagementClusterConnection resource reference](../reference/installation/api.mdx#managementclusterconnection) diff --git a/calico-enterprise/multicluster/change-cluster-type.mdx b/calico-enterprise/multicluster/how-to/change-cluster-type.mdx similarity index 89% rename from calico-enterprise/multicluster/change-cluster-type.mdx rename to calico-enterprise/multicluster/how-to/change-cluster-type.mdx index 028e231fb4..2290327821 100644 --- a/calico-enterprise/multicluster/change-cluster-type.mdx +++ b/calico-enterprise/multicluster/how-to/change-cluster-type.mdx @@ -4,15 +4,9 @@ description: Change an existing Calico Enterprise cluster type to a management c # Change a cluster type -## Big picture - Change the configuration type for an existing $[prodname] cluster to management, managed, or standalone. -## Value - -As you build out a multi-cluster management deployment, it is critical to have flexibility to repurpose existing cluster types to meet your needs. - -## Before you begin… +## Before you begin To verify the type of an existing cluster, run the following command: @@ -35,7 +29,7 @@ We do not support having both `ManagementCluster` and `ManagementClusterConnecti ### Change a standalone cluster to a management cluster 1. Create a service to expose the management cluster. - The following example of a NodePort service may not be suitable for production and high availability. For options, see [Fine-tune multi-cluster management for production](fine-tune-deployment.mdx). + The following example of a NodePort service may not be suitable for production and high availability. For options, see [Port and service requirements](../reference/port-and-service-requirements.mdx). Apply the following service manifest. ```bash @@ -63,7 +57,7 @@ We do not support having both `ManagementCluster` and `ManagementClusterConnecti export MANAGEMENT_CLUSTER_ADDR= ``` -1. Apply the [ManagementCluster](../reference/installation/api.mdx) CR. +1. 
Apply the [ManagementCluster](../../reference/installation/api.mdx) CR. ```bash kubectl apply -f - <.. +``` + +A standalone cluster uses the cluster name `cluster` for Elasticsearch indexes. This is also the name used by a management cluster. For a managed cluster, its cluster name is the value chosen by the user at the time of registration, through the $[prodname] web console. + +To restrict to a specific cluster or subset of clusters, use `resources`. To restrict to a specific log type, use `resourceNames`. The following are valid cluster types: + +- "flows" +- "audit" +- "audit_ee" +- "audit_kube" +- "events" +- "dns" +- "l7" + +### Examples + +The rule below allows access to log types flow and DNS for a single cluster with the name app-cluster. + +```yaml +- apiGroups: ['lma.tigera.io'] + resources: ['app-cluster'] + resourceNames: ['flows', 'dns'] + verbs: ['get'] +``` + +:::note + +The apiGroups will always be `lma.tigera.io`. The verbs will always be `get`. + +::: + +The rule below allows access to any cluster for log types flow, DNS, and audit. + +```yaml +- apiGroups: ['lma.tigera.io'] + resources: ['*'] + resourceNames: ['flows', 'dns', 'audit'] + verbs: ['get'] +``` + +The rule below allows access to any cluster for all log types. + +```yaml +- apiGroups: ['lma.tigera.io'] + resources: ['*'] + resourceNames: ['*'] + verbs: ['get'] +``` diff --git a/calico-enterprise/multicluster/how-to/configure-federated-services.mdx b/calico-enterprise/multicluster/how-to/configure-federated-services.mdx new file mode 100644 index 0000000000..1a2abb2c25 --- /dev/null +++ b/calico-enterprise/multicluster/how-to/configure-federated-services.mdx @@ -0,0 +1,152 @@ +--- +description: Configure federated services for cross-cluster service discovery. +--- + +# Configure federated services + +Configure local clusters to discover services across multiple clusters.
+
+## Before you begin
+
+**Required**
+
+- [Set up cluster mesh](set-up-cluster-mesh.mdx) with federated endpoint identity configured
+
+## How to
+
+- [Create service resources](#create-service-resources)
+- [Create a federated service](#create-a-federated-service)
+- [Access a federated service](#access-a-federated-service)
+
+### Create service resources
+
+On each cluster in the mesh that is providing a particular service, create your service resources as you normally would, with the following requirements:
+
+- Services must all be in the same namespace.
+- Configure each service with a common label key and value to identify the common set of services across your clusters (for example, `run=my-app`).
+
+Kubernetes manages the service by populating the service endpoints from the pods that match the selector configured in the service spec.
+
+### Create a federated service
+
+1. On a cluster that needs to access the federated set of pods that are running an application, create a
+   service on that cluster, leaving the `spec.selector` blank.
+1. Set the `federation.tigera.io/serviceSelector` annotation to a $[prodname] selector that selects the previously-configured services by their matching label (for example, `run == "my-app"`).
+
+The Federated Services Controller manages this service, populating the service endpoints from all of the services that match the service selector configured in the annotation.
+
+### Access a federated service
+
+The simplest way for an application to access a federated service is through the service's local DNS name.
+
+By default, Kubernetes adds DNS entries to access a service locally.
For a service called `my-svc` in the namespace
+`my-namespace`, the following DNS entry would be added to access the service within the local cluster:
+
+```
+my-svc.my-namespace.svc.cluster.local
+```
+
+DNS lookup for this name returns the fixed ClusterIP address assigned for the federated service. The ClusterIP is translated in iptables to one of the federated service endpoint IPs, and is load balanced across all of the endpoints.
+
+## Example
+
+### Create a service
+
+In the following example, the remote cluster defines the following service.
+
+```yaml
+apiVersion: v1
+kind: Service
+metadata:
+  labels:
+    run: my-app
+  name: my-app
+  namespace: default
+spec:
+  selector:
+    run: my-app
+  ports:
+    - name: my-app-ui
+      port: 80
+      protocol: TCP
+      targetPort: 9000
+    - name: my-app-console
+      port: 81
+      protocol: TCP
+      targetPort: 9001
+  type: ClusterIP
+```
+
+This service definition exposes two ports for the application `my-app`: one for accessing a UI, and the other for accessing a management console. The service specifies a Kubernetes selector, which means the endpoints for this service are automatically populated by Kubernetes from matching pods within the service's own cluster.
+
+### Create a federated service
+
+To create a federated service on your local cluster that federates the web access port for both the local and remote service, create a service resource on your local cluster as follows:
+
+```yaml
+apiVersion: v1
+kind: Service
+metadata:
+  name: my-app-federated
+  namespace: default
+  annotations:
+    federation.tigera.io/serviceSelector: run == "my-app"
+spec:
+  ports:
+    - name: my-app-ui
+      port: 8080
+      protocol: TCP
+  type: ClusterIP
+```
+
+The `spec.selector` is not specified, so the service is not managed by Kubernetes. Instead, we use a `federation.tigera.io/serviceSelector` annotation, indicating that this is a federated service managed by the Federated Services Controller.
+ +The controller matches the `my-app` services (matching the run label) on both the local and remote clusters, and consolidates endpoints from the `my-app-ui` TCP port for both of those services. Because the federated service does not specify the `my-app-console` port, the controller does not include these endpoints in the federated service. + +The endpoints data for the federated service is similar to the following. Note that the name of the remote cluster is included in `targetRef.name`. + +```yaml +apiVersion: v1 +kind: Endpoints +metadata: + creationTimestamp: 2018-07-03T19:41:38Z + annotations: + federation.tigera.io/serviceSelector: run == "my-app" + name: my-app-federated + namespace: default + resourceVersion: '701812' + selfLink: /api/v1/namespaces/default/endpoints/my-app-federated + uid: 1a0427e8-7ef9-11e8-a24c-0259d75c6290 +subsets: + - addresses: + - ip: 192.168.93.12 + nodeName: node1.localcluster.tigera.io + targetRef: + kind: Pod + name: my-app-59cf48cdc7-frf2t + namespace: default + resourceVersion: '701655' + uid: 19f5e914-7ef9-11e8-a24c-0259d75c6290 + ports: + - name: my-app-ui + port: 80 + protocol: TCP + - addresses: + - ip: 192.168.0.28 + nodeName: node1.remotecluster.tigera.io + targetRef: + kind: Pod + name: remotecluster/my-app-7b6f758bd5-ctgbh + namespace: default + resourceVersion: '701648' + uid: 19e2c841-7ef9-11e8-a24c-0259d75c6290 + ports: + - name: my-app-ui + port: 80 + protocol: TCP +``` + +## Additional resources + +- [Federation annotations reference](../reference/federation-annotations.mdx) +- [Federated service controller](../../reference/component-resources/kube-controllers/configuration.mdx) diff --git a/calico-enterprise/multicluster/how-to/configure-log-storage.mdx b/calico-enterprise/multicluster/how-to/configure-log-storage.mdx new file mode 100644 index 0000000000..c2f2572ca8 --- /dev/null +++ b/calico-enterprise/multicluster/how-to/configure-log-storage.mdx @@ -0,0 +1,38 @@ +--- +description: Configure log storage 
collection, retention, and filtering for multi-cluster management.
+---
+
+# Configure log storage
+
+Review and configure log storage for your multi-cluster management deployment.
+
+## Review log storage collection and retention
+
+Because the management cluster stores all log data across your managed clusters, choose a size to accommodate your anticipated data volume. See [Adjust log storage size](../../operations/logstorage/adjust-log-storage-size.mdx).
+
+## Filter log data for a managed cluster in Kibana
+
+1. Log in to the $[prodname] web console.
+1. In the left navigation, click Kibana and log in to the Kibana dashboard.
+1. Navigate to the Discover view and filter logs by managed cluster indexes.
+1. Select a type of log (audit, dns, events, flow).
+1. From the Available Fields section in the side panel, select the `_index` field.
+
+   ![Kibana Cluster Indexes](/img/calico-enterprise/mcm/mcm-kibana.png)
+
+In the example above, the selected log type (flow logs) has the index prefix `tigera_secure_ee_flows`, and two cluster indexes are available:
+
+- Index: tigera_secure_ee_flows.cluster.20200207
+- Index: tigera_secure_ee_flows.app-cluster-1.20200207
+
+:::note
+
+The management cluster has a default cluster name to identify indexes. When filtering logs for the management cluster, use the cluster name: `cluster`.
+
+:::
+
+To filter log data by a given managed cluster, include the filter criteria `_index: ..*` in your query when executing a search through the Kibana UI.
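+For example, to narrow the Discover view to flow logs from the managed cluster `app-cluster-1` shown in the example above, a query-bar filter could look like the following (the index pattern follows the `prefix.cluster-name.date` convention from the example; the exact query syntax depends on your Kibana query language setting):

```
_index: tigera_secure_ee_flows.app-cluster-1.*
```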
+ +## Additional resources + +- [ManagementClusterConnection resource reference](../../reference/installation/api.mdx#managementclusterconnection) diff --git a/calico-enterprise/multicluster/set-up-multi-cluster-management/helm-install/create-a-managed-cluster-helm.mdx b/calico-enterprise/multicluster/how-to/create-a-managed-cluster.mdx similarity index 67% rename from calico-enterprise/multicluster/set-up-multi-cluster-management/helm-install/create-a-managed-cluster-helm.mdx rename to calico-enterprise/multicluster/how-to/create-a-managed-cluster.mdx index dd6c5cd6be..d3ffe2512d 100644 --- a/calico-enterprise/multicluster/set-up-multi-cluster-management/helm-install/create-a-managed-cluster-helm.mdx +++ b/calico-enterprise/multicluster/how-to/create-a-managed-cluster.mdx @@ -1,42 +1,82 @@ --- -description: Install Calico Enterprise managed cluster using Helm application package manager. +description: Create a Calico Enterprise managed cluster that you can control from your management cluster. --- -# Create a Calico Enterprise managed cluster - import CodeBlock from '@theme/CodeBlock'; +import Tabs from '@theme/Tabs'; +import TabItem from '@theme/TabItem'; +import InstallAKS from '@site/calico-enterprise/_includes/components/InstallAKS'; +import InstallGKE from '@site/calico-enterprise/_includes/components/InstallGKE'; +import InstallEKS from '@site/calico-enterprise/_includes/components/InstallEKS'; +import InstallGeneric from '@site/calico-enterprise/_includes/components/InstallGeneric'; +import InstallOpenShift from '@site/calico-enterprise/_includes/components/InstallOpenShift'; -## Big picture +# Create a managed cluster -Create a $[prodname] managed cluster that you can control from your management cluster using Helm 3. +## Before you begin -## Value +**Required** -Helm charts are a way to package up an application for Kubernetes (similar to `apt` or `yum` for operating systems). 
Helm is also used by tools like ArgoCD to manage applications in a cluster, taking care of install, upgrade (and rollback if needed), etc. +- A [$[prodname] management cluster](create-a-management-cluster.mdx) +- A [$[prodname] pull secret](../../getting-started/install-on-clusters/calico-enterprise.mdx) -## Before you begin +For Helm installations, you also need: -**Required** +- Helm 3 installed +- `kubeconfig` configured to work with your cluster (check by running `kubectl get nodes`) -- Install Helm 3 -- `kubeconfig` is configured to work with your cluster (check by running `kubectl get nodes`) -- [Credentials for the Tigera private registry and a license key](../../../getting-started/install-on-clusters/calico-enterprise.mdx) +## How to -## Concepts + + -### Operator-based installation +### Create a managed cluster -In this guide, you install the Tigera Calico operator and custom resource definitions using the Helm 3 chart. The Tigera Operator provides lifecycle management for $[prodname] exposed via the Kubernetes API defined as a custom resource definition. +Follow these steps in the cluster you intend to use as the managed cluster. -## How to + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + ### Download the Helm chart -```bash -helm repo add tigera-ee https://downloads.tigera.io/ee/charts + +{'$[version]' === 'master' + ? `helm repo add tigera gs://tigera-helm-charts +helm repo update +helm pull tigera/tigera-operator --version $[releaseTitle]` + : `helm repo add tigera-ee https://downloads.tigera.io/ee/charts helm repo update -helm pull tigera-ee/tigera-operator --version $[releaseTitle] -``` +helm pull tigera-ee/tigera-operator --version $[releaseTitle]`} + ### Prepare the Installation Configuration @@ -47,7 +87,7 @@ Some important configurations you might need to provide to the installer (via `v Here are some examples for updating `values.yaml` with your configurations: -Example 1. 
Providing `kubernetesProvider`: if you are installing on a cluster installed by EKS, set the `kubernetesProvider` as described in the [Installation reference](../../../reference/installation/api.mdx#provider) +Example 1. Providing `kubernetesProvider`: if you are installing on a cluster installed by EKS, set the `kubernetesProvider` as described in the [Installation reference](../../reference/installation/api.mdx#provider) ```bash echo '{ installation: {kubernetesProvider: EKS }}' > values.yaml @@ -69,11 +109,11 @@ Example 2. Providing custom settings in `values.yaml` for Azure AKS cluster with EOF ``` -For more information about configurable options via `values.yaml` please see [Helm installation reference](../../../reference/installation/helm_customization). +For more information about configurable options via `values.yaml` please see [Helm installation reference](../../reference/installation/helm_customization). ### Install $[prodname] -To install a $[prodname] [managed](../standard-install/create-a-managed-cluster#value) cluster with Helm: +To install a $[prodname] managed cluster with Helm: 1. Export the service port number, and the public IP or host of the management cluster. (Ex. "example.com:1234" or "10.0.0.10:1234".) @@ -82,7 +122,7 @@ To install a $[prodname] [managed](../standard-install/create-a-managed-cluster# ``` 1. Export the management cluster certificate and managed cluster certificate and key. - + If you haven't already done so, generate the base64 encoded CRT and KEY for this managed cluster: ```bash @@ -153,20 +193,12 @@ Define admin-level permissions for the service account `mcm-user` we created to kubectl create clusterrolebinding mcm-user-admin --clusterrole=tigera-network-admin --serviceaccount=default:mcm-user ``` - Congratulations! You have now installed $[prodname] for a managed cluster using the Helm 3 chart. + Congratulations! You have now installed $[prodname] for a managed cluster. 
-## Next steps - -**Recommended** - -- [Configure access to the $[prodname] web console](../../../operations/cnx/access-the-manager.mdx) -- [Authentication quickstart](../../../operations/cnx/authentication-quickstart.mdx) -- [Configure your own identity provider](../../../operations/cnx/configure-identity-provider.mdx) + + -**Recommended - Networking** - -- The default networking is IP in IP encapsulation using BGP routing. For all networking options, see [Determine best networking option](../../../networking/determine-best-networking.mdx). - -**Recommended - Security** +## Next steps -- [Get started with $[prodname] tiered network policy](../../../network-policy/policy-tiers/tiered-policy.mdx) +- [Configure log storage](configure-log-storage.mdx) +- [Change cluster type](change-cluster-type.mdx) diff --git a/calico-enterprise/multicluster/set-up-multi-cluster-management/helm-install/create-a-management-cluster-helm.mdx b/calico-enterprise/multicluster/how-to/create-a-management-cluster.mdx similarity index 64% rename from calico-enterprise/multicluster/set-up-multi-cluster-management/helm-install/create-a-management-cluster-helm.mdx rename to calico-enterprise/multicluster/how-to/create-a-management-cluster.mdx index d669ec33f6..b9ddb3fcf7 100644 --- a/calico-enterprise/multicluster/set-up-multi-cluster-management/helm-install/create-a-management-cluster-helm.mdx +++ b/calico-enterprise/multicluster/how-to/create-a-management-cluster.mdx @@ -1,36 +1,126 @@ --- -description: Install Calico Enterprise management cluster using Helm application package manager. +description: Create a Calico Enterprise management cluster to manage multiple clusters from a single management plane. 
--- -# Create a Calico Enterprise management cluster - import CodeBlock from '@theme/CodeBlock'; import Tabs from '@theme/Tabs'; import TabItem from '@theme/TabItem'; -## Big picture +# Create a management cluster -Create a $[prodname] management cluster to manage multiple clusters from a single management plane using Helm 3. +## Before you begin -## Value +**Required** -Helm charts are a way to package up an application for Kubernetes (similar to `apt` or `yum` for operating systems). Helm is also used by tools like ArgoCD to manage applications in a cluster, taking care of install, upgrade (and rollback if needed), etc. +- A $[prodname] cluster, see [here](../../getting-started/install-on-clusters/index.mdx) for help +- A reachable, public IP address for the management cluster -## Before you begin +For Helm installations, you also need: -**Required** +- Helm 3 installed +- `kubeconfig` configured to work with your cluster (check by running `kubectl get nodes`) +- [Credentials for the Tigera private registry and a license key](../../getting-started/install-on-clusters/calico-enterprise.mdx) -- Install Helm 3 -- `kubeconfig` is configured to work with your cluster (check by running `kubectl get nodes`) -- [Credentials for the Tigera private registry and a license key](../../../getting-started/install-on-clusters/calico-enterprise.mdx) +## How to -## Concepts + + + +### Create a management cluster + +To control managed clusters from your central management plane, you must ensure it is reachable for connections. The simplest way to get started (but not for production scenarios), is to configure a `NodePort` service to expose the management cluster. Note that the service must live within the `calico-system` namespace. + +1. Create a service to expose the management cluster. + The following example of a NodePort service may not be suitable for production and high availability. 
For options, see [Port and service requirements](../reference/port-and-service-requirements.mdx). + Apply the following service manifest. + + ```bash + kubectl create -f - < + ``` +1. Apply the [ManagementCluster](../../reference/installation/api.mdx#managementcluster) CR. + + ```bash + kubectl apply -f - < + ### Get the Helm chart @@ -53,7 +143,7 @@ Some important configurations you might need to provide to the installer (via `v Here are some examples for updating `values.yaml` with your configurations: -Example 1. Providing `kubernetesProvider`: if you are installing on a cluster installed by EKS, set the `kubernetesProvider` as described in the [Installation reference](../../../reference/installation/api.mdx#provider) +Example 1. Providing `kubernetesProvider`: if you are installing on a cluster installed by EKS, set the `kubernetesProvider` as described in the [Installation reference](../../reference/installation/api.mdx#provider) ```bash echo '{ installation: {kubernetesProvider: EKS }}' > values.yaml @@ -75,16 +165,16 @@ Example 2. Providing custom settings in `values.yaml` for Azure AKS cluster with EOF ``` -For more information about configurable options via `values.yaml` please see [Helm installation reference](../../../reference/installation/helm_customization). +For more information about configurable options via `values.yaml` please see [Helm installation reference](../../reference/installation/helm_customization). ### Install $[prodname] - + -To install a $[prodname] [management](create-a-management-cluster-helm#value) cluster with Helm, using a NodePort service: +To install a $[prodname] management cluster with Helm, using a NodePort service: -1. [Configure a storage class for Calico Enterprise](../../../operations/logstorage/create-storage). +1. [Configure a storage class for Calico Enterprise](../../operations/logstorage/create-storage). 1. 
Export the service node port number @@ -131,7 +221,7 @@ To install a $[prodname] [management](create-a-management-cluster-helm#value) cl targetPort: 9449 protocol: TCP nodePort: $EXT_SERVICE_NODE_PORT - + managedClusters: enabled: true clusters: @@ -163,7 +253,7 @@ To install a $[prodname] [management](create-a-management-cluster-helm#value) cl -To install a $[prodname] [management](create-a-management-cluster-helm#value) cluster with Helm, using a LoadBalancer service: +To install a $[prodname] management cluster with Helm, using a LoadBalancer service: #### Meet cloud provider requirements @@ -173,7 +263,7 @@ For example, if you are using EKS, you must meet the requirements defined in [cr #### Install the management cluster -1. [Configure a storage class for Calico Enterprise](../../../operations/logstorage/create-storage). +1. [Configure a storage class for Calico Enterprise](../../operations/logstorage/create-storage). 1. Export one or more managed clusters. @@ -263,7 +353,7 @@ For example, if you are using EKS, you must meet the requirements defined in [cr ``` Replace the `address` field in the ManagementCluster resource. - + ```bash kubectl patch managementcluster tigera-secure --type merge -p "{\"spec\":{\"address\":\"${MANAGEMENT_CLUSTER_ADDR}\"}}" ``` @@ -271,7 +361,7 @@ For example, if you are using EKS, you must meet the requirements defined in [cr -#### Create an admin user and verify management cluster connection +### Create an admin user and verify management cluster connection To access resources in a managed cluster from the $[prodname] web console within the management cluster, the logged-in user must have appropriate permissions defined in that managed cluster (clusterrole bindings). @@ -285,20 +375,13 @@ Create an admin user, `mcm-user`, in the default namespace with full permissions Use the generated token, to connect to the UI. 
In the top right banner in the UI, your management cluster is displayed as the first entry in the cluster selection drop-down menu with the fixed name, `management cluster`. - Congratulations! You have now installed $[prodname] for a management cluster using the Helm 3 chart. + Congratulations! You have now installed $[prodname] for a management cluster. -## Next steps - -**Recommended** - -- [Configure access to the $[prodname] web console](../../../operations/cnx/access-the-manager.mdx) -- [Authentication quickstart](../../../operations/cnx/authentication-quickstart.mdx) -- [Configure your own identity provider](../../../operations/cnx/configure-identity-provider.mdx) - -**Recommended - Networking** - -- The default networking is IP in IP encapsulation using BGP routing. For all networking options, see [Determine best networking option](../../../networking/determine-best-networking.mdx). + + -**Recommended - Security** +## Next steps -- [Get started with $[prodname] tiered network policy](../../../network-policy/policy-tiers/tiered-policy.mdx) +- [Create a managed cluster](create-a-managed-cluster.mdx) +- [Port and service requirements](../reference/port-and-service-requirements.mdx) +- [Change cluster type](change-cluster-type.mdx) diff --git a/calico-enterprise/multicluster/set-up-multi-cluster-management/helm-install/index.mdx b/calico-enterprise/multicluster/how-to/index.mdx similarity index 64% rename from calico-enterprise/multicluster/set-up-multi-cluster-management/helm-install/index.mdx rename to calico-enterprise/multicluster/how-to/index.mdx index 32706b82e6..26017beca9 100644 --- a/calico-enterprise/multicluster/set-up-multi-cluster-management/helm-install/index.mdx +++ b/calico-enterprise/multicluster/how-to/index.mdx @@ -1,9 +1,9 @@ --- -description: Steps to configure management and managed clusters using Helm. +description: How-to guides for setting up and configuring multi-cluster management and cluster mesh. 
hide_table_of_contents: true --- -# Helm and multi-cluster management +# How-to guides import DocCardList from '@theme/DocCardList'; import { useCurrentSidebarCategory } from '@docusaurus/theme-common'; diff --git a/calico-enterprise/multicluster/how-to/set-up-cluster-mesh.mdx b/calico-enterprise/multicluster/how-to/set-up-cluster-mesh.mdx new file mode 100644 index 0000000000..3b32994476 --- /dev/null +++ b/calico-enterprise/multicluster/how-to/set-up-cluster-mesh.mdx @@ -0,0 +1,284 @@ +--- +description: Configure a cluster mesh by connecting clusters together for cross-cluster endpoint sharing, connectivity, and service discovery. +--- + +import Tabs from '@theme/Tabs'; +import TabItem from '@theme/TabItem'; + +# Set up cluster mesh + +Create a $[prodname] cluster mesh by connecting clusters together. Once created, the cluster mesh enables multi-cluster networking, network policy for cross-cluster connections, cross-cluster services, and encryption via WireGuard. + +## Before you begin + +**Required** + +- [Configure federated endpoint identity](../explanation/cluster-mesh.mdx) — understand how the mesh works before proceeding. +- All clusters in the mesh must have $[prodname] installed. + +If you plan to use $[prodname] multi-cluster networking (overlay routing), also verify: + +- All nodes participating in the cluster mesh can establish connections to each other via their private IP. +- All nodes participating in the cluster mesh have unique node names. +- Pod CIDRs between clusters do not overlap. +- All clusters have at least one overlay network in common (VXLAN and/or WireGuard). +- All clusters have the same `routeSource` setting on `FelixConfiguration`. + +If using VXLAN: +- The `vxlan*` settings on `FelixConfiguration` must be the same across clusters participating in the mesh. +- The underlying network must allow traffic on `vxlanPort` between clusters participating in the mesh. +- All clusters must use Calico CNI. 
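+One way to check the VXLAN prerequisites is to compare the relevant `FelixConfiguration` fields across the clusters, for example with `kubectl get felixconfiguration default -o yaml`. The values below are illustrative defaults, not required settings; what matters is that they match across the mesh:

```yaml
apiVersion: projectcalico.org/v3
kind: FelixConfiguration
metadata:
  name: default
spec:
  routeSource: CalicoIPAM # must be the same in every cluster
  vxlanEnabled: true
  vxlanPort: 4789 # the underlying network must allow this port between clusters
  vxlanVNI: 4096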
+ +If using WireGuard: +- The `wireguard*` settings on `FelixConfiguration` must be the same across clusters participating in the mesh. +- The underlying network must allow traffic on `wireguardListeningPort` between clusters participating in the mesh. +- All clusters must use Calico CNI OR All clusters must use non-Calico CNI (mixing non-Calico CNI types is supported). + +:::note + +Much like intra-cluster routing in $[prodname], cross-cluster routing can utilize both VXLAN and WireGuard at the same time. If both are enabled and a WireGuard peer is not ready, communication with that peer will fall back to VXLAN. + +::: + +## Generate credentials for cross-cluster resource synchronization + +The basis of cluster mesh is the ability for a cluster to connect to a remote cluster and sync data from it. This enables each $[prodname] cluster to have a view into the datastore that includes both local and remote cluster pods. + +In this section, you will create a `kubeconfig` for each cluster. This `kubeconfig` is what other clusters will use to connect to a given cluster and synchronize data from it. + +**For each** cluster in the cluster mesh, utilizing an existing `kubeconfig` with administrative privileges, follow these steps: + +1. Create the ServiceAccount used by remote clusters for authentication: + + ```bash + kubectl apply -f $[filesUrl]/manifests/federation-remote-sa.yaml + ``` + +1. Create the ClusterRole and ClusterRoleBinding used by remote clusters for authorization: + + ```bash + kubectl apply -f $[filesUrl]/manifests/federation-rem-rbac-kdd.yaml + ``` +1. Create the ServiceAccount token that will be used in the `kubeconfig`: + + ```yaml + kubectl apply -f - < $KUBECONFIG_NAME + ``` + +1. 
Verify that the `kubeconfig` file works: + + Issue the following command to validate the `kubeconfig` file can be used to connect to the current cluster and access resources: + ```bash + kubectl --kubeconfig=$KUBECONFIG_NAME get nodes + ``` + +Once you've created a `kubeconfig` for **each** cluster, proceed to the next section. + +## Establish cross-cluster resource synchronization + +The cluster mesh is formed when each cluster connects to every other cluster to synchronize data. A cluster connects to another cluster using a RemoteClusterConfiguration, which references a kubeconfig created for the remote cluster. + +Within each cluster, create a RemoteClusterConfiguration for each other cluster in the mesh. + + + + +$[prodname] achieves cross-cluster routing by extending the overlay network of a cluster to include nodes from remote clusters. + +**For each pair** of clusters in the cluster mesh (e.g. \{A,B\}, \{A,C\}, \{B,C\} for clusters A,B,C): + +1. In cluster 1, create a secret that contains the `kubeconfig` for cluster 2: + + Determine the namespace (``) for the secret to replace in all steps. + The simplest method to create a secret for a remote cluster is to use the `kubectl` command because it correctly encodes the data and formats the file. + ```bash + kubectl create secret generic remote-cluster-secret-name -n \ + --from-literal=datastoreType=kubernetes \ + --from-file=kubeconfig= + ``` + +1. 
If RBAC is enabled in cluster 1, create a Role and RoleBinding for $[prodname] to use to access the secret that contains the `kubeconfig` for cluster 2: + ```bash + kubectl create -f - < + rules: + - apiGroups: [""] + resources: ["secrets"] + verbs: ["watch", "list", "get"] + --- + apiVersion: rbac.authorization.k8s.io/v1 + kind: RoleBinding + metadata: + name: remote-cluster-secret-access + namespace: + roleRef: + apiGroup: rbac.authorization.k8s.io + kind: Role + name: remote-cluster-secret-access + subjects: + - kind: ServiceAccount + name: calico-typha + namespace: calico-system + EOF + ``` + +1. Create the RemoteClusterConfiguration in cluster 1: + + Within the RemoteClusterConfiguration, we specify the secret used to access cluster 2, and the overlay routing mode which toggles the establishment of cross-cluster overlay routes. + ```bash + kubectl create -f - < + kind: Secret + syncOptions: + overlayRoutingMode: Enabled + EOF + ``` + +1. [Validate](validate-multi-cluster-setup.mdx) that the remote cluster connection can be established. + +1. Repeat the above steps, switching cluster 1 and cluster 2. + + + +In this setup, the cluster mesh will rely on the underlying network to provide cross-cluster routing that preserves pod IPs. + +**For each pair** of clusters in the cluster mesh (e.g. \{A,B\}, \{A,C\}, \{B,C\} for clusters A,B,C): + +1. In cluster 1, create a secret that contains the `kubeconfig` for cluster 2: + + Determine the namespace (``) for the secret to replace in all steps. + The simplest method to create a secret for a remote cluster is to use the `kubectl` command because it correctly encodes the data and formats the file. + ```bash + kubectl create secret generic remote-cluster-secret-name -n \ + --from-literal=datastoreType=kubernetes \ + --from-file=kubeconfig= + ``` + +1. 
If RBAC is enabled in cluster 1, create a Role and RoleBinding for $[prodname] to use to access the secret that contains the `kubeconfig` for cluster 2: + ```bash + kubectl create -f - < + rules: + - apiGroups: [""] + resources: ["secrets"] + verbs: ["watch", "list", "get"] + --- + apiVersion: rbac.authorization.k8s.io/v1 + kind: RoleBinding + metadata: + name: remote-cluster-secret-access + namespace: + roleRef: + apiGroup: rbac.authorization.k8s.io + kind: Role + name: remote-cluster-secret-access + subjects: + - kind: ServiceAccount + name: calico-typha + namespace: calico-system + EOF + ``` + +1. Create the RemoteClusterConfiguration in cluster 1: + + Within the RemoteClusterConfiguration, we specify the secret used to access cluster 2, and the overlay routing mode which toggles the establishment of cross-cluster overlay routes. + ```bash + kubectl create -f - < + kind: Secret + syncOptions: + overlayRoutingMode: Disabled + EOF + ``` + +1. If you have no IP pools in cluster 1 with NAT-outgoing enabled, skip this step. + + Otherwise, if you have IP pools in cluster 1 with NAT-outgoing enabled, and workloads in that pool will egress to workloads in cluster 2, you need to instruct $[prodname] to not perform NAT on traffic destined for IP pools in cluster 2. + + You can achieve this by creating a disabled IP pool in cluster 1 for each CIDR in cluster 2. This IP pool should have NAT-outgoing disabled. For example: + + ```yaml + apiVersion: projectcalico.org/v3 + kind: IPPool + metadata: + name: cluster2-main-pool + spec: + cidr: + disabled: true + ``` + +1. [Validate](validate-multi-cluster-setup.mdx) that the remote cluster connection can be established. + +1. Repeat the above steps, switching cluster 1 and cluster 2. + + + + +After completing the above steps for all cluster pairs in the cluster mesh, your clusters should now be forming a cluster mesh. You should now be able to route traffic between clusters, and write policy that can select remote workloads. 
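+If overlay routing is disabled and cluster 2 advertises several pod CIDRs, the disabled IP pool step above needs one pool per CIDR. The following sketch generates those manifests; the pool name prefix and CIDRs are hypothetical placeholders, so substitute your own before piping the output to `kubectl apply -f -` in cluster 1:

```shell
#!/bin/sh
# Generate a disabled IPPool manifest for each remote pod CIDR so that
# NAT-outgoing is not applied to traffic bound for cluster 2.
gen_pools() {
  i=1
  for cidr in "$@"; do
    cat <<EOF
---
apiVersion: projectcalico.org/v3
kind: IPPool
metadata:
  name: cluster2-pool-$i
spec:
  cidr: $cidr
  disabled: true
EOF
    i=$((i + 1))
  done
}

# Example with two hypothetical remote CIDRs; pipe the output to
# "kubectl apply -f -" against cluster 1.
gen_pools 10.245.0.0/16 10.246.0.0/16
```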
+ +## Switch to multi-cluster networking + +The steps above assume that you are configuring both federated endpoint identity and multi-cluster networking for the first time. If you already have federated endpoint identity, and want to use multi-cluster networking, follow these steps: + +1. Validate that all [requirements](#before-you-begin) for multi-cluster networking have been met. +2. Update the ClusterRole in each cluster in the cluster mesh using the RBAC manifest found in [Generate credentials for cross-cluster authentication](#generate-credentials-for-cross-cluster-resource-synchronization) +3. In all RemoteClusterConfigurations, set `Spec.OverlayRoutingMode` to `Enabled`. +4. Verify that all RemoteClusterConfigurations are bidirectional (in both directions for each cluster pair) using these [instructions](#establish-cross-cluster-resource-synchronization). +5. If you had previously created disabled IP pools to prevent NAT outgoing from applying to remote cluster destinations, those disabled IP pools are no longer needed when using multi-cluster networking and must be deleted. + +## Next steps + +- [Validate multi-cluster setup](validate-multi-cluster-setup.mdx) +- [Configure federated services](configure-federated-services.mdx) diff --git a/calico-enterprise/multicluster/how-to/validate-multi-cluster-setup.mdx b/calico-enterprise/multicluster/how-to/validate-multi-cluster-setup.mdx new file mode 100644 index 0000000000..5bc544df49 --- /dev/null +++ b/calico-enterprise/multicluster/how-to/validate-multi-cluster-setup.mdx @@ -0,0 +1,52 @@ +--- +description: Validate your multi-cluster setup including RemoteClusterConfiguration, federated endpoint identity, and multi-cluster networking. 
+---
+
+# Validate multi-cluster setup
+
+## Validate RemoteClusterConfiguration and federated endpoint identity
+
+### Check remote cluster connection
+
+You can validate in a local cluster that Typha has synced to the remote cluster through the [Prometheus metrics for Typha](../../reference/component-resources/typha/prometheus#metric-reference).
+
+Alternatively, you can check the Typha logs for remote cluster connection status. Run the following command:
+```bash
+kubectl logs deployment/calico-typha -n calico-system | grep "Sending in-sync update"
+```
+You should see an entry for each RemoteClusterConfiguration in the local cluster.
+
+If either output contains unexpected results, proceed to the [Troubleshooting](../troubleshooting.mdx) page.
+
+## Validate multi-cluster networking
+
+If all requirements were met for $[prodname] to establish multi-cluster networking, you can test the functionality by establishing a connection from a pod in a local cluster to the IP of a pod in a remote cluster. Ensure that no policy in either cluster blocks this connection.
+
+If the connection fails, proceed to the [Troubleshooting](../troubleshooting.mdx) page.
+
+## Create remote-identity-aware network policy
+
+With federated endpoint identity and routing between clusters established, you can use labels to reference endpoints on a remote cluster in local policy rules, rather than referencing them by IP address.
+
+The main policy selector still refers only to local endpoints; that selector determines which local endpoints the policy applies to.
+However, rule selectors can now refer to both local and remote endpoints.
+
+In the following example, cluster A (an application cluster) has a network policy that governs outbound connections to cluster B (a database cluster).
+```yaml +apiVersion: projectcalico.org/v3 +kind: NetworkPolicy +metadata: + name: default.app-to-db + namespace: myapp +spec: + # The main policy selector selects endpoints from the local cluster only. + selector: app == 'backend-app' + tier: default + egress: + - destination: + # Rule selectors can select endpoints from local AND remote clusters. + selector: app == 'postgres' + protocol: TCP + ports: [5432] + action: Allow +``` diff --git a/calico-enterprise/multicluster/index.mdx b/calico-enterprise/multicluster/index.mdx index 40c70c3b65..9b737587d4 100644 --- a/calico-enterprise/multicluster/index.mdx +++ b/calico-enterprise/multicluster/index.mdx @@ -1,40 +1,51 @@ --- -description: Calico Enterprise features for scaling to production. -hide_table_of_contents: true +description: Centralize control of multiple Kubernetes clusters with multi-cluster management, federated endpoint identity, federated services, and multi-cluster networking. --- import { DocCardLink, DocCardLinkLayout } from '/src/___new___/components'; -# Multi-cluster management and federation +# Multi-cluster management and cluster mesh With multi-cluster management, you can centralize control of multiple Kubernetes clusters in a single management plane, with federated endpoint identity, federated services, and multi-cluster networking. 
-## Setting up multi-cluster management +## Understand - - + + + -## Setting up multi-cluster management using Helm +## Set up and configure - - + + + + + + + + -## Cluster mesh +## Reference - - - - + + + + -## Advanced +## Troubleshoot - - - \ No newline at end of file + + + +## Tutorial + + + + diff --git a/calico-enterprise/multicluster/reference/custom-resources.mdx b/calico-enterprise/multicluster/reference/custom-resources.mdx new file mode 100644 index 0000000000..1f4cd74254 --- /dev/null +++ b/calico-enterprise/multicluster/reference/custom-resources.mdx @@ -0,0 +1,34 @@ +--- +description: Quick reference for multi-cluster management custom resource definitions. +--- + +# Custom resources + +This page provides a quick reference for the custom resource definitions (CRDs) used in $[prodname] multi-cluster management and cluster mesh. + +## Multi-cluster management CRDs + +| Resource | API Group | Cluster | Description | +|---|---|---|---| +| [ManagementCluster](../../reference/installation/api.mdx#managementcluster) | `operator.tigera.io/v1` | Management | Declares a cluster as a management cluster. Specifies the address that managed clusters use to connect. | +| [ManagedCluster](../../reference/resources/managedcluster.mdx) | `projectcalico.org/v3` | Management | Registers a managed cluster on the management cluster. Creating this resource generates an installation manifest for the managed cluster. | +| [ManagementClusterConnection](../../reference/installation/api.mdx#managementclusterconnection) | `operator.tigera.io/v1` | Managed | Connects a managed cluster to the management cluster. Applied from the manifest generated by the ManagedCluster resource. | + +## Cluster mesh CRDs + +| Resource | API Group | Cluster | Description | +|---|---|---|---| +| [RemoteClusterConfiguration](../../reference/resources/remoteclusterconfiguration.mdx) | `projectcalico.org/v3` | Any | Configures a connection to a remote cluster in the mesh. 
References a Secret containing a kubeconfig for the remote cluster. Controls overlay routing mode. | + +## Usage patterns + +### Management cluster setup + +1. Apply a `ManagementCluster` resource on the management cluster to declare it as a management cluster and specify the connection address. +2. For each managed cluster, create a `ManagedCluster` resource on the management cluster. +3. Apply the generated `ManagementClusterConnection` manifest on the managed cluster. + +### Cluster mesh setup + +1. For each cluster pair (A, B), create a `RemoteClusterConfiguration` in cluster A referencing cluster B's kubeconfig, and vice versa. +2. Set `syncOptions.overlayRoutingMode` to `Enabled` for $[prodname] multi-cluster networking, or `Disabled` for network-provided routing. diff --git a/calico-enterprise/multicluster/reference/federation-annotations.mdx b/calico-enterprise/multicluster/reference/federation-annotations.mdx new file mode 100644 index 0000000000..283c81a821 --- /dev/null +++ b/calico-enterprise/multicluster/reference/federation-annotations.mdx @@ -0,0 +1,54 @@ +--- +description: Reference for federation annotations and labels used with federated services. +--- + +# Federation annotations + +Reference for the annotations and labels used by $[prodname] federated services. + +## Annotations + +| Annotation | Description | +|---|---| +| `federation.tigera.io/serviceSelector` | Required field that specifies the services used in the federated service. Format is a standard $[prodname] selector (i.e. the same as $[prodname] policy resources) and selects services based on their labels. The selector annotation selects services, not pods.

Only services in the same namespace as the federated service are included. This implies namespace names across clusters are linked (this is a basic premise of federated endpoint identity).

If the value is incorrectly specified, the service is not federated and endpoint data is removed from the service. View the warning logs in the controller for any issues processing this value. | + +### Syntax and rules + +- Services that you specify in the federated service must be in the same namespace or they are ignored. A basic assumption of federated endpoint identity is that namespace names are linked across clusters. +- If you specify a `spec.Selector` in a federated service, the service is not federated. +- You cannot federate another federated service. If a service has a federated services annotation, it is not included as a backing service of another federated service. +- The target port number in the federated service ports is not used. + +### Example + +```yaml +apiVersion: v1 +kind: Service +metadata: + name: my-app-federated + namespace: default + annotations: + federation.tigera.io/serviceSelector: run == "my-app" +spec: + ports: + - name: my-app-ui + port: 8080 + protocol: TCP + type: ClusterIP +``` + +## Labels + +| Label | Description | +|---|---| +| `federation.tigera.io/remoteClusterName` | Label added to all remote services that correspond to the Remote Cluster Configuration name for the remote cluster. Use this label to restrict the clusters selected by the services. **Note**: The label is not added for services in the local cluster. | + +### Match services using a label + +You can match services using the `federation.tigera.io/remoteClusterName` label. The label is implicitly added to each service, but it does not appear in `kubectl` when viewing the service. + +## Endpoints behavior + +- Do not manually create or manage endpoints resources; let the Tigera controller do all of the work. User updates to endpoint resources are ignored. +- Endpoints are selected only when the service port name and protocol in the federated service matches the port name and protocol in the backing service. 
+- Endpoint data configured in the federated service is slightly modified from the original data of the backing service. For backing services on remote clusters, the `targetRef.name` field in the federated service adds the ``. For example, `/`. diff --git a/calico-enterprise/multicluster/reference/helm-values.mdx b/calico-enterprise/multicluster/reference/helm-values.mdx new file mode 100644 index 0000000000..964ef4f564 --- /dev/null +++ b/calico-enterprise/multicluster/reference/helm-values.mdx @@ -0,0 +1,109 @@ +--- +description: Reference for Helm values used in multi-cluster management installations. +--- + +# Helm values + +Reference for Helm `values.yaml` blocks used in $[prodname] multi-cluster management installations. + +For the complete Helm installation reference, see [Helm installation reference](../../reference/installation/helm_customization). + +## Management cluster values + +### `managementCluster` + +Configure the cluster as a management cluster. + +```yaml +managementCluster: + enabled: true + address: : + service: + enabled: true + annotations: + type: NodePort # or LoadBalancer + port: 9449 + targetPort: 9449 + protocol: TCP + nodePort: 30449 # only for NodePort +``` + +| Field | Description | +|---|---| +| `enabled` | Set to `true` to configure this cluster as a management cluster. | +| `address` | The address (host:port) that managed clusters use to connect. For LoadBalancer, set this after the LB is provisioned. | +| `service.enabled` | Set to `true` to create a Service to expose the management cluster. | +| `service.type` | `NodePort` or `LoadBalancer`. | +| `service.port` | Service port. Must be `9449`. | +| `service.targetPort` | Target port on the Manager pod. Must be `9449`. | +| `service.nodePort` | External node port (NodePort only). | +| `service.annotations` | Service annotations (e.g., for AWS NLB configuration). 
| + +#### EKS LoadBalancer annotations + +If using EKS with a LoadBalancer, add the following annotations: + +```yaml +managementCluster: + service: + annotations: + - key: service.beta.kubernetes.io/aws-load-balancer-type + value: "external" + - key: service.beta.kubernetes.io/aws-load-balancer-nlb-target-type + value: "instance" + - key: service.beta.kubernetes.io/aws-load-balancer-scheme + value: "internet-facing" +``` + +### `managedClusters` + +Register managed clusters on the management cluster. + +```yaml +managedClusters: + enabled: true + clusters: + - name: my-managed-cluster + operatorNamespace: tigera-operator + certificate: +``` + +| Field | Description | +|---|---| +| `enabled` | Set to `true` to register managed clusters. | +| `clusters[].name` | Name of the managed cluster. | +| `clusters[].operatorNamespace` | Namespace where the Tigera Operator runs on the managed cluster. | +| `clusters[].certificate` | Base64-encoded certificate for the managed cluster. | + +## Managed cluster values + +### `managementClusterConnection` + +Configure the cluster as a managed cluster. + +```yaml +managementClusterConnection: + enabled: true + managementClusterAddress: : + management: + tls: + crt: + managed: + tls: + crt: + key: +``` + +| Field | Description | +|---|---| +| `enabled` | Set to `true` to configure this cluster as a managed cluster. | +| `managementClusterAddress` | Address of the management cluster. | +| `management.tls.crt` | Base64-encoded TLS certificate of the management cluster. | +| `managed.tls.crt` | Base64-encoded TLS certificate for the managed cluster. | +| `managed.tls.key` | Base64-encoded TLS private key for the managed cluster. | + +:::note + +When installing a managed cluster with Helm, also set `logStorage.enabled=false` and `manager.enabled=false` because these components run on the management cluster. 
+ +::: diff --git a/calico-enterprise/multicluster/set-up-multi-cluster-management/standard-install/index.mdx b/calico-enterprise/multicluster/reference/index.mdx similarity index 57% rename from calico-enterprise/multicluster/set-up-multi-cluster-management/standard-install/index.mdx rename to calico-enterprise/multicluster/reference/index.mdx index 9f14dd31f6..d3f8f37fa3 100644 --- a/calico-enterprise/multicluster/set-up-multi-cluster-management/standard-install/index.mdx +++ b/calico-enterprise/multicluster/reference/index.mdx @@ -1,9 +1,9 @@ --- -description: Steps to configure management and managed clusters using standard operator installation. +description: Reference documentation for multi-cluster management resources, annotations, and configuration. hide_table_of_contents: true --- -# Standard operator install and multi-cluster management +# Reference import DocCardList from '@theme/DocCardList'; import { useCurrentSidebarCategory } from '@docusaurus/theme-common'; diff --git a/calico-enterprise/multicluster/reference/port-and-service-requirements.mdx b/calico-enterprise/multicluster/reference/port-and-service-requirements.mdx new file mode 100644 index 0000000000..ad759fb997 --- /dev/null +++ b/calico-enterprise/multicluster/reference/port-and-service-requirements.mdx @@ -0,0 +1,103 @@ +--- +description: Port, service, and network requirements for multi-cluster management and cluster mesh. +--- + +# Port and service requirements + +## Management cluster service requirements + +The management cluster must be exposed to managed clusters via a Kubernetes Service. 
The service configuration must meet these requirements: + +- Uses TCP protocol +- Maps to port 9449 on the Manager (web console) pod +- Exists within the `calico-system` namespace +- Uses label selector `k8s-app: calico-manager` + +### NodePort example + +```yaml +apiVersion: v1 +kind: Service +metadata: + name: calico-manager-mcm + namespace: calico-system +spec: + type: NodePort + ports: + - nodePort: 30449 + port: 9449 + protocol: TCP + targetPort: 9449 + selector: + k8s-app: calico-manager +``` + +### LoadBalancer example + +```yaml +apiVersion: v1 +kind: Service +metadata: + name: calico-manager-mcm + namespace: calico-system +spec: + type: LoadBalancer + ports: + - port: 9449 + protocol: TCP + targetPort: 9449 + selector: + k8s-app: calico-manager +``` + +:::note + +Using a LoadBalancer may require additional steps, depending on how you provisioned your Kubernetes cluster. + +::: + +:::note + +If you previously set up a management cluster with a service, don't forget to update the IP address in each managed cluster, by editing the `ManagementClusterConnection` and applying it, or by using `kubectl edit managementclusterconnection tigera-secure`. + +::: + +For both NodePort and LoadBalancer, a security rule/firewall rule is needed to allow connections to the management cluster. + +## Cluster mesh network requirements + +### Node connectivity + +All nodes participating in the cluster mesh must be able to establish connections to each other via their private IP. + +### Unique node names + +All nodes participating in the cluster mesh must have unique node names. + +### Non-overlapping pod CIDRs + +Pod CIDRs between clusters must not overlap. + +### Overlay requirements + +All clusters must have at least one overlay network in common (VXLAN and/or WireGuard). + +All clusters must have the same `routeSource` setting on `FelixConfiguration`. + +#### VXLAN requirements + +- The `vxlan*` settings on `FelixConfiguration` must be the same across clusters. 
+- The underlying network must allow traffic on `vxlanPort` between clusters. +- All clusters must use Calico CNI. + +#### WireGuard requirements + +- The `wireguard*` settings on `FelixConfiguration` must be the same across clusters. +- The underlying network must allow traffic on `wireguardListeningPort` between clusters. +- All clusters must use Calico CNI OR all clusters must use non-Calico CNI (mixing non-Calico CNI types is supported). + +:::note + +Cross-cluster routing can utilize both VXLAN and WireGuard at the same time. If both are enabled and a WireGuard peer is not ready, communication with that peer will fall back to VXLAN. + +::: diff --git a/calico-enterprise/multicluster/set-up-multi-cluster-management/standard-install/create-a-managed-cluster.mdx b/calico-enterprise/multicluster/set-up-multi-cluster-management/standard-install/create-a-managed-cluster.mdx deleted file mode 100644 index ff52c2aecf..0000000000 --- a/calico-enterprise/multicluster/set-up-multi-cluster-management/standard-install/create-a-managed-cluster.mdx +++ /dev/null @@ -1,72 +0,0 @@ ---- -description: Create a Calico Enterprise managed cluster that you can control from you management cluster. ---- - -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; -import InstallAKS from '@site/calico-enterprise/_includes/components/InstallAKS'; -import InstallGKE from '@site/calico-enterprise/_includes/components/InstallGKE'; -import InstallEKS from '@site/calico-enterprise/_includes/components/InstallEKS'; -import InstallGeneric from '@site/calico-enterprise/_includes/components/InstallGeneric'; -import InstallOpenShift from '@site/calico-enterprise/_includes/components/InstallOpenShift'; - -# Create a Calico Enterprise managed cluster - -## Big picture - -Create a $[prodname] managed cluster that you can control from your management cluster. 
- -## Value - -Managing standalone clusters and multiple instances of Elasticsearch is not onerous when you first install $[prodname]. -As you move to production with 300+ clusters, it is not scalable; you need centralized cluster management and log storage. -With $[prodname] multi-cluster management, you can securely connect multiple clusters from different cloud providers -in a single management plane, and control user access using RBAC. This architecture also supports federation of network -policy resources across clusters, and lays the foundation for a “single pane of glass.” - -## Before you begin... - -**Required** - -- A [Calico Enterprise management cluster](create-a-management-cluster.mdx) -- A [$[prodname] pull secret](../../../getting-started/install-on-clusters/calico-enterprise.mdx) - -## How to - -### Create a managed cluster - -Follow these steps in the cluster you intend to use as the managed cluster. - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -## Next steps - -- When you are ready to fine-tune your multi-cluster management deployment for production, see [Fine-tune multi-cluster management](../../fine-tune-deployment.mdx) -- To change an existing $[prodname] standalone cluster to a management or managed cluster, see [Change cluster types](../../change-cluster-type.mdx) diff --git a/calico-enterprise/multicluster/set-up-multi-cluster-management/standard-install/create-a-management-cluster.mdx b/calico-enterprise/multicluster/set-up-multi-cluster-management/standard-install/create-a-management-cluster.mdx deleted file mode 100644 index 9ea70be047..0000000000 --- a/calico-enterprise/multicluster/set-up-multi-cluster-management/standard-install/create-a-management-cluster.mdx +++ /dev/null @@ -1,121 +0,0 @@ ---- -description: Create a Calico Enterprise management cluster to manage multiple clusters from a single management plane. 
---- - -# Create a Calico Enterprise management cluster - -## Big picture - -Create a $[prodname] management cluster to manage multiple clusters from a single management plane. - -## Value - -Managing standalone clusters and multiple instances of Elasticsearch is not onerous when you first install $[prodname]. But as you move to production with 300+ clusters, it is not scalable; you need centralized cluster management and log storage. With $[prodname] multi-cluster management, you can securely connect multiple clusters from different cloud providers in a single management plane, and control user access using RBAC. This architecture also supports federation of network policy resources across clusters, and lays the foundation for a “single pane of glass.” - -## Before you begin... - -**Required** - -- A Calico Enterprise cluster, see [here](../../../getting-started/install-on-clusters/index.mdx) for help -- A reachable, public IP address for the management cluster - -## How to - -### Create a management cluster - -To control managed clusters from your central management plane, you must ensure it is reachable for connections. The simplest way to get started (but not for production scenarios), is to configure a `NodePort` service to expose the management cluster. Note that the service must live within the `calico-system` namespace. - -1. Create a service to expose the management cluster. - The following example of a NodePort service may not be suitable for production and high availability. For options, see [Fine-tune multi-cluster management for production](../../fine-tune-deployment.mdx). - Apply the following service manifest. - - ```bash - kubectl create -f - < - ``` -1. Apply the [ManagementCluster](../../../reference/installation/api.mdx#managementcluster) CR. 
- - ```bash - kubectl apply -f - < remote-cluster-secret-name -o=jsonpath="{.data.kubeconfig}" | base64 -d > verify_kubeconfig_b + kubectl --kubeconfig=verify_kubeconfig_b get nodes + ``` + This validates that the credentials used by Typha to connect to cluster B's API server are stored in the correct location and provide sufficient access. + + The command above should yield a result like the following: + ``` + NAME STATUS ROLES AGE VERSION + clusterB-master Ready master 7d v1.27.0 + clusterB-worker-1 Ready worker 7d v1.27.0 + clusterB-worker-2 Ready worker 7d v1.27.0 + ``` + + If you do not see the nodes of cluster B listed in response to the command above, verify that you [created](how-to/set-up-cluster-mesh.mdx#generate-credentials-for-cross-cluster-resource-synchronization) the `kubeconfig` for cluster B correctly, and that you [stored](how-to/set-up-cluster-mesh.mdx#establish-cross-cluster-resource-synchronization) it in cluster A correctly. + + If you do see the nodes of cluster B listed in response to the command above, you can run this test (or a similar test) on a node in cluster A to verify that cluster A nodes can connect to the API server of cluster B. + +2. Validate that the Typha service account in Cluster A is authorized to retrieve the `kubeconfig` secret for cluster B. + ```bash + kubectl auth can-i list secrets --namespace --as=system:serviceaccount:calico-system:calico-typha + ``` + + This command should yield the following output: + ``` + yes + ``` + + If the command does not return this output, verify that you correctly [configured RBAC](how-to/set-up-cluster-mesh.mdx#establish-cross-cluster-resource-synchronization) in cluster A. + +3. Repeat the above, switching cluster A to cluster B. + +### Check logs + +Validate that querying Typha logs yield the expected result outlined in the [validation](how-to/validate-multi-cluster-setup.mdx) page. 
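+
+For convenience, the log check described on the validation page is the following; it assumes the default operator install, where Typha runs as the `calico-typha` deployment in the `calico-system` namespace:
+
+```bash
+kubectl logs deployment/calico-typha -n calico-system | grep "Sending in-sync update"
+```
+
+You should see one entry per RemoteClusterConfiguration in the local cluster.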
+ +If the Typha logs do not yield the expected result, review the warning or error-related logs in `typha` or `calico-node` for insights. + +### calicoq + +[calicoq](../operations/clis/calicoq/installing) can be used to emulate the connection that Typha will make to remote clusters. Use the following command: +```bash +calicoq eval "all()" +``` +If all remote clusters are accessible, calicoq returns something like the following. Note the remote cluster prefixes: there should be endpoints prefixed with the name of each RemoteClusterConfiguration in the local cluster. +``` +Endpoints matching selector all(): + Workload endpoint remote-cluster-1/host-1/k8s/kube-system.kube-dns-5fbcb4d67b-h6686/eth0 + Workload endpoint remote-cluster-1/host-2/k8s/kube-system.cnx-manager-66c4dbc5b7-6d9xv/eth0 + Workload endpoint host-a/k8s/kube-system.kube-dns-5fbcb4d67b-7wbhv/eth0 + Workload endpoint host-b/k8s/kube-system.cnx-manager-66c4dbc5b7-6ghsm/eth0 +``` + +If this command fails, the error messages returned by the command may provide insight into where issues are occurring. + +## Troubleshoot multi-cluster networking + +### Basic validation + +* Ensure that RemoteClusterConfiguration and federated endpoint identity are [functioning correctly](how-to/validate-multi-cluster-setup.mdx) +* Verify that you have met the [prerequisites](reference/port-and-service-requirements.mdx#cluster-mesh-network-requirements) for multi-cluster networking +* If you had previously set up RemoteClusterConfigurations without multi-cluster networking, and are upgrading to use the feature, review the [switching considerations](how-to/set-up-cluster-mesh.mdx#switch-to-multi-cluster-networking) +* Verify that traffic between clusters is not being denied by network policy + +### Check overlayRoutingMode + +Ensure that `overlayRoutingMode` is set to `"Enabled"` on all RemoteClusterConfigurations. 
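+
+The following one-liner is a sketch for checking the setting across all RemoteClusterConfigurations at once; it assumes `kubectl` can reach the `projectcalico.org/v3` API:
+
+```bash
+kubectl get remoteclusterconfigurations \
+  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.syncOptions.overlayRoutingMode}{"\n"}{end}'
+```
+
+Any cluster reporting `Disabled` for a peer will not establish overlay routes to that peer.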
+
+If overlay routing is successfully enabled, you can view the logs of a Typha instance using:
+```bash
+kubectl logs deployment/calico-typha -n calico-system
+```
+
+For each connected remote cluster, you should see output like the following:
+```
+18:49:35.394 [INFO][14] wrappedcallbacks.go 443: Creating syncer for RemoteClusterConfiguration(my-cluster)
+18:49:35.394 [INFO][14] watchercache.go 186: Full resync is required ListRoot="/calico/ipam/v2/assignment/"
+18:49:35.395 [INFO][14] watchercache.go 186: Full resync is required ListRoot="/calico/resources/v3/projectcalico.org/workloadendpoints"
+18:49:35.396 [INFO][14] watchercache.go 186: Full resync is required ListRoot="/calico/resources/v3/projectcalico.org/hostendpoints"
+18:49:35.396 [INFO][14] watchercache.go 186: Full resync is required ListRoot="/calico/resources/v3/projectcalico.org/profiles"
+18:49:35.396 [INFO][14] watchercache.go 186: Full resync is required ListRoot="/calico/resources/v3/projectcalico.org/nodes"
+18:49:35.397 [INFO][14] watchercache.go 186: Full resync is required ListRoot="/calico/resources/v3/projectcalico.org/ippools"
+```
+
+If you do not see each of the resource types above, overlay routing was not successfully enabled in your cluster. Verify that you followed the [setup](how-to/set-up-cluster-mesh.mdx#establish-cross-cluster-resource-synchronization) correctly for overlay routing, and that the cluster is using a version of $[prodname] that supports multi-cluster networking.
+
+### Check logs
+
+Warning or error logs in `typha` or `calico-node` may provide insight into where issues are occurring.
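+
+As a starting point for log review, the following commands surface warning and error entries from both components; they assume the default install, with Typha as a deployment and `calico-node` as a daemonset in the `calico-system` namespace:
+
+```bash
+kubectl logs deployment/calico-typha -n calico-system | grep -iE "warn|error"
+kubectl logs daemonset/calico-node -n calico-system -c calico-node | grep -iE "warn|error"
+```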
diff --git a/calico-enterprise/multicluster/tutorial/aws-cluster-mesh.mdx b/calico-enterprise/multicluster/tutorial/aws-cluster-mesh.mdx new file mode 100644 index 0000000000..86d3322064 --- /dev/null +++ b/calico-enterprise/multicluster/tutorial/aws-cluster-mesh.mdx @@ -0,0 +1,115 @@ +--- +description: Set up a Calico Enterprise cluster mesh between an on-premise cluster and an AWS cluster. +--- + +# Cluster mesh with AWS + +## Objectives + +In this tutorial, you will learn how to: + +- Configure an on-premise cluster to peer with an AWS cluster using a VPN +- Set up a RemoteClusterConfiguration to connect the clusters +- Configure federated services across the mesh + +## Prerequisites + +- An on-premise Kubernetes cluster with $[prodname] installed +- An AWS EKS cluster with $[prodname] installed +- VPN connectivity between on-premise and AWS networks +- [Cluster mesh credentials](../how-to/set-up-cluster-mesh.mdx#generate-credentials-for-cross-cluster-resource-synchronization) generated for both clusters + +## Architecture overview + +The on-premise cluster is installed on real hardware where node and pod IPs are routable, using an edge VPN router to peer with the AWS cluster. + +![A diagram showing the key configuration requirements setting up an AWS cluster (using AWS VPN CNI) peering with an on-premise cluster.](/img/calico-enterprise/federation/aws-rcc.svg) + +## Step 1: Configure the on-premise cluster + +Configure the following $[prodname] resources on the on-premise cluster: + +- **IP pool**: Configure for on-premise IP assignment with IPIP disabled. +- **BGP peering**: Set up peering to the VPN router. +- **RemoteClusterConfiguration**: Reference the AWS cluster. +- **Federated services**: Use the $[prodname] Federated Services Controller for AWS cluster service discovery. 
+ +### BGP configuration notes + +If your VPN Router is configured as a route reflector for the on-premise cluster: + +- Configure the default BGP Configuration resource to disable node-to-node mesh. +- Configure a global BGP Peer resource to peer with the VPN router. + +If the IP Pool has `Outgoing NAT` enabled, then you must add an IP Pool covering the AWS cluster VPC with `disabled` set to `true`. When set to `true`, the pool is not used for IP allocations, and SNAT is not performed for traffic to the AWS cluster. + +## Step 2: Configure the AWS environment + +### VPC configuration + +1. Choose a VPC CIDR that does not overlap with the on-premise IP ranges. +2. Create 4 subnets within the VPC, split across two AZs (for availability), such that each AZ has a public and private subnet: + - The private subnet is used for node and pod IP allocation. + - The public subnet is used to home a NAT gateway for pod-to-internet traffic. + +### VPN configuration + +1. Peer the VPC to the on-premise network using a VPN. Configure a VPN gateway for the AWS side, and a classic VPN for the customer side. BGP is used for route distribution. + +### Routing tables + +- **Private subnet routing table**: + - Set "propagate" to "true" to ensure BGP-learned routes are distributed. + - Default route to the NAT gateway for public internet traffic. + - Local VPC traffic. +- **Public subnet routing table**: Default route to the internet gateway. + +### Security groups + +Configure security groups for the worker nodes with: +- A rule to allow traffic from the peered networks. +- Other rules required for setting up VPN peering (refer to the AWS docs for details). 
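+
+The BGP configuration notes above call for a disabled IP pool covering the AWS cluster VPC when the on-premise pool has outgoing NAT enabled. A minimal sketch, in the same shape as the disabled pool used in cluster mesh setup (the name and CIDR below are placeholders for your environment):
+
+```yaml
+apiVersion: projectcalico.org/v3
+kind: IPPool
+metadata:
+  name: aws-vpc-pool # placeholder name
+spec:
+  cidr: 172.31.0.0/16 # placeholder: your AWS VPC CIDR
+  disabled: true
+```
+
+Because the pool is disabled, it is never used for IP allocation; it only tells $[prodname] that these destinations are VPC address space and that traffic to them must not be SNATed.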
+ +## Step 3: Create a Network Load Balancer + +To automatically create a Network Load Balancer (NLB) for the AWS deployment, apply a service with the correct annotation: + +```yaml +apiVersion: v1 +kind: Service +metadata: + annotations: + service.beta.kubernetes.io/aws-load-balancer-type: nlb + name: nginx-external +spec: + externalTrafficPolicy: Local + ports: + - name: http + port: 80 + protocol: TCP + targetPort: 80 + selector: + run: nginx + type: LoadBalancer +``` + +## Step 4: Set up the cluster mesh + +Follow the steps in [Set up cluster mesh](../how-to/set-up-cluster-mesh.mdx) to create RemoteClusterConfigurations between the on-premise and AWS clusters. + +## Step 5: Verify the mesh + +Follow the steps in [Validate multi-cluster setup](../how-to/validate-multi-cluster-setup.mdx) to verify that the clusters are connected and routing traffic correctly. + +## Summary + +You have configured a cluster mesh between an on-premise cluster and an AWS cluster. The clusters can now: + +- Share endpoint identity for identity-aware network policy +- Federate services for cross-cluster service discovery +- Route traffic between clusters + +## Next steps + +- [Configure federated services](../how-to/configure-federated-services.mdx) +- [Cluster mesh concepts](../explanation/cluster-mesh.mdx) diff --git a/calico-enterprise/multicluster/set-up-multi-cluster-management/index.mdx b/calico-enterprise/multicluster/tutorial/index.mdx similarity index 69% rename from calico-enterprise/multicluster/set-up-multi-cluster-management/index.mdx rename to calico-enterprise/multicluster/tutorial/index.mdx index 383decd626..af5b20e47c 100644 --- a/calico-enterprise/multicluster/set-up-multi-cluster-management/index.mdx +++ b/calico-enterprise/multicluster/tutorial/index.mdx @@ -1,9 +1,9 @@ --- -description: Steps to configure management and managed clusters. +description: Tutorials for multi-cluster management and cluster mesh. 
hide_table_of_contents: true --- -# Multi-cluster management +# Tutorials import DocCardList from '@theme/DocCardList'; import { useCurrentSidebarCategory } from '@docusaurus/theme-common'; diff --git a/calico-enterprise/operations/cnx/roles-and-permissions.mdx b/calico-enterprise/operations/cnx/roles-and-permissions.mdx index ac2d522394..21c168f2e2 100644 --- a/calico-enterprise/operations/cnx/roles-and-permissions.mdx +++ b/calico-enterprise/operations/cnx/roles-and-permissions.mdx @@ -48,4 +48,4 @@ For RBAC details on any given feature, see the feature. For example: - [Staged policy RBAC](../../network-policy/staged-network-policies.mdx) - [Elasticsearch logs RBAC](../../observability/elastic/rbac-elasticsearch.mdx) - [Compliance reports RBAC](../../compliance/overview.mdx) -- [Multi-cluster management RBAC](../../multicluster/set-up-multi-cluster-management/standard-install/create-a-management-cluster.mdx) +- [Multi-cluster management RBAC](../../multicluster/how-to/create-a-management-cluster.mdx) diff --git a/calico-enterprise/operations/comms/certificate-management.mdx b/calico-enterprise/operations/comms/certificate-management.mdx index e5d4868ead..e1bfbf58da 100644 --- a/calico-enterprise/operations/comms/certificate-management.mdx +++ b/calico-enterprise/operations/comms/certificate-management.mdx @@ -23,7 +23,7 @@ temporarily remove [the logstorage resource](../../reference/installation/api.md before following the steps to enable certificate management and then re-apply afterwards. For detailed steps on re-creating logstorage, read more on [how to create a new Elasticsearch cluster](../../observability/elastic/troubleshoot.mdx#how-to-create-a-new-cluster). -Currently, this feature is not supported in combination with [Multi-cluster management](../../multicluster/set-up-multi-cluster-management/standard-install/create-a-management-cluster.mdx). 
+Currently, this feature is not supported in combination with [Multi-cluster management](../../multicluster/how-to/create-a-management-cluster.mdx). **Supported algorithms** diff --git a/calico-enterprise/operations/monitor/metrics/recommended-metrics.mdx b/calico-enterprise/operations/monitor/metrics/recommended-metrics.mdx index cae29d4310..53d9869624 100644 --- a/calico-enterprise/operations/monitor/metrics/recommended-metrics.mdx +++ b/calico-enterprise/operations/monitor/metrics/recommended-metrics.mdx @@ -92,7 +92,7 @@ This section provides metrics recommendations for maintaining optimal cluster op ## Typha cluster mesh metrics -The following metrics are applicable only if you have implemented [Cluster mesh](multicluster/federation/overview.mdx). +The following metrics are applicable only if you have implemented [Cluster mesh](../../../multicluster/explanation/cluster-mesh.mdx). Note that this metric requires a count syntax because you will have a copy of the metric per RemoteClusterConfiguration. As shown in the table, the value `2 = In Sync` reflects good connections. @@ -115,7 +115,7 @@ remote_cluster_connection_status\{cluster="baz"\} = 1 ### Remote cluster connections (out-of-sync) -The following metrics are applicable only if you have implemented [Cluster mesh](multicluster/federation/overview.mdx). +The following metrics are applicable only if you have implemented [Cluster mesh](../../../multicluster/explanation/cluster-mesh.mdx). 
| Remote cluster connections (out-of-sync) | | | ---------------------------------------- | ------------------------------------------------------------ | diff --git a/calico-enterprise/reference/component-resources/kube-controllers/configuration.mdx b/calico-enterprise/reference/component-resources/kube-controllers/configuration.mdx index a5846410ab..d96f0d1f76 100644 --- a/calico-enterprise/reference/component-resources/kube-controllers/configuration.mdx +++ b/calico-enterprise/reference/component-resources/kube-controllers/configuration.mdx @@ -82,7 +82,7 @@ also have write access to update `Endpoints`. The federation controller is disabled by default if `ENABLED_CONTROLLERS` is not explicitly specified. This controller is valid for all $[prodname] datastore types. For more details refer to the -[Configuring federated services](../../../multicluster/federation/services-controller.mdx) usage guide. +[Configuring federated services](../../../multicluster/how-to/configure-federated-services.mdx) usage guide. diff --git a/calico-enterprise/reference/public-cloud/aws.mdx b/calico-enterprise/reference/public-cloud/aws.mdx index e9c53a7980..12f93b9970 100644 --- a/calico-enterprise/reference/public-cloud/aws.mdx +++ b/calico-enterprise/reference/public-cloud/aws.mdx @@ -89,7 +89,7 @@ EOF $[prodname] supports the AWS VPC CNI plugin, which creates ENI interfaces for the pods that fall within the VPC of the cluster. Routing to these pods is automatically handled by AWS. -We recommend using the AWS VPC CNI plugin with [federation](../../multicluster/federation/overview.mdx) as it provides seamless IP connectivity +We recommend using the AWS VPC CNI plugin with [federation](../../multicluster/explanation/cluster-mesh.mdx) as it provides seamless IP connectivity between your AWS cluster and a remote cluster. Ensure that you use version 1.1 or later. Install the AWS VPC CNI plugin in your Kubernetes cluster as follows. 
@@ -107,7 +107,7 @@ Install the AWS VPC CNI plugin in your Kubernetes cluster as follows. :::caution - Required for [federation](../../multicluster/federation/overview.mdx). + Required for [federation](../../multicluster/explanation/cluster-mesh.mdx). ::: diff --git a/calico-enterprise/reference/resources/managedcluster.mdx b/calico-enterprise/reference/resources/managedcluster.mdx index b4aa4d13d4..dbdf7cf95d 100644 --- a/calico-enterprise/reference/resources/managedcluster.mdx +++ b/calico-enterprise/reference/resources/managedcluster.mdx @@ -69,4 +69,4 @@ plane and managed plane will be reported as following: | type | Type of status that is being reported | - | string | `ManagedClusterConnected` | | status | Status of the connection between a Managed cluster and management cluster | `Unknown`, `True`, `False` | string | `Unknown` | -[Multi-cluster management](../../multicluster/set-up-multi-cluster-management/standard-install/create-a-management-cluster.mdx) +[Multi-cluster management](../../multicluster/how-to/create-a-management-cluster.mdx) diff --git a/calico-enterprise/reference/resources/remoteclusterconfiguration.mdx b/calico-enterprise/reference/resources/remoteclusterconfiguration.mdx index 2f8ac0d1f7..45597f1a86 100644 --- a/calico-enterprise/reference/resources/remoteclusterconfiguration.mdx +++ b/calico-enterprise/reference/resources/remoteclusterconfiguration.mdx @@ -19,7 +19,7 @@ A remote cluster configuration causes Typha and `calicoq` to retrieve the follow When using the Kubernetes API datastore with RBAC enabled on the remote cluster, the RBAC rules must be configured to allow access to these resources. -For more details on the federation feature refer to the [Overview](../../multicluster/federation/overview.mdx). +For more details on the federation feature refer to the [Overview](../../multicluster/explanation/cluster-mesh.mdx). 
For the meaning of the fields matches the configuration used for configuring `calicoctl`, see [Kubernetes datastore](../../operations/clis/calicoctl/configure/datastore.mdx) instructions for more details. diff --git a/calico-enterprise_versioned_docs/version-3.20-2/operations/monitor/metrics/recommended-metrics.mdx b/calico-enterprise_versioned_docs/version-3.20-2/operations/monitor/metrics/recommended-metrics.mdx index cae29d4310..f816524b2d 100644 --- a/calico-enterprise_versioned_docs/version-3.20-2/operations/monitor/metrics/recommended-metrics.mdx +++ b/calico-enterprise_versioned_docs/version-3.20-2/operations/monitor/metrics/recommended-metrics.mdx @@ -92,7 +92,7 @@ This section provides metrics recommendations for maintaining optimal cluster op ## Typha cluster mesh metrics -The following metrics are applicable only if you have implemented [Cluster mesh](multicluster/federation/overview.mdx). +The following metrics are applicable only if you have implemented [Cluster mesh](../../../multicluster/federation/overview.mdx). Note that this metric requires a count syntax because you will have a copy of the metric per RemoteClusterConfiguration. As shown in the table, the value `2 = In Sync` reflects good connections. @@ -115,7 +115,7 @@ remote_cluster_connection_status\{cluster="baz"\} = 1 ### Remote cluster connections (out-of-sync) -The following metrics are applicable only if you have implemented [Cluster mesh](multicluster/federation/overview.mdx). +The following metrics are applicable only if you have implemented [Cluster mesh](../../../multicluster/federation/overview.mdx). 
| Remote cluster connections (out-of-sync) | | | ---------------------------------------- | ------------------------------------------------------------ | diff --git a/calico-enterprise_versioned_docs/version-3.21-2/operations/monitor/metrics/recommended-metrics.mdx b/calico-enterprise_versioned_docs/version-3.21-2/operations/monitor/metrics/recommended-metrics.mdx index cae29d4310..f816524b2d 100644 --- a/calico-enterprise_versioned_docs/version-3.21-2/operations/monitor/metrics/recommended-metrics.mdx +++ b/calico-enterprise_versioned_docs/version-3.21-2/operations/monitor/metrics/recommended-metrics.mdx @@ -92,7 +92,7 @@ This section provides metrics recommendations for maintaining optimal cluster op ## Typha cluster mesh metrics -The following metrics are applicable only if you have implemented [Cluster mesh](multicluster/federation/overview.mdx). +The following metrics are applicable only if you have implemented [Cluster mesh](../../../multicluster/federation/overview.mdx). Note that this metric requires a count syntax because you will have a copy of the metric per RemoteClusterConfiguration. As shown in the table, the value `2 = In Sync` reflects good connections. @@ -115,7 +115,7 @@ remote_cluster_connection_status\{cluster="baz"\} = 1 ### Remote cluster connections (out-of-sync) -The following metrics are applicable only if you have implemented [Cluster mesh](multicluster/federation/overview.mdx). +The following metrics are applicable only if you have implemented [Cluster mesh](../../../multicluster/federation/overview.mdx). 
| Remote cluster connections (out-of-sync) | | | ---------------------------------------- | ------------------------------------------------------------ | diff --git a/calico-enterprise_versioned_docs/version-3.22-2/operations/monitor/metrics/recommended-metrics.mdx b/calico-enterprise_versioned_docs/version-3.22-2/operations/monitor/metrics/recommended-metrics.mdx index cae29d4310..f816524b2d 100644 --- a/calico-enterprise_versioned_docs/version-3.22-2/operations/monitor/metrics/recommended-metrics.mdx +++ b/calico-enterprise_versioned_docs/version-3.22-2/operations/monitor/metrics/recommended-metrics.mdx @@ -92,7 +92,7 @@ This section provides metrics recommendations for maintaining optimal cluster op ## Typha cluster mesh metrics -The following metrics are applicable only if you have implemented [Cluster mesh](multicluster/federation/overview.mdx). +The following metrics are applicable only if you have implemented [Cluster mesh](../../../multicluster/federation/overview.mdx). Note that this metric requires a count syntax because you will have a copy of the metric per RemoteClusterConfiguration. As shown in the table, the value `2 = In Sync` reflects good connections. @@ -115,7 +115,7 @@ remote_cluster_connection_status\{cluster="baz"\} = 1 ### Remote cluster connections (out-of-sync) -The following metrics are applicable only if you have implemented [Cluster mesh](multicluster/federation/overview.mdx). +The following metrics are applicable only if you have implemented [Cluster mesh](../../../multicluster/federation/overview.mdx). 
| Remote cluster connections (out-of-sync) | | | ---------------------------------------- | ------------------------------------------------------------ | diff --git a/calico-enterprise_versioned_docs/version-3.23-1/operations/monitor/metrics/recommended-metrics.mdx b/calico-enterprise_versioned_docs/version-3.23-1/operations/monitor/metrics/recommended-metrics.mdx index cae29d4310..f816524b2d 100644 --- a/calico-enterprise_versioned_docs/version-3.23-1/operations/monitor/metrics/recommended-metrics.mdx +++ b/calico-enterprise_versioned_docs/version-3.23-1/operations/monitor/metrics/recommended-metrics.mdx @@ -92,7 +92,7 @@ This section provides metrics recommendations for maintaining optimal cluster op ## Typha cluster mesh metrics -The following metrics are applicable only if you have implemented [Cluster mesh](multicluster/federation/overview.mdx). +The following metrics are applicable only if you have implemented [Cluster mesh](../../../multicluster/federation/overview.mdx). Note that this metric requires a count syntax because you will have a copy of the metric per RemoteClusterConfiguration. As shown in the table, the value `2 = In Sync` reflects good connections. @@ -115,7 +115,7 @@ remote_cluster_connection_status\{cluster="baz"\} = 1 ### Remote cluster connections (out-of-sync) -The following metrics are applicable only if you have implemented [Cluster mesh](multicluster/federation/overview.mdx). +The following metrics are applicable only if you have implemented [Cluster mesh](../../../multicluster/federation/overview.mdx). 
| Remote cluster connections (out-of-sync) | | | ---------------------------------------- | ------------------------------------------------------------ | diff --git a/calico/networking/configuring/add-maglev-load-balancing.mdx b/calico/networking/configuring/add-maglev-load-balancing.mdx index d1a2682ab6..5ebb686eeb 100644 --- a/calico/networking/configuring/add-maglev-load-balancing.mdx +++ b/calico/networking/configuring/add-maglev-load-balancing.mdx @@ -26,7 +26,7 @@ Note Maglev load balancing cannot be used with the following: ## Prerequisites -* Your cluster uses the eBPF data plane with [direct server return mode](../../../operations/ebpf/enabling-ebpf#try-out-direct-server-return-mode). +* Your cluster uses the eBPF data plane with [direct server return mode](../../operations/ebpf/enabling-ebpf#enable-direct-server-return-mode). * All your nodes are running on Linux. * You have a service with a VIP/External-IP, possibly allocated by $[prodname] [LB IPAM](../ipam/service-loadbalancer.mdx), which you are advertising outside of your cluster. diff --git a/docusaurus.config.js b/docusaurus.config.js index 27c33b5307..35bcb34aaa 100644 --- a/docusaurus.config.js +++ b/docusaurus.config.js @@ -116,8 +116,8 @@ export default async function createAsyncConfig() { searchPagePath: '/search', }, announcementBar: { - id: "calico_ebpf", - content: 'Calico 3.30+ users: Sign up for Calico Cloud Free today!', + id: "calico_hackathon", + content: '🚀 The Calico 3.30+ Hackathon is live! 
Leverage Calico 3.30+ to solve networking challenges and win up to $1,000.', backgroundColor: "#FCE181", textColor: "#000", isCloseable: true diff --git a/sidebars-calico-enterprise.js b/sidebars-calico-enterprise.js index 1f4a792134..2039276cf6 100644 --- a/sidebars-calico-enterprise.js +++ b/sidebars-calico-enterprise.js @@ -407,40 +407,47 @@ module.exports = { items: [ { type: 'category', - label: 'Set up multi-cluster management', - link: { type: 'doc', id: 'multicluster/set-up-multi-cluster-management/index' }, + label: 'Concepts', + link: { type: 'doc', id: 'multicluster/explanation/index' }, items: [ - { - type: 'category', - label: 'Standard operator install', - link: { type: 'doc', id: 'multicluster/set-up-multi-cluster-management/standard-install/index' }, - items: [ - 'multicluster/set-up-multi-cluster-management/standard-install/create-a-management-cluster', - 'multicluster/set-up-multi-cluster-management/standard-install/create-a-managed-cluster', - ], - }, - { - type: 'category', - label: 'Helm install', - link: { type: 'doc', id: 'multicluster/set-up-multi-cluster-management/helm-install/index' }, - items: [ - 'multicluster/set-up-multi-cluster-management/helm-install/create-a-management-cluster-helm', - 'multicluster/set-up-multi-cluster-management/helm-install/create-a-managed-cluster-helm', - ], - }, + 'multicluster/explanation/architecture', + 'multicluster/explanation/cluster-mesh', + 'multicluster/explanation/security-model', + ], + }, + { + type: 'category', + label: 'How-to guides', + link: { type: 'doc', id: 'multicluster/how-to/index' }, + items: [ + 'multicluster/how-to/create-a-management-cluster', + 'multicluster/how-to/create-a-managed-cluster', + 'multicluster/how-to/change-cluster-type', + 'multicluster/how-to/set-up-cluster-mesh', + 'multicluster/how-to/configure-federated-services', + 'multicluster/how-to/configure-log-storage', + 'multicluster/how-to/configure-cross-cluster-rbac', + 
'multicluster/how-to/validate-multi-cluster-setup', + ], + }, + { + type: 'category', + label: 'Reference', + link: { type: 'doc', id: 'multicluster/reference/index' }, + items: [ + 'multicluster/reference/custom-resources', + 'multicluster/reference/federation-annotations', + 'multicluster/reference/helm-values', + 'multicluster/reference/port-and-service-requirements', ], }, - 'multicluster/fine-tune-deployment', - 'multicluster/change-cluster-type', + 'multicluster/troubleshooting', { type: 'category', - label: 'Cluster mesh', - link: { type: 'doc', id: 'multicluster/federation/index' }, + label: 'Tutorials', + link: { type: 'doc', id: 'multicluster/tutorial/index' }, items: [ - 'multicluster/federation/overview', - 'multicluster/federation/kubeconfig', - 'multicluster/federation/services-controller', - 'multicluster/federation/aws', + 'multicluster/tutorial/aws-cluster-mesh', ], }, ],