
Commit 2fe464c

Merge pull request #47 from aws-observability/42-existing-single-cluster-observability-pattern-with-aws-mixed-approach-services

Existing single Cluster Observability Pattern with AWS Mixed approach services

2 parents b2ee867 + 06ff6b6 commit 2fe464c

File tree

3 files changed: +149 −0 lines changed
Lines changed: 8 additions & 0 deletions

@@ -0,0 +1,8 @@
```ts
import ExistingEksMixedConstruct from '../lib/existing-eks-mixed-observability-construct';
import { configureApp, errorHandler } from '../lib/common/construct-utils';

const app = configureApp();

new ExistingEksMixedConstruct().buildAsync(app, 'existing-eks-mixed').catch((error) => {
    errorHandler(app, "Existing Cluster Pattern is missing information about the existing cluster: " + error);
});
```
Lines changed: 78 additions & 0 deletions

@@ -0,0 +1,78 @@
# Existing EKS Cluster AWS Mixed Observability Accelerator

## Architecture

The following figure illustrates the architecture of the Existing EKS Cluster AWS Mixed Observability pattern, which combines AWS native tools such as CloudWatch and X-Ray with open source tools such as AWS Distro for OpenTelemetry (ADOT) and Prometheus Node Exporter.
![Architecture](../images/mixed-diagram.png)

This example uses CloudWatch as the metric and log aggregation layer, while X-Ray serves as the trace aggregation layer. Metrics and traces are collected with the open source ADOT collector, and Fluent Bit exports the logs to CloudWatch Logs.
In this architecture, AWS X-Ray provides a complete view of requests as they travel through your application, filtering visual data across payloads, functions, traces, services, and APIs. X-Ray also lets you run analytics to gain powerful insights from your distributed trace data.

Utilizing CloudWatch and X-Ray as the aggregation layer provides a fully managed, scalable telemetry backend, while retaining the flexibility and rapid development of the open source collection tools.
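As a rough sketch, here is how those responsibilities map onto eks-blueprints add-ons (the names are taken from the construct added in this commit; the log group prefix is illustrative):

```ts
import * as blueprints from '@aws-quickstart/eks-blueprints';

// Logs: Fluent Bit ships container logs to CloudWatch Logs.
const logs = new blueprints.addons.CloudWatchLogsAddon({
    logGroupPrefix: '/aws/eks/example', // illustrative prefix
    logRetentionDays: 30
});

// Metrics and traces: the ADOT operator plus collector pipelines that
// forward metrics to CloudWatch and spans to X-Ray (configured in the
// construct further down).
const adot = new blueprints.addons.AdotCollectorAddOn();
const xray = new blueprints.addons.XrayAdotAddOn();
```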
## Objective

This pattern adds observability on top of an existing EKS cluster, using a mixture of AWS native services and open source tooling managed on AWS.
## Prerequisites

Ensure that you have installed the following tools on your machine:

1. [aws cli](https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2.html)
2. [kubectl](https://kubernetes.io/docs/tasks/tools/)
3. [cdk](https://docs.aws.amazon.com/cdk/v2/guide/getting_started.html#getting_started_install)
4. [npm](https://docs.npmjs.com/cli/v8/commands/npm-install)
You will also need:

1. Either an existing EKS cluster, or a new one set up with the [Single New EKS Cluster Observability Accelerator](../single-new-eks-observability-accelerators/single-new-eks-cluster.md)
2. An OpenID Connect (OIDC) provider associated with the above EKS cluster (note: the Single New EKS Cluster pattern takes care of that for you); you can verify the association as shown below
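If you are unsure whether your cluster has an OIDC issuer associated, here is a minimal sketch using the AWS SDK for JavaScript v3; the region and cluster name are placeholders, not values from this pattern:

```ts
import { EKSClient, DescribeClusterCommand } from '@aws-sdk/client-eks';

// Placeholder region: substitute your own.
const client = new EKSClient({ region: 'us-west-2' });

// Print the cluster's OIDC issuer URL, if one exists.
async function checkOidcIssuer(clusterName: string) {
    const { cluster } = await client.send(new DescribeClusterCommand({ name: clusterName }));
    console.log(cluster?.identity?.oidc?.issuer ?? 'No OIDC issuer associated with this cluster');
}

checkOidcIssuer('my-existing-cluster').catch(console.error); // placeholder name
```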
## Deploying

1. Edit `~/.cdk.json` by setting the name of your existing cluster:

    ```json
    "context": {
        ...
        "existing.cluster.name": "...",
        ...
    }
    ```
2. Edit `~/.cdk.json` by setting the kubectl role name. If you used the Single New EKS Cluster Observability Accelerator to set up your cluster, the kubectl role name is printed in the deployment output on your command-line interface (CLI):

    ```json
    "context": {
        ...
        "existing.kubectl.rolename": "...",
        ...
    }
    ```
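For reference, the construct added in this commit resolves both values through the blueprints context helper, so the keys must match the snippets above exactly. A minimal excerpt-style sketch:

```ts
import * as cdk from 'aws-cdk-lib';
import { utils } from '@aws-quickstart/eks-blueprints';

// Sketch of the lookups performed in the construct below; an undefined
// result surfaces as an error via the bin entry point's errorHandler.
function readClusterContext(scope: cdk.App) {
    const clusterName = utils.valueFromContext(scope, "existing.cluster.name", undefined);
    const kubectlRoleName = utils.valueFromContext(scope, "existing.kubectl.rolename", undefined);
    return { clusterName, kubectlRoleName };
}
```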
3. Run the following command from the root of this repository to deploy the pipeline stack:

    ```bash
    make build
    make pattern existing-eks-mixed-observability deploy
    ```
## Verify the resources

Please see [Single New EKS Cluster AWS Mixed Observability Accelerator](../single-new-eks-observability-accelerators/single-new-eks-mixed-observability.md).
## Teardown

You can tear down the whole CDK stack with the following command:

```bash
make pattern existing-eks-mixed-observability destroy
```

If you set up your cluster with the Single New EKS Cluster Observability Accelerator, you also need to run:

```bash
make pattern single-new-eks-cluster destroy
```
Lines changed: 63 additions & 0 deletions

@@ -0,0 +1,63 @@
```ts
import { ImportClusterProvider, utils } from '@aws-quickstart/eks-blueprints';
import * as blueprints from '@aws-quickstart/eks-blueprints';
import { cloudWatchDeploymentMode } from '@aws-quickstart/eks-blueprints';
import { ObservabilityBuilder } from '../common/observability-builder';
import * as cdk from "aws-cdk-lib";
import * as eks from 'aws-cdk-lib/aws-eks';

export default class ExistingEksMixedObservabilityConstruct {
    async buildAsync(scope: cdk.App, id: string) {
        const stackId = `${id}-observability-accelerator`;

        // Cluster details supplied through cdk.json context (see README).
        const clusterName = utils.valueFromContext(scope, "existing.cluster.name", undefined);
        const kubectlRoleName = utils.valueFromContext(scope, "existing.kubectl.rolename", undefined);

        const account = process.env.COA_ACCOUNT_ID! || process.env.CDK_DEFAULT_ACCOUNT!;
        const region = process.env.COA_AWS_REGION! || process.env.CDK_DEFAULT_REGION!;

        const sdkCluster = await blueprints.describeCluster(clusterName, region); // get cluster information using EKS APIs
        const vpcId = sdkCluster.resourcesVpcConfig?.vpcId;

        /**
         * Assumes the supplied role is registered in the target cluster for kubectl access.
         */
        const importClusterProvider = new ImportClusterProvider({
            clusterName: sdkCluster.name!,
            version: eks.KubernetesVersion.of(sdkCluster.version!),
            clusterEndpoint: sdkCluster.endpoint,
            openIdConnectProvider: blueprints.getResource(context =>
                new blueprints.LookupOpenIdConnectProvider(sdkCluster.identity!.oidc!.issuer!).provide(context)),
            clusterCertificateAuthorityData: sdkCluster.certificateAuthority?.data,
            kubectlRoleArn: blueprints.getResource(context => new blueprints.LookupRoleProvider(kubectlRoleName).provide(context)).roleArn,
            clusterSecurityGroupId: sdkCluster.resourcesVpcConfig?.clusterSecurityGroupId
        });

        // ADOT collector deployment that scrapes selected Prometheus metrics
        // and forwards them to CloudWatch.
        const cloudWatchAdotAddOn = new blueprints.addons.CloudWatchAdotAddOn({
            deploymentMode: cloudWatchDeploymentMode.DEPLOYMENT,
            namespace: 'default',
            name: 'adot-collector-cloudwatch',
            metricsNameSelectors: ['apiserver_request_.*', 'container_memory_.*', 'container_threads', 'otelcol_process_.*'],
        });

        // AddOns for the cluster: logs to CloudWatch Logs, metrics via ADOT
        // to CloudWatch, traces via ADOT to X-Ray.
        const addOns: Array<blueprints.ClusterAddOn> = [
            new blueprints.addons.CloudWatchLogsAddon({
                logGroupPrefix: `/aws/eks/${stackId}`,
                logRetentionDays: 30
            }),
            new blueprints.addons.AdotCollectorAddOn(),
            cloudWatchAdotAddOn,
            new blueprints.addons.XrayAdotAddOn(),
        ];

        ObservabilityBuilder.builder()
            .account(account)
            .region(region)
            .addExistingClusterObservabilityBuilderAddOns()
            .clusterProvider(importClusterProvider)
            .resourceProvider(blueprints.GlobalResources.Vpc, new blueprints.VpcProvider(vpcId)) // this is required with import cluster provider
            .addOns(...addOns)
            .build(scope, stackId);
    }
}
```
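As shown in the bin entry point at the top of this commit, this construct is instantiated with `new ExistingEksMixedConstruct().buildAsync(app, 'existing-eks-mixed')`, with failures such as missing context values routed to `errorHandler`.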

0 commit comments