Deploying Nx Private Cloud to AWS
You can easily deploy your Nx Private Cloud instance to AWS.
Using ECS
First, create a container configuration using the following image: nxprivatecloud/nxcloud:latest
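For orientation, here is a minimal sketch of the container definition fragment for this step; the field values simply mirror the complete task definition shown further below, so adjust the name, memory, and port to your setup:

"containerDefinitions": [
  {
    "name": "PrivateCloud",
    "image": "nxprivatecloud/nxcloud:latest",
    "essential": true,
    "memory": 2000,
    "portMappings": [{ "hostPort": 8081, "protocol": "tcp", "containerPort": 8081 }]
  }
]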
Second, set up a mount point.
1"mountPoints": [
2 {
3 "readOnly": null,
4 "containerPath": "/data",
5 "sourceVolume": "data"
6 }
7],
8
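The sourceVolume value refers to a volume declared at the task level. Based on the complete task definition shown further below, the matching declaration is a local Docker volume named data:

"volumes": [
  {
    "name": "data",
    "dockerVolumeConfiguration": {
      "autoprovision": true,
      "scope": "shared",
      "driver": "local"
    }
  }
]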
Third, configure the following env variables:
1"environment": [
2 {
3 "name": "ADMIN_PASSWORD",
4 "value": "admin-password"
5 },
6 {
7 "name": "GITHUB_API_URL",
8 "value": "https://api.github.com"
9 },
10 {
11 "name": "GITHUB_AUTH_TOKEN",
12 "value": "your-github-auth-token"
13 },
14 {
15 "name": "GITHUB_WEBHOOK_SECRET",
16 "value": "your-github-webhook-secret"
17 },
18 {
19 "name": "NX_CLOUD_APP_URL",
20 "value": "url-accessible-from-ci-and-dev-machines"
21 }
22]
23
All env variables prefixed with GITHUB are only required for the Nx Cloud GitHub integration. If you don't use GitHub, you don't need to set them.
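Note that the environment block stores these values in plain text in the task definition. If you would rather not do that for ADMIN_PASSWORD and GITHUB_AUTH_TOKEN, ECS itself supports injecting them from AWS Secrets Manager or SSM Parameter Store through the container's secrets field. Here is a sketch with placeholder ARNs; this is standard ECS behavior rather than an Nx Cloud feature, and it requires an executionRoleArn that is allowed to read the secrets:

"secrets": [
  {
    "name": "ADMIN_PASSWORD",
    "valueFrom": "arn:aws:secretsmanager:us-east-1:123456789012:secret:nx-cloud-admin-password"
  },
  {
    "name": "GITHUB_AUTH_TOKEN",
    "valueFrom": "arn:aws:secretsmanager:us-east-1:123456789012:secret:nx-cloud-github-token"
  }
]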
To test that everything works, open NX_CLOUD_APP_URL in the browser and log in with the username "admin" and the ADMIN_PASSWORD value provisioned above.
For reference, here is a complete example task definition:
{
  "ipcMode": null,
  "executionRoleArn": null,
  "containerDefinitions": [
    {
      "dnsSearchDomains": null,
      "environmentFiles": null,
      "logConfiguration": {
        "logDriver": "awslogs",
        "secretOptions": null,
        "options": {
          "awslogs-group": "/ecs/DeployCloud",
          "awslogs-region": "us-east-1",
          "awslogs-stream-prefix": "ecs"
        }
      },
      "entryPoint": null,
      "portMappings": [
        {
          "hostPort": 8081,
          "protocol": "tcp",
          "containerPort": 8081
        }
      ],
      "command": null,
      "linuxParameters": null,
      "cpu": 0,
      "environment": [
        {
          "name": "ADMIN_PASSWORD",
          "value": "admin-password"
        },
        {
          "name": "GITHUB_API_URL",
          "value": "https://api.github.com"
        },
        {
          "name": "GITHUB_AUTH_TOKEN",
          "value": "your-github-auth-token"
        },
        {
          "name": "GITHUB_WEBHOOK_SECRET",
          "value": "your-github-webhook-secret"
        },
        {
          "name": "NX_CLOUD_APP_URL",
          "value": "url-accessible-from-ci-and-dev-machines"
        }
      ],
      "resourceRequirements": null,
      "ulimits": null,
      "dnsServers": null,
      "mountPoints": [
        {
          "readOnly": null,
          "containerPath": "/data",
          "sourceVolume": "data"
        }
      ],
      "workingDirectory": null,
      "secrets": null,
      "dockerSecurityOptions": null,
      "memory": 2000,
      "memoryReservation": null,
      "volumesFrom": [],
      "stopTimeout": null,
      "image": "nxprivatecloud/nxcloud:latest",
      "startTimeout": null,
      "firelensConfiguration": null,
      "dependsOn": null,
      "disableNetworking": null,
      "interactive": null,
      "healthCheck": null,
      "essential": true,
      "links": null,
      "hostname": null,
      "extraHosts": null,
      "pseudoTerminal": null,
      "user": null,
      "readonlyRootFilesystem": null,
      "dockerLabels": null,
      "systemControls": null,
      "privileged": null,
      "name": "PrivateCloud"
    }
  ],
  "placementConstraints": [],
  "memory": null,
  "taskRoleArn": null,
  "compatibilities": ["EC2"],
  "taskDefinitionArn": "your-task-definition-arn",
  "family": "deploy-nx-cloud",
  "requiresAttributes": [
    {
      "targetId": null,
      "targetType": null,
      "value": null,
      "name": "com.amazonaws.ecs.capability.logging-driver.awslogs"
    },
    {
      "targetId": null,
      "targetType": null,
      "value": null,
      "name": "com.amazonaws.ecs.capability.docker-remote-api.1.19"
    },
    {
      "targetId": null,
      "targetType": null,
      "value": null,
      "name": "ecs.capability.docker-plugin.local"
    },
    {
      "targetId": null,
      "targetType": null,
      "value": null,
      "name": "com.amazonaws.ecs.capability.docker-remote-api.1.25"
    }
  ],
  "pidMode": null,
  "requiresCompatibilities": ["EC2"],
  "networkMode": null,
  "cpu": null,
  "status": "ACTIVE",
  "inferenceAccelerators": null,
  "proxyConfiguration": null,
  "volumes": [
    {
      "fsxWindowsFileServerVolumeConfiguration": null,
      "efsVolumeConfiguration": null,
      "name": "data",
      "host": null,
      "dockerVolumeConfiguration": {
        "autoprovision": true,
        "labels": null,
        "scope": "shared",
        "driver": "local",
        "driverOpts": null
      }
    }
  ]
}
When using this configuration, the metadata and file artifacts are stored in the /data volume.
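Note that the example backs that volume with a local Docker volume (dockerVolumeConfiguration), so the data lives on the EC2 host that runs the task. If you need /data to survive host replacement, ECS can back the same "data" volume with EFS instead via efsVolumeConfiguration; a sketch with a placeholder file system ID, assuming an EFS file system is already available in your VPC:

"volumes": [
  {
    "name": "data",
    "efsVolumeConfiguration": {
      "fileSystemId": "fs-12345678",
      "rootDirectory": "/",
      "transitEncryption": "ENABLED"
    }
  }
]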
Using S3
If you want to use S3 for storing and delivering cached artifacts, add the following env variables:
1"environment": [
2 {
3 "name": "AWS_S3_ACCESS_KEY_ID",
4 "value": "your-access-key-id"
5 },
6 {
7 "name": "AWS_S3_SECRET_ACCESS_KEY",
8 "value": "your-secret-access-key"
9 },
10 {
11 "name": "AWS_S3_BUCKET",
12 "value": "your-backet-name"
13 }
14]
15
Using this configuration, the metadata will be stored on the volume and the file artifacts will be stored in S3.
We highly recommend using S3 for large workspaces.
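The access key configured above needs permission to read and write objects in the bucket. As a sketch, a minimal IAM policy might look like the following, assuming the cache only needs to list the bucket and read, write, and delete artifact objects (the bucket name is a placeholder):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": "arn:aws:s3:::your-bucket-name"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::your-bucket-name/*"
    }
  ]
}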