Cross-Functional Architecture And Tools For Cloud-Based Operating Models
After reading some of the architecture documentation, an engineer can get all 4 working demos up and running very quickly.
You can either run the initial demos on a DevBox or in GitHub.
NOTE: This document describes how to get Agile Cloud Manager working on GitHub. If you want to get the demos running in a DevBox, try this link instead.
The simplest way to get the demos running is to configure the GitHub working demos that already have all the automation deployed and tested for you.
The following three steps describe how to get the demos working in GitHub:

1. Clone the demos repository into a new repository in your own GitHub account.
2. Create and populate two keys.yaml files and two config.yaml files.
3. Run the demo workflows from the Actions tab of your repository as described below.

After you get the 4 demos working properly, you can use the 4 working demos as a starting point to develop your own system templates and your own appliances within the permissive license.
Detailed instructions for each of the three simple steps are given in the following sections.
Clone the acm-demos-github repository and push it into a new repository in your own account.
Perform step two twice so that you end up with two different keys.yaml files and two different config.yaml files. The different values in each version of each file will enable you to run all of the demos in parallel.
keys.yaml and config.yaml are simple lists of key/value pairs. For the demos, you need to populate these files in the very specific way described below, but once you have the demos working properly you can customize the contents of these files as much as you want.
To start, download configFilesGenerator.py, navigate your command line to the directory that contains it, and run the following command:
python configFilesGenerator.py
A config.yaml and a keys.yaml will have been created in the same directory where you ran the python configFilesGenerator.py command. Both files will have all the required fields, but you will have to make some modifications to the values before you can use the files. Change the names to something like config1.yaml and keys1.yaml, and then run the python configFilesGenerator.py command a second time to create a second config.yaml and a second keys.yaml.
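If you prefer to script those two runs, a small wrapper along the lines of the sketch below will do the renaming for you. It is only an illustration: it assumes configFilesGenerator.py sits in the current directory and writes config.yaml and keys.yaml next to itself, exactly as described above.

```python
# Sketch of a wrapper that runs configFilesGenerator.py twice and renames the
# output, leaving you with config1.yaml/keys1.yaml and config2.yaml/keys2.yaml.
# Assumes configFilesGenerator.py is in the current directory and writes
# config.yaml and keys.yaml next to itself, as described above.
import shutil
import subprocess
import sys

for n in (1, 2):
    subprocess.run([sys.executable, "configFilesGenerator.py"], check=True)
    shutil.move("config.yaml", f"config{n}.yaml")
    shutil.move("keys.yaml", f"keys{n}.yaml")
```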
Each config.yaml will already be almost completely populated with randomly generated strings that help avoid naming conflicts at runtime for resources such as S3 buckets and anything else that requires a globally unique name.
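To illustrate the idea (this is not the generator's actual code), a random suffix of the kind that keeps globally named resources from colliding can be produced like this:

```python
# Illustration only, not the generator's actual code: a random lowercase suffix
# of the kind that keeps globally named resources, such as S3 buckets, from
# colliding with names other users have already taken.
import random
import string

def random_suffix(length: int = 12) -> str:
    # S3 bucket names must be lowercase, so stick to lowercase letters and digits.
    return "".join(random.choices(string.ascii_lowercase + string.digits, k=length))

# Hypothetical example of how such a suffix might appear in a resource name.
print(f"acm-demo-bucket-{random_suffix()}")
```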
Populate the remaining contents of each keys.yaml and each config.yaml using the instructions in the remainder of this section.
Each keys.yaml file will look like the following:
secretsType: master
clientName: <follow-instructions-below-to-get-value>
clientId: <follow-instructions-below-to-get-value>
clientSecret: <follow-instructions-below-to-get-value>
gitUsername: <follow-instructions-below-to-get-value>
gitPass: <follow-instructions-below-to-get-value>
KeyName: <follow-instructions-below-to-get-value>
AWSAccessKeyId: <follow-instructions-below-to-get-value>
AWSSecretKey: <follow-instructions-below-to-get-value>
A description of how to populate the values for each keys.yaml is summarized in the following table.
Key | How To Get Value | Underlying Cloud Resource |
---|---|---|
secretsType | “master” is the only valid value. | None. This explicit value is used by one of the Agile Cloud Manager’s built-in controllers. |
clientName | Follow instructions in article entitled “Set up Azure Seed Credentials” | Azure App Registration with permissions described in article. |
clientId | Follow instructions in article entitled “Set up Azure Seed Credentials” | Azure App Registration with permissions described in article. |
clientSecret | Follow instructions in article entitled “Set up Azure Seed Credentials” | Azure App Registration with permissions described in article. |
gitUsername | Name of your GitHub account. This can be the same for each keys.yaml | |
gitPass | Personal Access Token created by following the instructions on GitHub. This can be the same for each keys.yaml | |
KeyName | Follow instructions in article entitled “Set up AWS Seed Credentials” | AWS KeyPair for EC2 instances. |
AWSAccessKeyId | Follow instructions in article entitled “Set up AWS Seed Credentials” | AWS account with permissions described in article. |
AWSSecretKey | Follow instructions in article entitled “Set up AWS Seed Credentials” | AWS account with permissions described in article. |
For each keys.yaml, the values are given explicitly by the cloud provider when you create the underlying cloud resources by following the instructions in the articles linked in the preceding table.
The only exception is that the value for the secretsType key must always be “master” for manually-input keys like this. The secretsType key is used by some of the built-in controllers in the demo for terraform and packer in Azure, and those custom controllers use automation to create other secrets whose secretsType value is different. But at this point in the process, put master as the value for secretsType and follow the linked articles to get the values for all the other fields in each keys.yaml.
Also note that the demos retrieve templates from public repositories. Later, to use the Agile Cloud Manager in your own organization, you will need to add credentials to your keys.yaml that allow the Agile Cloud Manager to retrieve templates from your private repositories. The gitUsername and gitPass variables in each keys.yaml are there to make it easier for you to switch to your own private repositories.
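As a quick sanity check before moving on, a short script like the sketch below can confirm that a keys.yaml has every required field filled in and that secretsType is still master. It is a hypothetical helper, not part of the Agile Cloud Manager, and it assumes PyYAML is installed.

```python
# Hypothetical sanity check for a keys.yaml file: verifies that secretsType is
# "master" and that no field still holds the generated placeholder text.
# Assumes PyYAML is installed (pip install pyyaml).
import sys

import yaml

REQUIRED = [
    "secretsType", "clientName", "clientId", "clientSecret",
    "gitUsername", "gitPass", "KeyName", "AWSAccessKeyId", "AWSSecretKey",
]

with open(sys.argv[1]) as f:  # e.g. python check_keys.py keys1.yaml
    keys = yaml.safe_load(f)

assert keys.get("secretsType") == "master", "secretsType must be master for manually-input keys"
for field in REQUIRED:
    value = str(keys.get(field, ""))
    assert value and "follow-instructions" not in value, f"{field} still needs a real value"

print("keys.yaml looks complete")
```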
Each config.yaml requires all 30 of the fields that are generated when you run the python configFilesGenerator.py command, but 27 of those fields will already be populated with values that you must leave untouched for the demos. The only three fields whose values you need to change are the first 3 lines in the file, which should read as follows:
subscriptionId: Follow instructions in article entitled “Set up Azure Seed Credentials”
subscriptionName: Follow instructions in article entitled “Set up Azure Seed Credentials”
tenantId: Follow instructions in article entitled “Set up Azure Seed Credentials”
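A similar check (again a hypothetical illustration, not part of the tool) can confirm that those three Azure fields have been given real values while the other 27 generated fields are left alone:

```python
# Hypothetical check that the three Azure seed fields at the top of a config.yaml
# have been replaced with real values; the other 27 generated fields are left alone.
# Assumes PyYAML is installed.
import sys

import yaml

with open(sys.argv[1]) as f:  # e.g. python check_config.py config1.yaml
    config = yaml.safe_load(f)

for field in ("subscriptionId", "subscriptionName", "tenantId"):
    value = str(config.get(field, ""))
    assert value and "Seed Credentials" not in value, f"{field} still needs a real value"

print("config.yaml Azure seed fields look complete")
```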
In order for the GitHub automation to be able to use the keys and config you just created, do the following:
However, you will need to understand what the Agile Cloud Manager is and how it works in order to navigate the logs and the cloud provider portals while managing what these short, simple, yet very powerful commands are doing.
Therefore, you should read and understand the prerequisite articles before running these demos.
Make sure that you are still logged into the Azure Portal and the AWS Console before you run the commands below, because you will want to watch the resources being created and destroyed in each portal while the commands run.
In the Azure Portal, have the “Resource Groups” page open and make sure that the subscription you created for these demos is selected so that all of its resources are shown on the page.
In the AWS Console, have the “CloudFormation” page open so that you can see all stacks for the user you created, in the region in which resources will be created.
Reload each of the two web portal pages repeatedly while the commands below are running: manually refresh the list of resource groups in Azure and manually refresh the list of stacks in AWS CloudFormation.
Arrange your screen so that the command line terminal and both the Azure and AWS portal pages are visible at the same time so that you can toggle back and forth between the three views of what the Agile Cloud Manager is doing when you run the following commands.
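If you would rather poll both views from one terminal instead of refreshing the portals by hand, a minimal sketch like the following could be used. It assumes the azure-identity, azure-mgmt-resource, and boto3 packages are installed, that your Azure and AWS credentials are available in the environment, and that AZURE_SUBSCRIPTION_ID holds the same subscription you put in config.yaml; none of this is part of the Agile Cloud Manager itself.

```python
# Hypothetical polling helper: lists Azure resource groups and AWS CloudFormation
# stacks once a minute so you can watch what the demos create and destroy without
# refreshing the web portals by hand. Requires azure-identity, azure-mgmt-resource,
# and boto3, plus Azure and AWS credentials already configured in your environment.
import os
import time

import boto3
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

# Assumed environment variable; use the same subscriptionId you put in config.yaml.
subscription_id = os.environ["AZURE_SUBSCRIPTION_ID"]

azure_client = ResourceManagementClient(DefaultAzureCredential(), subscription_id)
cfn = boto3.client("cloudformation")  # region and credentials come from your AWS config

while True:
    groups = [rg.name for rg in azure_client.resource_groups.list()]
    stacks = [s["StackName"] for s in cfn.describe_stacks()["Stacks"]]
    print(f"Azure resource groups: {groups}")
    print(f"AWS CloudFormation stacks: {stacks}")
    time.sleep(60)
```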
All of the demos can be run in parallel immediately by triggering each demo workflow from the Actions tab of your repository.
Run in parallel, the workflows should all complete in roughly 30 minutes.
Monitor the workflows by clicking on them in the Actions tab to read the logs that GitHub creates.
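If you prefer to watch the runs from the command line rather than in the browser, a small sketch against the GitHub Actions REST API could look like the following. The owner, repository name, and GITHUB_TOKEN environment variable are assumptions you would substitute with your own values; this is not part of the Agile Cloud Manager.

```python
# Hypothetical command-line monitor for the demo workflows: polls the GitHub
# Actions REST API and prints the status of the most recent workflow runs.
# OWNER, REPO, and the GITHUB_TOKEN environment variable are assumptions;
# substitute your own repository and a personal access token.
import os
import time

import requests

OWNER = "your-account"     # the account you pushed the demos repository into
REPO = "acm-demos-github"  # assumed repository name
URL = f"https://api.github.com/repos/{OWNER}/{REPO}/actions/runs"
HEADERS = {
    "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
    "Accept": "application/vnd.github+json",
}

while True:
    runs = requests.get(URL, headers=HEADERS, timeout=30).json().get("workflow_runs", [])
    for run in runs[:8]:  # the API returns the most recent runs first
        print(f"{run['name']}: {run['status']} / {run.get('conclusion')}")
    print("---")
    time.sleep(60)
```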
That is all it takes!
These demos have been run for many thousands of hours, so they should work properly 99.9% of the time, assuming that you gave them valid key and config inputs.
In normal day-to-day situations, when a process breaks you have access to the complete logs, segregated by step, inside the workflow that the Agile Cloud Manager creates for your uniquely modeled systems and appliances, so you can immediately navigate to whatever needs to be adjusted to get the automation running again.
If the automation breaks anywhere in the process of running the above commands, follow the instructions given in the articles about how to use logging in the Agile Cloud Manager.
Once you identify where in the process something broke, you can decide how to address the problem.
Re-running the job in the GitHub UI resolves almost all problems, assuming that you gave valid values for the key and config inputs, because most failures are transient, such as intermittent errors on the underlying cloud provider’s side.
If re-running the job does not resolve the problem, you can usually trace it back to one or two variable values in config.yaml or keys.yaml: for example, a typo in one of the cloud-provider-sourced values, an illegal modification to one of the fixed values, or, in the very rare case, a randomly generated value that another user elsewhere on the internet happened to select at the same moment.
If the error is caused by an invalid value in one of the variables, correct the value and re-run the workflow. Be careful to avoid creating orphaned resources when you do so, for example if you change the name of a variable that feeds into a resource name.
Finally, make sure that everything has been destroyed in the Azure Portal and in the AWS Console before you finish with the demos.
If used properly, the Agile Cloud Manager will clean up every resource after commands are run so that you will not have orphaned resources in the cloud. But like any tool, the Agile Cloud Manager must be used properly in order to function properly.
For example, if an “on” workflow breaks in the middle of its process, whether because of the underlying cloud provider or because of your variable inputs, you need to re-run the entire “on” command until every step in the workflow completes as intended. Another example is destroying the terraform backend while resources managed by that backend still exist in the cloud.
The Agile Cloud Manager greatly simplifies the management of complex systems and greatly reduces the amount of code required in pipelines.
Leveraging the greatly simplified interface offered by the Agile Cloud Manager requires that you learn its CLI interface and object model.
Later on, pipeline design can embed the CLI commands into governable operations that control how the different types of surgery are performed on your systems.
But for now, on the very first day of running the demos, if all else fails, you can manually delete any orphaned resources in the AWS Console or in the Azure Portal. This may be necessary if time constraints prevent you from learning the Agile Cloud Manager’s CLI interface and object model well enough to use the CLI for cleanup, if an Azure region is having an outage, or if you chose an illegal value for one of the variables in config.yaml.