November 2021
If you wish to provide feedback on this lab, report an error, or make a suggestion, please email: cloud-intelligence-dashboards@amazon.com
This scenario allows customers with multiple payer (management) accounts to deploy all of the CUR dashboards on top of aggregated data from multiple payers. As a prerequisite, customers should set up (or already have) a separate Governance account. Each payer account’s CUR S3 bucket will have S3 replication enabled and will replicate to a new S3 bucket in the Governance account.
NOTE: These steps assume you’ve already set up the CUR to be delivered in each payer (management) account.
{
    "Version": "2008-10-17",
    "Id": "PolicyForCombinedBucket",
    "Statement": [
        {
            "Sid": "Set permissions for objects",
            "Effect": "Allow",
            "Principal": {
                "AWS": ["{PayerAccountA}", "{PayerAccountB}"]
            },
            "Action": [
                "s3:ReplicateObject",
                "s3:ReplicateDelete"
            ],
            "Resource": "arn:aws:s3:::{GovernanceAccountBucketName}/*"
        },
        {
            "Sid": "Set permissions on bucket",
            "Effect": "Allow",
            "Principal": {
                "AWS": ["{PayerAccountA}", "{PayerAccountB}"]
            },
            "Action": [
                "s3:List*",
                "s3:GetBucketVersioning",
                "s3:PutBucketVersioning"
            ],
            "Resource": "arn:aws:s3:::{GovernanceAccountBucketName}"
        },
        {
            "Sid": "Set permissions to pass object ownership",
            "Effect": "Allow",
            "Principal": {
                "AWS": ["{PayerAccountA}", "{PayerAccountB}"]
            },
            "Action": [
                "s3:ReplicateObject",
                "s3:ReplicateDelete",
                "s3:ObjectOwnerOverrideToBucketOwner",
                "s3:ReplicateTags",
                "s3:GetObjectVersionTagging",
                "s3:PutObject"
            ],
            "Resource": "arn:aws:s3:::{GovernanceAccountBucketName}/*"
        }
    ]
}
This policy supports objects encrypted with SSE-S3 as well as unencrypted objects. For SSE-KMS encrypted objects, additional policy statements and replication configuration are needed: see https://docs.aws.amazon.com/AmazonS3/latest/userguide/replication-config-for-kms-objects.html
This step should be done in each payer (management) account.
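This step enables S3 replication from the payer account’s CUR bucket to the bucket in the Governance account. If you prefer the CLI over the console, the sketch below shows one way to do it. It is a minimal example, not the exact configuration from this lab: it assumes versioning is enabled on both buckets and that a replication IAM role already exists in the payer account; the role name, the {GovernanceAccountId} placeholder, and the bucket names are values you must replace with your own.

# Minimal sketch: replicate the payer CUR bucket to the Governance account bucket.
# Assumes an existing replication role with the required S3 permissions.
cat > replication.json <<'EOF'
{
  "Role": "arn:aws:iam::{PayerAccountA}:role/CURReplicationRole",
  "Rules": [
    {
      "ID": "ReplicateCURToGovernance",
      "Status": "Enabled",
      "Priority": 1,
      "Filter": {},
      "DeleteMarkerReplication": { "Status": "Disabled" },
      "Destination": {
        "Bucket": "arn:aws:s3:::{GovernanceAccountBucketName}",
        "Account": "{GovernanceAccountId}",
        "AccessControlTranslation": { "Owner": "Destination" }
      }
    }
  ]
}
EOF

# Apply the replication configuration to the payer account CUR bucket
aws s3api put-bucket-replication \
  --bucket {curBucketName} \
  --replication-configuration file://replication.json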
This step should be done in each payer (management) account.
Sync existing objects from the CUR S3 bucket to the S3 bucket in the Governance account.
aws s3 sync s3://{curBucketName} s3://{GovernanceAccountBucketName} --acl bucket-owner-full-control
After performing this step in each payer (management) account, the S3 bucket in the Governance account will contain CUR data from all payer accounts under their respective prefixes.
These actions should be done in the Governance account.
In Add another data store, leave the default of No. Click Next
In Choose an IAM role, select Create an IAM role and provide a role name. Click Next
In Create a schedule for this crawler, select Daily and specify the Hour and Minute for the crawler to run
In Configure the crawler’s output, choose the Glue database in which you’d like the crawler to create a table, or add a new one. Select the Create a single schema for each S3 path checkbox. Under Configuration options, select Add new columns only and Ignore the change and don’t update the table in the data catalog. Click Next
Please make sure the database name doesn’t include the ‘-’ character
The crawler configuration should look like the screenshot below. Click Finish (an equivalent CLI sketch is shown below)
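For reference, here is a minimal CLI sketch of an equivalent crawler. It is an assumption-laden example rather than the lab’s exact configuration: the crawler name, IAM role (which must already exist, since the CLI does not create one for you), database name, S3 path, and schedule below are placeholders to adjust for your environment.

# Minimal sketch: crawler over the replicated CUR data, running daily at 02:00 UTC (placeholder values)
aws glue create-crawler \
  --name cur-governance-crawler \
  --role AWSGlueServiceRole-CURCrawler \
  --database-name curdatabase \
  --targets '{"S3Targets":[{"Path":"s3://{GovernanceAccountBucketName}/"}]}' \
  --schedule "cron(0 2 * * ? *)" \
  --schema-change-policy '{"UpdateBehavior":"LOG","DeleteBehavior":"LOG"}' \
  --configuration '{"Version":1.0,"Grouping":{"TableGroupingPolicy":"CombineCompatibleSchemas"},"CrawlerOutput":{"Tables":{"AddOrUpdateBehavior":"MergeNewColumns"}}}'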
Resume the deployment methodology of your choice from the previous page.
Do you want to give access to the dashboards to someone within your organization, but only want them to see data from accounts or business units associated with their role or position? You can use row-level security in QuickSight to limit access to data by user. In the steps below, we will define specific Linked Account IDs against individual users. Once row-level security is enabled, users will continue to load the same dashboards and analyses, but will have custom views that restrict the data to only the Linked Account IDs defined.
Video Tutorial
Considerations:
The permissions dataset can’t contain duplicate values. Duplicates are ignored when evaluating how to apply the rules.
Each user or group specified can see only the rows that match the field values in the dataset rules.
If you add a rule for a user or group and leave all other columns with no value (NULL), you grant them access to all the data.
If you don’t add a rule for a user or group, that user or group can’t see any of the data.
The full set of rule records that are applied per user must not exceed 999. This applies to the total number of rules that are directly assigned to a user name plus any rules that are assigned to the user through group names.
If a field includes a comma (,), Amazon QuickSight treats each word separated by a comma as an individual value in the filter. For example, in (‘AWS’, ‘INC’), AWS,INC is considered two strings: AWS and INC. To filter with AWS,INC, wrap the string in double quotation marks in the permissions dataset.
Create a CSV file that looks something like this:
username,account_id
user1@amazon.co.uk,"123456123456"
user1@amazon.co.uk,"987654987654"
user2@amazon.fr,"123456123456"
user3@amazon.com,"789123456123"
Any account IDs that you wish a given user to see should be defined in the account_id field of the CSV. Create a separate row for each account ID that a single username should have access to. Ensure there are no spaces after your final quote character. Name this file something similar to CUDOS_Dataset_rules.csv
If you want to use QuickSight groups, the CSV input file is slightly different: instead of UserName as the initial field, you have to use GroupName. Also, you can only use users or groups in the input file, not both. This page provides more details. QuickSight groups can only be created and managed via the QuickSight CLI; there is no UI for this in the QuickSight console.
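As a minimal sketch (the group name and member name below are placeholders, not values from this lab), a group and its membership can be created like this:

# Create a QuickSight group in the default namespace (group name is a placeholder)
aws quicksight create-group \
  --aws-account-id 123456123456 \
  --namespace default \
  --group-name finance-team

# Add an existing QuickSight user to the group (member name is a placeholder)
aws quicksight create-group-membership \
  --aws-account-id 123456123456 \
  --namespace default \
  --group-name finance-team \
  --member-name user1@amazon.co.uk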
You now have 2 options on how to proceed:
Create a new Dataset using the CSV file above as the Data Source: Click New Dataset and select Upload a file. Locate your CUDOS_Dataset_rules.csv and a Preview will appear.
Click Edit settings and prepare data. Verify that the account_id field is a String data type. If it appears as an Integer, change the data type for account_id to String.
The reason we need to do this is that we would lose any leading zeroes on an account ID if it remained an Integer value. String is also the data type used for the account_id field in all the CUDOS datasets.
Save the CUDOS_Dataset_rules.csv dataset.
Once you have applied the CUDOS_Dataset_rules S3 Dataset to all your CUDOS datasets, visit the CUDOS Dashboard as a user who is defined in the csv file, and confirm the Account IDs shown are only the ones specified in that file.
Upload your csv file to the Athena query location bucket, e.g. aws-athena-query-results-123456123456-us-east-1
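For example (assuming the example bucket name above; replace it with your own Athena query results bucket):

# Copy the rules file to the Athena query results bucket
aws s3 cp CUDOS_Dataset_rules.csv s3://aws-athena-query-results-123456123456-us-east-1/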
Create an S3 manifest file that looks something like this:
{
    "entries": [
        {"url": "s3://aws-athena-query-results-123456123456-us-east-1/CUDOS_Dataset_rules.csv", "mandatory": true}
    ]
}
This manifest file can be saved locally, or uploaded to the same S3 bucket where the csv file is stored. Save this file as something similar to CUDOS_manifest.json.
Back in the QuickSight Admin Console, click New Dataset and select S3. Name the Dataset CUDOS_Dataset_rules and locate your CUDOS_manifest.json file either by entering the S3 URL where it is stored, or choosing to upload from your local machine.
Click Edit/preview data.
Verify that the account_id field is a String data type. If it appears as an Integer, change the data type for account_id to String.
Save the CUDOS_Dataset_rules dataset.
On each of the datasets that the CUDOS dashboard is using, define Row Level Security by following these steps:
Once you have applied the CUDOS_Dataset_rules S3 Dataset to all your CUDOS datasets, visit the CUDOS Dashboard as a user who is defined in the csv file, and confirm the Account IDs shown are only the ones specified in that file.
When attempting to deploy the dashboard manually, some users get an error that states COLUMN_GEOGRAPHIC_ROLE_MISMATCH.
This error is caused by having too many data source connectors in QuickSight with the same name. To check how many data source connectors you have, visit QuickSight Datasets and click New dataset. Scroll to the bottom and note how many Athena data connectors there are with the same name.
Unless you know which datasets are tied to which data sources, it is faster to simply delete all the Cloud Intelligence Dashboards data sources and datasets from QuickSight and start adding them again, this time using only a single data source. This is described in detail in this lab under the manual deployment option as step 22. You should have only one data source for all your Cloud Intelligence Dashboards datasets, including customer_all. If you wish to use separate data sources, they must not have the same name.
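If you prefer the CLI, you can also list the data sources to spot duplicate names; a minimal sketch (the account ID is the example used elsewhere in this lab):

# List QuickSight data sources and check for duplicate Athena data source names
aws quicksight list-data-sources --aws-account-id 123456123456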
When attempting to deploy the dashboard, some users get an error that states product_cache_engine cannot be resolved.
This view depends on having, or having historically had, an RDS database instance and an ElastiCache cache instance running in your organization. If you get the error that the column product_database_engine or product_deployment_option does not exist, then you do not have any RDS database instances running. If you get the error that the column product_cache_engine does not exist, then you do not have any ElastiCache cache instances running. You can verify which columns are present by running the Athena query SHOW COLUMNS FROM tablename, replacing tablename accordingly after selecting the correct CUR database in the dropdown on the left side of the Athena view.
There are two options to resolve this. The first is to make the missing column show up in the CUR: spin up a database in the RDS service (or a cache instance in the ElastiCache service, for product_cache_engine), let it run for a couple of minutes, and the column will appear after the next run of the crawler. The second is to remove the column from the view.
Follow the steps below to remove the column. Be aware that if you later add ElastiCache instances to your accounts, you should add the column back.
In the kpi_instance_all view, remove the line with product_cache_engine and remove the last Group by number, then refresh the kpi_instance_all dataset.

This page will be updated regularly with new answers. If you have a FAQ you’d like answered here, please reach out to us at cloud-intelligence-dashboards@amazon.com.