Create a storage bucket
User sessions can be recorded and audited using Boundary 0.13 or greater. A Boundary resource known as a storage bucket is used to store the recorded sessions. The storage bucket represents a bucket in an external storage provider. Before you can enable session recording, you must create one or more storage buckets in Boundary and associate them with the external store.
A storage bucket can only belong to the global scope or an org scope. A storage bucket that is associated with the global scope can be associated with any target. However, a storage bucket in an org scope can only be associated with targets in a project from the same org scope. Any storage buckets associated with an org scope are deleted when the org itself is deleted.
For more information about using session recording to audit user sessions, refer to Auditing.
Requirements
Before you create a storage bucket in Boundary, you must:
- Configure workers for storage
- Configure one of the following storage providers: Amazon S3, MinIO, or another S3-compliant storage provider
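Workers that upload recordings must have local recording storage configured and should carry tags that a storage bucket's worker filter can match. The following is a minimal sketch of a self-managed worker stanza; the paths, upstream address, and tag values are illustrative assumptions, not required values:

```hcl
# Minimal sketch of a session recording worker stanza (illustrative values).
worker {
  # Where the worker stores its own authentication state.
  auth_storage_path = "/boundary/worker"
  # Assumed upstream address for your deployment.
  initial_upstreams = ["boundary.example.com:9201"]
  # Local directory where the worker buffers session recordings before
  # uploading them to the external storage bucket.
  recording_storage_path = "/tmp/boundary/recordings"
  # Tags that a storage bucket worker filter such as
  # '"aws-worker" in "/tags/type"' can match.
  tags {
    type = ["aws-worker"]
  }
}
```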
Create a storage bucket
Select a storage provider.
Complete the following steps to create a storage bucket in Boundary.
Log in to Boundary.
Click Storage Buckets in the navigation bar.
Click New Storage Bucket.
Complete the following fields to create the Boundary storage bucket:
- Name: (Optional) The name field is optional, but if you enter a name it must be unique.
- Description: (Optional) A description of the Boundary storage bucket for identification purposes.
- Scope: (Required) A storage bucket can belong to the Global scope or an Org scope. It can only be associated with targets from the scope it belongs to.
- Provider: (Required) The external storage bucket provider.
- Bucket name: (Required) Name of the AWS bucket you want to associate with the Boundary storage bucket.
- Bucket prefix: (Optional) A base path where session recordings are stored.
- Region: (Required) The AWS region to use.
- Credential type: (Required) The type of credential you want to use to authenticate to the external storage.
The required fields for creating a storage bucket vary depending on whether you configured the Amazon S3 bucket with static or dynamic credentials:
- Static: Authenticates to the storage bucket using an access key that AWS generates.
- Dynamic: Authenticates to the storage bucket using credentials that were generated by AWS `AssumeRole`.
If you configured the bucket with static credentials, complete the following additional fields:
- Access key ID: (Required) The access key ID that AWS generates for the IAM user to use with the storage bucket.
- Secret access key: (Required) The secret access key that AWS generates for the IAM user to use with this storage bucket.
- Worker filter: (Required) A filter expression that indicates which Boundary workers have access to the storage. The filter must match an existing worker in order to create a Boundary storage bucket. Refer to filter examples to learn about worker tags and filters.
- Disable credential rotation: (Optional) Prevents the AWS plugin from automatically rotating credentials.
Although credentials are stored encrypted in Boundary, by default the AWS plugin attempts to rotate the credentials you provide. The given credentials are used to create a new credential, and then the original credential is revoked. After rotation, only Boundary knows the client secret the plugin uses.
If you configured the bucket with dynamic credentials, complete the following additional fields:
- Role ARN: (Required) The ARN (Amazon Resource Name) of the role that is attached to the EC2 instance that the self-managed worker runs on.
- Role external ID: (Optional) A required value if you delegate third-party access to your AWS resources. For more information, refer to the AWS documentation for How to use an external ID when granting access to your AWS resources to a third party.
- Role session name: (Optional) A unique identifier for the AWS session. You can use this value to control how IAM principals and applications name their role sessions when they assume an IAM role. By providing a session name, you enable tracking session actions in AWS CloudTrail logs. For more information, refer to the AWS documentation for Logging IAM and AWS STS API calls with AWS CloudTrail.
- Role tags: (Optional) An object with key-value pair attributes that is passed when you assume an IAM role. For more information, refer to the AWS documentation for Passing session tags in AWS STS.
- Worker filter: (Required) A filter expression that indicates which Boundary workers have access to the storage. The filter must match an existing worker in order to create a Boundary storage bucket. Refer to filter examples to learn about worker tags and filters.
- Disable credential rotation: (Required) Prevents the AWS plugin from automatically rotating credentials. This option is required if you use dynamic credentials.
Click Save.
The required fields for creating a storage bucket depend on whether you configured the Amazon S3 bucket with static or dynamic credentials:
Note
The preferred way to authenticate to the S3 bucket is by using dynamic credentials. The example below uses static credentials instead.
Log in to Boundary.
Use the following command to create a storage bucket in Boundary:
$ boundary storage-buckets create \
    -bucket-name mybucket1 \
    -plugin-name aws \
    -scope-id o_1234567890 \
    -worker-filter '"aws-worker" in "/tags/type"' \
    -secret '{"access_key_id": "123456789", "secret_access_key": "123/456789/12345678"}' \
    -attributes '{"region":"us-east-1","disable_credential_rotation":true}'
Replace the values above with the following required AWS secrets and any attributes you want to associate with the Boundary storage bucket:
- `region`: (Required) The AWS region to use.
- `bucket-name`: (Required) The name of the AWS bucket you want to associate with the Boundary storage bucket.
- `plugin-name`: (Required) The name of the Boundary storage plugin.
- `scope_id`: (Required) A storage bucket can belong to the Global scope or an Org scope.
- `worker-filter`: (Required) A filter expression that indicates which Boundary workers have access to the storage. The filter must match an existing worker in order to create a Boundary storage bucket. Refer to filter examples to learn about worker tags and filters.
- `secret`: (Required) The AWS credentials to use.
  - `access_key_id`: (Required) The AWS access key to use.
  - `secret_access_key`: (Required) The AWS secret access key to use. This attribute contains the secret access key for static credentials.
- `attributes` or `-attr`: Attributes of the Amazon S3 storage bucket.
  - `shared_credentials_file`: (Optional) The shared credentials file to use.
  - `shared_credentials_profile`: (Optional) The profile name to use in the shared credentials file.
  - `disable_credential_rotation`: (Optional) Prevents the AWS plugin from automatically rotating credentials.
Although credentials are stored encrypted in Boundary, by default the AWS plugin attempts to rotate the credentials you provide. The given credentials are used to create a new credential, and then the original credential is revoked. After rotation, only Boundary knows the client secret the plugin uses.
Log in to Boundary.
Use the following command to create a storage bucket in Boundary:
$ boundary storage-buckets create \
    -bucket-name mys3bucket \
    -plugin-name aws \
    -scope-id o_1234567890 \
    -worker-filter '"s3" in "/tags/type"' \
    -attributes '{"region":"us-east-1","disable_credential_rotation":true,"role_arn":"arn:aws:iam::123456789012:role/S3Access"}'
Replace the values above with the following required values and any attributes you want to associate with the Boundary storage bucket:
- `region`: (Required) The AWS region to use.
- `bucket-name`: (Required) The name of the AWS bucket you want to associate with the Boundary storage bucket.
- `plugin-name`: (Required) The name of the Boundary storage plugin.
- `scope_id`: (Required) A storage bucket can belong to the Global scope or an Org scope.
- `worker-filter`: (Required) A filter expression that indicates which Boundary workers have access to the storage. The filter must match an existing worker in order to create a Boundary storage bucket. Refer to filter examples to learn about worker tags and filters.
- `attributes` or `-attr`: Attributes of the Amazon S3 storage bucket.
  - `role_arn`: (Required) The ARN (Amazon Resource Name) of the role that is attached to the EC2 instance that the self-managed worker runs on.
  - `role_external_id`: (Optional) A required value if you delegate third-party access to your AWS resources. For more information, refer to the AWS documentation for [How to use an external ID when granting access to your AWS resources to a third party](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_for-user_externalid.html).
  - `role_session_name`: (Optional) A unique identifier for the AWS session. You can use this value to control how IAM principals and applications name their role sessions when they assume an IAM role. By providing a session name, you enable tracking session actions in AWS CloudTrail logs. For more information, refer to the AWS documentation for [Logging IAM and AWS STS API calls with AWS CloudTrail](https://docs.aws.amazon.com/IAM/latest/UserGuide/cloudtrail-integration.html).
  - `role_tags`: (Optional) An object with key-value pair attributes that is passed when you assume an IAM role. For more information, refer to the AWS documentation for [Passing session tags in AWS STS](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_session-tags.html).
The HCL code for creating a storage bucket differs depending on whether you configured the Amazon S3 bucket with static or dynamic credentials. This page provides example configurations for a generic Terraform deployment.
Refer to the Boundary Terraform provider documentation to learn about the requirements for the following example attributes.
Support for Amazon S3 storage providers leverages the Boundary AWS plugin.
Note
The preferred way to authenticate to the S3 bucket is by using dynamic credentials. The code below uses static credentials instead.
Apply the following Terraform configuration:
resource "boundary_storage_bucket" "aws_static_credentials_example" { name = "My aws storage bucket with static credentials" description = "My first storage bucket" scope_id = "o_1234567890" plugin_name = "aws" bucket_name = "mybucket1" attributes_json = jsonencode({ "region" = "us-east-1" }) # recommended to pass in aws secrets using a file() or using environment variables # the secrets below must be generated in aws using an iam user with programmatic access secrets_json = jsonencode({ "access_key_id" = "aws_access_key_id_value", "secret_access_key" = "aws_secret_access_key_value" }) worker_filter = "\"aws-worker\" in \"/tags/type\"" } output "storage_bucket_id" { value = boundary_storage_bucket.aws_static_credentials_example.id }
resource "boundary_storage_bucket" "aws_static_credentials_example" {
name = "My aws storage bucket with static credentials"
description = "My first storage bucket"
scope_id = "o_1234567890"
plugin_name = "aws"
bucket_name = "mybucket1"
attributes_json = jsonencode({ "region" = "us-east-1" })
# recommended to pass in aws secrets using a file() or using environment variables
# the secrets below must be generated in aws using an iam user with programmatic access
secrets_json = jsonencode({
"access_key_id" = "aws_access_key_id_value",
"secret_access_key" = "aws_secret_access_key_value"
})
worker_filter = "\"aws-worker\" in \"/tags/type\""
}
output "storage_bucket_id" {
value = boundary_storage_bucket.aws_static_credentials_example.id
}
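As the comments note, avoid hardcoding the AWS keys in the configuration. One option is to declare sensitive Terraform variables and supply the values through the environment; this is a minimal sketch, and the variable names are illustrative assumptions:

```hcl
# Illustrative variables for the AWS keys. Marking them sensitive keeps the
# values out of Terraform's plan output.
variable "aws_access_key_id" {
  type      = string
  sensitive = true
}

variable "aws_secret_access_key" {
  type      = string
  sensitive = true
}
```

In the resource, reference the variables with `secrets_json = jsonencode({ "access_key_id" = var.aws_access_key_id, "secret_access_key" = var.aws_secret_access_key })`, and set the values with the `TF_VAR_aws_access_key_id` and `TF_VAR_aws_secret_access_key` environment variables.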
Apply the following Terraform configuration:
resource "boundary_storage_bucket" "aws_dynamic_credentials_example" { name = "My aws storage bucket with dynamic credentials" description = "My first storage bucket" scope_id = "o_1234567890" plugin_name = "aws" bucket_name = "mybucket1" # the role_arn value should be the same arn used in the instance profile attached to the ec2 instance # https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_switch-role-ec2_instance-profiles.html attributes_json = jsonencode({ "region" = "us-east-1" "role_arn" = "arn:aws:iam::123456789012:role/S3Access" "disable_credential_rotation" = true }) worker_filter = "\"s3-worker\" in \"/tags/type\"" } output "storage_bucket_id" { value = boundary_storage_bucket.aws_dynamic_credentials_example.id }
resource "boundary_storage_bucket" "aws_dynamic_credentials_example" {
name = "My aws storage bucket with dynamic credentials"
description = "My first storage bucket"
scope_id = "o_1234567890"
plugin_name = "aws"
bucket_name = "mybucket1"
# the role_arn value should be the same arn used in the instance profile attached to the ec2 instance
# https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_switch-role-ec2_instance-profiles.html
attributes_json = jsonencode({
"region" = "us-east-1"
"role_arn" = "arn:aws:iam::123456789012:role/S3Access"
"disable_credential_rotation" = true
})
worker_filter = "\"s3-worker\" in \"/tags/type\""
}
output "storage_bucket_id" {
value = boundary_storage_bucket.aws_dynamic_credentials_example.id
}
Complete the following steps to create a storage bucket in Boundary.
Note
MinIO requires a service account and its associated access keys to set up a Boundary storage bucket. Refer to the Configure MinIO page to learn more.
Log in to Boundary.
Click Storage Buckets in the navigation bar.
Click New Storage Bucket.
Complete the following fields to create the Boundary storage bucket:
- Name: (Optional) The name field is optional, but if you enter a name it must be unique.
- Description: (Optional) A description of the Boundary storage bucket for identification purposes.
- Scope: (Required) A storage bucket can belong to the Global scope or an Org scope. It can only be associated with targets from the scope it belongs to.
- Provider: (Required) The external storage bucket provider.
- Endpoint URL: (Required) The fully-qualified endpoint pointing to a MinIO S3 API, such as `https://my-minio-instance.dev:9000`.
- Bucket name: (Required) Name of the MinIO bucket you want to associate with the Boundary storage bucket.
- Region: (Optional) The region to configure the storage bucket for.
- Access key ID (Required): The MinIO service account's access key to use with this storage bucket.
- Secret access key (Required): The MinIO service account's secret key to use with this storage bucket.
- Worker filter: (Required) A filter expression that indicates which Boundary workers have access to the storage. The filter must match an existing worker in order to create a Boundary storage bucket. Refer to filter examples to learn about worker tags and filters.
- Disable credential rotation: (Optional) Controls whether the plugin will rotate the incoming credentials and manage a new MinIO service account. If this attribute is set to false, or not provided, the plugin will rotate the incoming credentials, using them to create a new MinIO service account, then delete the incoming credentials.
Click Save.
Log in to Boundary.
Use the following command to create a storage bucket in Boundary:
$ boundary storage-buckets create \
    -bucket-name myminiobucket \
    -plugin-name minio \
    -scope-id o_1234567890 \
    -bucket-prefix="foo/bar/zoo" \
    -worker-filter '"minio-worker" in "/tags/type"' \
    -attr endpoint_url="https://my-minio-instance.dev:9000" \
    -attr region="REGION" \
    -attr disable_credential_rotation=true \
    -secret access_key_id="KEY" \
    -secret secret_access_key="SECRET"
Replace the values above with the following required secrets and any optional attributes you want to associate with the Boundary storage bucket:
- `bucket-name`: (Required) Name of the MinIO bucket you want to associate with the Boundary storage bucket.
- `plugin-name`: (Required) The name of the Boundary storage plugin.
- `scope_id`: (Required) A storage bucket can belong to the Global scope or an Org scope.
- `worker-filter`: (Required) A filter expression that indicates which Boundary workers have access to the storage. The filter must match an existing worker in order to create a Boundary storage bucket. Refer to filter examples to learn about worker tags and filters.
- `secret`: (Required) The MinIO credentials to use.
  - `access_key_id`: (Required) The MinIO service account's access key to use with this storage bucket.
  - `secret_access_key`: (Required) The MinIO service account's secret key to use with this storage bucket.
- `attributes` or `-attr`: Attributes of the MinIO storage bucket.
  - `endpoint_url`: (Required) Fully-qualified endpoint pointing to a MinIO S3 API.
  - `region`: (Optional) The region to configure the storage bucket for.
  - `disable_credential_rotation`: (Optional) Controls whether the plugin rotates the incoming credentials and manages a new MinIO service account. If this attribute is set to `false` or not provided, the plugin rotates the incoming credentials, using them to create a new MinIO service account, and then deletes the incoming credentials. This option must be set to `true` if you use dynamic credentials.
This page provides example configurations for a generic Terraform deployment.
Refer to the Boundary Terraform provider documentation to learn about the requirements for the following example attributes.
Support for MinIO storage providers leverages the Boundary MinIO plugin.
Apply the following Terraform configuration:
resource "boundary_storage_bucket" "minio_credentials_example" { name = "My MinIO storage bucket" description = "My first storage bucket" scope_id = "o_1234567890" plugin_name = "minio" bucket_name = "mybucket1" attributes_json = jsonencode({ "endpoint_url" = "minio_access_key_id_value", "disable_credential_rotation" = true }) secrets_json = jsonencode({ "access_key_id" = "minio_access_key_id_value", "secret_access_key" = "minio_secret_access_key_value" }) worker_filter = "\"minio-worker\" in \"/tags/type\"" } output "storage_bucket_id" { value = boundary_storage_bucket.minio_credentials_example.id }
resource "boundary_storage_bucket" "minio_credentials_example" {
name = "My MinIO storage bucket"
description = "My first storage bucket"
scope_id = "o_1234567890"
plugin_name = "minio"
bucket_name = "mybucket1"
attributes_json = jsonencode({
"endpoint_url" = "minio_access_key_id_value",
"disable_credential_rotation" = true
})
secrets_json = jsonencode({
"access_key_id" = "minio_access_key_id_value",
"secret_access_key" = "minio_secret_access_key_value"
})
worker_filter = "\"minio-worker\" in \"/tags/type\""
}
output "storage_bucket_id" {
value = boundary_storage_bucket.minio_credentials_example.id
}
Complete the following steps to create a storage bucket in Boundary using an S3-compliant storage provider. Hitachi Content Platform is used as an example below.
Note
S3-compliant storage requires a service account and its associated access keys to set up a Boundary storage bucket. Refer to the Configure S3-compliant storage page to learn more.
Log in to Boundary.
Click Storage Buckets in the navigation bar.
Click New Storage Bucket.
Complete the following fields to create the Boundary storage bucket:
- Name: (Optional) The name field is optional, but if you enter a name it must be unique.
- Description: (Optional) A description of the Boundary storage bucket for identification purposes.
- Scope: (Required) A storage bucket can belong to the Global scope or an Org scope. It can only be associated with targets from the scope it belongs to.
- Provider: (Required) The external storage bucket provider. For S3-compliant storage, select MinIO.
- Endpoint URL: (Required) The fully-qualified endpoint pointing to a storage provider's S3 API, such as `https://my-hitachi-instance.dev:9000`.
- Bucket name: (Required) Name of the S3-compliant storage bucket you want to associate with the Boundary storage bucket.
- Region: (Optional) The region to configure the storage bucket for.
- Access key ID: (Required) The access key for the storage provider's service account to use with this storage bucket.
- Secret access key: (Required) The secret key for the storage provider's service account to use with this storage bucket.
- Worker filter: (Required) A filter expression that indicates which Boundary workers have access to the storage. The filter must match an existing worker in order to create a Boundary storage bucket. Refer to filter examples to learn about worker tags and filters.
- Disable credential rotation: (Optional) Controls whether the plugin rotates the incoming credentials and manages a new storage service account. If this attribute is set to false or not provided, the plugin rotates the incoming credentials, using them to create a new storage service account, and then deletes the incoming credentials. Note that credential rotation is not supported for Hitachi Content Platform, and it may not function for other S3-compatible providers.
Click Save.
Log in to Boundary.
Use the following command to create a storage bucket in Boundary:
$ boundary storage-buckets create \
    -bucket-name mystoragebucket \
    -plugin-name minio \
    -scope-id o_1234567890 \
    -bucket-prefix="foo/bar/zoo" \
    -worker-filter '"storage-worker" in "/tags/type"' \
    -attr endpoint_url="https://my-hitachi-instance.dev:9000" \
    -attr region="REGION" \
    -attr disable_credential_rotation=true \
    -secret access_key_id="KEY" \
    -secret secret_access_key="SECRET"
Replace the values above with the following required secrets and any optional attributes you want to associate with the Boundary storage bucket:
- `bucket-name`: (Required) Name of the S3-compliant storage bucket you want to associate with the Boundary storage bucket.
- `plugin-name`: (Required) The name of the Boundary storage plugin. Use the `minio` plugin for S3-compatible storage.
- `scope_id`: (Required) A storage bucket can belong to the Global scope or an Org scope.
- `worker-filter`: (Required) A filter expression that indicates which Boundary workers have access to the storage. The filter must match an existing worker in order to create a Boundary storage bucket. Refer to filter examples to learn about worker tags and filters.
- `secret`: (Required) The storage provider's credentials to use.
  - `access_key_id`: (Required) The access key for the storage provider's service account to use with this storage bucket.
  - `secret_access_key`: (Required) The secret key for the storage provider's service account to use with this storage bucket.
- `attributes` or `-attr`: Attributes of the S3-compliant storage bucket.
  - `endpoint_url`: (Required) Fully-qualified endpoint pointing to an S3-compliant API. This example uses Hitachi, but you should substitute your storage provider's endpoint.
  - `region`: (Optional) The region to configure the storage bucket for.
  - `disable_credential_rotation`: (Optional) Controls whether the plugin rotates the incoming credentials and manages a new storage service account. If this attribute is set to `false` or not provided, the plugin rotates the incoming credentials, using them to create a new storage service account, and then deletes the incoming credentials. Note that credential rotation is not supported for Hitachi Content Platform, and it may not function for other S3-compatible providers.
This page provides example configurations for a generic Terraform deployment.
Refer to the Boundary Terraform provider documentation to learn about the requirements for the following example attributes.
Support for S3-compliant storage providers leverages the Boundary MinIO plugin.
Apply the following Terraform configuration:
resource "boundary_storage_bucket" "storage_credentials_example" { name = "My storage bucket" description = "My first storage bucket" scope_id = "o_1234567890" plugin_name = "minio" bucket_name = "mybucket1" attributes_json = jsonencode({ "endpoint_url" = "minio_access_key_id_value", "disable_credential_rotation" = true }) secrets_json = jsonencode({ "access_key_id" = "storage_access_key_id_value", "secret_access_key" = "storage_secret_access_key_value" }) worker_filter = "\"storage-worker\" in \"/tags/type\"" } output "storage_bucket_id" { value = boundary_storage_bucket.storage_credentials_example.id }
resource "boundary_storage_bucket" "storage_credentials_example" {
name = "My storage bucket"
description = "My first storage bucket"
scope_id = "o_1234567890"
plugin_name = "minio"
bucket_name = "mybucket1"
attributes_json = jsonencode({
"endpoint_url" = "minio_access_key_id_value",
"disable_credential_rotation" = true
})
secrets_json = jsonencode({
"access_key_id" = "storage_access_key_id_value",
"secret_access_key" = "storage_secret_access_key_value"
})
worker_filter = "\"storage-worker\" in \"/tags/type\""
}
output "storage_bucket_id" {
value = boundary_storage_bucket.storage_credentials_example.id
}
Boundary creates the storage bucket resource and provides you with the bucket's ID.
Next steps
After the storage bucket is created in Boundary, you can use the bucket's ID to enable session recording on targets.
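For example, with Terraform you can reference the bucket ID directly when you enable recording on an SSH target. This is a minimal sketch, assuming the storage bucket resource from the static credentials example above; the scope ID is a placeholder value:

```hcl
# Illustrative sketch: enable session recording on an SSH target by
# referencing the storage bucket created earlier.
resource "boundary_target" "ssh_recorded_example" {
  name                     = "ssh-with-recording"
  type                     = "ssh"
  scope_id                 = "p_1234567890" # placeholder project scope ID
  default_port             = 22
  enable_session_recording = true
  storage_bucket_id        = boundary_storage_bucket.aws_static_credentials_example.id
}
```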
Resources
The following docs are relevant to configuring storage buckets: