Service Constructs

Complete service infrastructure including compute and storage.

service

Modules

compute

Compute service constructs for AWS Batch.

This module provides high-level constructs for creating Batch compute environments with various configurations.

Classes
BaseBatchComputeConstruct
BaseBatchComputeConstruct(
    scope: Construct,
    id: str | None,
    env_base: EnvBase,
    vpc: Vpc,
    batch_name: str,
    buckets: Iterable[Bucket] | None = None,
    file_systems: Iterable[FileSystem | IFileSystem]
    | None = None,
    mount_point_configs: Iterable[MountPointConfiguration]
    | None = None,
    instance_role_name: str | None = None,
    instance_role_policy_statements: list[PolicyStatement]
    | None = None,
    **kwargs
)

Bases: EnvBaseConstruct

Base class for Batch compute constructs.

Abstract base class that provides common functionality for creating and managing AWS Batch compute environments.

Initialize a Batch compute construct.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `scope` | `Construct` | The construct scope. | *required* |
| `id` | `Optional[str]` | The construct ID. | *required* |
| `env_base` | `EnvBase` | Environment base for resource naming. | *required* |
| `vpc` | `Vpc` | VPC for the compute environments. | *required* |
| `batch_name` | `str` | Name for the batch infrastructure. | *required* |
| `buckets` | `Optional[Iterable[Bucket]]` | S3 buckets to grant access to. | `None` |
| `file_systems` | `Optional[Iterable[Union[FileSystem, IFileSystem]]]` | EFS file systems to grant access to. | `None` |
| `mount_point_configs` | `Optional[Iterable[MountPointConfiguration]]` | Mount point configurations for EFS. | `None` |
| `instance_role_name` | `Optional[str]` | Name for the instance role. | `None` |
| `instance_role_policy_statements` | `Optional[List[PolicyStatement]]` | Additional IAM policy statements. | `None` |
| `**kwargs` | | Additional arguments passed to parent. | `{}` |
Source code in src/aibs_informatics_cdk_lib/constructs_/service/compute.py
def __init__(
    self,
    scope: Construct,
    id: str | None,
    env_base: EnvBase,
    vpc: ec2.Vpc,
    batch_name: str,
    buckets: Iterable[s3.Bucket] | None = None,
    file_systems: Iterable[efs.FileSystem | efs.IFileSystem] | None = None,
    mount_point_configs: Iterable[MountPointConfiguration] | None = None,
    instance_role_name: str | None = None,
    instance_role_policy_statements: list[iam.PolicyStatement] | None = None,
    **kwargs,
) -> None:
    """Initialize a Batch compute construct.

    Args:
        scope (Construct): The construct scope.
        id (Optional[str]): The construct ID.
        env_base (EnvBase): Environment base for resource naming.
        vpc (ec2.Vpc): VPC for the compute environments.
        batch_name (str): Name for the batch infrastructure.
        buckets (Optional[Iterable[s3.Bucket]]): S3 buckets to grant access to.
        file_systems (Optional[Iterable[Union[efs.FileSystem, efs.IFileSystem]]]):
            EFS file systems to grant access to.
        mount_point_configs (Optional[Iterable[MountPointConfiguration]]):
            Mount point configurations for EFS.
        instance_role_name (Optional[str]): Name for the instance role.
        instance_role_policy_statements (Optional[List[iam.PolicyStatement]]):
            Additional IAM policy statements.
        **kwargs: Additional arguments passed to parent.
    """
    super().__init__(scope, id, env_base, **kwargs)
    self.batch_name = batch_name
    self.batch = Batch(
        self,
        batch_name,
        self.env_base,
        vpc=vpc,
        instance_role_name=instance_role_name,
        instance_role_policy_statements=instance_role_policy_statements,
    )

    self.create_batch_environments()

    bucket_list = list(buckets or [])

    file_system_list = list(file_systems or [])

    if mount_point_configs:
        mount_point_config_list = list(mount_point_configs)
        file_system_list = self._update_file_systems_from_mount_point_configs(
            file_system_list, mount_point_config_list
        )
    else:
        mount_point_config_list = self._get_mount_point_configs(file_system_list)

    # Validation to ensure that the file systems are not duplicated
    self._validate_mount_point_configs(mount_point_config_list)

    self.grant_storage_access(*bucket_list, *file_system_list)
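Because the base `__init__` calls `create_batch_environments()` before granting storage access, subclasses only supply the hook and the environments already exist by the time grants are applied. A minimal, CDK-free sketch of this template-method flow (the class names here are illustrative, not part of the library):

```python
from abc import ABC, abstractmethod


class _ComputeSketch(ABC):
    """Stand-in mirroring BaseBatchComputeConstruct's template-method flow."""

    def __init__(self, batch_name: str):
        self.batch_name = batch_name
        self.environments: list[str] = []
        # The base __init__ invokes the subclass hook, so environments
        # are populated before any storage grants would run.
        self.create_batch_environments()

    @abstractmethod
    def create_batch_environments(self) -> None: ...


class _OnDemandOnly(_ComputeSketch):
    def create_batch_environments(self) -> None:
        self.environments.append(f"{self.batch_name}-on-demand")


c = _OnDemandOnly("demo")
# c.environments == ["demo-on-demand"]
```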
Attributes
primary_batch_environment abstractmethod property
primary_batch_environment: BatchEnvironment

Get the primary batch environment.

Returns:

| Type | Description |
| --- | --- |
| `BatchEnvironment` | The primary BatchEnvironment for this compute construct. |

name property
name: str

Get the batch name.

Returns:

| Type | Description |
| --- | --- |
| `str` | The batch infrastructure name. |

Functions
create_batch_environments abstractmethod
create_batch_environments() -> None

Create the batch environments.

Subclasses must implement this to create their specific batch environment configurations.

Source code in src/aibs_informatics_cdk_lib/constructs_/service/compute.py
@abstractmethod
def create_batch_environments(self) -> None:
    """Create the batch environments.

    Subclasses must implement this to create their specific
    batch environment configurations.
    """
    raise NotImplementedError()
grant_storage_access
grant_storage_access(
    *resources: Bucket | FileSystem | IFileSystem,
) -> None

Grant access to storage resources.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `*resources` | `Union[Bucket, FileSystem, IFileSystem]` | Variable number of storage resources to grant access to. | `()` |
Source code in src/aibs_informatics_cdk_lib/constructs_/service/compute.py
def grant_storage_access(
    self, *resources: s3.Bucket | efs.FileSystem | efs.IFileSystem
) -> None:
    """Grant access to storage resources.

    Args:
        *resources (Union[s3.Bucket, efs.FileSystem, efs.IFileSystem]):
            Variable number of storage resources to grant access to.
    """
    self.batch.grant_instance_role_permissions(read_write_resources=list(resources))

    for batch_environment in self.batch.environments:
        for resource in resources:
            if isinstance(resource, efs.FileSystem):
                batch_environment.grant_file_system_access(resource)
BatchCompute
BatchCompute(
    scope: Construct,
    id: str | None,
    env_base: EnvBase,
    vpc: Vpc,
    batch_name: str,
    buckets: Iterable[Bucket] | None = None,
    file_systems: Iterable[FileSystem | IFileSystem]
    | None = None,
    mount_point_configs: Iterable[MountPointConfiguration]
    | None = None,
    instance_role_name: str | None = None,
    instance_role_policy_statements: list[PolicyStatement]
    | None = None,
    **kwargs
)

Bases: BaseBatchComputeConstruct

Standard Batch compute construct with on-demand, spot, and Fargate environments.

Provides a complete Batch compute setup with three environment types: on-demand, spot, and Fargate.

Attributes:

| Name | Type | Description |
| --- | --- | --- |
| `on_demand_batch_environment` | | On-demand compute environment. |
| `spot_batch_environment` | | Spot compute environment. |
| `fargate_batch_environment` | | Fargate compute environment. |

Source code in src/aibs_informatics_cdk_lib/constructs_/service/compute.py
def __init__(
    self,
    scope: Construct,
    id: str | None,
    env_base: EnvBase,
    vpc: ec2.Vpc,
    batch_name: str,
    buckets: Iterable[s3.Bucket] | None = None,
    file_systems: Iterable[efs.FileSystem | efs.IFileSystem] | None = None,
    mount_point_configs: Iterable[MountPointConfiguration] | None = None,
    instance_role_name: str | None = None,
    instance_role_policy_statements: list[iam.PolicyStatement] | None = None,
    **kwargs,
) -> None:
    """Initialize a Batch compute construct.

    Args:
        scope (Construct): The construct scope.
        id (Optional[str]): The construct ID.
        env_base (EnvBase): Environment base for resource naming.
        vpc (ec2.Vpc): VPC for the compute environments.
        batch_name (str): Name for the batch infrastructure.
        buckets (Optional[Iterable[s3.Bucket]]): S3 buckets to grant access to.
        file_systems (Optional[Iterable[Union[efs.FileSystem, efs.IFileSystem]]]):
            EFS file systems to grant access to.
        mount_point_configs (Optional[Iterable[MountPointConfiguration]]):
            Mount point configurations for EFS.
        instance_role_name (Optional[str]): Name for the instance role.
        instance_role_policy_statements (Optional[List[iam.PolicyStatement]]):
            Additional IAM policy statements.
        **kwargs: Additional arguments passed to parent.
    """
    super().__init__(scope, id, env_base, **kwargs)
    self.batch_name = batch_name
    self.batch = Batch(
        self,
        batch_name,
        self.env_base,
        vpc=vpc,
        instance_role_name=instance_role_name,
        instance_role_policy_statements=instance_role_policy_statements,
    )

    self.create_batch_environments()

    bucket_list = list(buckets or [])

    file_system_list = list(file_systems or [])

    if mount_point_configs:
        mount_point_config_list = list(mount_point_configs)
        file_system_list = self._update_file_systems_from_mount_point_configs(
            file_system_list, mount_point_config_list
        )
    else:
        mount_point_config_list = self._get_mount_point_configs(file_system_list)

    # Validation to ensure that the file systems are not duplicated
    self._validate_mount_point_configs(mount_point_config_list)

    self.grant_storage_access(*bucket_list, *file_system_list)
Attributes
primary_batch_environment property
primary_batch_environment: BatchEnvironment

Get the primary batch environment.

Returns:

| Type | Description |
| --- | --- |
| `BatchEnvironment` | The on-demand batch environment. |

Functions
create_batch_environments
create_batch_environments() -> None

Create on-demand, spot, and Fargate batch environments.

Source code in src/aibs_informatics_cdk_lib/constructs_/service/compute.py
def create_batch_environments(self) -> None:
    """Create on-demand, spot, and Fargate batch environments."""
    lt_builder = BatchLaunchTemplateBuilder(
        self, f"{self.name}-lt-builder", env_base=self.env_base
    )
    self.on_demand_batch_environment = self.batch.setup_batch_environment(
        descriptor=BatchEnvironmentDescriptor(f"{self.name}-on-demand"),
        config=BatchEnvironmentConfig(
            allocation_strategy=batch.AllocationStrategy.BEST_FIT,
            instance_types=[ec2.InstanceType(_) for _ in ON_DEMAND_INSTANCE_TYPES],
            use_spot=False,
            use_fargate=False,
            use_public_subnets=False,
        ),
        launch_template_builder=lt_builder,
    )

    self.spot_batch_environment = self.batch.setup_batch_environment(
        descriptor=BatchEnvironmentDescriptor(f"{self.name}-spot"),
        config=BatchEnvironmentConfig(
            allocation_strategy=batch.AllocationStrategy.SPOT_PRICE_CAPACITY_OPTIMIZED,
            instance_types=[ec2.InstanceType(_) for _ in SPOT_INSTANCE_TYPES],
            use_spot=True,
            use_fargate=False,
            use_public_subnets=False,
        ),
        launch_template_builder=lt_builder,
    )

    self.fargate_batch_environment = self.batch.setup_batch_environment(
        descriptor=BatchEnvironmentDescriptor(f"{self.name}-fargate"),
        config=BatchEnvironmentConfig(
            allocation_strategy=None,
            instance_types=None,
            use_spot=False,
            use_fargate=True,
            use_public_subnets=False,
        ),
        launch_template_builder=lt_builder,
    )
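Each environment descriptor is derived by suffixing the construct's batch name, so a construct named `analysis` yields queues named `analysis-on-demand`, `analysis-spot`, and `analysis-fargate`. A small sketch of the names this method produces (the helper function is illustrative, not part of the library):

```python
def environment_descriptors(batch_name: str) -> list[str]:
    # Mirrors the BatchEnvironmentDescriptor(f"{self.name}-...") calls
    # in BatchCompute.create_batch_environments.
    return [f"{batch_name}-{suffix}" for suffix in ("on-demand", "spot", "fargate")]


environment_descriptors("analysis")
# ['analysis-on-demand', 'analysis-spot', 'analysis-fargate']
```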
LambdaCompute
LambdaCompute(
    scope: Construct,
    id: str | None,
    env_base: EnvBase,
    vpc: Vpc,
    batch_name: str,
    buckets: Iterable[Bucket] | None = None,
    file_systems: Iterable[FileSystem | IFileSystem]
    | None = None,
    mount_point_configs: Iterable[MountPointConfiguration]
    | None = None,
    instance_role_name: str | None = None,
    instance_role_policy_statements: list[PolicyStatement]
    | None = None,
    **kwargs
)

Bases: BatchCompute

Lambda-optimized Batch compute construct.

Provides Batch environments optimized for Lambda-like workloads with small, medium, and large instance type configurations.

Source code in src/aibs_informatics_cdk_lib/constructs_/service/compute.py
def __init__(
    self,
    scope: Construct,
    id: str | None,
    env_base: EnvBase,
    vpc: ec2.Vpc,
    batch_name: str,
    buckets: Iterable[s3.Bucket] | None = None,
    file_systems: Iterable[efs.FileSystem | efs.IFileSystem] | None = None,
    mount_point_configs: Iterable[MountPointConfiguration] | None = None,
    instance_role_name: str | None = None,
    instance_role_policy_statements: list[iam.PolicyStatement] | None = None,
    **kwargs,
) -> None:
    """Initialize a Batch compute construct.

    Args:
        scope (Construct): The construct scope.
        id (Optional[str]): The construct ID.
        env_base (EnvBase): Environment base for resource naming.
        vpc (ec2.Vpc): VPC for the compute environments.
        batch_name (str): Name for the batch infrastructure.
        buckets (Optional[Iterable[s3.Bucket]]): S3 buckets to grant access to.
        file_systems (Optional[Iterable[Union[efs.FileSystem, efs.IFileSystem]]]):
            EFS file systems to grant access to.
        mount_point_configs (Optional[Iterable[MountPointConfiguration]]):
            Mount point configurations for EFS.
        instance_role_name (Optional[str]): Name for the instance role.
        instance_role_policy_statements (Optional[List[iam.PolicyStatement]]):
            Additional IAM policy statements.
        **kwargs: Additional arguments passed to parent.
    """
    super().__init__(scope, id, env_base, **kwargs)
    self.batch_name = batch_name
    self.batch = Batch(
        self,
        batch_name,
        self.env_base,
        vpc=vpc,
        instance_role_name=instance_role_name,
        instance_role_policy_statements=instance_role_policy_statements,
    )

    self.create_batch_environments()

    bucket_list = list(buckets or [])

    file_system_list = list(file_systems or [])

    if mount_point_configs:
        mount_point_config_list = list(mount_point_configs)
        file_system_list = self._update_file_systems_from_mount_point_configs(
            file_system_list, mount_point_config_list
        )
    else:
        mount_point_config_list = self._get_mount_point_configs(file_system_list)

    # Validation to ensure that the file systems are not duplicated
    self._validate_mount_point_configs(mount_point_config_list)

    self.grant_storage_access(*bucket_list, *file_system_list)
Attributes
primary_batch_environment property
primary_batch_environment: BatchEnvironment

Get the primary batch environment.

Returns:

| Type | Description |
| --- | --- |
| `BatchEnvironment` | The main Lambda batch environment. |

Functions
create_batch_environments
create_batch_environments() -> None

Create Lambda-optimized batch environments.

Source code in src/aibs_informatics_cdk_lib/constructs_/service/compute.py
def create_batch_environments(self) -> None:
    """Create Lambda-optimized batch environments."""
    lt_builder = BatchLaunchTemplateBuilder(
        self, f"{self.name}-lt-builder", env_base=self.env_base
    )
    self.lambda_batch_environment = self.batch.setup_batch_environment(
        descriptor=BatchEnvironmentDescriptor(f"{self.name}-lambda"),
        config=BatchEnvironmentConfig(
            allocation_strategy=batch.AllocationStrategy.BEST_FIT,
            instance_types=[
                *LAMBDA_SMALL_INSTANCE_TYPES,
                *LAMBDA_MEDIUM_INSTANCE_TYPES,
                *LAMBDA_LARGE_INSTANCE_TYPES,
            ],
            use_spot=False,
            use_fargate=False,
            use_public_subnets=False,
            minv_cpus=2,
        ),
        launch_template_builder=lt_builder,
    )

    self.lambda_small_batch_environment = self.batch.setup_batch_environment(
        descriptor=BatchEnvironmentDescriptor(f"{self.name}-lambda-small"),
        config=BatchEnvironmentConfig(
            allocation_strategy=batch.AllocationStrategy.BEST_FIT,
            instance_types=[*LAMBDA_SMALL_INSTANCE_TYPES],
            use_spot=False,
            use_fargate=False,
            use_public_subnets=False,
        ),
        launch_template_builder=lt_builder,
    )

    self.lambda_medium_batch_environment = self.batch.setup_batch_environment(
        descriptor=BatchEnvironmentDescriptor(f"{self.name}-lambda-medium"),
        config=BatchEnvironmentConfig(
            allocation_strategy=batch.AllocationStrategy.BEST_FIT,
            instance_types=[*LAMBDA_MEDIUM_INSTANCE_TYPES],
            use_spot=False,
            use_fargate=False,
            use_public_subnets=False,
            minv_cpus=2,
        ),
        launch_template_builder=lt_builder,
    )

    self.lambda_large_batch_environment = self.batch.setup_batch_environment(
        descriptor=BatchEnvironmentDescriptor(f"{self.name}-lambda-large"),
        config=BatchEnvironmentConfig(
            allocation_strategy=batch.AllocationStrategy.BEST_FIT,
            instance_types=[*LAMBDA_LARGE_INSTANCE_TYPES],
            use_spot=False,
            use_fargate=False,
            use_public_subnets=False,
        ),
        launch_template_builder=lt_builder,
    )
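With a combined environment plus three size-specific ones, callers commonly route a job to the smallest tier that fits its resource needs. A hypothetical selector over the queue-name suffixes LambdaCompute creates (the memory thresholds below are assumptions for illustration, not values from the library):

```python
def pick_queue_suffix(memory_mib: int) -> str:
    # Hypothetical routing across the small/medium/large environments
    # created by LambdaCompute; thresholds are illustrative assumptions.
    if memory_mib <= 4096:
        return "lambda-small"
    if memory_mib <= 16384:
        return "lambda-medium"
    return "lambda-large"


pick_queue_suffix(2048)   # 'lambda-small'
pick_queue_suffix(32768)  # 'lambda-large'
```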

debug

Classes
DebugInstanceConstruct
DebugInstanceConstruct(
    scope: Construct,
    id: str | None,
    env_base: EnvBase,
    vpc: Vpc,
    name: str = "DebugInstance",
    efs_filesystems: list[IFileSystem | EnvBaseFileSystem]
    | None = None,
    instance_type: InstanceType = InstanceType("t3.medium"),
    machine_image: IMachineImage | None = None,
    instance_name: str | None = None,
    instance_role_name: str | None = None,
    instance_role_policy_statements: list[PolicyStatement]
    | None = None,
    **kwargs
)

Bases: EnvBaseConstruct

DebugInstanceConstruct is a CDK construct that creates an EC2 instance pre-configured for debugging and troubleshooting purposes within a given VPC. This instance is designed primarily to facilitate runtime inspection, diagnostics, and interactions with attached resources, including optional file system mounts (EFS) for scenarios such as shared storage debugging or configuration verification.

The construct provisions
  • A dedicated security group for the instance.
  • An IAM role with necessary policies (including AmazonSSMManagedInstanceCore and AmazonS3ReadOnlyAccess), optionally supplemented by user-specified inline policies.
  • A Linux-based EC2 instance (defaulting to the latest Amazon Linux 2) with user data commands to perform system updates, install EFS utilities, and debugging tools (e.g., jq, tree).

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `scope` | `Construct` | The scope in which this construct is defined. | *required* |
| `id` | `Optional[str]` | The unique identifier for the construct. | *required* |
| `env_base` | `EnvBase` | The environment configuration required for the construct. | *required* |
| `vpc` | `Vpc` | The VPC within which the instance is launched. | *required* |
| `name` | `str` | Base name for resources. | `"DebugInstance"` |
| `efs_filesystems` | `Optional[List[Union[IFileSystem, EnvBaseFileSystem]]]` | EFS file systems to mount on the instance. If provided, each file system is mounted at a dedicated path under `/mnt/efs/`. | `None` |
| `instance_type` | `InstanceType` | The EC2 instance type for the debug instance. | `InstanceType("t3.medium")` |
| `machine_image` | `Optional[IMachineImage]` | The machine image used for the instance. Defaults to the latest Amazon Linux 2 image. | `None` |
| `instance_name` | `Optional[str]` | A custom name for the EC2 instance. If not provided, a name is generated from the base name. | `None` |
| `instance_role_name` | `Optional[str]` | The name of the IAM role assigned to the instance. If not provided, a default name derived from the base name is used. | `None` |
| `instance_role_policy_statements` | `Optional[List[PolicyStatement]]` | Additional IAM policy statements to attach inline to the instance role. | `None` |
Usage Examples
  1. Minimal usage:

        instance = DebugInstanceConstruct(
            scope=app,
            id="DebugInstance",
            env_base=env,
            vpc=my_vpc,
        )

     This creates an EC2 instance with default parameters and without mounting any EFS file systems.

  2. Advanced usage with custom instance details and EFS mounting:

        instance = DebugInstanceConstruct(
            scope=app,
            id="CustomDebugInstance",
            env_base=env,
            vpc=my_vpc,
            name="CustomDebug",
            instance_type=ec2.InstanceType("t3.large"),
            instance_name="CustomDebugEC2",
            instance_role_policy_statements=[custom_policy_statement],
            efs_filesystems=[my_efs],
        )

     This creates an EC2 instance with a custom instance type, a specified instance name, an inline policy, and mounts the provided EFS file system.

  3. Using a custom machine image:

        custom_image = ec2.MachineImage.from_lookup(name="MyCustomAMI")
        instance = DebugInstanceConstruct(
            scope=app,
            id="CustomImageInstance",
            env_base=env,
            vpc=my_vpc,
            machine_image=custom_image,
        )

     This shows how to specify a custom machine image instead of the default Amazon Linux 2 image.

Source code in src/aibs_informatics_cdk_lib/constructs_/service/debug.py
def __init__(
    self,
    scope: Construct,
    id: str | None,
    env_base: EnvBase,
    vpc: ec2.Vpc,
    name: str = "DebugInstance",
    efs_filesystems: list[efs.IFileSystem | EnvBaseFileSystem] | None = None,
    instance_type: ec2.InstanceType = ec2.InstanceType("t3.medium"),
    machine_image: ec2.IMachineImage | None = None,
    instance_name: str | None = None,
    instance_role_name: str | None = None,
    instance_role_policy_statements: list[iam.PolicyStatement] | None = None,
    **kwargs,
) -> None:
    super().__init__(scope, id, env_base, **kwargs)

    # Security group for the instance
    self.debug_sg = ec2.SecurityGroup(
        self,
        f"{name}-SG",
        vpc=vpc,
        allow_all_outbound=True,
        description="Security group for Debug EC2 Instance",
    )

    # IAM role for the instance
    self.instance_role = iam.Role(
        self,
        f"{name}-InstanceRole",
        assumed_by=cast(iam.IPrincipal, iam.ServicePrincipal("ec2.amazonaws.com")),
        role_name=self.get_resource_name(instance_role_name or f"{name}-InstanceRole"),
        description="Role for Debug EC2 Instance",
        managed_policies=[
            iam.ManagedPolicy.from_aws_managed_policy_name("AmazonSSMManagedInstanceCore"),
            iam.ManagedPolicy.from_aws_managed_policy_name("AmazonS3ReadOnlyAccess"),
        ],
        inline_policies={
            "UserSpecifiedPolicies": iam.PolicyDocument(
                statements=instance_role_policy_statements,
            )
        }
        if instance_role_policy_statements
        else None,
    )

    if machine_image is None:
        machine_image = ec2.MachineImage.latest_amazon_linux2(
            storage=ec2.AmazonLinuxStorage.GENERAL_PURPOSE,
            cpu_type=ec2.AmazonLinuxCpuType.X86_64,
            edition=ec2.AmazonLinuxEdition.STANDARD,
            user_data=(user_data := ec2.UserData.for_linux()),
        )
        user_data.add_commands(
            "yum -y update",
            # These are necessary for EFS mounting
            "yum -y install amazon-efs-utils",
            # These are useful for debugging
            "yum -y install jq tree",
        )

    # Create the EC2 Instance
    self.instance = ec2.Instance(
        self,
        f"{name}-Instance",
        instance_type=instance_type,
        machine_image=cast(ec2.IMachineImage, machine_image),
        instance_name=self.get_resource_name(instance_name or f"{name}-Instance"),
        vpc=vpc,
        role=cast(iam.IRole, self.instance_role),
        vpc_subnets=ec2.SubnetSelection(subnet_type=ec2.SubnetType.PUBLIC),
        security_group=self.debug_sg,
    )

    # If any EFS file systems are passed in, mount each one
    if efs_filesystems:
        for i, filesystem in enumerate(efs_filesystems):
            # Allow our instance to connect on the EFS mount target port
            filesystem.connections.allow_default_port_from(self.debug_sg)

            # A path for each EFS mount
            if isinstance(filesystem, EnvBaseFileSystem):
                mount_path = f"/mnt/efs/{filesystem.file_system_name}"
            else:
                mount_path = f"/mnt/efs/{filesystem.file_system_id}"

            self.instance.user_data.add_commands(
                f"mkdir -p {mount_path}",
                f"mount -t efs -o tls {filesystem.file_system_id}:/ {mount_path}",
            )
            grant_connectable_file_system_access(filesystem, self.instance, "rw")
Functions

lims2_connection

Classes
LimsConnectionConstruct
LimsConnectionConstruct(
    scope: Construct,
    id: str | None,
    env_base: EnvBase,
    target_vpc: Vpc,
    vpc_endpoint_service_name: str,
    **kwargs
)

Bases: EnvBaseConstruct

This construct takes an ec2.Vpc as input and attaches a VPC interface endpoint that allows connections to another account/VPC hosting an on-prem LIMS2 connection.

vpc_endpoint_service_name should be the DNS name of the service running the LIMS2 connection and should look something like: "com.amazonaws.vpce.{region}.vpce-svc-{service_id}"

Source code in src/aibs_informatics_cdk_lib/constructs_/service/lims2_connection.py
def __init__(
    self,
    scope: constructs.Construct,
    id: str | None,
    env_base: EnvBase,
    target_vpc: ec2.Vpc,
    vpc_endpoint_service_name: str,
    **kwargs,
):
    super().__init__(scope, id, env_base, **kwargs)

    self.vpc_endpoint_service_name = vpc_endpoint_service_name
    self.target_vpc = target_vpc
    self.add_lims_vpc_endpoint()
    self.add_lims_vpc_endpoint_dns_alias()
Functions
add_lims_vpc_endpoint
add_lims_vpc_endpoint()

Add a VPC endpoint to our target_vpc that connects to the LIMS2 endpoint service (located in another AWS account/VPC managed by the cloud infra team).

Useful documentation: https://alleninstitute.atlassian.net/wiki/spaces/IT/pages/740360228/Accessing+LIMS2+from+AWS https://docs.aws.amazon.com/vpc/latest/privatelink/create-endpoint-service.html#connect-to-endpoint-service https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.aws_ec2-readme.html#vpc-endpoints

Pricing: https://aws.amazon.com/privatelink/pricing/ (see: Interface Endpoint pricing section)

Source code in src/aibs_informatics_cdk_lib/constructs_/service/lims2_connection.py
def add_lims_vpc_endpoint(self):
    """Add a VPC endpont to our target_vpc that connects to the LIMS2 endpoint
    service (located in another AWS account/VPC managed by the cloud infra team).

    Useful documentation:
    https://alleninstitute.atlassian.net/wiki/spaces/IT/pages/740360228/Accessing+LIMS2+from+AWS
    https://docs.aws.amazon.com/vpc/latest/privatelink/create-endpoint-service.html#connect-to-endpoint-service
    https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.aws_ec2-readme.html#vpc-endpoints

    Pricing:
    https://aws.amazon.com/privatelink/pricing/ (see: Interface Endpoint pricing section)
    """

    # NOTE: Currently this endpoint only gets associated with 1 subnet because the
    #       endpoint we deploy in our VPC can only be deployed in the same AZs as the
    #       source Service Endpoint. If we would like to have our InterfaceVpcEndpoint
    #       available in multiple AZs, we would need to request cloud infra team to
    #       increase the number of AZs that the source service is deployed to.
    #       See this related question: https://stackoverflow.com/questions/60081850/
    self.lims_vpc_endpoint = ec2.InterfaceVpcEndpoint(
        scope=self,
        id="External LIMS2 Network Load Balancer VPC Endpoint",
        # Obtained from cloud infra team, any changes on their end we also need to update here
        service=ec2.InterfaceVpcEndpointService(name=self.vpc_endpoint_service_name),
        vpc=self.target_vpc,
        open=True,
        lookup_supported_azs=True,
        subnets=ec2.SubnetSelection(one_per_az=True, subnets=self.target_vpc.private_subnets),
    )

    # Need to enable port 80 and 5432 (postgres) for this endpoint
    # https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.aws_ec2-readme.html#allowing-connections
    self.lims_vpc_endpoint.connections.allow_from_any_ipv4(
        port_range=ec2.Port.POSTGRES, description="Postgres port 5432"
    )
    self.lims_vpc_endpoint.connections.allow_from_any_ipv4(
        port_range=ec2.Port.HTTP, description="HTTP port 80"
    )
add_lims_vpc_endpoint_dns_alias
add_lims_vpc_endpoint_dns_alias()

Add a route53 private hosted zone DNS resolver that will allow us to contact the LIMS2 VPC endpoint service using a less unwieldy DNS name.

Useful documentation: https://stackoverflow.com/a/78258885

Pricing: https://aws.amazon.com/route53/pricing/

Source code in src/aibs_informatics_cdk_lib/constructs_/service/lims2_connection.py
def add_lims_vpc_endpoint_dns_alias(self):
    """Add a route53 private hosted zone DNS resolver that will allow us to contact the
    LIMS2 VPC endpoint service using a less unwieldy DNS name.

    Useful documentation:
    https://stackoverflow.com/a/78258885

    Pricing:
    https://aws.amazon.com/route53/pricing/
    """

    # This costs money, but at our usage levels it shouldn't be an issue
    self.private_hosted_zone = aws_route53.PrivateHostedZone(
        scope=self,
        id="Lims2VpcEndpointPrivateHostedZone",
        zone_name="lims2.corp.alleninstitute.org",
        vpc=self.target_vpc,
        comment=(
            "A route53 private hosted zone that contains an Alias that can resolve DNS "
            "queries to `lims2.corp.alleninstitute.org` to the InterfaceVpcEndpoint hosting "
            "the LIMS2 VPC endpoint service"
        ),
    )

    # Use an ARecord (alias record) instead of a CNAME record because CNAME queries cost money
    alias_record_target = cast(
        aws_route53.IAliasRecordTarget,
        aws_route53_targets.InterfaceVpcEndpointTarget(self.lims_vpc_endpoint),
    )
    self.lims2_vpc_endpoint_arecord = aws_route53.ARecord(
        scope=self,
        id="Lims2VpcEndpointAliasRecord",
        target=aws_route53.RecordTarget.from_alias(alias_target=alias_record_target),
        zone=self.private_hosted_zone,
        delete_existing=True,
    )
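The construct expects `vpc_endpoint_service_name` in the shape described above, "com.amazonaws.vpce.{region}.vpce-svc-{service_id}". A small sanity-check sketch of that shape (the helper and the exact service-id alphabet are assumptions for illustration, not part of the library):

```python
import re


def looks_like_vpce_service_name(name: str) -> bool:
    # Checks the "com.amazonaws.vpce.{region}.vpce-svc-{service_id}" shape
    # described in the class docstring; the hex service-id pattern is an
    # illustrative assumption.
    pattern = r"com\.amazonaws\.vpce\.[a-z0-9-]+\.vpce-svc-[0-9a-f]+"
    return re.fullmatch(pattern, name) is not None


looks_like_vpce_service_name("com.amazonaws.vpce.us-west-2.vpce-svc-0123456789abcdef0")  # True
looks_like_vpce_service_name("vpce-svc-0123456789abcdef0")                               # False
```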