
GovCloud Aerospace Platform — Multi-Account AWS Infrastructure

Designed and built a multi-account AWS GovCloud (US) foundation for a NASA mission support program. STIG-hardened, NIST 800-53 compliant, zero public internet exposure.

Client
Aerospace Defense Contractor
Duration
8 months
Technologies
AWS GovCloud (US) | AWS Organizations | Transit Gateway | Network Firewall | EC2 | ECS | VPC | IAM | CloudTrail | Terraform | Ansible | STIG | NIST 800-53

Project Overview

An aerospace defense contractor supporting a NASA science mission required a secure cloud infrastructure capable of meeting federal compliance requirements. The program involved sensitive ground system software and flight data processing — requiring GovCloud isolation, STIG hardening, and NIST 800-53 controls from day one.

Key Stats:

The Challenge

Federal Compliance from the Ground Up

The program had no existing cloud infrastructure. Ground system software was running on on-premises servers with no automation, no audit trail, and no path to federal ATO. The team needed:

Technical Constraints

The Solution

Account Structure

AWS Organizations (GovCloud)
├── Management Account (billing, org policies, IAM Identity Center)
├── Security Account (CloudTrail aggregation, GuardDuty, Config)
├── Infrastructure Account (Transit Gateway, Network Firewall, shared services)
├── Workloads-Dev (development and integration testing)
├── Workloads-Test (formal testing / IV&V)
└── Workloads-Prod (production flight data processing)
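
In Terraform, this layout can be sketched with the AWS Organizations resources; the OU name and account email below are illustrative placeholders, not the program's real values:

```hcl
# Illustrative sketch of the OU/account layout (names and emails are
# placeholders). One resource per member account keeps the structure
# reviewable in code.
resource "aws_organizations_organization" "org" {
  feature_set = "ALL"
}

resource "aws_organizations_organizational_unit" "workloads" {
  name      = "Workloads"
  parent_id = aws_organizations_organization.org.roots[0].id
}

resource "aws_organizations_account" "workloads_prod" {
  name      = "workloads-prod"
  email     = "aws+workloads-prod@example.gov"
  parent_id = aws_organizations_organizational_unit.workloads.id
}
```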

Networking: Defense-in-Depth

All traffic flowed through a centralized inspection layer before reaching workloads:

Internet → (blocked)
Internal → Transit Gateway → Network Firewall → Workload VPCs
Egress   → Network Firewall → NAT Gateway → (allowlisted domains only)
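
The inspection path can be enforced in Terraform by pointing each workload VPC's default route at the Network Firewall endpoint rather than at the NAT Gateway directly; the variable names below are illustrative:

```hcl
# Illustrative: attach a workload VPC to the Transit Gateway and force
# all egress through the Network Firewall endpoint (IDs are placeholders
# supplied via variables).
resource "aws_ec2_transit_gateway_vpc_attachment" "workload" {
  transit_gateway_id = var.transit_gateway_id
  vpc_id             = aws_vpc.workload.id
  subnet_ids         = aws_subnet.tgw[*].id
}

# Default route exits via the firewall endpoint, so every packet is
# inspected before it reaches NAT or another VPC.
resource "aws_route" "egress_via_firewall" {
  route_table_id         = aws_route_table.workload_private.id
  destination_cidr_block = "0.0.0.0/0"
  vpc_endpoint_id        = var.firewall_endpoint_id # NFW endpoint in this AZ
}
```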

STIG Hardening (DISA RHEL 8)

Implemented STIG hardening via Ansible, applied at AMI build time and enforced via AWS Config rules:

Key controls applied:

Built Ansible roles for idempotent application across all instances, including a custom stig-remediation role that:

  1. Ran SCAP scans (OpenSCAP) to baseline the instance
  2. Applied remediations in order of STIG severity (CAT I → II → III)
  3. Re-ran scans to confirm findings were reduced to an acceptable level
  4. Generated report artifacts for ATO documentation
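
The AMI build side of this pipeline can be sketched in Packer HCL; the source AMI filter, owner ID, playbook path, and instance size are assumptions, not the program's actual values:

```hcl
# Illustrative Packer template: bake a STIG-hardened RHEL 8 AMI so
# compliance is in place before any instance boots a workload.
packer {
  required_plugins {
    amazon  = { source = "github.com/hashicorp/amazon", version = ">= 1.0" }
    ansible = { source = "github.com/hashicorp/ansible", version = ">= 1.0" }
  }
}

source "amazon-ebs" "rhel8_stig" {
  region        = "us-gov-west-1"
  instance_type = "m5.large"
  ssh_username  = "ec2-user"
  ami_name      = "rhel8-stig-{{timestamp}}"

  source_ami_filter {
    filters = {
      name                = "RHEL-8*_HVM-*"
      virtualization-type = "hvm"
    }
    owners      = ["309956199498"] # Red Hat's commercial account; differs in GovCloud
    most_recent = true
  }
}

build {
  sources = ["source.amazon-ebs.rhel8_stig"]

  # Runs the scan → remediate (CAT I → II → III) → rescan → report
  # sequence described above inside the AMI build.
  provisioner "ansible" {
    playbook_file = "./playbooks/stig-remediation.yml"
  }
}
```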

IAM and Identity

Containerization: Lift-and-Shift to ECS

Legacy ground system applications were containerized using a pragmatic approach:

  1. Dockerized each application with minimal changes (no 12-factor refactor — scope was infrastructure, not app rewrite)
  2. ECS on EC2 (not Fargate — GovCloud Fargate support was more limited at the time of engagement)
  3. Task roles for fine-grained S3 and SQS access per service
  4. Parameter Store for runtime secrets injection (no plaintext in task definitions)
  5. CloudWatch for container logs centralized to Security account
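
Items 3–5 come together in the task definition; a hedged sketch, where the role ARNs, image URI, parameter path, and log group are placeholders:

```hcl
# Illustrative ECS task definition wiring a task role, Parameter Store
# secrets, and centralized awslogs output (all names/ARNs are placeholders).
resource "aws_ecs_task_definition" "ground_app" {
  family                   = "ground-app"
  requires_compatibilities = ["EC2"]
  network_mode             = "awsvpc"
  cpu                      = "1024"
  memory                   = "2048"
  task_role_arn            = aws_iam_role.ground_app_task.arn # fine-grained S3/SQS access
  execution_role_arn       = aws_iam_role.ecs_execution.arn   # pulls image, reads secrets

  container_definitions = jsonencode([{
    name      = "ground-app"
    image     = "123456789012.dkr.ecr.us-gov-west-1.amazonaws.com/ground-app:latest"
    essential = true

    # Secret is resolved at container start; the plaintext value never
    # appears in the task definition.
    secrets = [{
      name      = "DB_PASSWORD"
      valueFrom = "arn:aws-us-gov:ssm:us-gov-west-1:123456789012:parameter/ground-app/db-password"
    }]

    logConfiguration = {
      logDriver = "awslogs"
      options = {
        awslogs-group  = "/ecs/ground-app"
        awslogs-region = "us-gov-west-1"
      }
    }
  }])
}
```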

Compliance Documentation

Maintained a living ATO package:

Results & Impact

Compliance Outcomes

Operational Improvements

Engineering Benefits

Key Takeaways

What Worked

  1. Ansible for STIG at AMI build time: Baking compliance in beats trying to remediate running instances
  2. Network Firewall domain allowlist: Simpler to maintain than IP-based egress lists; blocks unexpected egress from supply-chain compromises
  3. NIST control mapping from day one: Retrofitting compliance documentation is painful; building it in parallel with implementation saves weeks
  4. ECS lift-and-shift: Containerizing without refactoring was the right scope — got workloads into cloud quickly without blocking on app changes
  5. Break-glass accounts offline: Auditors appreciated the documented emergency access procedure

What I’d Do Differently

  1. Fargate over EC2 for ECS: Eliminates EC2 patching burden; worth revisiting as GovCloud Fargate support has improved
  2. AWS Config conformance packs earlier: Custom Config rules per control are tedious; AWS Security Hub + conformance packs now cover much of NIST 800-53 automatically
  3. More Terraform modules, less copy-paste: Per-account Terraform directories shared too much boilerplate; Terragrunt would have helped here

Lessons Learned

Technical Deep Dive

Network Firewall Rule Groups

Domain-based egress filtering prevented data exfiltration while allowing necessary software updates:

# Stateful domain allow list
resource "aws_networkfirewall_rule_group" "egress_allow" {
  name     = "egress-domain-allowlist"
  type     = "STATEFUL"
  capacity = 100

  rule_group {
    rules_source {
      rules_source_list {
        generated_rules_type = "ALLOWLIST"
        target_types         = ["HTTP_HOST", "TLS_SNI"]
        targets = [
          ".amazonaws.com",
          ".amazonlinux.com",
          "rhui.redhat.com",
        ]
      }
    }
  }
}

Effect: Instances can reach AWS APIs and RHUI for patches. All other egress dropped.

Terraform Module Structure

Separate state per account, shared modules:

infrastructure/
├── modules/
│   ├── vpc/              # VPC, subnets, route tables
│   ├── ecs-cluster/      # ECS cluster, capacity providers
│   ├── ecs-service/      # Task definition, service, task role
│   └── stig-ami/         # Packer + Ansible AMI pipeline
├── accounts/
│   ├── management/       # Org, IAM Identity Center
│   ├── security/         # CloudTrail, GuardDuty, Config
│   ├── infrastructure/   # TGW, NFW, shared networking
│   ├── workloads-dev/
│   ├── workloads-test/
│   └── workloads-prod/

State in S3 (GovCloud), locked via DynamoDB. No Terraform Cloud — all state must remain in GovCloud partition.
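
A per-account backend block consistent with that setup might look like the following (bucket and table names are placeholders); keeping both state and locks on GovCloud services means nothing leaves the aws-us-gov partition:

```hcl
# Illustrative backend config for one account directory (names are
# placeholders). Each account gets its own state key; DynamoDB provides
# the state lock.
terraform {
  backend "s3" {
    bucket         = "example-tfstate-govcloud"
    key            = "accounts/workloads-prod/terraform.tfstate"
    region         = "us-gov-west-1"
    encrypt        = true
    dynamodb_table = "terraform-locks"
  }
}
```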


Building federal cloud infrastructure? Schedule a free consultation to discuss your compliance and architecture challenges.

Technologies: AWS GovCloud (US) | AWS Organizations | Transit Gateway | Network Firewall | ECS | EC2 | Terraform | Ansible | STIG | NIST 800-53 | OpenSCAP | CloudTrail | GuardDuty

Working on a similar challenge?

Multi-account AWS architecture, container migration, Terraform adoption — this is the work I do as a fractional engagement.