Infrastructure documentation has a freshness problem. The diagram was accurate when the principal architect drew it eighteen months ago. Since then: three migrations, two cloud provider changes, a Kubernetes upgrade, and a restructuring of the VPC that nobody updated in Confluence because the sprint was already over capacity. The diagram is now a historical artefact masquerading as a reference. Claude Cowork for infrastructure documentation solves this by reading the actual state from your infrastructure code and generating documentation that reflects what's deployed, not what was planned.

This article is part of the Claude Cowork for DevOps engineers series. It focuses specifically on infrastructure documentation — architecture docs, configuration references, and change documentation. For the related DevOps workflows, see the guides on runbook generation, incident post-mortems, and DevOps automation workflows.

The Infrastructure Documentation Problem

Infrastructure documentation fails for the same reason all documentation fails: it's a separate activity from the work, it has no immediate feedback mechanism when it becomes wrong, and it competes for time against tasks with visible, immediate consequences. The engineer who just deployed the VPC change knows documentation is important. They also have 14 open Jira tickets and a sprint review in 40 minutes.

Claude Cowork approaches infrastructure documentation differently — it reads your infrastructure code and generates the documentation from it, rather than requiring engineers to write documentation separately. The source of truth is your Terraform state and your Kubernetes manifests. The documentation is derived from that truth, automatically.

Architecture Documentation

Generate accurate architecture docs from Terraform modules, VPC configs, and service dependencies. Readable by engineers who didn't design the system.

Configuration References

Document every environment's configuration — what runs where, with what settings, connected to what. Derived from actual configs, not from memory.

Change Documentation

Every infrastructure change gets a human-readable change record. Generated from the Terraform plan before apply; updated after apply with the actual outcome.

Dependency Maps

Document service-to-service and service-to-infrastructure dependencies. Useful for incident blast radius assessment and for capacity planning.

The Infrastructure Documentation Workflow

Baseline documentation generation

The first run produces the baseline. Feed Cowork your complete Terraform repository (or the relevant modules), your Kubernetes manifests, and any existing architecture diagrams or notes. Cowork reads the infrastructure definition and produces a structured documentation set: architecture overview, component inventory, network topology description, and configuration reference. This is the before state — accurate as of today.
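As a sketch of what "reading the infrastructure definition" means in practice: the JSON emitted by `terraform show -json` can be walked to produce a component inventory. The function below is a minimal illustration under that assumption, not part of Cowork — the field names follow Terraform's documented JSON output format.

```python
import json

def component_inventory(state_json: str) -> list[dict]:
    """Extract a component inventory from `terraform show -json` output.

    Walks the root module (and any child modules) and returns one row
    per managed resource: address, type, and provider.
    """
    state = json.loads(state_json)
    rows = []

    def walk(module):
        for res in module.get("resources", []):
            rows.append({
                "address": res["address"],
                "type": res["type"],
                "provider": res.get("provider_name", "unknown"),
            })
        for child in module.get("child_modules", []):
            walk(child)

    walk(state.get("values", {}).get("root_module", {}))
    return rows

# Minimal example state document:
sample = json.dumps({
    "values": {"root_module": {"resources": [
        {"address": "aws_vpc.main", "type": "aws_vpc",
         "provider_name": "registry.terraform.io/hashicorp/aws"},
    ]}}
})
print(component_inventory(sample))
```

The same traversal is what turns a sprawling Terraform state into the component inventory table in section 2 of the prompt output.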

Per-deployment documentation updates

After each infrastructure deployment, run the change documentation prompt with the Terraform plan output and the diff from the previous state. Cowork generates the change record and identifies which sections of the baseline documentation need updating. This keeps documentation current with each deployment cycle rather than drifting over time.
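The change record is derived from the plan's `resource_changes` list. A minimal sketch of the grouping step — the field names follow Terraform's JSON plan format, and `change_summary` is an illustrative helper, not a Cowork API:

```python
import json

def change_summary(plan_json: str) -> dict[str, list[str]]:
    """Group resource addresses by change type from a Terraform plan.

    Expects the output of `terraform show -json plan.out`. Resources
    whose only action is "no-op" are the raw material for the
    "what didn't change" section of the change record.
    """
    plan = json.loads(plan_json)
    groups: dict[str, list[str]] = {
        "create": [], "update": [], "delete": [], "no-op": [],
    }
    for rc in plan.get("resource_changes", []):
        # A replace appears as ["delete", "create"], so iterate actions.
        for action in rc["change"]["actions"]:
            if action in groups:
                groups[action].append(rc["address"])
    return groups

sample = json.dumps({"resource_changes": [
    {"address": "aws_subnet.private", "change": {"actions": ["create"]}},
    {"address": "aws_vpc.main", "change": {"actions": ["no-op"]}},
]})
print(change_summary(sample))
```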

Quarterly reconciliation

Quarterly, run the full baseline generation against the current state and compare it to the existing documentation. Cowork identifies what changed, what drifted, and what's missing. The quarterly reconciliation catches documentation that was skipped during high-velocity periods.
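At its core, reconciliation is a set comparison between documented and deployed resource addresses. A minimal sketch, with `reconcile` as an illustrative helper rather than a Cowork API:

```python
def reconcile(documented: set[str], deployed: set[str]) -> dict[str, set[str]]:
    """Compare documented resource addresses against what is deployed.

    Returns what is deployed but undocumented (drift to add) and what
    is documented but no longer deployed (stale sections to remove).
    """
    return {
        "undocumented": deployed - documented,
        "stale": documented - deployed,
    }

docs = {"aws_vpc.main", "aws_db_instance.primary"}
live = {"aws_vpc.main", "aws_eks_cluster.prod"}
print(reconcile(docs, live))
```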

Infrastructure Documentation Prompt Templates

Architecture Overview Generation
Generate architecture documentation for our [ENVIRONMENT NAME] environment.

Source files:
- Terraform modules [attached or linked repository]
- Kubernetes manifests [attached]
- Existing architecture notes [attached if available]

Produce:
1. ARCHITECTURE OVERVIEW
   - What this environment does (business purpose)
   - Cloud provider, region(s), availability zones
   - High-level component diagram (described in text — include a Mermaid diagram if you can)

2. COMPONENT INVENTORY
   Table: Component name | Type | Purpose | Size/Scale | Key Config

3. NETWORK TOPOLOGY
   - VPC/network structure
   - Subnets (public, private, database tiers)
   - Security groups and their purposes
   - Ingress/egress rules (summarised, not exhaustive)

4. COMPUTE AND SERVICES
   - What runs on EC2/GKE/AKS (or equivalent)
   - Container orchestration setup
   - Auto-scaling configuration

5. DATA SERVICES
   - Databases (type, version, size, backup config)
   - Caches (Redis, Memcached, etc.)
   - Storage (S3 buckets and their purposes)
   - Message queues

6. EXTERNAL DEPENDENCIES
   - Third-party services this environment calls
   - External APIs and their criticality
   - CDN configuration

7. KNOWN GAPS
   List anything you couldn't derive from the Terraform alone and would need a human to verify.

Audience: Platform engineers who are new to this environment. Assume they know Kubernetes and Terraform but don't know our specific setup.

Change Documentation (Post-Deploy)
Generate change documentation for the infrastructure deployment that completed at [DATE/TIME].

Pre-deploy state documentation [attached]
Terraform plan that was applied [attached]
Terraform apply output [attached if available]

Produce:
1. CHANGE SUMMARY (2-3 sentences, non-technical)
2. WHAT CHANGED (resource by resource, with before/after values for key attributes)
3. WHAT DIDN'T CHANGE (confirm what stayed stable — useful for blast radius assessment)
4. CONFIGURATION DRIFT DETECTED (anything in the apply output that differed from the plan)
5. DOCUMENTATION UPDATES REQUIRED (which sections of the baseline docs need updating, with the specific changes)

Format as a Confluence change record. Include a table of changes with resource, change type (create/modify/destroy), and impact assessment.

Dependency Map Generation
Map the service dependencies for [SERVICE or ENVIRONMENT NAME].

Sources:
- Service repository [attached]
- Terraform configs [attached]
- Network/security group configs [attached]

Map:
1. Services this component calls (outbound dependencies)
   - Service name, protocol, port, criticality (blocking vs. non-blocking)
2. Services that call this component (inbound dependencies)
3. Infrastructure dependencies (databases, caches, queues, storage)
4. External dependencies (third-party APIs, CDNs)

For each dependency, include:
- Failure impact: what happens to this service if the dependency fails
- Timeout/retry configuration (if detectable from code)
- Circuit breaker or fallback (if configured)

Output as a Confluence page with a Mermaid dependency diagram.
This will be used for incident blast radius assessment.

Keeping Infrastructure Docs Current: The CI/CD Integration

The most durable approach to infrastructure documentation currency is integrating it into the deployment pipeline. When a Terraform apply completes, trigger the Cowork documentation update automatically. This is achievable through the Terraform Cloud webhook integration with Cowork's MCP connector.

The pipeline integration works as follows:

  • Terraform Cloud webhook fires on successful apply → triggers Cowork skill
  • Cowork reads the plan output and the existing documentation in Confluence
  • Cowork generates the change record and identifies documentation sections to update
  • Cowork creates a Confluence page with the change record and a PR-style comment on the documentation page with proposed updates
  • An engineer approves the documentation update (typically 2–3 minutes)

The approval step is intentional. Fully automated documentation updates carry the risk of auto-generating inaccurate content that then sits in Confluence looking authoritative. The human review step keeps the quality bar while eliminating the effort of writing from scratch.
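The trigger step can be sketched as a small webhook receiver. This is an illustrative skeleton, not a supported integration: the payload shape follows Terraform Cloud's notification format, but verify it against your own notification configuration, and the doc-update call is left as a placeholder.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def runs_to_document(event: dict) -> list[str]:
    """Return run IDs that completed a successful apply.

    Assumes the Terraform Cloud notification payload: a top-level
    "run_id" plus a "notifications" list whose entries carry
    "run_status". Verify against your notification configuration.
    """
    statuses = {n.get("run_status") for n in event.get("notifications", [])}
    return [event["run_id"]] if "applied" in statuses else []

class ApplyWebhook(BaseHTTPRequestHandler):
    def do_POST(self):
        body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
        event = json.loads(body or b"{}")
        for run_id in runs_to_document(event):
            # Placeholder: invoke the Cowork documentation skill here.
            print(f"documentation update queued for run {run_id}")
        self.send_response(200)
        self.end_headers()

# To run locally: HTTPServer(("", 8080), ApplyWebhook).serve_forever()
```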

For teams using GitOps: If your infrastructure definitions live in a git repository managed by ArgoCD or Flux, you can trigger the documentation update on merge to main rather than on Terraform apply. The Cowork GitHub/GitLab connector supports this workflow. The documentation PR can be created automatically in the same repository as your infrastructure code — docs-as-code, generated from code.

For teams that need custom integration between Cowork and non-standard CI/CD platforms, our MCP server development service builds the connectors for your specific stack. The architecture is documented in the MCP Protocol Guide.

Frequently Asked Questions

Can Cowork generate Mermaid or other diagram formats from Terraform?

Yes. Claude understands Terraform HCL and can produce Mermaid diagram syntax describing the architecture — service relationships, network topology, data flow. These Mermaid diagrams render directly in Confluence, Notion, and GitHub READMEs. The diagrams are text-based, which means they're version-controllable and can be updated automatically as infrastructure changes. They're not pixel-perfect architecture diagrams, but they accurately represent the structure and are far better than no diagram or an outdated Visio file.
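Rendering Mermaid from a dependency list is mechanical, which is why it works well as an automated step. An illustrative sketch — the `to_mermaid` helper and the edge list are hypothetical:

```python
def to_mermaid(edges: list[tuple[str, str]]) -> str:
    """Render service dependency edges as a Mermaid flowchart.

    Each edge is (caller, dependency); the resulting text block renders
    directly in Confluence, Notion, or a GitHub README.
    """
    lines = ["graph LR"]
    for src, dst in edges:
        lines.append(f"    {src} --> {dst}")
    return "\n".join(lines)

print(to_mermaid([("api", "postgres"), ("api", "redis"), ("worker", "postgres")]))
```

Because the diagram is plain text, it diffs cleanly in version control — which is what makes the automatic updates practical.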

Does Cowork understand multi-cloud or hybrid environments?

Yes. Cowork reads the Terraform configurations for AWS, GCP, Azure, and Kubernetes providers. For hybrid environments — where some infrastructure is in a public cloud and some on-premises — feed Cowork whatever infrastructure-as-code or configuration files describe the on-premises components. Cowork will document what you give it and flag what it can't derive from the provided files. Multi-cloud documentation is increasingly common — for example, in organisations that use AWS for primary workloads and Azure for Microsoft 365 integrations.

How does Cowork handle sensitive values in Terraform (passwords, API keys)?

Cowork reads Terraform configurations, but it should not be given files containing sensitive values (terraform.tfvars with real secrets, .env files with credentials). The standard practice is to feed Cowork the infrastructure code with variable references (e.g., var.db_password) rather than the values file. Cowork will document that a variable exists and its purpose without needing to know the value. Cowork runs within Claude Enterprise's security boundary, which includes no external data sharing, but best practice is to keep secret values out of the Cowork canvas entirely.
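A pre-flight filter along these lines keeps secret-bearing files out of the upload in the first place. The exclusion patterns below are illustrative assumptions — adjust them to your repository's conventions:

```python
import fnmatch

# Hypothetical exclusion list for files that commonly hold real secrets.
EXCLUDE = ["*.tfvars", "*.auto.tfvars", ".env", "*.pem", "*secret*"]

def safe_to_share(path: str) -> bool:
    """Return False for files that should not be fed to Cowork."""
    name = path.rsplit("/", 1)[-1]
    return not any(fnmatch.fnmatch(name, pat) for pat in EXCLUDE)

files = ["main.tf", "prod.tfvars", "variables.tf", ".env"]
print([p for p in files if safe_to_share(p)])
```

Variable declarations in `variables.tf` pass the filter, so Cowork can still document that `var.db_password` exists and what it is for.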

Can we use Cowork to generate documentation for Kubernetes custom resources and operators?

Yes. If you provide the CRD (Custom Resource Definition) files alongside the manifests that use them, Cowork will document the custom resources accurately. For well-known operators (Cert-Manager, External-DNS, ArgoCD, Istio), Cowork has built-in understanding of the resource types and their purposes. For internal operators, the CRD and operator documentation help Cowork produce accurate descriptions.

What about documentation for infrastructure that isn't defined in code (legacy systems, manual configurations)?

This is the hardest case. For legacy infrastructure without infrastructure-as-code, use a hybrid approach: describe the system in a structured format (even a well-organised text document or spreadsheet), feed that to Cowork, and use the architecture generation prompt. The output will be less precise than code-derived documentation, but it still produces a structured starting point. Pairing this with a knowledge transfer session (recorded and transcribed) gives Cowork enough to work with. The longer-term answer is migrating legacy systems to IaC — at which point the documentation generation becomes fully automated.

Infrastructure Documentation

Documentation That Reflects What's Actually Deployed

We deploy Claude Cowork for platform engineering teams, including infrastructure documentation generation as part of the onboarding. Your architecture docs will be current from day one.