From a Single Snapshot to a History
This is the fifth post in a series documenting the build-out of a Canadian economic indicators dashboard. Stage 1 covered the original problem. Stage 2 moved the dashboard into the Hugo site. Stage 3 moved all data fetching into AWS Lambda. Stage 4 replaced ETF proxies with direct data sources.
Stage 4 left the pipeline in a clean state: Lambda runs every 30 minutes, pulls eight indicators from five sources, and writes a single JSON snapshot to S3. The dashboard loads that snapshot and renders everything from it.
The snapshot is exactly that — a single point in time. Every 30 minutes it gets overwritten. Nothing is retained.
Stage 5 changes that.
What Changes
Each Lambda run now does two things:
- Write the current snapshot to `data/indicators.json` as before — unchanged.
- Write the same snapshot as a timestamped record to a DynamoDB table.
Once records accumulate, a second Lambda task aggregates them into pre-built history files — one per period — and writes those to S3. The dashboard gains a period selector: 30D (current behaviour), 3M, and 6M. Selecting a longer period fetches the corresponding file and re-renders the sparklines.
No API Gateway. No new CloudFront behaviors. History files are static JSON served through the existing data/* path.
DynamoDB Table Design
A single table: `econ-indicators-history`.
| Attribute | Type | Role |
|---|---|---|
| `pk` | String | Partition key — always `"SNAPSHOT"` |
| `ts` | String | Sort key — ISO timestamp, e.g. `"2026-04-05T00:30:00Z"` |
| `ttl` | Number | Unix epoch, 1 year from write — DynamoDB deletes expired items automatically |
A single partition key value keeps queries simple: `pk = "SNAPSHOT" AND ts BETWEEN start AND end`. At 48 writes per day, the table never grows beyond ~17,500 live items (365 days × 48 runs, with TTL removing anything older than a year).
Each item carries the same indicator values as `indicators.json` — roughly 2–3 KB per record.
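A minimal sketch of the per-run write with boto3, assuming the table and attribute names above; the `write_history_record` name is illustrative, and the `indicators` dict mirrors the fields in `indicators.json`:

```python
import time
from datetime import datetime, timezone
from decimal import Decimal

import boto3

table = boto3.resource("dynamodb").Table("econ-indicators-history")


def write_history_record(indicators: dict) -> None:
    """Write one timestamped snapshot item; TTL is set one year out."""
    now = datetime.now(timezone.utc)
    item = {
        "pk": "SNAPSHOT",
        "ts": now.strftime("%Y-%m-%dT%H:%M:%SZ"),
        "ttl": int(time.time()) + 365 * 24 * 60 * 60,
        # DynamoDB numbers must be Decimal, not float, when using the resource API.
        **{k: Decimal(str(v)) for k, v in indicators.items()},
    }
    table.put_item(Item=item)


# e.g. write_history_record({"sp": 540.2, "tsx": 24800, "oil": 72.5, "cad": 0.718,
#                            "b5": 2.85, "b10": 3.22, "boc": 4.25, "cpi_yoy": 2.1})
```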
History File Schema
Lambda aggregates the DynamoDB records into daily snapshots — one entry per calendar day, taking the latest run of that day. The output written to S3:
```json
{
  "generated_at": "2026-04-05T00:30:00Z",
  "days": 90,
  "snapshots": [
    {
      "date": "2026-01-06",
      "sp": 540.2, "tsx": 24800, "oil": 72.5,
      "cad": 0.718, "b5": 2.85, "b10": 3.22,
      "boc": 4.25, "cpi_yoy": 2.1
    }
  ]
}
```
Two files are written: `data/history-90d.json` and `data/history-180d.json`, with snapshots ordered oldest to newest.
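A sketch of how that aggregation could work under the schema above: query the period, keep the latest run per calendar day, and write the result to S3. The function name, bucket handling, and pagination details are illustrative rather than the actual implementation:

```python
import json
from datetime import datetime, timedelta, timezone

import boto3
from boto3.dynamodb.conditions import Key

table = boto3.resource("dynamodb").Table("econ-indicators-history")
s3 = boto3.client("s3")


def build_history_file(days: int, bucket: str, key: str) -> None:
    """Aggregate DynamoDB records into one daily snapshot per calendar day."""
    now = datetime.now(timezone.utc)
    start = (now - timedelta(days=days)).strftime("%Y-%m-%dT%H:%M:%SZ")
    end = now.strftime("%Y-%m-%dT%H:%M:%SZ")

    # Query the period; follow LastEvaluatedKey in case the result paginates.
    items, kwargs = [], {
        "KeyConditionExpression": Key("pk").eq("SNAPSHOT") & Key("ts").between(start, end)
    }
    while True:
        resp = table.query(**kwargs)
        items.extend(resp["Items"])
        if "LastEvaluatedKey" not in resp:
            break
        kwargs["ExclusiveStartKey"] = resp["LastEvaluatedKey"]

    # Items arrive sorted ascending by ts, so the last item seen for a date
    # is the latest run of that day.
    daily = {}
    for item in items:
        date = item["ts"][:10]
        daily[date] = {
            "date": date,
            **{k: float(v) for k, v in item.items() if k not in ("pk", "ts", "ttl")},
        }

    body = {"generated_at": end, "days": days, "snapshots": [daily[d] for d in sorted(daily)]}
    s3.put_object(Bucket=bucket, Key=key, Body=json.dumps(body), ContentType="application/json")
```

Called twice per regeneration: once with `days=90` for `data/history-90d.json` and once with `days=180` for `data/history-180d.json`.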
Regeneration Cadence
The history files only need to be regenerated once per day — the sparklines show daily granularity, not 30-minute granularity. Regenerating on every Lambda run (48 times per day) would query DynamoDB 48 times for data that changes once a day.
The Lambda detects whether it is the first run after midnight UTC and regenerates the history files only then. All other runs write only the DynamoDB record and the current `indicators.json`.
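One simple way to express that check on a 30-minute schedule is to treat the run whose start time falls in the 00:00 to 00:29 UTC window as the first of the day; a sketch, not necessarily the exact condition used:

```python
from datetime import datetime, timezone


def is_first_run_after_midnight_utc() -> bool:
    """True only for the run landing in the 00:00-00:29 UTC window of a 30-minute schedule."""
    now = datetime.now(timezone.utc)
    return now.hour == 0 and now.minute < 30
```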
Cost
This stage adds one new AWS service: DynamoDB. Everything else — Lambda, S3, CloudFront, EventBridge — is unchanged.
| Item | Monthly estimate |
|---|---|
| DynamoDB writes (48/day × 30 days) | < $0.01 |
| DynamoDB storage (44 MB peak, within 25 GB free tier) | < $0.02 |
| DynamoDB reads (once-daily history regeneration) | ~$0.12 |
| S3 PutObject (2 files/day) | < $0.01 |
| Total addition | ~$0.15/month |
The free tier covers storage entirely (25 GB included always-free). Reads are the variable cost — kept to ~$0.12/month by regenerating the history files once per day rather than on every Lambda run.
Current site cost before Stage 5 is approximately $1–3/month. Stage 5 adds roughly 15 cents.
AWS Infrastructure Required
Two changes to the existing setup before the code can run:
- DynamoDB table — `econ-indicators-history`, on-demand capacity, PK `pk` (String), SK `ts` (String), TTL attribute `ttl`.
- Lambda execution role — add `dynamodb:PutItem` and `dynamodb:Query` permissions scoped to the new table.
The rest of the infrastructure — Lambda function, EventBridge rule, S3 bucket, CloudFront distribution — is unchanged.
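For reference, a sketch of creating the same table with boto3 (the console or any IaC tool works just as well; this only mirrors the settings listed above):

```python
import boto3

client = boto3.client("dynamodb")

# On-demand table with the pk/ts composite key described above.
client.create_table(
    TableName="econ-indicators-history",
    AttributeDefinitions=[
        {"AttributeName": "pk", "AttributeType": "S"},
        {"AttributeName": "ts", "AttributeType": "S"},
    ],
    KeySchema=[
        {"AttributeName": "pk", "KeyType": "HASH"},
        {"AttributeName": "ts", "KeyType": "RANGE"},
    ],
    BillingMode="PAY_PER_REQUEST",
)

# TTL is enabled as a separate call once the table exists.
client.get_waiter("table_exists").wait(TableName="econ-indicators-history")
client.update_time_to_live(
    TableName="econ-indicators-history",
    TimeToLiveSpecification={"AttributeName": "ttl", "Enabled": True},
)
```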
Backfilling Historical Data
Going live with an empty table means the 3M and 6M sparklines would be blank for months while the Lambda accumulated records. That is a bad first impression of a feature that is supposed to show history.
Each upstream API supports historical date ranges. A one-time backfill script (`scripts/backfill_history.py`) fetches 180 days of history from each source:
| Source | How far back | Notes |
|---|---|---|
| Yahoo Finance | 1 year (`range=1y`) | TSX daily closes |
| FRED | Arbitrary date range | WTI spot price, skips weekends |
| Twelve Data | `outputsize=200` | SPY and CAD/USD — 200 trading days |
| BoC Valet | Arbitrary date range | Rate and bond yields; rate forward-filled between decisions |
| Statistics Canada | `latestN=30` | CPI monthly; YoY rate forward-filled across days in each month |
The script builds a date spine from trading days, aligns all sources, and writes one DynamoDB item per calendar day with a `{date}T12:00:00Z` timestamp — clearly distinguishable from live 30-minute run records. It checks existing items before writing and skips any date already present, so it is safe to re-run.
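The skip-if-present behaviour can be expressed as a conditional write; a sketch of one way to do it (the real script may instead check with a read first, and the helper name is illustrative):

```python
from botocore.exceptions import ClientError


def put_backfill_item(table, item: dict) -> bool:
    """Write a backfilled day only if no item with that sort key exists yet."""
    try:
        table.put_item(
            Item=item,
            # pk + ts is the full key, so this condition fails when the date already exists.
            ConditionExpression="attribute_not_exists(ts)",
        )
        return True
    except ClientError as err:
        if err.response["Error"]["Code"] == "ConditionalCheckFailedException":
            return False  # already backfilled, skip
        raise
```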
Result: 122 daily records written covering October 2025 through April 2026. The 3M and 6M sparklines were populated immediately on launch.
One practical issue surfaced during setup: with history regeneration gated on the first run after midnight UTC, there was no way to trigger it on demand for testing. A `force_history` event flag (`{"force_history": true}`) was added to force immediate regeneration, and it remains as a permanent testing convenience.
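Putting the flag and the midnight check together, the regeneration decision reduces to something like this sketch (the helper name is illustrative):

```python
from datetime import datetime, timezone


def should_regenerate_history(event: dict) -> bool:
    """Regenerate on the first run after midnight UTC, or when forced via the event."""
    if event.get("force_history"):
        return True
    now = datetime.now(timezone.utc)
    return now.hour == 0 and now.minute < 30  # first run of the day on the 30-minute schedule
```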
What Comes Next
Stage 6 adds threshold alerting — Lambda detects when an indicator crosses a meaningful threshold (5yr yield up >0.3% in a week, CPI above 3%, yield curve inversion) and publishes to SNS for email notification, with a 24-hour deduplication window to prevent repeated alerts when a value sits above a threshold for an extended period.
Next in the series: Stage 6 — threshold alerting.
This dashboard is an informational tool for personal use. It is not financial advice. Mortgage decisions depend on personal circumstances that no dashboard can capture. Consult a mortgage broker or financial advisor before making rate decisions.