STRC Site — Per-Component i18n Architecture
Architecture decision for internationalization on the STRC Research Portal (strc.egor.lol).
Decision: Per-Component JSON Files
Instead of monolithic en.json / zh.json, split translations into per-component files.
Pattern:
src/components/mini-strc/i18n/
ComponentName.en.json
ComponentName.zh.json
Helper: src/lib/component-i18n.ts exports useComponentTranslations(lang, translations)
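A minimal sketch of what the helper could look like. Only the exported name and parameter list come from the note above; the `Lang` type and the English/raw-key fallback behavior are assumptions for illustration, not confirmed implementation details:

```typescript
// Hypothetical sketch of src/lib/component-i18n.ts.
// The fallback chain (target locale -> English -> raw key) is assumed.

type Lang = "en" | "zh";
type Messages = Record<string, string>;
type ComponentTranslations = { en: Messages } & Partial<Record<Lang, Messages>>;

// Returns a lookup function bound to the active language.
// A missing translation degrades gracefully (English, then the raw key)
// instead of breaking the component.
export function useComponentTranslations(
  lang: Lang,
  translations: ComponentTranslations
): (key: string) => string {
  return (key) => translations[lang]?.[key] ?? translations.en[key] ?? key;
}
```

With this shape, each component passes in only its own two JSON files, so no call site ever touches the full key set of the site.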
Why the Monolith Failed
- Chinese translation required 716 keys in one file — huge context for LLMs
- Sub-agents hit token limits on large translation batches
- Workarounds required chunking logic and complex merge pipelines, and introduced JSON escape bugs
- Hard to maintain: one malformed key breaks the entire locale
Why Per-Component Works
- Each translation file scoped to one component (~20-50 keys vs 700+)
- LLM can translate each file cleanly in one pass, no chunking
- Component-level diffs are reviewable
- New components don’t pollute global namespace
- Cache invalidation: change one component’s translations, only that file regenerates
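The scoping above can be sketched from a component's point of view. The component name `StatusCard` is invented, the JSON imports assume a bundler that supports them (e.g. Vite or Next.js), and the helper body repeats the assumed implementation so the sketch runs standalone:

```typescript
// Hypothetical usage inside a component. With a JSON-aware bundler the
// real imports would be:
//   import en from "./i18n/StatusCard.en.json";
//   import zh from "./i18n/StatusCard.zh.json";
// Inline literals stand in for those files here.

const en = { heading: "Research Status", refresh: "Refresh" };
const zh = { heading: "研究状态", refresh: "刷新" };

// Assumed helper shape, repeated from component-i18n.ts for self-containment.
function useComponentTranslations(
  lang: "en" | "zh",
  translations: { en: Record<string, string>; zh?: Record<string, string> }
): (key: string) => string {
  return (key) => translations[lang]?.[key] ?? translations.en[key] ?? key;
}

// Each component owns its ~20-50 keys; there is no global table to merge,
// and editing StatusCard.zh.json touches only this component.
const t = useComponentTranslations("zh", { en, zh });
const heading = t("heading"); // "研究状态"
```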
Migration Status (2026-03-20)
- Full refactor spawned as Opus sub-agent (in progress at session end)
- Reverted a broken partial refactor attempt (commit acd130d) back to 088510c
- Chinese (zh) is complete in the monolith: 998 keys, deployed and working
- Refactor needs to happen before adding new languages (FR, ES, JA)
Lesson Learned
Always decide i18n architecture before translating. Translating 716 keys into a monolith created technical debt that cost a full refactor. The right architecture is per-component from day one.
Connections
- [see-also] SwiftUI @Published dict re-render bug (Brain) — same root pattern: keeping state scoped to the component (local @State / per-component translation file) is more reliable than propagating changes through a monolithic parent; atomicity at the component level
- [see-also] Atomicity Principle (Brain) — per-component i18n is the Atomicity Principle applied to translation architecture
- [part-of] STRC Research Portal
- [see-also] Lessons Learned — iOS (Brain)
- [see-also] LLM Multilingual Generation Failures (Brain) — this architecture solves failure mode 2 (token-limit collapse on large translation batches)
- [see-also] MISHA Foundation