devpad #8 - Convergence

Published

here's the situation: I currently have 3 projects, all slightly integrated with each other, but on independent tech stacks & domains.

these accomplish a couple of different things independently of each other, but i did some brainstorming last night and realised that really - they all tie together. devpad (in its current form) owns projects & task management, blog.forbit.dev owns a "journal" of work on these projects, and the media timeline operates as a self-auditor of time spent on each project - a system that provides frictionless answers to "what am i working on at the moment?"

i think it makes sense for these to all live under one domain (devpad). this allows us to re-use auth across all 3 (github primarily), which is a major headache in any piece of software. it also allows for more seamless integration: my goal here is to have blog posts tied directly to projects, and to build an entire ecosystem around rendering project-based blogs inside the actual projects.

this is mainly for chamber.net.au - which I want to have a "changelog"/"blog" embedded in the site itself - one that reads from my current blogging system but only pulls posts for the "chamber" project (which is stored + managed in devpad.tools).
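to make that concrete, here's a rough sketch of what the embedded changelog could look like - the endpoint, field names, and project slug here are my assumptions for illustration, not the real devpad.tools API:

```typescript
// Hypothetical shape of a devpad blog post. Field names are assumptions.
interface BlogPost {
  slug: string;
  title: string;
  project: string;      // devpad project this post belongs to
  publishedAt: string;  // ISO date string
}

// Pure filtering step, kept separate so it's trivially testable.
function filterByProject(posts: BlogPost[], project: string): BlogPost[] {
  return posts.filter((p) => p.project === project);
}

// Inside chamber.net.au, the embedded changelog would load something like
// this (the /api/posts endpoint is hypothetical):
async function fetchChangelog(project: string): Promise<BlogPost[]> {
  const res = await fetch(
    `https://devpad.tools/api/posts?project=${encodeURIComponent(project)}`,
  );
  if (!res.ok) throw new Error(`failed to load posts: ${res.status}`);
  return filterByProject(await res.json(), project);
}
```

the nice part of routing everything through one domain is that this fetch can ride on the same auth session as the rest of devpad.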

convergence is a nice word that describes what i'm looking for here. i think there's also room to improve the codebase using my new corpus library, and given my recent experience this year at amazon, i feel a lot more comfortable expanding this project to be cloud-native (ironically, i'm going to use cloudflare workers for this - support the underdog!).

this also means i'll be diving deeper into monolithic architecture. the tech stack will still mainly revolve around Astro, but i want to move towards more Solid-based approaches, with the idea of eventually migrating to TanStack Solid.

i'm still using a very ai-driven approach. the great thing about the corpus library is that it provides real ways of testing real production workflows, but with an in-memory backend (as well as a file-system one) instead of a full system test. testability, i've found, is one of the core concepts to think about when designing a new software architecture that's going to be driven by ai development: relying on compiler errors & testing that mimics production as closely as possible will always produce more correct results.

on ai-driven development, i've also found exponential benefits from, after completing a feature, getting it to go back a couple of times and identify "refactoring opportunities" - having a sub-agent analyse the entire project with fresh eyes. doing this a couple of times really reduces the amount of "slop" that opus-4.5 tends to produce (mainly over-engineering).
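the backend-swapping idea can be sketched like this - `TaskStore` and its methods are illustrative names of mine, not the actual corpus API:

```typescript
// Production code is written against an interface, so the same workflow
// runs unchanged whether the backend is in-memory, file-system, or a real DB.
interface TaskStore {
  put(id: string, title: string): Promise<void>;
  get(id: string): Promise<string | undefined>;
}

// The in-memory backend used by tests: no disk, no network, no setup.
class InMemoryTaskStore implements TaskStore {
  private tasks = new Map<string, string>();
  async put(id: string, title: string): Promise<void> {
    this.tasks.set(id, title);
  }
  async get(id: string): Promise<string | undefined> {
    return this.tasks.get(id);
  }
}

// A "production workflow" depends only on the interface, so a test can
// exercise its real logic (including the error path) against memory.
async function renameTask(store: TaskStore, id: string, title: string): Promise<void> {
  const existing = await store.get(id);
  if (existing === undefined) throw new Error(`no such task: ${id}`);
  await store.put(id, title);
}
```

tests against `InMemoryTaskStore` still go through the exact code paths production uses, which is what makes compiler errors + these tests such a tight feedback loop for an ai agent.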