Build For Yourself
Building personal micro apps has never been easier
Like most developers, I have a million ideas but only so much time to build them. I keep a document in Notion that lists these ideas — and it grows way faster than I could ever implement.
That’s doubly true for me thanks to my ADHD: I’ll start five projects at once, wildly underestimate how long things will take (“It’ll take a month, tops!”), and before I know it, six months have passed and interest has evaporated.
In this post, I’ll share how I experimented with using AI not just for repetitive tasks, but to practically build personalised apps tailored to my problems. I’ll walk through how I built overpayyourmortgage.co.uk after getting frustrated that no tool existed that actually solved my problem.
What is a “Micro App”?
I’ve used AI as a developer assistant since the start of the AI hype cycle, but only recently began experimenting with using AI to almost entirely build applications.
I’ve been skeptical of this “vibe coding” movement, as a lot of people have. I dread how much code is now running in “production” without being tested, or even read. You often see reports of non-technical people using AI to vibe code entire applications, sometimes with disastrous results.
But as software engineers, I think we’re in a unique position to use vibe coding for good — as long as we treat AI like a junior developer: useful for low-risk areas, but always under review.
One interesting trend is people building micro apps — lightweight tools that solve niche problems, often for a tiny audience (in some cases, just one person: the builder).
They’re useful because we’ve all hit situations where:
an app almost does what we want, but not quite (say, a habit tracker that can’t track negative habits),
a manual process is annoying but not worth the time to automate,
or we know exactly what a tool should do — it’s just tedious to build.
This is where I believe vibe coding can be extraordinarily useful - outsourcing the initial build to AI can make the time investment worthwhile.
Whilst there isn’t a set definition of micro apps, in my view they generally have:
A specific niche problem, often for a small audience
A short build time — a few days to a week
Potentially temporary use — useful for a time, not forever
Possibly private — maybe it lives only on your machine
What Niche Problems Do You Have?
Before building, think about problems you care about solving but haven’t gotten around to because of time or friction.
For example, here are some of mine:
Keeping track of every Toby Carvery that my wife and I visit - we’re trying to visit all of them in the UK
Knowing which Pokémon games give the best catch rates for my living dex.
These are projects that I have started in the past but never finished. I know exactly what they should do, but they became chores because the scope ballooned and interest faded.
Your ideas might be useful to other people: tracking Pokémon catch rates could help others; tracking Toby Carvery visits… probably not so much. Either way, design them as if you are the only user.
I say this because:
It eliminates feature creep.
You won’t waste time building stuff nobody needs.
You don’t have to wrestle with broad use-cases like multi-language UI, accessibility for strangers, or authentication.
These can end up killing your passion for a project, causing you to abandon it. Build what you would use. You can always extend it later.
And yes — if you’re the only user, you might not even need authentication, fancy UI libraries, or all that extra scaffolding. Be brutal about chopping out features. This isn’t your day job — chances are nobody else will read this code!
Do You Need To Build It?
Before you jump in, it’s worth checking whether what you need already exists. There are a number of ways you can do this:
Google around
Use dedicated tool finders, e.g. https://www.microapp.io/
Ask ChatGPT
I went with a combination of googling around and asking ChatGPT.
Here is an example of a prompt you can use for this:
Look for existing software applications (web, mobile, or desktop) that match the following product description, use case, or problem to solve. For each app you find:
• Provide the app name
• A short description of what it does
• Key features relevant to the description
• Platform(s) (web, iOS, Android, Windows, macOS, etc.)
• A note on the closest match and why
Here is the description:
[PASTE YOUR DESCRIPTION HERE]
For my mortgage overpayment calculator, I pasted:
An application that shows you how much money you could save by overpaying your mortgage, and how much it would cost each year. The app should respect the common 10% of the remaining balance rule and allow you to see results for a percentage based overpayment.
ChatGPT gave me a number of results, but on closer inspection none of them actually did what I wanted.
Whilst I could have gone into more depth, or used a more sophisticated tool such as aicofounder.com, this was good enough for me and I felt justified in building my own.
Building a Micro App
Before we get started on building a micro app, we’ll first need to set up our own AI ecosystem and decide on our own boundaries.
Pick Your Tools
There are numerous tools that you can use for building micro apps, depending on your appetite for control. You could use Claude Code or OpenAI Codex, or lean more into low-code solutions with Lovable or Replit.
Personally, I use Claude Code, as I like to stay in control of the project while delegating discrete tasks to it.
Decide how much you want to do yourself
In theory, AI can write the entire app without you touching a line of code. In reality, this often gets frustrating — the agent can mess up, loop on itself, hit token limits, or just go off track.
What feels “right” will be different for all of us. For example, I’m perfectly happy letting Claude Code write unit tests and I review them afterwards. I’m also happy letting Claude Code add in UI elements.
No matter what you decide, what is important is that you review everything the AI agent generates. AI can produce insidious bugs that are easy to miss and that we likely wouldn’t have introduced ourselves. For micro apps this matters less, as the audience is small, but I believe skipping review is a bad habit to fall into — plus reviewing is a good learning experience!
Know your strengths and your weaknesses
We all have areas of software engineering that we’re not great at. In my case, I’m terrible at UX: I don’t have a good sense of what looks “good”, no matter how many design books I read. What I am good at, however, is architecture and writing readable, efficient code.
UX has put me off many personal projects, so I’m more than happy to outsource it and focus on the bits that bring me joy and keep me interested in the project. I’d much rather write code at a slower pace than have to do extensive PRs for generated code. Of course, we can flip this: I can use Claude Code to critique my implementation and essentially do the PR review for me, which has helped me find gaps in my implementation in the past.
Specialised Prompts, Instructions, Skills, and Agents
A big part of using AI agents is knowing how to prompt and instruct them to get the result you want. Depending on what you use these can have different names, but there is movement to try to homogenise these, such as Agent Skills and Agents.md.
Whilst you’ll most likely build up your own catalogue as time goes by (I have my own project engineer subagent for example), a good place to start is Awesome Claude Code if you’re using Claude Code, or look for an equivalent “Awesome <Tool>” repository depending on your choice of tool.
Let’s spend some time here going over the different types.
Specialised Prompts
These are carefully crafted, one-off or reusable prompts designed to get a specific kind of output from the AI agent.
Example
You are a senior Java engineer.
Review the following Spring Boot service for:
- concurrency issues
- transaction boundaries
- performance problems
Return findings as a table with severity and fix.
Instructions
Instructions are persistent or semi-persistent rules that shape how the AI Agent behaves across messages. We set these once, and they are applied repeatedly. They do not execute actions, but rather act as guardrails.
Example
Instructions:
- Always assume production-grade code
- Prefer explicit types over inference
- Never suggest libraries unless asked
Skills
Skills are defined capabilities or procedures which the AI agent can repeatedly perform when invoked. These are good for repeatable small tasks.
Example
Skill: Backend Code Review
When invoked:
1. Identify architectural smells
2. Check concurrency and threading
3. Review error handling
4. Suggest refactors with examples
You might invoke it like:
Use the Backend Code Review skill on this service:
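To make this concrete: in Claude Code, a skill is typically a `SKILL.md` file with YAML frontmatter describing when it applies. The sketch below shows how the review skill above might be laid out (the directory name and wording are my own illustration — check your tool’s documentation for the exact format):

```markdown
<!-- .claude/skills/backend-code-review/SKILL.md -->
---
name: backend-code-review
description: Reviews backend services for architectural smells, concurrency issues, and error handling
---

When this skill is invoked:
1. Identify architectural smells
2. Check concurrency and threading
3. Review error handling
4. Suggest refactors with examples
```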
Subagents
Subagents are specialised AI assistants that handle specific types of tasks. Each subagent runs in its own context window with a custom system prompt, specific tool access, and independent permissions. They are good for large open-ended problems and tasks, such as refactors or migrations.
Example
Agent: Spring Boot Migration Agent
Goal:
Migrate this service from blocking MVC to virtual threads.
Capabilities:
- Analyze current code
- Identify blocking calls
- Apply refactors
- Validate correctness
You might invoke it like:
Use the Spring Boot Migration Agent to migrate to virtual threads in this project
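In Claude Code, a subagent like this is usually a markdown file under `.claude/agents/`, with frontmatter naming it and optionally restricting its tools, followed by its system prompt. A sketch, with names and tool choices of my own invention:

```markdown
<!-- .claude/agents/spring-boot-migration-agent.md -->
---
name: spring-boot-migration-agent
description: Migrates services from blocking MVC to virtual threads
tools: Read, Edit, Bash
---

You are a migration specialist. Analyse the current code, identify
blocking calls, apply refactors, and validate correctness by running
the test suite before reporting back.
```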
The Return of TDD
Most developers will have at least heard of TDD - Test-Driven Development. Whilst most developers will agree in theory that it’s a good idea, in practice not many people actually do it. However in my experience it’s essential when you’re instructing an AI agent to implement a feature for you.
One problem I would often have is that I would ask Claude Code to implement something specific. It would do the work, then come back to say it was done. I would then look, and it had either not implemented what I asked, had missed edge cases, or had broken something else. This would be followed by a series of prompts from me to get it to implement what I wanted without breaking existing functionality or ignoring edge cases — time-consuming and frustrating, to say the least.
I instead switched to asking Claude Code to implement features using TDD. I would explain the feature, ask it to first write tests covering the scenarios I outlined, then implement the feature, and not return until both the new and existing tests passed.
I found that this could burn through quite a few tokens, especially for a complex feature, but it meant I didn’t need to intervene as much: Claude could use the tests to automatically check its own correctness.
For example here is a prompt I used:
The net position is not £0 when the savings rate and the mortgage rate are the same percentage.
Add tests to mortgage-calculator.test.ts to verify that it is £0 when the savings and mortgage rates
are the same for different percentage values.
Do not return until this is fixed and all tests pass.
I’ve found this gives a better experience overall.
My Micro App - An Overpayment Mortgage Calculator Based On Remaining Balance
One problem I was facing was that I was not able to find a mortgage overpayment calculator that let me calculate overpayments based on my overpayment allowance.
In the UK, many lenders let you overpay up to 10% of the remaining balance each year without Early Repayment Charges (ERCs), which can dramatically reduce total interest without incurring extra fees.
Here is an example of the overpayment allowance each year:
| Year | Mortgage Balance | 10% Overpayment |
|------|------------------|-----------------|
| 1    | £200,000         | £20,000         |
| 2    | £175,000         | £17,500         |
However the calculators I found only let you:
Overpay a fixed amount per month, or
Use percentage of initial balance
Neither respected the yearly allowance on the remaining balance.
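The logic I actually wanted is simple to state: each mortgage year, recompute the allowance as 10% of the balance at the start of that year, and cap overpayments at it. A minimal sketch of that idea (my own illustration, not the site’s actual code — the payment figures are made up):

```typescript
// Simulate one year of monthly repayments, with overpayments capped at
// 10% of the balance at the start of the year (the common ERC-free rule).
function simulateYear(
  balance: number,                  // balance at the start of the year
  annualRate: number,               // e.g. 0.045 for 4.5%
  monthlyPayment: number,           // contractual monthly repayment
  desiredMonthlyOverpayment: number,
): number {
  const allowance = balance * 0.10; // recalculated every year, not from the initial balance
  const monthlyRate = annualRate / 12;
  let overpaid = 0;

  for (let month = 0; month < 12 && balance > 0; month++) {
    balance += balance * monthlyRate; // interest accrues
    balance -= monthlyPayment;        // normal repayment
    // Only overpay whatever remains of this year's allowance.
    const overpayment = Math.min(desiredMonthlyOverpayment, allowance - overpaid);
    balance -= overpayment;
    overpaid += overpayment;
  }
  return Math.max(balance, 0);
}

// £200,000 at 4.5% gives a £20,000 allowance in year one, so a desired
// £2,000/month (£24,000/year) of overpayments gets capped at £20,000.
console.log(simulateYear(200_000, 0.045, 1_112, 2_000));
```

Running this year by year reproduces the shrinking allowance in the table above, which is exactly what the fixed-amount and initial-balance calculators couldn’t model.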
Additionally, my fixed deal is due to expire this October and I would most likely be able to secure a cheaper rate, as well as potentially add a lump sum overpayment when my mortgage switched to the Standard Variable Rate (SVR) and there were no ERCs. I couldn’t find any calculator that let me do this.
So I built my own micro app using AI!
You can see it live at overpayyourmortgage.co.uk.
How I Automated It
My workflow:
Brain dump into ChatGPT with all app details, architecture, features, tools, etc. and ask for a PRD.md (Product Requirements Document).
Feed the PRD to my “AI Project Engineer” subagent, which generates a workplan, architecture documents, and readmes, then iterates over the workplan until everything is implemented.
Review this first iteration of code, create a V2 of the PRD.md based on things that have been missed either by myself or Claude, and keep doing this until I’m mostly happy.
For smaller points I ask Claude Code directly to implement specific features.
This got me about 80% of the way there.
What I Did Manually
There are certain steps that I’m sure if I tried hard enough I could set up with an agent, but for now at least I did manually:
Created the initial project in GitHub
Committed changes with meaningful messages
Configured deployment to Netlify
Bought a domain via Cloudflare and attached it to my Netlify project
What I Ended Up With
At overpayyourmortgage.co.uk you get:
A landing page explaining the calculator
Guides on overpayments, interest savings, and ERCs
A calculator that lets you:
Compare overpaying vs not overpaying
Use percentage- or fixed-amount overpayments that respect and cap at the annual overpayment allowance
Compare mortgage vs savings
Add remortgage scenarios with new rates, terms, and lump sums



I’m pretty happy with it! And importantly it does exactly what I needed from a mortgage calculator.
I made the choice to host it and buy a domain because I imagine this problem isn’t unique to me. I believe others could benefit from it, so perhaps it will graduate from being a micro app in the future!
Conclusion
Using AI to build micro apps allows us to build tools tailored to exactly what we need, with a much lower time investment.
Whilst I have gone over what I consider the advantages of this approach - reducing boredom, lack of momentum, and time investment, which are the biggest project killers - there are a number of disadvantages that I didn’t expect.
When I’m waiting for Claude Code to respond, I often drift off and start looking at something else (most likely Reddit). That’s concerning for my overall attention span, and it often means I don’t get “into the zone”, so I’m more likely to make mistakes on the project. In future, I’m going to try tools like OpenClaw to experiment with letting Claude generate features overnight, cutting down on “waiting” downtime and keeping my focus where it matters most during the day.
Additionally, despite reviewing the code that Claude generates, I still have significantly less understanding than if I had built this myself. As an engineer this makes me nervous, even for a micro app, as there could be subtle but dangerous bugs that I’ve glossed over.
Finally, I worry that if I rely on it too much, my skills will atrophy, or I’ll become so dependent on it that I will struggle to solve problems without the use of AI.
All of these are real problems that we as software engineers face - and it is up to each of us to find a balance that works.