MK — PORTFOLIO
System Design Engineering Perspective

SYSTEM DESIGN
TRADE-OFFS every senior engineer should know

Mandeep Kaur
Senior Full Stack Developer
February 2026
12 min read

After years of building distributed systems, microservices, and cloud-native applications, I've learned that software engineering is rarely about finding the "right" answer — it's about making the best trade-off for your specific context.

Here are the most important trade-offs I've encountered, how I think about them, and what I wish I'd understood earlier in my career.

1. CONSISTENCY vs Availability

This is the fundamental tension in distributed systems, formally described by the CAP theorem. When your network partitions — and it will — you must choose between keeping your system consistent and keeping it available.

Choose Consistency

A banking system should never show an incorrect account balance — even if that means the service is temporarily unavailable. The cost of wrong data vastly outweighs the cost of brief downtime.

Choose Availability

A social media feed can tolerate eventual consistency. Seeing a post 2 seconds late is far better than the feed being down. Users notice absence more than slight delay.

What I've learned: Most engineers default to strong consistency without questioning whether they need it. Before making this choice, ask — what's the actual cost of eventual consistency in this context? Often it's lower than you think, and the performance gains from availability are significant at scale.
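To make "eventual" concrete, here is a minimal sketch (a toy, not production code) of two replicas that keep accepting writes during a partition and converge afterwards with a last-write-wins merge. The keys, values, and version numbers are all illustrative:

```java
import java.util.HashMap;
import java.util.Map;

// Toy sketch: two replicas of a key-value store that stay available during
// a partition, then converge via last-write-wins. Versions are logical
// clocks supplied by the caller, purely for determinism in this example.
class LwwReplica {
    // value plus the version it was written at
    record Entry(String value, long version) {}

    private final Map<String, Entry> data = new HashMap<>();

    void put(String key, String value, long version) {
        Entry current = data.get(key);
        // keep whichever write carries the higher version (last write wins)
        if (current == null || version > current.version()) {
            data.put(key, new Entry(value, version));
        }
    }

    String get(String key) {
        Entry e = data.get(key);
        return e == null ? null : e.value();
    }

    // anti-entropy: pull every entry from the other replica
    void mergeFrom(LwwReplica other) {
        other.data.forEach((k, e) -> put(k, e.value(), e.version()));
    }
}

public class LwwDemo {
    public static void main(String[] args) {
        LwwReplica a = new LwwReplica();
        LwwReplica b = new LwwReplica();

        // During a partition, both replicas accept conflicting writes:
        a.put("profile:bio", "v-old", 1);
        b.put("profile:bio", "v-new", 2);

        // While partitioned, readers see different values (stale reads):
        System.out.println(a.get("profile:bio")); // v-old
        System.out.println(b.get("profile:bio")); // v-new

        // After the partition heals, the replicas exchange state and converge:
        a.mergeFrom(b);
        b.mergeFrom(a);
        System.out.println(a.get("profile:bio").equals(b.get("profile:bio"))); // true
    }
}
```

Note that last-write-wins silently drops one of the concurrent writes. That's tolerable for a bio field and unacceptable for an account balance — which is exactly the banking-vs-feed distinction above.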

2. LATENCY vs Throughput

These two are often confused, but they're fundamentally different. Latency is how long a single request takes. Throughput is how many requests you can handle per second. Optimising for one often hurts the other.

Optimise for Latency

Process each request individually. Users get fast responses. Real-time APIs, live dashboards, and user-facing endpoints need this — anything over 200ms is noticeable.

Optimise for Throughput

Batch requests together. Each request waits for the batch to fill, adding latency — but you process far more per second. ETL pipelines and background jobs benefit here.

What I've learned: Optimise for latency at the edge (APIs users interact with directly) and throughput in the background (ETL pipelines, event processing, batch jobs). The mistake is applying one model everywhere.
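The trade can be made concrete with a back-of-the-envelope model. Assuming, purely for illustration, a fixed 10 ms overhead per call (a network round trip) and 1 ms of marginal work per item:

```java
// Toy cost model for the latency/throughput trade-off: each call pays a
// fixed overhead; batching amortises that overhead across many items,
// raising throughput at the cost of per-item latency. Numbers are illustrative.
public class BatchingDemo {
    static final int PER_CALL_OVERHEAD = 10; // fixed cost per round trip (ms)
    static final int PER_ITEM_COST = 1;      // marginal cost per item (ms)

    // total time to write n items, one call each
    static int oneAtATime(int n) {
        return n * (PER_CALL_OVERHEAD + PER_ITEM_COST);
    }

    // total time to write n items in batches of batchSize
    static int batched(int n, int batchSize) {
        int batches = (n + batchSize - 1) / batchSize; // ceiling division
        return batches * PER_CALL_OVERHEAD + n * PER_ITEM_COST;
    }

    public static void main(String[] args) {
        int n = 1000;
        System.out.println(oneAtATime(n));   // 11000 ms: overhead paid 1000 times
        System.out.println(batched(n, 100)); // 1100 ms: overhead paid 10 times
        // But the first item in each batch now waits for the batch to fill —
        // that wait is exactly the latency you traded for throughput.
    }
}
```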

"The engineers I respect most aren't the ones who know all the answers. They're the ones who ask the right questions before making a decision."

3. NORMALISATION vs Denormalisation

In relational databases, normalisation eliminates redundancy and ensures data integrity. Denormalisation deliberately introduces redundancy to improve read performance.

Normalised

Clean, consistent, easy to maintain. A single source of truth for every piece of data. But joins get expensive at scale, and complex queries against many tables add latency.

Denormalised

Fast reads — data is co-located where it's needed. But harder to keep consistent, more expensive to write to, and schema changes become painful.

What I've learned: Start normalised. Denormalise only when you have evidence of a performance problem — not before. When you do denormalise, be surgical — denormalise the specific query that's slow, not the entire schema.
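Here is a sketch of what "surgical" can look like in application code: the slow query was a per-customer order count, so only that one aggregate is precomputed on the write path. The class and field names are hypothetical:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of surgical denormalisation: keep the normalised data as the
// source of truth, and maintain one precomputed aggregate for the single
// query that was slow. All names here are illustrative.
public class OrderStats {
    // normalised source of truth: orderId -> customerId
    private final Map<Long, Long> orders = new HashMap<>();
    // denormalised read model: customerId -> order count
    private final Map<Long, Integer> orderCountByCustomer = new HashMap<>();

    // The write path now updates both: slightly more expensive writes...
    public void placeOrder(long orderId, long customerId) {
        orders.put(orderId, customerId);
        orderCountByCustomer.merge(customerId, 1, Integer::sum);
    }

    // ...in exchange for a read that no longer scans or joins the orders table.
    public int orderCount(long customerId) {
        return orderCountByCustomer.getOrDefault(customerId, 0);
    }
}
```

In a real database the order insert and the counter update must share a transaction, or the counter drifts — that drift is the consistency cost denormalisation pays for read performance.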

4. SYNCHRONOUS vs Asynchronous

Microservices need to talk to each other. The choice between synchronous REST calls and asynchronous messaging via queues or event streams has enormous implications for your system's reliability and complexity.

Synchronous

Simple to reason about — request goes out, response comes back. But tight coupling means if service B is slow or down, service A is directly affected.

Asynchronous

Services are decoupled — publish an event and move on. But now you must handle message ordering, duplicate processing, dead letter queues, and eventual consistency.

What I've learned: Use synchronous when the user is waiting for an immediate response. Use asynchronous when the operation can happen in the background. The most common mistake is going fully async everywhere because it sounds more scalable, then drowning in operational complexity.
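A minimal in-process sketch of the two styles, with a BlockingQueue standing in for a real broker (the method names are invented for illustration):

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

// Toy sketch of the synchronous/asynchronous split. An in-process queue
// stands in for a real broker; a worker thread plays the consumer service.
public class AsyncDemo {
    static final BlockingQueue<String> events = new LinkedBlockingQueue<>();

    // Synchronous style: the caller blocks until the work is done.
    static String sendWelcomeEmailNow(String user) {
        return "emailed:" + user;
    }

    // Asynchronous style: publish an event and move on; the caller never waits.
    static void publishUserRegistered(String user) {
        events.offer(user);
    }

    public static void main(String[] args) throws InterruptedException {
        // worker that drains the queue in the background
        Thread worker = new Thread(() -> {
            try {
                while (true) {
                    String user = events.take();
                    System.out.println("emailed:" + user);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        worker.setDaemon(true);
        worker.start();

        System.out.println(sendWelcomeEmailNow("ada")); // caller waited for this
        publishUserRegistered("grace");                 // caller did not wait
        TimeUnit.MILLISECONDS.sleep(100);               // give the worker time
    }
}
```

A real broker reintroduces everything this toy queue hides — ordering, retries, duplicate delivery, dead letters — which is precisely the operational complexity described above.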

5. MONOLITH vs Microservices

Perhaps the most debated topic in modern software architecture. Microservices are not inherently better than monoliths — they're a trade-off, and the right answer depends on context.

Monolith

Simple to develop, test, and deploy. Local function calls are faster than network calls. Everything in one place. Becomes harder to scale and change as it grows.

Microservices

Independent scaling, independent deployment, team autonomy. But introduces network latency, distributed system complexity, and significant operational overhead.

What I've learned: Conway's Law — systems mirror the communication structure of the organisations that build them. Start with a well-structured monolith. Extract services when you have a concrete reason: a scaling bottleneck, a team boundary, a deployment conflict. I've seen small teams adopt microservices prematurely and spend more time on infrastructure than on product features.

6. BUILD vs Buy

Every engineering team faces this constantly — build a custom solution or use an existing tool, library, or managed service.

Build

Full control, no external dependencies. Tailored exactly to your needs. But it takes time, introduces long-term maintenance burden, and you'll likely reinvent wheels poorly before getting them right.

Buy / Use Managed Services

Faster to production. Someone else handles the hard operational problems. But you take on vendor dependency and may pay for capabilities you don't need.

What I've learned: Be honest about your core competency. Use managed services for undifferentiated infrastructure. Build custom solutions only where you have unique requirements that off-the-shelf tools genuinely can't meet.

7. DEVELOPER EXPERIENCE vs Performance

Code that's easy to write, read, and maintain is not always the most performant. Code optimised for performance is not always easy to maintain. This tension is subtle, but it shapes daily engineering decisions.

Developer Experience First

Use the ORM, use the abstraction, write readable code. Productive and maintainable — but Hibernate can generate inefficient SQL, and N+1 problems can silently destroy performance at scale.

Performance First

Write raw SQL, drop to lower abstractions. Full control over what the database executes — but verbose, harder to maintain, and tightly coupled to your schema.

What I've learned: Optimise for developer experience by default. Use the abstraction. Then measure. If you find a genuine bottleneck, drop down to the lower abstraction for that specific case. Don't prematurely optimise.
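To show the N+1 shape without a real ORM, here is a toy sketch in which the "database" simply counts the queries it receives. Loading five orders and lazily touching each order's customer issues six queries; fetching the customers in one batched query issues two. Every class and method here is an illustrative stand-in:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy illustration of the N+1 problem an ORM can hide: the query counter
// below plays the role of the database. Names are invented for the example.
public class NPlusOneDemo {
    static int queriesIssued = 0;

    static List<Long> loadOrderCustomerIds() {       // "SELECT customer_id FROM orders"
        queriesIssued++;
        return List.of(1L, 2L, 3L, 4L, 5L);
    }

    static String loadCustomer(long id) {            // "SELECT ... WHERE id = ?"
        queriesIssued++;
        return "customer-" + id;
    }

    static Map<Long, String> loadCustomers(List<Long> ids) { // "... WHERE id IN (...)"
        queriesIssued++;
        Map<Long, String> result = new HashMap<>();
        for (long id : ids) result.put(id, "customer-" + id);
        return result;
    }

    public static void main(String[] args) {
        // N+1: one query for the orders, then one per customer
        queriesIssued = 0;
        for (long id : loadOrderCustomerIds()) loadCustomer(id);
        System.out.println(queriesIssued); // 6

        // Batched: one query for the orders, one for all customers
        queriesIssued = 0;
        loadCustomers(loadOrderCustomerIds());
        System.out.println(queriesIssued); // 2
    }
}
```

The gap grows linearly with N — harmless in a unit test with five rows, ruinous at a few thousand — which is why measuring, not guessing, decides when to drop below the abstraction.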

THE Meta Trade-off

Every decision in system design is a trade-off between competing forces — simplicity vs capability, speed vs correctness, flexibility vs performance. The skill isn't having all the answers; it's asking the right questions before you commit.

What are the actual requirements — not the assumed ones?
What's the cost of getting this wrong?
Can we start simple and evolve, or do we need to get this right from the start?
What will this decision cost us in 6 months when requirements change?

The best system is not the most technically impressive one. It's the one that solves the actual problem, can be maintained by your team, and can evolve as your understanding improves.

What trade-offs have shaped how you design systems? I'd love to hear your perspective.

Mandeep Kaur
Senior Full Stack Developer · Glasgow, UK

7+ years building enterprise Java systems across fintech, healthcare, and analytics. Currently scaling a production traceability platform at WyldTrace. MSc Data Analytics (Distinction), University of Strathclyde.