Scaling React Performance: Architecture, Not Magic
Most frontend performance problems are not caused by “slow JavaScript”.
They’re caused by:
- Poor state structure
- Unnecessary recalculations
- Excessive rendering
- Misunderstanding how React actually updates the DOM
Here’s the catch: the app works fine — until the data grows.
- At 100 items, everything feels smooth
- At 10,000, users start noticing lag
- At 100,000, the UI becomes unpredictable
Performance is rarely about speed. It’s about scaling behavior.
This post covers performance thinking in React + Redux Toolkit applications, with mental models and tools for building systems that scale predictably.
Performance Is Architecture, Not Micro-Optimizations
Many developers think optimization means:
- Adding useMemo
- Sprinkling React.memo
- Using useCallback everywhere
That’s not optimization. That’s patching symptoms.
If you need useMemo everywhere, something is wrong with your data flow.
Real performance comes from architecture:
- How is state structured?
- How often is it recalculated?
- How many components are subscribed?
- How much work does React need to do per update?
If your architecture is wrong, no hook will save you.
React Compiler Changes the Game – But Not the Architecture
The React Compiler is a big step forward. It can automatically:
- Memoize computations
- Stabilize references
- Reduce unnecessary re-renders caused by inline values
This removes a lot of accidental inefficiencies.
But it does not fix:
- Bad state modeling
- Global subscriptions
- O(N) recalculation strategies
- Rebuilding large structures on every update
If your architecture forces large parts of the tree to change, the compiler cannot magically make that O(1).
It reduces friction. It doesn’t replace performance thinking.
Think in Terms of Complexity
Whenever you update state, ask: How much of the application needs to change?
There are two fundamentally different approaches:
Recalculate everything
const updatedTree = rebuildWholeTree(oldTree, updatedNode);
Updating one small piece still recomputes the whole structure.
Complexity: O(N)
Update only what changed
state.nodes[id].status = "updated"; // safe inside a Redux Toolkit reducer (Immer)
Update one piece — only affected parts get new references.
Complexity: O(height) or even O(1)
That difference becomes dramatic as data grows:
- 100 items → you won’t notice
- 10,000 items → users will
Performance thinking is about predicting that curve.
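The two approaches above can be sketched with plain objects. The helper names (`rebuildAll`, `updateOne`) are illustrative, not a library API:

```javascript
// rebuildAll is O(N): every node becomes a new object, so every
// subscriber sees a new reference. updateOne is O(1): only the
// changed entry (and the container) get new references.

function rebuildAll(byId, id, status) {
  return Object.fromEntries(
    Object.entries(byId).map(([key, node]) => [
      key,
      { ...node, status: key === id ? status : node.status },
    ])
  );
}

function updateOne(byId, id, status) {
  return { ...byId, [id]: { ...byId[id], status } };
}

const byId = {
  a: { id: "a", status: "todo" },
  b: { id: "b", status: "todo" },
};

const next = updateOne(byId, "a", "done");
console.log(next.a === byId.a); // false: the changed node is new
console.log(next.b === byId.b); // true: untouched nodes keep identity
```

The reference identity of untouched nodes is what lets memoized components and narrow selectors skip work entirely.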
Normalize Your State
Nested structures are intuitive — but dangerous.
Instead of deeply nested trees:
{
  root: {
    children: [
      {
        id: "1",
        children: [{ id: "2", children: [] }],
      },
    ],
  },
}
Prefer normalized state:
{
  tree: {
    rootId: "1",
    ids: ["1", "2"],
    entities: {
      "1": {
        id: "1",
        parentId: null,
        childrenIds: ["2"],
        depth: 0,
        status: "in-progress"
      },
      "2": {
        id: "2",
        parentId: "1",
        childrenIds: [],
        depth: 1,
        status: "in-progress"
      }
    }
  }
}
Normalization gives you:
- O(1) access to entities
- Localized updates
- Better rendering isolation
- Simpler reasoning about change propagation
With normalized state:
state.tree.entities["2"].status = "done"
Only node “2” changes. Not the whole tree.
Redux Toolkit makes this easy with createEntityAdapter.
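As a plain-JavaScript sketch of what that pattern does (the helper names here are illustrative, not createEntityAdapter's actual API):

```javascript
// Minimal hand-rolled version of the normalized { ids, entities } shape
// that Redux Toolkit's createEntityAdapter manages for you.

function createEntityState() {
  return { ids: [], entities: {} };
}

function addOne(state, entity) {
  return {
    ids: state.ids.includes(entity.id) ? state.ids : [...state.ids, entity.id],
    entities: { ...state.entities, [entity.id]: entity },
  };
}

function patchOne(state, id, changes) {
  const existing = state.entities[id];
  if (!existing) return state; // unknown id: no-op, references stay stable
  return {
    ...state,
    entities: { ...state.entities, [id]: { ...existing, ...changes } },
  };
}

let tree = createEntityState();
tree = addOne(tree, { id: "1", parentId: null, status: "in-progress" });
tree = addOne(tree, { id: "2", parentId: "1", status: "in-progress" });

const next = patchOne(tree, "2", { status: "done" });
console.log(next.entities["2"].status); // "done"
console.log(next.entities["1"] === tree.entities["1"]); // true: node 1 untouched
```

The adapter adds sorted ids, bulk operations, and generated selectors on top of this same shape.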
Subscriptions Define Your Render Surface
A common hidden performance issue:
const tree = useSelector((state) => state.tree);
Any change anywhere in tree re-renders this component, even if it only needs a single node.
Better:
const node = useSelector((state) => state.tree.entities[id]);
Now:
- Each component subscribes only to what it needs
- One update re-renders one component, not the entire tree
Performance is often about scope of subscription. Large subscription equals large render surface.
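The mechanics can be sketched with a toy store that, like react-redux, re-runs each subscriber's selector and only "re-renders" when the selected reference changes. Everything here is illustrative, not the real react-redux implementation:

```javascript
// A tiny subscription model: each subscriber renders only when its
// selected value's reference changes.

function createStore(initialState) {
  let state = initialState;
  const subs = [];
  return {
    getState: () => state,
    setState(next) {
      state = next;
      for (const sub of subs) {
        const selected = sub.selector(state);
        if (!Object.is(selected, sub.last)) {
          sub.last = selected;
          sub.renders += 1; // stands in for a component re-render
        }
      }
    },
    subscribe(selector) {
      const sub = { selector, last: selector(state), renders: 0 };
      subs.push(sub);
      return sub;
    },
  };
}

const store = createStore({
  tree: { entities: { "1": { status: "todo" }, "2": { status: "todo" } } },
});

const wide = store.subscribe((s) => s.tree); // whole tree
const narrow = store.subscribe((s) => s.tree.entities["1"]); // one node

// Update node 2 immutably: node 1 keeps its reference.
const prev = store.getState();
store.setState({
  tree: { entities: { ...prev.tree.entities, "2": { status: "done" } } },
});

console.log(wide.renders); // 1: the tree object changed
console.log(narrow.renders); // 0: node 1 is untouched
```

Narrowing the selector from `s.tree` to `s.tree.entities[id]` is exactly the subscription-scope change described above.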
Rendering Is Also an Algorithm
Even if state updates are efficient, React still reconciles. Reconciliation is roughly O(N) in the size of the subtree being compared.
If you:
- Recreate large arrays
- Rebuild object literals inline
- Generate new callbacks every render
React walks more of the tree than necessary.
Example:
<MyList items={[...items]} />
That spread creates a new reference on every render, so a memoized MyList can never bail out of reconciliation.
Optimization is not only about computation. It’s about minimizing reconciliation work.
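One way to see the cost: a last-value cache, analogous to what React.memo and useMemo rely on, can only skip work when it receives the same reference, not a structurally equal copy. A minimal sketch (`memoizeLast` is a hand-rolled illustration, not a React API):

```javascript
// Caches the result for the most recent argument, keyed by reference.
function memoizeLast(fn) {
  let lastArg;
  let lastResult;
  let called = false;
  return (arg) => {
    if (called && Object.is(arg, lastArg)) return lastResult;
    called = true;
    lastArg = arg;
    lastResult = fn(arg);
    return lastResult;
  };
}

let computations = 0;
const expensiveSort = memoizeLast((items) => {
  computations += 1;
  return [...items].sort((a, b) => a - b);
});

const items = [3, 1, 2];
expensiveSort(items);
expensiveSort(items); // same reference: cache hit
expensiveSort([...items]); // new reference, same contents: recomputed

console.log(computations); // 2
```

An inline `[...items]` prop defeats this cache every render, which is exactly why React cannot skip the subtree.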
Virtualization: When Rendering Becomes the Bottleneck
Sometimes your state logic is efficient — but rendering thousands of DOM nodes is expensive. That’s where virtualization comes in.
Virtualization means rendering only what is visible: instead of rendering 10,000 rows, render 30 and recycle DOM nodes while scrolling.
Without virtualization
- Render cost grows with N
- Scrolling becomes heavy
- Layout and paint become bottlenecks
With virtualization
- Render cost becomes proportional to viewport size
- Not total dataset size
Virtualization does not fix bad architecture. But combined with normalized state and narrow subscriptions, it allows UIs to scale to massive datasets.
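The core of virtualization is a small piece of math: map scroll position to a window of row indices. A sketch assuming fixed row heights (real libraries such as react-window handle variable heights, overscan tuning, and DOM recycling):

```javascript
// Given scroll position, viewport height, and a fixed row height,
// compute which rows need to be in the DOM.
function visibleRange(scrollTop, viewportHeight, rowHeight, totalRows, overscan = 2) {
  const first = Math.max(0, Math.floor(scrollTop / rowHeight) - overscan);
  const last = Math.min(
    totalRows - 1,
    Math.ceil((scrollTop + viewportHeight) / rowHeight) + overscan
  );
  return { first, last, count: last - first + 1 };
}

// 10,000 rows, 600px viewport, 30px rows: render 25 rows, not 10,000.
const range = visibleRange(3000, 600, 30, 10000);
console.log(range); // { first: 98, last: 122, count: 25 }
```

Render cost is now a function of viewport size, independent of how large the dataset grows.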
Measure Before You Optimize
Performance improvements shouldn’t be guesses. They should be measured.
Useful tools
React DevTools Profiler
Helps identify expensive renders and unnecessary updates.
Chrome Performance Tab
Shows scripting, layout, and paint costs in a timeline view.
React-Scan (react-scan.com)
A focused React performance analysis tool that visualizes component update frequency, highlights render hot spots, helps detect wasted renders, and shows which parts of your tree are most expensive. Especially useful for understanding render surfaces and subscription impact.
If you don’t measure, you’re optimizing blind.
How to Approach Optimization in Practice
When you design or optimize a system, don’t start with hooks. Start with these questions.
Can I trade memory for speed?
Store more to compute less later. Practical actions:
- Cache derived values instead of recalculating them
- Use memoized selectors for expensive computations
- Maintain counters instead of aggregating children each time
- Build lookup maps for O(1) access instead of scanning arrays
Memory is usually cheaper than CPU.
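Two of those actions in one sketch: a lookup map for O(1) access and a maintained counter updated on write (the data and names are illustrative; in a Redux reducer you would apply the same bookkeeping immutably):

```javascript
const users = [
  { id: "u1", active: true },
  { id: "u2", active: false },
  { id: "u3", active: true },
];

// O(N) per call: scans the array every time.
const findSlow = (id) => users.find((u) => u.id === id);

// O(N) once to build the index, then O(1) per call.
const byId = new Map(users.map((u) => [u.id, u]));
const findFast = (id) => byId.get(id);

// Maintained counter: computed once, then O(1) bookkeeping on write
// instead of re-filtering the whole list on every read.
let activeCount = users.filter((u) => u.active).length;
function setActive(id, active) {
  const user = byId.get(id);
  if (user.active !== active) {
    activeCount += active ? 1 : -1;
    user.active = active; // mutation for brevity; keep it immutable in Redux
  }
}

setActive("u2", true);
console.log(findFast("u2").active, activeCount); // true 3
```

The extra Map and counter cost a little memory; every read afterwards costs almost nothing.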
Can I reduce the number of operations?
Don’t try to make a slow algorithm slightly faster. Replace it. Practical actions:
- Replace linear searches with indexed access
- Keep data pre-sorted if you sort frequently
- Use divide-and-conquer instead of flat iteration
- Batch state updates instead of triggering multiple re-renders
Big wins come from algorithmic changes.
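For instance, keeping data pre-sorted lets you swap a linear scan for binary search, turning O(N) lookups into O(log N). A self-contained sketch:

```javascript
// Classic binary search over a pre-sorted array of numbers.
// Returns the index of target, or -1 if absent.
function binarySearch(sorted, target) {
  let lo = 0;
  let hi = sorted.length - 1;
  while (lo <= hi) {
    const mid = (lo + hi) >> 1;
    if (sorted[mid] === target) return mid;
    if (sorted[mid] < target) lo = mid + 1;
    else hi = mid - 1;
  }
  return -1;
}

const ids = Array.from({ length: 100000 }, (_, i) => i * 2); // pre-sorted evens
console.log(binarySearch(ids, 123456)); // 61728
console.log(binarySearch(ids, 7)); // -1 (odd numbers are absent)
```

At 100,000 items that is roughly 17 comparisons per lookup instead of up to 100,000, and the gap widens as the data grows.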
Can I limit change propagation?
UI performance is mostly about how far change spreads. Practical actions:
- Isolate local state when global state isn’t needed
- Normalize entities to avoid deep updates
- Subscribe narrowly in useSelector
- Avoid global recalculations when one small piece changes
UI systems are change-propagation systems. Optimization is controlling how far change travels.
Performance Is About Scaling Behavior
Ask yourself:
- What happens if my data grows 10x?
- What about 100x?
- What about 1000x?
If your answer is “Probably fine” — you haven’t analyzed it.
If your answer is “Update cost grows with height, not total size” — you’re thinking architecturally.
React Compiler reduces accidental inefficiencies. Virtualization reduces rendering pressure. Tools like React-Scan show you where work actually happens.
But only good architecture makes scaling predictable.