  • Engineering Challenges in the Age of AI-Driven Development

    In recent years, artificial intelligence has been rapidly transforming how software is developed. AI-powered tools – from code autocompletion to architectural solution generation – are becoming part of developers’ daily workflows. However, adopting AI is not just about introducing new tools: it represents a fundamental shift in the engineering model of software development.

    From a practical perspective, the challenges can be divided into three levels: code, system, and organization.

    1. Integration into the Existing Stack

    One of the first challenges is integrating AI tools into established development processes. Most teams already have well-defined pipelines: CI/CD, code reviews, linters, and testing.

    The key question is where AI participates in the development lifecycle:

    • at the developer level (code suggestions, generation),
    • during code review,
    • inside CI/CD pipelines.

    The deeper the integration, the higher the risks related to reproducibility and quality control.

    2. Code-Level Quality

    AI generates code that often appears correct but may include:

    • hidden bugs,
    • inefficient implementations,
    • violations of internal standards.

    The core issue is that AI optimizes for local correctness, not system-wide correctness.

    Therefore, AI-generated code should be treated as untrusted input and must go through:

    • strict code review,
    • comprehensive test coverage,
    • static analysis.

    3. Context Limitations

    AI does not have a full understanding of the system and operates within the provided context. This leads to:

    • ignoring existing abstractions,
    • duplication of logic,
    • violations of architectural boundaries.

    The quality of output is directly tied to the quality of input.

    This effectively introduces a new discipline – context engineering.

    4. Non-Deterministic Behavior

    Unlike traditional tools, AI systems are inherently non-deterministic. The same input may produce different outputs.

    This creates fundamental challenges for:

    • reproducibility,
    • process stability,
    • usage in automated pipelines.

    Engineering teams must clearly define where non-determinism is acceptable and where it is not.

    5. Debugging and Observability

    Non-determinism and limited context lead to a lack of transparency.

    It becomes difficult to:

    • understand why a particular solution was generated,
    • reproduce behavior,
    • debug issues.

    This calls for new practices, such as dedicated AI observability tooling.

    6. System-Level Technical Debt

    At the system level, AI accelerates development but can silently degrade architecture:

    • increased coupling,
    • duplicated solutions,
    • erosion of architectural boundaries.

    This is not an immediate issue, but a cumulative effect that becomes visible over time.

    Mitigation requires:

    • regular architectural reviews,
    • strong module boundary enforcement,
    • disciplined refactoring.

    7. Testing Under Non-Determinism

    Traditional testing assumes deterministic outputs. With AI, this assumption no longer holds.

    Testing strategies must evolve:

    • shifting from exact output matching to contract-based validation,
    • relying more heavily on integration testing,
    • adding manual validation for critical paths.

    This is a direct consequence of non-deterministic behavior.
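    As a concrete illustration, a contract check replaces exact matching. A minimal sketch — the contract rules and the validateSummaryContract name are illustrative, not from any particular framework:

```javascript
// Contract-based validation (illustrative): assert properties of the output,
// not its exact text, since the exact text may differ between runs.
function validateSummaryContract(output) {
  // Contract: a non-empty string, bounded in length, containing no raw HTML.
  if (typeof output !== "string") return false;
  if (output.length === 0 || output.length > 500) return false;
  if (/<[a-z][\s\S]*>/i.test(output)) return false;
  return true;
}
```

    A test built on such a contract stays green across non-deterministic runs, as long as the output respects the agreed properties.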

    8. Performance and Cost

    AI introduces new constraints:

    • API latency,
    • token-based costs,
    • increased infrastructure complexity.

    At scale, this impacts:

    • development speed,
    • CI/CD duration,
    • operational expenses.

    Teams must deliberately optimize:

    • where AI is used,
    • how much context is sent,
    • when caching is applied.
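    For instance, caching identical calls is one of the simplest cost levers. A minimal sketch, assuming a hypothetical client object with a complete(prompt) method:

```javascript
// Cache AI responses keyed by prompt (illustrative): repeated identical
// requests skip the paid API call entirely.
const cache = new Map();

async function cachedComplete(client, prompt) {
  if (cache.has(prompt)) return cache.get(prompt); // cache hit: no API cost
  const result = await client.complete(prompt);
  cache.set(prompt, result);
  return result;
}
```

    In practice the cache key would include the model, parameters, and context, and the cache would need eviction — this only shows the shape of the idea.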

    9. Vendor Lock-in and Model Evolution

    AI tools are often tightly coupled to specific vendors.

    This introduces risks:

    • pricing changes,
    • shifts in model behavior,
    • limited portability.

    Additionally, models evolve over time, meaning the same input may yield different results in the future.

    Practical mitigation includes:

    • abstraction layers over providers,
    • model versioning,
    • fallback strategies.
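    A sketch of what such an abstraction layer with a fallback strategy might look like — the CompletionClient name and the provider interface are assumptions for illustration:

```javascript
// Provider-agnostic client (illustrative): callers depend on this class,
// not on any vendor SDK, so providers can be swapped or reordered.
class CompletionClient {
  constructor(providers) {
    this.providers = providers; // ordered by preference
  }

  async complete(prompt) {
    let lastError;
    for (const provider of this.providers) {
      try {
        return await provider.complete(prompt);
      } catch (err) {
        lastError = err; // fall back to the next provider
      }
    }
    throw lastError;
  }
}
```

    The rest of the codebase talks only to this wrapper, which is what keeps a pricing change or behavior shift from rippling through every call site.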

    10. Engineering Culture Shift

    At the organizational level, AI changes how teams make decisions.

    New requirements emerge:

    • formalizing AI usage guidelines,
    • defining review standards for AI-generated code,
    • focusing on decision quality, not just code quality.

    Without this, teams tend to diverge in their approaches.

    11. Solution Fragmentation

    AI may suggest different approaches to the same problem, leading to:

    • multiple implementations of similar logic,
    • increased maintenance complexity,
    • reduced system predictability.

    This is not just a code issue, but a system consistency problem.

    It can be addressed through:

    • strong architectural principles,
    • shared libraries,
    • centralized design practices.

    Conclusion

    Adopting AI in programming is not just a tooling upgrade – it is a shift in the engineering paradigm.

    The key challenge is that problems emerge across multiple layers:

    • local (code quality),
    • system (architecture and technical debt),
    • organizational (consistency and engineering culture).

    Teams that consciously manage all three levels gain a significant advantage.

    The key takeaway: AI amplifies engineers, but it also demands a higher level of engineering discipline than traditional tools.

  • Architectural Patterns Matter More Than Framework APIs

    Spring, Symfony, and Express look different.

    Annotations vs attributes.
    Java vs PHP vs JavaScript.
    Different DI containers.
    Different conventions.

    But if you look deeper, almost all backend frameworks are built on the same architectural patterns.

    If you think in patterns — switching stacks is easy.
    If you think in “framework style” — it’s painful.

    What Do All Backend Frameworks Actually Have in Common?

    Not specific APIs.
    Not annotations.
    Not CLI tools.

    But this:

    • MVC
    • Layered architecture
    • Dependency Injection
    • Inversion of Control
    • Middleware / Interceptors
    • Repository pattern
    • Service layer
    • Modularity
    • Separation of concerns

    The syntax changes. The ideas don’t.

    MVC: The Form Changes, The Meaning Doesn’t

    Spring / Spring Boot

    @RestController
    public class UserController {
        private final UserService userService;

        public UserController(UserService userService) {
            this.userService = userService;
        }

        @GetMapping("/users/{id}")
        public UserDto getUser(@PathVariable Long id) {
            return userService.getUser(id);
        }
    }

    Structure:
    Controller → Service → Repository

    Symfony

    #[Route('/users/{id}', methods: ['GET'])]
    public function show(int $id): JsonResponse {
        return $this->json($this->userService->getUser($id));
    }

    Same model:
    Controller → Service → Repository

    Express

    app.get('/users/:id', async (req, res) => {
      const user = await userService.getUser(req.params.id)
      res.json(user)
    })

    Express doesn’t enforce structure. But if you understand MVC — you’ll build it anyway.

    Layered Architecture – Almost Universal

    The most common backend structure:

    Controller → Service → Repository → Database

    Spring encourages it.
    Symfony encourages it.
    Express allows it — if you’re disciplined.

    If you understand layered architecture, switching frameworks is mostly a syntactic change.

    DI and IoC – Different Implementation, Same Principle

    Spring

    • Powerful IoC container
    • Lifecycle management
    • Constructor injection

    Symfony

    • Service container
    • Autowiring
    • Constructor injection

    Express

    • No built-in DI
    • Use Awilix / Inversify
    • Or wire dependencies manually

    Dependencies are injected, not created inside the class.
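    In an Express codebase the same principle can be applied with nothing but a composition root. A sketch with illustrative class names:

```javascript
// Manual constructor injection: the same principle Spring's and Symfony's
// containers automate, wired by hand.
class UserRepository {
  findById(id) {
    return { id, name: "user-" + id }; // stand-in for a real DB query
  }
}

class UserService {
  constructor(userRepository) {
    this.userRepository = userRepository; // injected, not created here
  }

  getUser(id) {
    return this.userRepository.findById(id);
  }
}

// Composition root: all dependencies are wired in one place.
const userService = new UserService(new UserRepository());
```

    Whether a container does this wiring or you do it by hand, the dependency graph looks the same.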

    Middleware Exists Everywhere

    Spring: Filters, Interceptors, AOP
    Symfony: Event subscribers
    Express: app.use()

    Used for:

    • Logging
    • Authentication
    • Metrics
    • Transactions

    Different mechanisms. Same pattern.
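    The pattern itself is small enough to sketch — a minimal middleware chain in the spirit of app.use(), not Express’s actual implementation:

```javascript
// Minimal middleware chain: each middleware can act before and after
// the next one, ending at the final handler.
function compose(middlewares, handler) {
  return middlewares.reduceRight(
    (next, mw) => (ctx) => mw(ctx, next),
    handler
  );
}

// Example: a logging middleware wrapping a request handler.
const log = (ctx, next) => {
  ctx.trace.push("before");
  const result = next(ctx);
  ctx.trace.push("after");
  return result;
};

const handle = compose([log], (ctx) => {
  ctx.trace.push("handler");
  return "ok";
});
```

    Filters, interceptors, and event subscribers are all variations of this wrapping idea.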

    Repository Pattern

    An abstraction over data access.

    Spring → JPA repositories
    Symfony → Doctrine repositories
    Express → Custom data layer abstraction

    Different tooling. Same idea: business logic does not depend on database details.

    When Is Switching Hard?

    When you think in terms of:

    • “The Spring way”
    • “The Symfony way”
    • “The Express way”

    Then every transition feels like restarting your career.

    When Is Switching Easy?

    When you think in terms of:

    • Controllers
    • Services
    • Repositories
    • DI
    • Middleware
    • Transactions
    • Modules

    Then changing stacks means:

    • A new ecosystem
    • New conventions
    • A new runtime

    But not a new architecture.

    Where Are the Real Differences?

    • JVM vs PHP runtime vs Node event loop
    • Thread-per-request vs event-driven async
    • Maven / Composer / npm
    • Ecosystems

    These are platform differences – not architectural ones.

    The Bottom Line

    Frameworks change. Patterns persist.

    If you invest in understanding architectural principles instead of framework APIs, switching stacks becomes an engineering transition – not a career reset.

  • Why Performance Thinking Matters in React + Redux Applications

    Scaling React Performance: Architecture, Not Magic

    Most frontend performance problems are not caused by “slow JavaScript”.

    They’re caused by:

    • Poor state structure
    • Unnecessary recalculations
    • Excessive rendering
    • Misunderstanding how React actually updates the DOM

    Here’s the catch: the app works fine — until the data grows.

    • At 100 items everything feels smooth
    • At 10,000 users start noticing lag
    • At 100,000 the UI becomes unpredictable

    Performance is rarely about speed. It’s about scaling behavior.

    This post covers performance thinking in React + Redux Toolkit applications, with mental models and tools for building systems that scale predictably.

    Performance Is Architecture, Not Micro-Optimizations

    Many developers think optimization means:

    • Adding useMemo
    • Sprinkling React.memo
    • Using useCallback everywhere

    That’s not optimization. That’s patching symptoms.

    If you need useMemo everywhere, something is wrong with your data flow.

    Real performance comes from architecture:

    • How is state structured?
    • How often is it recalculated?
    • How many components are subscribed?
    • How much work does React need to do per update?

    If your architecture is wrong, no hook will save you.

    React Compiler Changes the Game – But Not the Architecture

    The React Compiler is a big step forward. It can automatically:

    • Memoize computations
    • Stabilize references
    • Reduce unnecessary re-renders caused by inline values

    This removes a lot of accidental inefficiencies.

    But it does not fix:

    • Bad state modeling
    • Global subscriptions
    • O(N) recalculation strategies
    • Rebuilding large structures on every update

    If your architecture forces large parts of the tree to change, the compiler cannot magically make that O(1).

    It reduces friction. It doesn’t replace performance thinking.

    Think in Terms of Complexity

    Whenever you update state, ask: How much of the application needs to change?

    There are two fundamentally different approaches:

    Recalculate everything
    const updatedTree = rebuildWholeTree(oldTree, updatedNode);

    Update one small piece — recompute the whole structure.
    Complexity: O(N)

    Update only what changed
    state.nodes[id].status = "updated";

    Update one piece — only affected parts get new references.
    Complexity: O(height) or even O(1)

    That difference becomes dramatic as data grows:

    • 100 items → you won’t notice
    • 10,000 items → users will

    Performance thinking is about predicting that curve.
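    The two approaches above can be sketched side by side (rebuildAll and updateOne are illustrative helpers, not a specific API):

```javascript
// O(N): every entity gets a new reference, even the untouched ones,
// so every subscribed component sees a "change".
const rebuildAll = (nodes, id, status) =>
  Object.fromEntries(
    Object.entries(nodes).map(([key, node]) => [
      key,
      key === id ? { ...node, status } : { ...node },
    ])
  );

// O(1): only the changed entity gets a new reference;
// everything else keeps referential identity.
const updateOne = (nodes, id, status) => ({
  ...nodes,
  [id]: { ...nodes[id], status },
});
```

    Reference stability is the whole point: components subscribed to untouched entities only skip re-rendering if those entities keep their old references.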

    Normalize Your State

    Nested structures are intuitive — but dangerous.

    Instead of deeply nested trees:

    {
      root: {
        children: [
          {
            id: "1",
            children: [{ id: "2", children: [] }],
          },
        ],
      }
    }

    Prefer normalized state:

    {
      tree: {
        rootId: "1",
        ids: ["1", "2"],
        entities: {
          "1": {
            id: "1",
            parentId: null,
            childrenIds: ["2"],
            depth: 0,
            status: "in-progress"
          },
          "2": {
            id: "2",
            parentId: "1",
            childrenIds: [],
            depth: 1,
            status: "in-progress"
          }
        }
      }
    }

    Normalization gives you:

    • O(1) access to entities
    • Localized updates
    • Better rendering isolation
    • Simpler reasoning about change propagation

    With normalized state:

    state.tree.entities["2"].status = "done"

    Only node “2” changes. Not the whole tree.

    Redux Toolkit makes this easy with createEntityAdapter.
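    To make the transformation concrete, here is a hand-rolled sketch of a normalizer that produces the flat shape above (in practice createEntityAdapter covers this):

```javascript
// Flatten a nested tree into { rootId, ids, entities } (illustrative).
function normalizeTree(root) {
  const entities = {};
  const ids = [];
  const visit = (node, parentId, depth) => {
    ids.push(node.id);
    entities[node.id] = {
      id: node.id,
      parentId,
      childrenIds: node.children.map((child) => child.id),
      depth,
      status: node.status ?? "in-progress",
    };
    node.children.forEach((child) => visit(child, node.id, depth + 1));
  };
  visit(root, null, 0);
  return { rootId: root.id, ids, entities };
}
```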

    Subscriptions Define Your Render Surface

    A common hidden performance issue:

    const tree = useSelector((state) => state.tree);

    Any change in tree — this component re-renders. Even if it needs only one node.

    Better:

    const node = useSelector((state) => state.tree.entities[id]);

    Now:

    • Each component subscribes only to what it needs
    • One update — one component re-renders
    • Not the entire tree

    Performance is often about scope of subscription. Large subscription equals large render surface.

    Rendering Is Also an Algorithm

    Even if state updates are efficient, React still reconciles. Reconciliation is roughly O(N) in the size of the subtree being compared.

    If you:

    • Recreate large arrays
    • Rebuild object literals inline
    • Generate new callbacks every render

    React walks more of the tree than necessary.

    Example:

    <MyList items={[...items]} />

    That spread creates a new reference every render. React cannot optimize that.

    Optimization is not only about computation. It’s about minimizing reconciliation work.

    Virtualization: When Rendering Becomes the Bottleneck

    Sometimes your state logic is efficient — but rendering thousands of DOM nodes is expensive. That’s where virtualization comes in.

    Virtualization means rendering only what is visible: instead of rendering 10,000 rows, render 30 and recycle DOM nodes while scrolling.

    Without virtualization:

    • Render cost grows with N
    • Scrolling becomes heavy
    • Layout and paint become bottlenecks

    With virtualization:

    • Render cost becomes proportional to viewport size, not total dataset size

    Virtualization does not fix bad architecture. But combined with normalized state and narrow subscriptions, it allows UIs to scale to massive datasets.
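    At its core, virtualization is a window calculation. A sketch for a fixed-row-height list (the function name and overscan parameter are illustrative):

```javascript
// Compute which rows are visible: render cost depends on viewport size
// and overscan, never on totalRows.
function visibleRange(scrollTop, viewportHeight, rowHeight, totalRows, overscan = 3) {
  const first = Math.max(0, Math.floor(scrollTop / rowHeight) - overscan);
  const last = Math.min(
    totalRows - 1,
    Math.ceil((scrollTop + viewportHeight) / rowHeight) + overscan
  );
  return { first, last };
}
```

    Libraries like react-window do this (plus absolute positioning and node recycling) for you, but the scaling argument lives in this one calculation.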

    Measure Before You Optimize

    Performance improvements shouldn’t be guesses. They should be measured.

    Useful tools

    React DevTools Profiler
    Helps identify expensive renders and unnecessary updates.

    Chrome Performance Tab
    Shows scripting, layout, and paint costs in a timeline view.

    React-Scan (react-scan.com)
    A focused React performance analysis tool that visualizes component update frequency, highlights render hot spots, helps detect wasted renders, and shows which parts of your tree are most expensive. Especially useful for understanding render surfaces and subscription impact.

    If you don’t measure, you’re optimizing blind.

    How to Approach Optimization in Practice

    When you design or optimize a system, don’t start with hooks. Start with these questions.

    Can I trade memory for speed?

    Store more to compute less later. Practical actions:

    • Cache derived values instead of recalculating them
    • Use memoized selectors for expensive computations
    • Maintain counters instead of aggregating children each time
    • Build lookup maps for O(1) access instead of scanning arrays

    Memory is usually cheaper than CPU.
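    A tiny example of the lookup-map point (illustrative data):

```javascript
// Build the index once: O(N) up front, O(1) per lookup afterwards,
// instead of an O(N) array scan on every access.
const buildIndex = (items) => new Map(items.map((item) => [item.id, item]));

const items = [
  { id: "a", label: "Alpha" },
  { id: "b", label: "Beta" },
];
const byId = buildIndex(items);
```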

    Can I reduce the number of operations?

    Don’t try to make a slow algorithm slightly faster. Replace it. Practical actions:

    • Replace linear searches with indexed access
    • Keep data pre-sorted if you sort frequently
    • Use divide-and-conquer instead of flat iteration
    • Batch state updates instead of triggering multiple re-renders

    Big wins come from algorithmic changes.

    Can I limit change propagation?

    UI performance is mostly about how far change spreads. Practical actions:

    • Isolate local state when global state isn’t needed
    • Normalize entities to avoid deep updates
    • Subscribe narrowly in useSelector
    • Avoid global recalculations when one small piece changes

    UI systems are change-propagation systems. Optimization is controlling how far change travels.

    Performance Is About Scaling Behavior

    Ask yourself:

    • What happens if my data grows 10x?
    • What about 100x?
    • What about 1000x?

    If your answer is “Probably fine” — you haven’t analyzed it.

    If your answer is “Update cost grows with height, not total size” — you’re thinking architecturally.

    React Compiler reduces accidental inefficiencies. Virtualization reduces rendering pressure. Tools like React-Scan show you where work actually happens.

    But only good architecture makes scaling predictable.

  • What breaks first when a frontend starts scaling

    Many modern frontend systems are built with scaling in mind from day one.
    Redux or similar state management solutions are often introduced early — and they genuinely help avoid a whole class of problems.

    Still, even in these systems, things eventually start to break.
    Just not always where you expect.

    1. Responsibility breaks before state does

    Centralized state does not automatically answer questions like:

    • who owns the data
    • where business logic should live
    • which changes are safe and which are not

    Even with Redux, it becomes harder over time to understand where decisions are made and who is responsible for their consequences.

    2. Redux prevents chaos, not complexity

    A global store reduces duplication and makes data flow more predictable.
    But as applications grow, teams often end up with:

    • overly generic slices
    • logic spread across reducers, middleware, and components
    • hidden dependencies between parts of the state

    The system stays structured on paper, but becomes mentally expensive to work with.

    3. Components still accumulate responsibility

    Even with a global store in place, components gradually:

    • know too much about data structures
    • make decisions that should exist at a higher level
    • grow conditional and edge-case logic

    Redux doesn’t prevent blurred boundaries — it just makes them less random.

    4. Contracts break outside the store

    APIs, business rules, and data models continue to evolve.
    Without explicit contracts:

    • frontend compensates for backend changes
    • validation and transformation logic spreads
    • the store turns into a buffer for mismatches

    Again, the issue isn’t the tool, but the lack of clear agreements.

    5. Quality stops scaling through architecture alone

    Even with a solid setup, teams eventually see:

    • fear of refactoring
    • slow and risky reviews
    • areas of the codebase no one wants to touch

    At this point, quality depends more on processes than architecture.

    6. Performance issues show up last

    Redux often delays performance problems.
    But when they appear, the causes are usually architectural:

    • unnecessary subscriptions
    • poorly defined update boundaries
    • complex state dependencies

    And they’re solved by redesign, not by tweaking selectors.


    Redux and similar tools solve important problems.
    They just don’t solve the main one.

    As frontend systems scale, the first thing that breaks isn’t state.
    It’s understanding — and responsibility for decisions.

  • AI didn’t replace my thinking — it changed how I think about code

    At the beginning, I had a fairly simple and probably common fear:
    what if AI just replaces us?

    On top of that, there was a complete lack of understanding of where to even start. There are tons of tools, even more noise, and very few clear answers to the question: “What should a developer actually do with all this?”

    My first steps were cautious and a bit chaotic. I used AI in small, isolated ways: to help with code, to explain some logic, to speed up routine tasks. No real strategy – mostly curiosity.

    Over time, it became clear that AI doesn’t take thinking away. It shifts the focus of thinking.

    If earlier I mostly thought about how to write code, now I find myself thinking more and more about different things:

    • how to think in terms of systems, not isolated tasks
    • how to build quality and reliability into the process, not add them at the end
    • how to improve my effectiveness without increasing complexity
    • how to make decisions with long-term consequences in mind
    • what gaps in my knowledge are blocking me from better application design

    AI helps me move faster, but it doesn’t make decisions for me.
    It doesn’t take responsibility for architecture, consequences, or long-term maintenance – and in that sense, it’s not a competitor but a tool.

    The fear of “being replaced” slowly turned into a more useful question:
    how can I use AI in a way that amplifies my work and reduces mistakes, instead of just speeding up typing?

    More to come.

  • My first post here

    Hi 👋
    This is my first post here, so please don’t be too strict.

    I’m starting this blog mostly as a place to share some thoughts, ideas, and bits of experience that I pick up along the way. Nothing super polished, nothing “expert-level” every time — just things I find interesting or useful and want to put into words.

    Sometimes it might be about work, sometimes about learning new stuff, sometimes just random reflections. I don’t have a strict plan yet, and honestly, I think that’s okay. I want this space to feel more like a conversation than a lecture.

    If you’re reading this: welcome, and thanks for stopping by.
    Hopefully, over time, this blog will grow into something meaningful — at least for me, and maybe for someone else too.

    That’s it for now. More soon 🚀