Part 1: Taming the MRP Beast with Internal Events
Around 2012, I found myself working at a multinational company that manufactured office supplies and various other products. One of the persistent headaches within the R&D department was their workflow for analyzing the cost impact of material changes. They’d experiment with different components, aiming for those crucial cost reductions without sacrificing quality. However, the process for understanding the financial ramifications was frustratingly slow, relying on the company’s MRP system to run overnight batch jobs for recalculations. This lag time really stifled their ability to iterate and innovate efficiently.
So, naturally, I’m the poor sap who gets told to fix it. My initial thought was an in-memory object graph. Being a firm believer in Test-Driven Development (TDD) and working in C#, I started by defining the behavior I expected from my model. The specifications for these tests were directly derived from the rules and logic of the company’s MRP system. I envisioned a system where any Part in the Bill of Materials could have its cost updated, and that change would automatically propagate up the assembly hierarchy. To model this hierarchical structure efficiently, I leaned heavily on the Composite pattern. In essence, everything became a Part – individual materials were Part instances, and assemblies of parts were also Part instances, capable of containing other Part objects. This elegant pattern allowed me to treat individual components and complex assemblies uniformly.
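To make the Composite idea concrete, here is a minimal sketch of what such a model can look like. The names (Part, UnitCost, Components) are my own illustrations, not the actual classes from the MRP project; the point is only that a leaf material and a whole assembly share one type and one cost calculation.

```csharp
using System.Collections.Generic;
using System.Linq;

// Composite pattern sketch: a Part can stand alone (a raw material)
// or contain other Parts (an assembly). Callers treat both uniformly.
public class Part
{
    private readonly List<Part> _components = new List<Part>();

    public string Name { get; }
    public decimal UnitCost { get; set; } // direct cost of this part itself

    public Part(string name, decimal unitCost = 0m)
    {
        Name = name;
        UnitCost = unitCost;
    }

    public void Add(Part component) => _components.Add(component);

    // Total cost = own cost plus the cost of every contained part, recursively.
    public decimal TotalCost => UnitCost + _components.Sum(c => c.TotalCost);
}
```

With this shape, asking a single screw for its TotalCost and asking a fully assembled desk for its TotalCost is the same call.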
Then came the real test: those damn cost updates. The idea was, if a tiny part changed price, the entire product’s cost needed to reflect it, instantly. My first attempt felt… logical? Each Part would keep a reference to its parent Part (within the Composite structure). On a cost change, the Part would reach up to its parent and invoke a RecalculateCost() method defined in the Part base class. However, those parent back-references meant every child and parent now pointed at each other, and the resulting cyclic dependencies quickly became a problem because I was serializing the whole graph to a binary file. Cornered and fueled by desperation (and likely too much late-night coding), I turned to events.
I was aware of events in C#, but they weren’t my first choice for this direct propagation problem. However, the cyclic dependency nightmare demanded a different approach. The idea was simple: instead of directly telling any Part what to do, each Part would just whine into the digital void, ‘My cost changed!’ Any Part that contained it (its ‘parent’ in the Composite structure) could subscribe to this event and react accordingly. This loose coupling, this inversion of control, worked like a charm. The overnight wait? Gone. Poof. The once tightly coupled Part instances could now have their costs updated and the entire Bill of Materials recalculated in less than two seconds. Everyone was happy.
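Here is a sketch of what that event-based version looks like. Again, the names are illustrative rather than the original code: a Part raises CostChanged when its cost moves, and the containing assembly subscribes when the part is added. The child never holds a reference to its parent, which is exactly what broke the cycle.

```csharp
using System;
using System.Collections.Generic;

// Event-based propagation sketch: a Part announces "my cost changed!"
// and whoever contains it reacts. No parent back-references needed.
public class Part
{
    private readonly List<Part> _components = new List<Part>();
    private decimal _cost;

    public event EventHandler CostChanged;

    public decimal Cost
    {
        get => _cost;
        set
        {
            if (_cost == value) return;
            _cost = value;
            CostChanged?.Invoke(this, EventArgs.Empty); // whine into the void
        }
    }

    public void Add(Part component)
    {
        // The assembly subscribes; the child stays ignorant of its parent.
        component.CostChanged += (sender, args) => RecalculateCost();
        _components.Add(component);
        RecalculateCost();
    }

    private void RecalculateCost()
    {
        decimal total = 0m;
        foreach (var c in _components) total += c.Cost;
        // Setting Cost re-raises CostChanged, so the update bubbles up
        // through every level of the Bill of Materials automatically.
        Cost = total;
    }
}
```

Change the cost of one screw at the bottom of the tree and every assembly above it recalculates in a chain of event notifications, with no node ever calling "up" directly.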

Part 2: The Distributed Task Conundrum and the Rise of Webhooks
Years after taming the MRP beast, I faced a new challenge at a different company. This time, the problem revolved around a system where various web applications assigned tasks to users. Some of these tasks were physical, like “pick up item X from shelf Y,” while others were purely digital, such as “review document Z.” The headache? Users had to log into each individual web application to check their assigned tasks, and then log back into that specific application to mark a task as completed. It was an exercise in digital hopping that wasted time and frustrated users.
My goal was to consolidate this chaos. I envisioned a centralized platform where all these disparate task-assigning web applications could publish their tasks. Users would then have a single, unified interface to view all their pending tasks and mark them as complete. The core challenge, however, wasn’t just displaying the tasks; it was figuring out how to notify the originating web application when a task was marked as completed on my new central platform. How would the “Item Picking Application” know that its task “pick up item X” was done without constant, inefficient polling?
This is where I discovered webhooks. At first glance, I wondered if this was just a fancy term for an HTTP request. After all, it was simply my platform making a POST request to a specific URL on another web application. But as I dug deeper, I realized its true power. While a webhook is an HTTP request, its essence lies in its role as an event notification between applications. It struck me then: the underlying principle was the same as with the internal events in my BOM simulator.
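Mechanically, the webhook side of this really is just that POST request. The following sketch shows the shape of it; the callback URL and the JSON payload are invented for illustration, since the real contract belonged to each originating application.

```csharp
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

// Webhook sketch: when a task is completed on the central platform,
// POST a small JSON payload to whatever callback URL the originating
// application registered for that task.
public class WebhookNotifier
{
    private static readonly HttpClient Http = new HttpClient();

    public async Task NotifyTaskCompletedAsync(string callbackUrl, string taskId)
    {
        var json = $"{{\"taskId\":\"{taskId}\",\"status\":\"completed\"}}";
        using var content = new StringContent(json, Encoding.UTF8, "application/json");
        var response = await Http.PostAsync(callbackUrl, content);
        response.EnsureSuccessStatusCode(); // production code would retry on failure
    }
}
```

The "Item Picking Application" just exposes an endpoint, registers its URL, and gets told when "pick up item X" is done, with no polling anywhere.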

The Shared Essence: Events, Webhooks, and Decoupling
In both scenarios, it boiled down to an interested party manifesting its interest in a particular action taking place (the Event) and requesting to be notified in a specific way. In the BOM simulator, that notification was an in-memory method call to an event handler. In the task management system, it was an HTTP request to a specific URL (a webhook endpoint).
This fundamental concept of notifying interested parties without direct coupling is the core idea of the observer pattern. It allows the component initiating the change (e.g., a Part in the BOM, or my central task platform) to remain unaware of who is listening or how they will react. It simply announces that something has happened. This drastically reduces tight dependencies, fostering systems that are more flexible, maintainable, and scalable. Whether you’re working with objects in memory or distributed web applications across a network, the ability to communicate via events, rather than rigid, direct calls, provides a path to more robust and adaptable designs.
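The shared shape can be boiled down to a few lines. In this sketch (names are mine, purely for illustration), subscribers hand the publisher a notification strategy: in the BOM that strategy was an in-memory event handler, and in the task platform it was "POST to this URL". The publisher neither knows nor cares which it is.

```csharp
using System;
using System.Collections.Generic;

// Observer pattern sketch: interested parties register *how* they want
// to be notified; the publisher announces events without knowing who
// is listening or what they will do.
public class TaskCompletedHub
{
    private readonly List<Action<string>> _subscribers = new List<Action<string>>();

    // A subscriber could log, update a UI, or wrap an HTTP POST
    // to a webhook endpoint. The hub treats them all the same.
    public void Subscribe(Action<string> onCompleted) => _subscribers.Add(onCompleted);

    public void Publish(string taskId)
    {
        foreach (var notify in _subscribers)
            notify(taskId);
    }
}
```

Swap the Action for an event, a delegate, or an HTTP call and the design does not change; only the transport does.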


