EventOps

Events Don't Have a People Problem. They Have a Memory Architecture Problem.


Key takeaway: The bottleneck in your event operation probably isn’t your people. It’s the absence of a system that lets their knowledge persist beyond the events they work on. Structured coordination data captured during execution turns each edition into a compounding asset instead of a fresh start.


You know this situation: you need more people who can make good decisions independently, but the gap between your experience and theirs is too large. So you become the decision-making bottleneck, pulled into every project.

And how much travel can be demanded of you and your best people?

And anyone who wasn’t there for the last couple of editions can’t contribute independently; they simply don’t have the necessary context.

After running events for close to two decades and losing good people along the way, I’ve learned this pattern is structural. It’s built into how events work. Events are temporary organizations. Teams assemble, execute under intense pressure, and disband.

I used to think the solution was better people, better handovers, or more thorough debriefs. Based on my experience, I believe the real issue is deeper. According to a report from NASA’s Inspector General, the agency’s longstanding “Lessons Learned” database was no longer viable. NASA has since moved to new approaches that let users find all relevant information within a document by entering a single natural-language query, making the process far more targeted than sorting through an endless list of links. But even for NASA, collecting and organizing these lessons remains a crucial challenge.

If you keep reading, you’ll learn:

  • Why events suffer from a specific type of organizational forgetting that debriefs and handover documents cannot solve.
  • What “memory architecture” means in practice, and how it differs from documentation.
  • How structured coordination data can turn each event edition into a compounding asset instead of a fresh start.

Why Events Keep Resetting

As an ex-academic, I can’t resist returning to research and highlighting the work of my fellow Swedish academics, Rolf A. Lundin and Anders Söderholm, who coined the term “temporary organizations” in 1995. The defining feature is what could be called “institutionalized termination.” The organization is designed to end. And when it ends, the knowledge leaves with it.

I think most event professionals feel this intuitively, but don’t frame it this way. You build something intense over months. The team gels. People learn the quirks of the venue, the suppliers, and the talent. Then the event wraps, the team scatters, and next year the process starts over again. Some of my own events ran for 10 years straight.

Researchers who study organizational forgetting (de Holan and colleagues at MIT Sloan) identified two types that I think hit events the hardest: “failure to capture” and “memory decay.” Failure to capture occurs when knowledge was never recorded in the first place. Memory decay is when it was recorded somewhere, but the record degraded or became unfindable. Both are structural problems. They happen regardless of how talented your team is.

What makes this worse at the group level is something I found surprising in the research. Skill decay in teams is driven less by individuals forgetting their jobs and more by “poorer coordination as a result of forgetting.” People forget how to work together. The timing. The shorthand. The unspoken agreements about who handles what. That coordination layer is almost impossible to capture in a handover document.

Why Debriefs and Handovers Don’t Solve This

The natural instinct when knowledge keeps disappearing is to write more down. Longer debriefs. More detailed handover documents. Better templates.

I was a big proponent of “working on your business rather than in your business,” à la Michael E. Gerber, and had my team and me write endless system manuals for everything. That only took us part of the way: the effort of creating, and especially maintaining, those manuals made most of them unusable very quickly.

Debriefs capture what people remember, not what actually happened. They’re written after the fact, filtered through recency bias. The crisis in the last hour gets three paragraphs. The subtle scheduling conflict that set off a chain reaction on day one is forgotten: it was resolved in the moment, people were too busy to note it, or it was judged not significant enough to record.

Even NASA, with far more resources than any event organization, struggled with this. As I understood it, their lessons-learned database became unusable because a search term returned a wall of links to sort through. NASA rebuilt it as a knowledge graph to move away from “plain text search” on narrative documents into something more structured.

I think the issue is that “narrative-based knowledge capture” (reports, debriefs, handbooks) could work if the data volume were small and the reader had time to read. That is rarely true in the event industry. We can be dealing with hundreds of decisions, many of them made under the pressure of the live event. And the person who needs that knowledge next year might be reading it at 2 AM while prepping for load-in.

What Memory Architecture Actually Means

When I use the term “memory architecture,” I mean something specific. I mean structured, timestamped, attributable coordination data captured during execution. The key phrase is “during execution.”

A debrief says, “We had problems with the AV setup.” A structured coordination log records when the AV task was flagged, who responded, the decision made, the resolution time, and whether the same issue appeared in the previous edition.

The difference matters because structured data can be searched, compared, and analyzed. A narrative in a Word document cannot, at least not reliably. You can ask a structured system: “Show me every task that was flagged as urgent in the last 48 hours before gates opened, across the last three editions.” You cannot ask that question of a folder of PDF debriefs.

This is essentially what process mining does in industries like manufacturing and logistics. It uses structured event logs (activity name, timestamp, resource, outcome) to discover actual process flows and identify bottlenecks. The same logic applies to event coordination: every task assignment, status update, and decision point generates data that can be structured and stored.
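To make the idea concrete, here is a minimal sketch of what such a structured coordination log and a cross-edition query could look like. All field names (`edition`, `urgency`, `flagged_at`, `gates_open`, `resolved_at`) and the data are illustrative assumptions, not a real schema or product:

```python
from datetime import datetime, timedelta

# Hypothetical coordination log: one structured record per task event,
# timestamped and attributable, captured during execution.
log = [
    {"edition": 2023, "task": "AV line check", "urgency": "urgent",
     "flagged_at": datetime(2023, 6, 9, 14, 0),
     "gates_open": datetime(2023, 6, 10, 12, 0),
     "resolved_at": datetime(2023, 6, 9, 16, 30), "responder": "stage_mgr"},
    {"edition": 2024, "task": "AV line check", "urgency": "urgent",
     "flagged_at": datetime(2024, 6, 7, 15, 0),
     "gates_open": datetime(2024, 6, 8, 12, 0),
     "resolved_at": datetime(2024, 6, 7, 18, 0), "responder": "stage_mgr"},
    {"edition": 2024, "task": "signage print", "urgency": "routine",
     "flagged_at": datetime(2024, 6, 1, 9, 0),
     "gates_open": datetime(2024, 6, 8, 12, 0),
     "resolved_at": datetime(2024, 6, 2, 9, 0), "responder": "ops"},
]

def urgent_before_gates(log, window_hours=48):
    """Tasks flagged urgent within `window_hours` of gates opening, any edition."""
    return [r for r in log
            if r["urgency"] == "urgent"
            and timedelta(0) <= r["gates_open"] - r["flagged_at"]
                             <= timedelta(hours=window_hours)]

hits = urgent_before_gates(log)
for r in hits:
    # The same task recurring across editions is exactly the pattern
    # a folder of PDF debriefs would never surface.
    print(r["edition"], r["task"], r["resolved_at"] - r["flagged_at"])
```

The same question asked against a folder of narrative debriefs would require someone to read every document; here it is a one-line filter, and it works identically across any number of editions.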

The raw material is already in place in most event operations. It’s in chat apps, email chains, shared spreadsheets, and the memories of your most experienced people. The problem is that it’s unstructured and scattered. When someone leaves or when a new edition starts, that raw material is effectively inaccessible.

Structuring it during execution (rather than trying to reconstruct it afterward) changes what’s possible. You move from “let me tell you what I remember” to “let me show you what happened.”

From Fresh Start to Compounding Assets

When coordination data is structured and carried forward from one edition to the next, something fundamental shifts. Instead of always starting from scratch, each edition adds a layer to an existing foundation.

That changes how quickly a new team member can contribute productively.

Instead of spending weeks wading through a fog, hoping that senior members have time to support them with missing context, they can prompt against the existing structured data themselves. They can, for example, see for themselves which tasks repeatedly took longer than expected.

Even for the experienced team, it is a game-changer. They can verify a gut feeling about a recurring problem, then visualize and quantify it to rally the resources needed to fix the underlying issue.

I think the most underrated benefit is what it does for decision-making under pressure. During execution, when things go sideways (and they always do), having a structured way to access how similar situations were handled previously gives your team a reference point. Think of it as a coach standing next to you with every playbook written down, helping you weigh your options.

This is what we are building at MergeLabs. I believe the event industry has a genuine opportunity here, and I think it’s one that the current generation of tools has largely missed.

The bottleneck in your operation probably isn’t your people. Based on what I’ve seen, it’s the absence of a system that lets their knowledge persist beyond the events they work on. When that system exists, you stop losing your best people’s knowledge. And you stop needing to be in every room yourself.

In a follow-up article, I’ll dig into how event communication architecture is shifting. Specifically, why moving from chat channels to activity-based nodes changes what your team can retrieve, search, and learn from after the event is over.

If you want to see how we are solving this at MergeLabs, reach out for a digital coffee. I can talk about this topic for hours.

Frequently Asked Questions

What is memory architecture for events?

Memory architecture is structured, timestamped coordination data captured during event execution, not after. Unlike debriefs or handover documents, it records what actually happened: when tasks were flagged, who responded, what decisions were made, and how long resolutions took. This structured data can be searched, compared, and analyzed across editions.

Why don’t debriefs and handover documents solve knowledge loss?

Debriefs capture what people remember, not what actually happened. They are written after the fact, filtered through recency bias, and degrade quickly. Even NASA’s lessons learned database failed as a plain-text search system. The volume of decisions in event operations is too large and the context too complex for narrative-based knowledge capture to work reliably.

How does structured coordination data compound across event editions?

When coordination data carries forward, each edition builds on the previous one instead of starting fresh. New team members can see operational context from past editions. Recurring bottlenecks become visible through pattern comparison. And during live execution, teams can reference how similar situations were handled before, reducing dependence on any single person’s memory.

coordination knowledge-management organizational-memory eventtech

Want to see this in action?

Book a demo and see how MergeLabs handles coordination for events like yours.

Book a Demo