
Claude Made Agent Memory Real. But Semantics and Ontology Are Still Missing


Claude’s newest Managed Agents announcement is a useful marker for where enterprise AI is going. Anthropic describes “dreaming” as a scheduled process that reviews agent sessions and memory stores, extracts patterns, and curates memories so agents improve over time; the same release also introduces outcomes and multi-agent orchestration in Managed Agents (Claude blog). Anthropic’s memory docs describe memory stores as workspace-scoped collections of text documents that can be attached to a Managed Agent session, mounted into the session’s container under /mnt/memory/, and read or written by the agent with the same file tools it uses elsewhere (Claude API Docs). Anthropic’s memory launch post also says these memories are file-based, exportable, manageable through the API, and built with scoped permissions, audit logs, and rollback/versioning controls (Claude blog).
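To make the primitive concrete, here is a minimal sketch of what file-based memory could look like from inside the session container, assuming nothing beyond ordinary filesystem access. The /mnt/memory/ mount point comes from Anthropic’s docs; the store layout and notes.md convention below are my own illustration, not a documented API.

```python
from pathlib import Path

# /mnt/memory/ is the documented mount point; the notes.md layout
# below is a hypothetical convention, not part of Anthropic's API.
MEMORY_ROOT = Path("/mnt/memory")

def append_note(store: str, note: str) -> None:
    """Append a plain-text observation to a memory store's notes file."""
    store_dir = MEMORY_ROOT / store
    store_dir.mkdir(parents=True, exist_ok=True)
    with open(store_dir / "notes.md", "a", encoding="utf-8") as f:
        f.write(f"- {note}\n")

def read_notes(store: str) -> str:
    """Read back everything remembered in this store so far."""
    notes_file = MEMORY_ROOT / store / "notes.md"
    return notes_file.read_text(encoding="utf-8") if notes_file.exists() else ""
```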
That is exactly the right direction. Agents need memory they can inspect, edit, carry across sessions, and share within clear boundaries. A file-based memory store is especially interesting because it treats memory as part of the agent’s working environment rather than a black-box retrieval service bolted onto the side.
But a place to remember is not the same as memory that understands. A pile of notes is storage. Useful memory needs semantics and ontology. Semantics tells the system what something is. Ontology tells the system why it matters from a particular perspective.
Take a rock. Semantically, it is a rock: shape, weight, material, location, texture. But to a tired hiker, it is a chair. To a sculptor, it is raw material. To a geologist, it is evidence. To a construction crew, it is an obstacle. Same object. Same facts. Different perspective. Different meaning.
Personal memory has this problem everywhere. “Call Ravi next week” is not useful unless the system knows what Ravi means to you. Is Ravi an investor, a friend, a student, a customer, a doctor, or a co-founder? The reminder has the same words in every case, but the memory should behave differently depending on the relationship.
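Here is that distinction as a tiny, hypothetical sketch: the semantic facts of the reminder never change, and only the relationship lens changes what the system should do with it. All names, fields, and mappings are illustrative.

```python
from dataclasses import dataclass

@dataclass
class Reminder:
    # Semantics: what the note literally says.
    person: str
    action: str
    due: str

# Ontology: what the person means to you, and therefore how the
# same reminder should behave. These mappings are illustrative.
RELATIONSHIP_BEHAVIOR = {
    "investor": "attach the latest metrics deck before the call",
    "friend": "a gentle nudge on the day is enough",
    "customer": "pull open tickets and renewal status first",
    "doctor": "surface the last visit summary and open questions",
}

def prepare(reminder: Reminder, relationship: str) -> str:
    behavior = RELATIONSHIP_BEHAVIOR.get(relationship, "just remind")
    return f"{reminder.action} {reminder.person} ({reminder.due}): {behavior}"

ravi = Reminder(person="Ravi", action="Call", due="next week")
print(prepare(ravi, "investor"))   # same words, different preparation
print(prepare(ravi, "friend"))
```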
That is why personal memory is hard. A person’s ontology is fluid because we are many people at once. We are parents, friends, colleagues, patients, customers, managers, students, and founders, often in the same day. The same artifact can change meaning as the person changes frame.
Organizations are different. They still have politics, ambiguity, and competing incentives, but they also have roles: sales, finance, legal, support, engineering, procurement, operations, leadership. Roles give organizational memory more stable ontologies.
A supplier delay email has one semantic structure: vendor, purchase order, delayed item, original date, revised date, reason. To procurement, it is supplier risk. To finance, it is an accrual issue. To operations, it is a production delay. To legal, it may be contractual exposure.
Same file. Same semantic facts. Different ontology.
That is the point I want to make about Company Brain. Claude gives agents a place to remember. Sentra’s bet is that company memory also needs structure: files as source, semantics as extraction, ontology as perspective, and a permissioned memory graph as the substrate agents can use.
The next chapter of enterprise AI will not be one assistant that knows everything. It will be a company that remembers enough for humans and agents to act from the same reality without forcing everyone to see that reality the same way.
In Part 5, I argued that memory has to become shared semantic state rather than a feature inside individual tools. That argument was about the substrate. This piece is about what sits above it, because the substrate alone does not create adoption. The thing that creates adoption is the lens layer.
The dominant enterprise AI playbook is still one assistant per company, with one company-wide model of how the work fits together. Even when companies invest heavily in that direction, adoption stays uneven. Sales uses it one way, product another, support barely at all, legal almost never, and leadership turns it into a search box. The interface looks unified, but the use does not.
The reason is that the assistant is implicitly carrying one ontology. One ontology cannot fit the ways different functions actually see their work.
Custom ontologies are the real path to organization-wide AI adoption. Adoption is not optimized by giving every employee the same assistant and hoping they prompt it well. That still asks people to translate their work into the AI’s interface. The better direction is the reverse: AI should understand the way each part of the company already sees the work.
Sales thinks in accounts, stakeholders, objections, renewal risk, buying committees, and commitments. Product thinks in signals, roadmap tradeoffs, launch blockers, regressions, feature requests, and customer pain. Support thinks in severity, recurrence, workarounds, customer impact, escalations, and resolution paths. Legal thinks in obligations, approvals, exposure, redlines, and commitments. Leadership thinks in strategy, drift, execution gaps, ownership, and company state.
These are not separate companies. They are different ways the same company understands the same work.
The architectural move is to separate the substrate from the lens. The substrate is shared, permissioned, inspectable, and durable. The lens is functional, customizable, and per-vertical. A company defines a broad ontology of how it operates, and each function defines a narrower ontology of how it sees its slice. The substrate holds the same memory underneath. The lens decides what that memory means in context.
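A minimal sketch of that separation, reusing the supplier-delay email from earlier: the substrate holds one semantic record, and each function registers a lens that decides what the record means. The field names, vendor, and lens logic are all assumptions for illustration.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class SupplierDelay:
    # Substrate: one shared semantic record, extracted once.
    vendor: str
    purchase_order: str
    item: str
    original_date: str
    revised_date: str
    reason: str

# Lenses: per-function interpretations of the same record.
LENSES: dict[str, Callable[[SupplierDelay], str]] = {
    "procurement": lambda d: f"supplier risk: {d.vendor} slipped {d.item}",
    "finance": lambda d: f"accrual issue: PO {d.purchase_order} moves to {d.revised_date}",
    "operations": lambda d: f"production delay: {d.item} now lands {d.revised_date}",
    "legal": lambda d: f"possible contractual exposure with {d.vendor}",
}

delay = SupplierDelay("Acme Metals", "PO-4412", "steel brackets",
                      "2025-03-01", "2025-03-15", "port congestion")
for role, lens in LENSES.items():
    print(f"{role}: {lens(delay)}")   # same record, four meanings
```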
This is also why file-based memory is such an interesting starting point, but not the whole destination. In Claude’s Managed Agents system, memory stores can be attached at session creation, and multiple stores can be attached when different parts of memory have different owners or access rules (Claude API Docs). That maps naturally to the enterprise problem because memory does not have one owner. Some memory belongs to a user, some to a project, some to a function, and some to the company.
The harder question is how those memories become meaningful. If a customer issue is discussed in a meeting, appears again in a Slack thread, becomes a support ticket, triggers a sales concern, and later changes a roadmap priority, those should not become five unrelated memories. They should become one evolving memory with different traces: a support trace, a customer trace, a product trace, a revenue trace, an action trace, and a decision trace. Each trace exists because an ontology decided what mattered.
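One way to picture that: a single memory node that accumulates typed traces as the same issue resurfaces across channels. This shape is my own sketch, not Sentra’s schema or Anthropic’s, and the customer and sources are invented.

```python
from dataclasses import dataclass, field

@dataclass
class Trace:
    kind: str     # "support", "customer", "product", "revenue", "action", "decision"
    source: str   # where this trace came from: meeting, Slack thread, ticket, ...
    note: str

@dataclass
class Memory:
    subject: str
    traces: list[Trace] = field(default_factory=list)

    def add(self, kind: str, source: str, note: str) -> None:
        self.traces.append(Trace(kind, source, note))

# One evolving memory; every trace exists because an ontology decided it mattered.
issue = Memory("checkout latency reported by a key account")
issue.add("customer", "meeting", "raised on the quarterly call")
issue.add("support", "ticket", "reproduced, workaround shipped")
issue.add("revenue", "slack thread", "account executive flags renewal risk")
issue.add("decision", "roadmap review", "fix pulled into the next sprint")
```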
When this becomes mature, the interface to AI changes. Most AI at work is still reactive. You open a chat box, ask a question, paste context, wait for an answer, and then go do the work. Even when the answer is good, the burden is still on you to know what to ask, where the context lives, and what should happen next.
Mature company memory flips that script. The system already knows what happened across meetings, messages, emails, tickets, workflows, and actions. It knows what usually matters to your role, which ontology applies, which conditions changed, and what action may be available. So it can bring the work to you instead of waiting for you to go looking for it.
That is the product shift I care about most: AI brings the context, the human makes the decision, and agents complete the task. It sounds simple, but it changes who carries the burden of context.
A manager starts the day and sees that a customer commitment was made in a meeting last week but never became work. The system does not wait for the manager to search for it. It says what happened, where it came from, who was involved, why it matters, and what action is available. The manager makes the decision in thirty seconds. Without the substrate, that commitment gets surfaced two months later by an angry customer.
A CEO sees that a strategic initiative is drifting because the decisions in leadership meetings are not matching the work being created in the operating teams. That is the kind of thing a dashboard usually misses because the dashboard sees metrics after the fact. Company memory can see the gap earlier because it remembers the commitments, the rationale, the handoffs, and the actions that followed or failed to follow. The CEO does not need a new report. The CEO needs the gap surfaced in the moment it opens.
An agent working on a support escalation needs more than the latest ticket. It should know that the same issue appeared in a customer call, that sales promised a workaround, that product deprioritized the fix, and that the customer is up for renewal. The same memory looks different depending on whether the agent is briefing support, preparing an account executive, or warning product about roadmap risk.
Proactive AI needs boundaries because the more useful it becomes, the more those boundaries matter. Ontology determines what matters. Permissions determine who can see it. Provenance shows where it came from. Action memory shows what has already been done. Humans approve the meaningful decisions. Agents execute within guardrails.
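As a rough sketch of how those boundaries might compose before anything is surfaced, here is a hypothetical gate; the types, fields, and checks are all assumptions, and the human-approval step sits outside it.

```python
from dataclasses import dataclass

@dataclass
class MemoryItem:
    topic: str
    provenance: str | None   # where it came from; None means unsourced
    actioned: bool           # action memory: has this already been handled?
    visible_to: set[str]     # permission boundary, expressed as role names

@dataclass
class Viewer:
    role: str
    relevant_topics: set[str]   # the viewer's ontology, crudely approximated

def should_surface(item: MemoryItem, viewer: Viewer) -> bool:
    """Gate a proactive system might run before bringing work to a person."""
    return (
        item.topic in viewer.relevant_topics   # ontology: does this matter to the role?
        and viewer.role in item.visible_to     # permissions: may this viewer see it?
        and item.provenance is not None        # provenance: never surface unsourced memory
        and not item.actioned                  # action memory: skip what is already done
    )
```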
Without those boundaries, proactive AI becomes noise or surveillance. With those boundaries, it becomes a way for the company to act from shared reality.
Here is the bet I am making, specific enough to be wrong. Within eighteen months, the gap between companies that have built shared semantic state and companies that have bolted agents onto fragmented data will become measurable in a particular way. It will not show up in AI usage metrics because both groups will have high usage. It will show up in the rate at which decisions translate into action.
The first group will close the loop between meetings and work, commitments and follow-through, strategy and execution. The second group will keep generating activity that does not compound. The metric to watch is not how often people use AI. It is how often the company successfully does the thing it decided to do.
That is what mature company memory enables. The useful parts of interactions become memory. The memory becomes a model of how the company works. The model brings the next important thing to the person who can do something about it. As traces accumulate, the system learns which signals matter, which actions work, which commitments tend to slip, which handoffs fail, and which decisions create downstream consequences. The company starts to learn from itself.
Claude’s memory announcement matters because it makes persistent agent memory feel like infrastructure rather than a product gimmick. A memory store mounted into an agent’s filesystem is a good primitive. But enterprise memory cannot stop at notes. Notes have to become semantic. Semantic objects have to become traces. Traces have to be interpreted through the right ontology. The ontology has to respect the role, the permission boundary, and the action surface.
Factual memory tells the company what exists and what happened. Interaction memory preserves why things happened and how people reasoned. Action memory remembers what was done, when, and by whom. The semantic substrate holds it together. Ontologies decide how different people and agents should understand it.
The end state is not one AI that knows everything. It is shared company memory: one reality, many lenses, and humans and agents acting from the same state.
That is the bet. One substrate, many lenses, work that compounds. The thing we are building toward is not a smarter assistant. It is a company that remembers itself.
---
At Sentra, we are building what can only be described as a “company brain”: a shared intelligence and memory layer that sits on top of all communication channels, knowledge bases, and action and agent traces to understand how everyone in an organization actually works and how work actually gets done, constructing a living world model of the entire company in near real time.