Should memory (as you present it) be multi-level? Roughly: don't simply remember "Did this result in a good customer outcome?", evaluated only at the end of the chain, but rather apply that rubric along the entire "reasoning" chain, so that each level of the search... each asker/responder... is able to say "I remember this being helpful".
Distributed memory through the model (sidenote: this is rapidly leaving vanilla LLM territory imho) might be a powerful facilitator...
This is a fab comment - an incredibly sophisticated observation that really pushes the boundaries of what's possible.
My framework deliberately only has two core reasoning patterns (Plan-Execute-Reflect and ReAct). This being the case, I could theoretically instrument each step of those cycles to track memory utility at the granular level you're describing. The reasoning chains are well-defined and discrete, which would make step-wise tracking feasible.
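To make that concrete, here is a minimal sketch of what step-wise utility tracking could look like. This is illustrative only: MemoryRecord, credit_memories, and the crediting weights are assumptions invented for the example, not part of the framework as it stands.

```python
# Minimal sketch: crediting memory utility per reasoning step rather
# than only at the end of the chain. All names here are hypothetical.
from collections import defaultdict
from dataclasses import dataclass, field

@dataclass
class MemoryRecord:
    content: str
    # Utility tallied per step ("plan", "execute", "reflect", ...),
    # not just per final customer outcome.
    step_utility: dict = field(default_factory=lambda: defaultdict(float))

def credit_memories(used: list[MemoryRecord], step: str, helped: bool) -> None:
    """After each step of a Plan-Execute-Reflect or ReAct cycle, record
    "I remember this being helpful" against that specific step."""
    for mem in used:
        mem.step_utility[step] += 1.0 if helped else -0.25  # weights are arbitrary

# Inside the reasoning loop, each cycle credits the memories it drew on:
#   plan_mems = retrieve_for_step("plan", query)   # hypothetical retrieval call
#   ... run the plan step ...
#   credit_memories(plan_mems, "plan", helped=plan_step_succeeded)
```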
But honestly, what you're describing is probably a $100 million problem in its own right. The level of reasoning chain instrumentation, step-aware memory optimisation, and distributed reasoning intelligence you're suggesting would be transformative for the entire AI industry.
This article was intended to help folks think about a more fundamental problem we're facing right now: how do we properly administer agent memories in production? Currently, most systems either keep everything (leading to performance degradation) or delete based on age (losing valuable knowledge), just like DBAs used to archive transaction data based on arbitrary time windows.
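To illustrate the gap, here is a sketch of today's age-based default next to a utility-aware alternative. Field names and thresholds are invented for the example:

```python
import time

def evict_by_age(memories, max_age_days=30):
    """The DBA-style policy: archive on an arbitrary time window,
    losing valuable knowledge along with the stale."""
    cutoff = time.time() - max_age_days * 86400
    return [m for m in memories if m["created_at"] >= cutoff]

def evict_by_utility(memories, keep=1000):
    """A utility-aware policy: keep the memories that have earned
    their place, however old they are."""
    ranked = sorted(memories, key=lambda m: m["utility"], reverse=True)
    return ranked[:keep]
```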
What you're describing would essentially create reasoning-aware memory systems where the agent learns not just what information is useful, but when and how different memories contribute to different types of thinking. That's genuinely revolutionary territory. And it is do-able.
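Building on the earlier sketch, reasoning-aware retrieval might condition on the current step type, so a memory that has repeatedly helped during reflection ranks higher when the agent is reflecting. Again, the scoring blend and the embedding field are assumptions for illustration:

```python
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def retrieve_for_step(memories, query_embedding, step, k=5):
    """Rank memories by semantic relevance plus a bonus for how often
    they have helped at this kind of reasoning step."""
    def score(mem):
        relevance = cosine_similarity(query_embedding, mem.embedding)  # assumes an embedding field
        return relevance + 0.1 * mem.step_utility.get(step, 0.0)  # 0.1 blend is illustrative
    return sorted(memories, key=score, reverse=True)[:k]
```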
Thank you for the kind (nay, inspiring) words. The clarity of your presentation made it possible.
While I understand the rationale, it still astounds me that more funding isn't put towards cracking the persistent memory issue. It's the foundation upon which we can then layer reasoning and learning to really accelerate AI capability. Glad you are pushing on this.
Thanks. The Avatar analogy has a couple of points of relevance for me - when you really tackle this subject, you see early on that it's a Pandora's box of problems to solve. I think this challenge is swept under the carpet for this reason most of the time - it's a tough problem to fix. Even Microsoft CTO Kevin Scott said recently that "I think the thing that's missing right now with our agents is like they are conspicuously missing memory...".
Agreed.