Software Engineering Daily: Knowledge Graphs as Agentic Memory with Daniel Chalef
Episode Information:
- Title: Knowledge Graphs as Agentic Memory with Daniel Chalef
- Host/Author: Software Engineering Daily
- Release Date: March 25, 2025
Introduction
In this episode of Software Engineering Daily, host Kevin Ball talks with Daniel Chalef, founder of Zep, a startup building a memory layer for AI agents on top of temporal knowledge graphs. The discussion digs into the challenges of contextual memory in AI, the design and advantages of temporal knowledge graphs, and the future of agentic applications.
Understanding the Challenge of Contextual Memory in AI
Daniel Chalef opens the discussion by highlighting a fundamental issue in current AI models:
[00:00] Daniel Chalef: "Contextual memory in AI is a major challenge because current models struggle to retain and recall relevant information over time."
He contrasts human memory's ability to build long-term semantic relationships with AI systems' reliance on fixed context windows, which often causes crucial past interactions to be lost. This limitation prevents AI agents from maintaining coherent, contextually aware interactions over extended periods.
Introducing Zep and the Agentic Future
Chalef introduces Zep, his second startup, which aims to pave the way for an agentic future by ensuring that AI agents have access to the right information at the right time. He elaborates:
[01:32] Daniel Chalef: "Zep is a memory layer for agentic applications, and we focus on mid-sized companies in the enterprise. It's a very exciting space to be in."
The goal is to enable AI agents to autonomously interpret instructions, make decisions, and take actions to achieve set goals, all while retaining and utilizing long-term contextual information.
Defining AI Agents and Their Components
Kevin Ball seeks clarity on the term "agent" as Chalef uses it. Daniel provides a comprehensive definition:
[02:22] Daniel Chalef: "AI agents today have an LLM as their brain, but they also have the ability to autonomously interpret instructions and make decisions... and take actions to achieve goals that have been set for them."
He breaks down the essential components of AI agents:
- Tools: Functions that allow agents to take action, such as querying the web or generating invoices.
- Reasoning: The ability to understand and decide the next steps based on a broad understanding of their environment.
- Memory: Essential for recalling past interactions and enabling planning for future actions.
Chalef emphasizes the importance of memory in facilitating effective decision-making and planning within AI agents.
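The three components above map naturally onto the classic agent loop: reason, act via a tool, record the result in memory. A minimal sketch of that loop follows; all names here are illustrative stand-ins, not Zep's API, and a real agent would use an LLM for the reasoning step.

```python
# Minimal agent loop sketch: tools, reasoning, and memory.
# All names are illustrative; this is not Zep's API.

def search_web(query: str) -> str:
    """A stand-in 'tool' the agent can invoke to take action."""
    return f"results for: {query}"

TOOLS = {"search_web": search_web}

def reason(goal: str, memory: list) -> tuple:
    """A stand-in for the LLM 'brain': choose a tool and its input.
    A real agent would prompt an LLM with the goal plus retrieved memory."""
    return "search_web", goal

def run_agent(goal: str, memory: list) -> str:
    tool_name, tool_input = reason(goal, memory)   # reasoning
    observation = TOOLS[tool_name](tool_input)     # action via a tool
    memory.append((goal, observation))             # memory for later recall
    return observation

memory = []
result = run_agent("temporal knowledge graph papers", memory)
```

Each pass through the loop appends to `memory`, which is exactly the store a memory layer like Zep's is meant to make durable and retrievable.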
The Role of Knowledge Graphs in Memory
The conversation shifts to the core of Zep's solution: knowledge graphs. Chalef explains their significance in modeling complex relationships:
[08:00] Daniel Chalef: "Knowledge graphs are data structures that allow you to semantically model complex relationships... they can be used as a memory layer because they enable dense and well-described semantic datasets that are ideal for retrieval."
He highlights the advantages of knowledge graphs for efficient data retrieval, as well as the challenge of defining ontologies, the categories and relationship types within the graph. With the advent of Large Language Models (LLMs), Zep uses these models to automate and scale knowledge graph construction, overcoming the limitations of manual or rigid rule-based methods.
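At its simplest, a knowledge graph can be pictured as a set of subject-predicate-object triples. The toy sketch below (not Zep's implementation, which uses a graph database plus embeddings) shows why such a structure makes retrieval of facts about an entity straightforward:

```python
# Toy knowledge graph as subject-predicate-object triples.
# Illustrative only; production systems use graph databases and embeddings.

triples = set()

def add_fact(subject: str, predicate: str, obj: str) -> None:
    triples.add((subject, predicate, obj))

def facts_about(subject: str) -> list:
    """Retrieve every edge whose subject matches, sorted for stability."""
    return sorted(t for t in triples if t[0] == subject)

add_fact("Alice", "works_at", "Acme")
add_fact("Alice", "prefers", "dark_mode")
add_fact("Acme", "located_in", "Berlin")
```

Here `facts_about("Alice")` returns only the two edges rooted at Alice: a dense, well-described neighborhood of exactly the kind Chalef says is ideal for retrieval.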
Zep's Graphiti Library and Ontology Generation
Kevin Ball probes deeper into how Zep automates ontology creation. Daniel introduces Graphiti, Zep's open-source library:
[10:55] Daniel Chalef: "Graphiti builds the ontology on the fly by doing named entity recognition across the data it processes, intelligently understanding relationships and deduplicating entities."
Graphiti's ability to dynamically create and manage ontologies lets Zep handle diverse and unpredictable data inputs while keeping the knowledge graph consistent and query-friendly. This adaptability is crucial for applications where data varies significantly, such as human-agent interactions.
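The deduplication step Chalef mentions can be pictured as merging entity mentions that refer to the same thing. The hypothetical sketch below uses simple string normalization purely to illustrate the idea; Graphiti itself resolves duplicates with LLM-based reasoning, not string matching:

```python
# Hypothetical entity deduplication by normalized name.
# Illustration only: Graphiti uses LLM-based entity resolution,
# not this kind of string normalization.

def normalize(mention: str) -> str:
    """Collapse case and whitespace so variant mentions compare equal."""
    return " ".join(mention.lower().split())

def dedupe_entities(mentions: list) -> dict:
    """Map each canonical entity to the set of raw mentions it covers."""
    entities = {}
    for m in mentions:
        entities.setdefault(normalize(m), set()).add(m)
    return entities

entities = dedupe_entities(["Acme Corp", "acme  corp", "Kevin Ball", "ACME CORP"])
```

Three variant spellings of the company collapse into one canonical node, which is what keeps the graph query-friendly as messy conversational data flows in.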
Handling Type Extraction and Evolution in Knowledge Graphs
Ball and Chalef discuss the extraction and management of entity types within Graphiti:
[13:10] Kevin Ball: "How do you think about that type extraction and are those types themselves then morphable?"
[15:18] Daniel Chalef: "Developers can define custom types that are well-described... Graphiti will still extract entities that don't match existing types, allowing for organic ontology development while supporting structured types for specific business needs."
Chalef explains that Graphiti not only extracts entities but also lets developers define custom, well-described types using Python's Pydantic models. This flexibility means that as new data arrives, the knowledge graph can adapt without compromising its structural integrity.
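A developer-defined type of the kind described here might look like the following. The class and field names are invented for illustration and are not Graphiti's actual schema; only the use of Pydantic models with descriptive docstrings is drawn from the episode:

```python
# Hypothetical developer-defined entity types using Pydantic.
# Class and field names are invented; consult Graphiti's documentation
# for its actual custom-type interface.
from pydantic import BaseModel

class Customer(BaseModel):
    """A customer of the business."""
    name: str
    plan: str  # e.g. "free", "pro", "enterprise"

class Preference(BaseModel):
    """A preference a user has stated, e.g. a favorite product category."""
    category: str
    value: str

c = Customer(name="Acme Corp", plan="enterprise")
```

The docstrings and field descriptions matter: a well-described type gives the extraction LLM the context it needs to map raw text onto structured entities, while untyped entities can still land in the graph organically.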
Temporal Knowledge Graphs and Dynamic Data
A significant advancement in Zep's approach is the incorporation of temporality into knowledge graphs:
[16:50] Daniel Chalef: "Graphiti is a temporal knowledge graph... it can handle dynamic data by understanding when information becomes valid or invalid."
Chalef illustrates this with an example of a user's preference changing over time. Graphiti records both the creation and invalidation of relationships, enabling the AI agent to maintain an accurate, current understanding of user preferences.
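The preference-change example can be pictured as graph edges carrying validity intervals: asserting a new fact stamps the old, conflicting edge with an invalidation time rather than deleting it. A simplified sketch follows; the field names are illustrative, not Graphiti's schema:

```python
# Simplified temporal edge: facts carry valid_at / invalid_at timestamps.
# Field names are illustrative; Graphiti's actual schema differs.
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class Edge:
    subject: str
    predicate: str
    obj: str
    valid_at: datetime
    invalid_at: Optional[datetime] = None  # None means "still valid"

edges = [Edge("user", "prefers", "tea", datetime(2024, 1, 1))]

def assert_fact(subject: str, predicate: str, obj: str, now: datetime) -> None:
    """Record a new fact, invalidating any still-valid conflicting edge."""
    for e in edges:
        if e.subject == subject and e.predicate == predicate and e.invalid_at is None:
            e.invalid_at = now  # the old preference stops being valid now
    edges.append(Edge(subject, predicate, obj, now))

def current_facts(subject: str) -> list:
    return [(e.predicate, e.obj) for e in edges if e.subject == subject and e.invalid_at is None]

assert_fact("user", "prefers", "coffee", datetime(2025, 3, 1))
```

Because the tea preference is invalidated rather than erased, the agent can answer both "what does the user prefer now?" and "what did the user prefer in 2024?", which is the core payoff of a temporal graph.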
Scalability and Performance of Graphiti
When discussing the technical implementation, Chalef emphasizes Graphiti's scalability and efficient querying:
[27:13] Daniel Chalef: "Graphiti uses semantic and full-text indexing, allowing for near-constant time queries and the ability to retrieve relevant subgraphs quickly."
He mentions that Zep has benchmarked Graphiti against existing solutions such as MemGPT and found superior performance, especially for long-term memory recall, with lower latency and cost.
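Combining full-text and semantic indexing can be pictured as blending two ranked signals over the same candidates. The toy example below fakes both signals (keyword overlap stands in for full-text scoring, and a hand-written dictionary stands in for embedding similarity); it is not Graphiti's query engine:

```python
# Toy hybrid retrieval: blend keyword overlap with a "semantic" score.
# Real systems use BM25 and vector embeddings; both signals are faked here.

docs = {
    "e1": "user prefers coffee over tea",
    "e2": "user works at Acme Corp",
    "e3": "Acme Corp is based in Berlin",
}

def keyword_score(query: str, text: str) -> float:
    """Fraction of query words that appear in the text (full-text stand-in)."""
    q = set(query.lower().split())
    return len(q & set(text.lower().split())) / len(q)

def hybrid_search(query: str, semantic_scores: dict, alpha: float = 0.5) -> list:
    """Rank docs by a weighted blend of keyword and semantic scores."""
    return sorted(
        docs,
        key=lambda d: alpha * keyword_score(query, docs[d])
                      + (1 - alpha) * semantic_scores.get(d, 0.0),
        reverse=True,
    )

# Pretend an embedding model already scored each doc against the query.
sem = {"e1": 0.9, "e2": 0.4, "e3": 0.1}
top = hybrid_search("what does the user prefer", sem)
```

The blend surfaces the preference fact first even though the keyword overlap alone could not distinguish it from the employment fact; in a graph setting, the top hits then seed the subgraph that gets returned to the agent.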
Applications and Future Directions
Chalef shares insights into emerging domains for agentic applications:
[40:53] Daniel Chalef: "Ambient agents are particularly exciting... they monitor environments and take proactive actions, such as home automation or vehicle telemetry monitoring."
He also notes significant interest from sectors like healthcare, where automating processes such as insurance claims analysis can yield substantial efficiency gains. Additionally, Zep is exploring applications in national security and enterprise analytics, where autonomous agents can process vast datasets to generate actionable reports.
Challenges and Ongoing Work
Despite Zep's advancements, Chalef acknowledges ongoing challenges:
[37:57] Daniel Chalef: "We're focusing on bridging the gap between prototypes and production, ensuring compliance, data governance, and security for enterprise clients."
He notes that deploying comprehensive memory systems in real-world environments involves not only technical hurdles but also navigating organizational requirements and ensuring data integrity and privacy.
Philosophical Considerations and Future Research
Toward the end of the discussion, Chalef turns to the philosophical aspects of building agentic memory systems:
[45:15] Daniel Chalef: "We're dealing with dynamic data and how agents understand the world, which ties into AI safety and the broader impact on society."
He expresses keen interest in how agents can reconcile differing human perspectives, and in the potential of integrating Bayesian reasoning to bring probabilistic understanding to knowledge graphs.
Conclusion
The episode concludes with a reflection on the transformative potential of temporal knowledge graphs for building robust, memory-enabled AI agents. Daniel Chalef's insights offer a comprehensive overview of the technical innovations and practical applications driving the next generation of intelligent systems. As Zep continues to refine Graphiti and navigate the complexities of production deployment, agentic memory in AI looks poised to change how machines interact, learn, and assist across diverse domains.
Notable Quotes:
- Daniel Chalef [00:00]: "Contextual memory in AI is a major challenge because current models struggle to retain and recall relevant information over time."
- Kevin Ball [01:20]: "Daniel, welcome to the show."
- Daniel Chalef [02:22]: "AI agents today have an LLM as their brain, but they also have the ability to autonomously interpret instructions and make decisions..."
- Daniel Chalef [08:00]: "Knowledge graphs are data structures that allow you to semantically model complex relationships..."
- Daniel Chalef [15:18]: "Developers can define custom types that are well-described... Graphiti will still extract entities that don't match existing types..."
- Daniel Chalef [37:57]: "We're focusing on bridging the gap between prototypes and production, ensuring compliance, data governance, and security for enterprise clients."
- Daniel Chalef [45:15]: "We're dealing with dynamic data and how agents understand the world, which ties into AI safety and the broader impact on society."
