
Tencent Cloud open-sources Agent Memory, cutting token consumption by up to 61%
According to Zhitong Finance APP, on May 14th Tencent Cloud officially open-sourced TencentDB Agent Memory, which provides short-term memory compression and long-term personalized memory for long-running Agent tasks. Long-term memory was made available for free last month; this open-source release focuses on short-term memory compression. The project is currently compatible with mainstream Agent frameworks such as OpenClaw and Hermes and supports one-click integration.
TencentDB Agent Memory reportedly combines "Context Offloading" with a "Mermaid Task Canvas": complete information is offloaded to external storage, while key states and execution paths are retained in a structured task graph. This lets the Agent keep a lightweight context throughout a long task while still supporting layered tracing and recovery of the original information.
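The offloading pattern described above can be sketched in a few lines: full step records go to external storage, and the in-context view is a compact Mermaid task graph whose nodes carry only short labels and storage keys. Note that this is an illustrative sketch of the general technique, not the actual TencentDB Agent Memory API; all class and method names here are hypothetical.

```python
class OffloadingMemory:
    """Illustrative sketch of context offloading plus a Mermaid task canvas.

    Full records are kept in external storage (here, a plain dict standing
    in for a database); the agent's prompt only ever sees the compact
    Mermaid graph, and originals stay recoverable by key. Names are
    hypothetical, not the real TencentDB Agent Memory interface.
    """

    def __init__(self):
        self.store = {}   # "external" storage: key -> full record
        self.nodes = []   # (key, short label) in execution order

    def add_step(self, label, full_record):
        key = f"n{len(self.nodes)}"
        self.store[key] = full_record     # offload the complete information
        self.nodes.append((key, label))   # retain only a short label in context
        return key

    def canvas(self):
        """Render the lightweight in-context view as a Mermaid flowchart."""
        lines = ["flowchart TD"]
        for key, label in self.nodes:
            lines.append(f'    {key}["{label}"]')
        for (a, _), (b, _) in zip(self.nodes, self.nodes[1:]):
            lines.append(f"    {a} --> {b}")
        return "\n".join(lines)

    def recover(self, key):
        """Layered tracing: fetch the original record on demand."""
        return self.store[key]


mem = OffloadingMemory()
k1 = mem.add_step("fetch data", "GET /orders returned 1,204 rows ...")
k2 = mem.add_step("summarize", "Aggregated revenue by region ...")
print(mem.canvas())     # compact graph that stays in the agent's context
print(mem.recover(k1))  # full record recovered from external storage
```

The point of the design is that the prompt grows by one short node label per step rather than by the full tool output, which is where the token savings in long tasks would come from.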
In multi-task continuous-session experiments, the solution reduced token consumption by up to 61% while improving task success rates in long-task scenarios.
