
Small Memory

Store and retrieve flow outputs for persistent memory across executions.


Overview

The Small Memory node enables you to record and store outputs from flows, creating persistent memory that can be accessed across multiple executions. This node acts as a data repository, allowing you to build conversation history, maintain session context, or accumulate data over time for use by other nodes in your flows.

Input Configuration

Stored Content Section

Purpose: Displays and manages all stored data entries

Data Collection: Records outputs from connected nodes automatically when the flow executes

Connection Point: Receives data inputs from other nodes via the connector on the node's left side

Data Format: Stores entries with timestamps, roles, and content in a structured table format

Initial State: Shows placeholder text "Record and store all outputs of a flow by connecting generators to this node"

Session Management

Session Selection

Session Dropdown: Choose between different storage sessions using the dropdown (e.g., "Test", "Primary")

Session Types: Each session is a separate data storage container for organizing related information

Session Creation: Click the "+" icon to create new sessions through the "Create New Session" dialog

Session Naming: Enter descriptive names in the Session Name field when creating new sessions

Session Operations

Clear Entries: Remove all stored data from the current session using the Clear Entries button

Data Persistence: Sessions maintain data across multiple flow executions until manually cleared

Session Switching: Switch between sessions to access different data sets
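
Conceptually, sessions behave like named containers of entries that persist until they are cleared. The sketch below models the operations described above; the dictionary, function names, and entry fields are illustrative assumptions, not the node's actual implementation.

```python
# Conceptual model of sessions: named containers that persist entries
# until cleared. Names and structure here are illustrative only.
sessions: dict[str, list[dict]] = {"Primary": [], "Test": []}

def create_session(name: str) -> None:
    # Mirrors the "Create New Session" dialog
    sessions.setdefault(name, [])

def clear_entries(name: str) -> None:
    # Mirrors the "Clear Entries" button
    sessions[name].clear()

# Switching sessions simply changes which container receives new entries
active_session = "Test"
sessions[active_session].append({"role": "User", "content": "Hello"})
```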

Advanced Settings

Access configuration options by clicking the settings icon:

Clear Execution

Purpose: Automatically clear stored entries on every flow execution

Toggle Setting: Enable or disable automatic clearing behavior

Use Case: Useful when you want fresh data for each execution rather than accumulating entries
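
In effect, enabling Clear Execution empties the stored entries at the start of each run before new outputs are recorded. A minimal sketch, assuming entries are held in a simple list (the function and variable names are hypothetical):

```python
# With Clear Execution enabled, each run starts from an empty store
# instead of accumulating entries across runs. Illustrative only.
clear_on_execution = True

def run_flow(stored: list[dict], new_outputs: list[dict]) -> list[dict]:
    if clear_on_execution:
        stored = []             # discard entries from previous executions
    stored.extend(new_outputs)  # record this execution's outputs
    return stored
```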

Agent Session

Purpose: Enable or disable agent session mode for AI interactions

Toggle Setting: Controls how the memory integrates with conversational AI systems

Use Case: Optimizes data format for chatbot and AI assistant applications

Store by User Session

Purpose: Record entries in separate sessions organized by customer ID or user ID

Session Management: Organize data by individual users or customer sessions

Create New Session: Generate new user-specific storage containers

Data Segregation: Keep different users' data separate and organized
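
Storing by user session amounts to keying entries by a customer or user identifier so that each user's history stays isolated. A minimal sketch under that assumption (the identifiers and field names are hypothetical):

```python
# Per-user storage: entries are grouped under a customer ID / user ID
# so different users' data never mixes. Illustrative only.
user_sessions: dict[str, list[dict]] = {}

def store_for_user(user_id: str, entry: dict) -> None:
    user_sessions.setdefault(user_id, []).append(entry)

store_for_user("customer-001", {"role": "User", "content": "Where is my order?"})
store_for_user("customer-002", {"role": "User", "content": "Reset my password"})
# user_sessions["customer-001"] contains only customer-001's entries
```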

Data Output Configuration

Entry Filtering

Pass Last X Entries: Control how many recent entries to output to connected nodes

Configuration Options:

  • Pass All Entries: Output complete stored data set

  • Pass Last X Entries: Output specified number of most recent entries

Entry Count: Set the number of entries to pass when using "Pass Last X Entries" mode; see the sketch below

Output Control: Fine-tune data volume sent to downstream nodes
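
The two output modes differ only in how much of the stored history is forwarded. The sketch below illustrates the idea, assuming entries are kept in chronological order (the function itself is hypothetical):

```python
# "Pass All Entries" forwards everything; "Pass Last X Entries" forwards
# only the X most recent entries. Illustrative only.
def entries_to_pass(entries: list[dict], mode: str, x: int = 5) -> list[dict]:
    if mode == "Pass All Entries":
        return entries
    return entries[-x:]  # the X most recent entries, oldest first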

Data Structure

Table Format: Data displayed with columns for #, Timestamp, Role, and Content

Entry Numbering: Sequential numbering for easy reference

Timestamp Tracking: Automatic timestamp recording for each stored entry

Role Classification: Entries categorized by role (User, Assistant, etc.)

Content Storage: Full content preservation with expandable view options
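
Put another way, each row of the table can be thought of as a record with an index, timestamp, role, and content. The field names below are illustrative, not the node's internal schema:

```python
# One stored entry, corresponding to a single row of the table
# (#, Timestamp, Role, Content). Field names are illustrative only.
entry = {
    "index": 1,                           # sequential entry number (#)
    "timestamp": "2024-05-14T10:32:07Z",  # recorded automatically
    "role": "User",                       # e.g. User or Assistant
    "content": "What are your opening hours?",
}
```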

Best Practices

Memory Organization

Session Naming: Use descriptive names for different memory purposes or contexts

Data Categorization: Separate different types of conversations or data flows into distinct sessions

Regular Maintenance: Periodically clear old or unnecessary entries to maintain performance

Session Strategy: Plan session structure based on your application's data organization needs

Performance Optimization

Entry Limits: Use "Pass Last X Entries" to control data volume for better performance

Selective Clearing: Clear sessions when data is no longer needed rather than accumulating indefinitely

Memory Monitoring: Track stored data volume to prevent excessive memory usage

Execution Strategy: Consider using "Clear Execution" for temporary data storage needs

Integration Patterns

Conversation Memory: Store chat history for AI assistants and chatbots

Data Accumulation: Collect outputs from multiple flow executions for analysis

Session Context: Maintain user-specific context across interactions

Historical Reference: Keep records for audit trails or data analysis
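
For the conversation-memory pattern in particular, stored entries with User and Assistant roles map naturally onto the message history a downstream AI generator consumes. A minimal sketch of that mapping (the helper and target message format are assumptions, not the node's documented output):

```python
# Convert stored entries into a chat-style message list for a downstream
# AI generator. The target format here is an assumption for illustration.
def to_chat_messages(entries: list[dict]) -> list[dict]:
    return [{"role": e["role"].lower(), "content": e["content"]} for e in entries]

history = [
    {"role": "User", "content": "Hi, I need help with my invoice."},
    {"role": "Assistant", "content": "Sure, which invoice number is it?"},
]
print(to_chat_messages(history))
```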

Integration Considerations

Flow Architecture

Input Connections: Connect various generator nodes to automatically store their outputs

Output Distribution: Stored data can feed into AI generators, analysis nodes, or reporting systems

Memory Chaining: Use multiple Small Memory nodes for different data categories

Session Coordination: Plan session usage across related flows for consistent data organization

Data Management

Entry Volume: Monitor storage capacity and implement appropriate clearing strategies

Data Quality: Ensure meaningful data is being stored rather than excessive noise

Access Patterns: Design output filtering based on downstream node requirements

Backup Considerations: Plan for data persistence and recovery needs

Use Cases

Chatbot Memory: Maintain conversation history for AI assistants

User Sessions: Track individual user interactions and preferences

Data Collection: Accumulate results from repeated flow executions

Context Preservation: Maintain relevant information across flow sessions

Audit Logging: Keep records of flow outputs for compliance or analysis

The Small Memory node provides data persistence across flow executions, enabling memory management and context preservation for automation and AI interaction workflows.
