Large Memory

Create searchable document databases and query them with natural language.

Overview

The Large Memory node enables you to create searchable document databases called "memory pods" and query them using natural language. This node uses vector database technology to store and retrieve information from your documents, making it ideal for document analysis, knowledge base creation, and content research.
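FluxPrompt handles the vector storage and retrieval internally, but the underlying idea is simple: documents and queries are converted to numeric vectors, and the documents whose vectors are most similar to the query vector are returned. The sketch below illustrates the concept with a toy word-count embedding; real vector databases use learned dense embeddings with hundreds of dimensions.

```python
import math

def tokenize(text):
    return text.lower().split()

def embed(text, vocab):
    # Toy embedding: word counts over a fixed vocabulary (real vector
    # databases use learned dense embeddings instead).
    words = tokenize(text)
    return [float(words.count(w)) for w in vocab]

def cosine(a, b):
    # Cosine similarity: 1.0 for identical direction, 0.0 for no overlap.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def search(query, documents):
    # Rank stored documents by vector similarity to the query.
    vocab = sorted({w for d in documents + [query] for w in tokenize(d)})
    q = embed(query, vocab)
    return max(documents, key=lambda d: cosine(q, embed(d, vocab)))

docs = ["invoice payment terms and due dates",
        "employee onboarding checklist"]
print(search("when is the invoice payment due", docs))
# → invoice payment terms and due dates
```

Because similarity is computed over meaning-bearing overlap rather than exact string matching, queries phrased in natural language can still surface the right document.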

Memory Pod Management

Memory Pod Selection

The Memory Pod section controls which document collection you're working with:

  • Dropdown Menu: Shows "None" when no pod is selected, or displays available memory pods

  • Pod Selection: Click the dropdown to choose from previously created memory pods

  • Connection Point: Can receive pod selection from other workflow nodes

Creating New Memory Pods

Create new document collections using the add button next to the Memory Pod section:

Memory Pod Creation Interface

When creating a new memory pod, a modal opens with the following options:

Memory Pod Naming

  • Name Field: Enter a descriptive name for your memory pod

  • Purpose: Helps identify and organize different document collections

Document Upload Options

Choose from three document sources:

  • Upload from computer: Select files directly from your local device

  • FluxPrompt Uploads: Access previously uploaded documents in FluxPrompt

  • FluxPrompt Objects: Use existing FluxPrompt workflow objects

File Selection Interface

  • Computer Upload: "Choose Files" button to browse and select local documents

  • FluxPrompt Uploads: Dropdown showing available uploaded documents (e.g., extracted PDFs, text files)

  • FluxPrompt Objects: Dropdown showing workflow objects and processed data

Memory Pod Creation

  • Create Memory Pod Button: Finalizes the creation process with selected documents

  • Close Option: Cancel creation and return to the main interface

Editing Memory Pods

Modify existing memory pods using the edit button (pencil icon):

  • Add Documents: Include additional files in the existing pod

  • Remove Documents: Delete specific documents from the pod

  • Pod Management: Reorganize or update pod contents

  • Delete Pod: Remove the entire memory pod and its contents

Search and Query Interface

Prompt Section

Configure your search queries in the Prompt area:

  • Query Input: Enter natural language questions about your documents

  • Placeholder Text: "Type something to search in vector database"

  • Connection Point: Can receive queries from other workflow nodes

  • Search Context: Queries are processed against the selected memory pod only
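The node's internals are not exposed, but conceptually, restricting a query to the selected memory pod just means scoping the search space to that pod's documents before ranking, as in this hypothetical sketch (pod names and the naive term-overlap scoring are illustrative, not FluxPrompt's actual implementation):

```python
# Hypothetical illustration: memory pods as named document collections.
# A query only ever sees the documents inside the selected pod.
memory_pods = {
    "contracts": ["master service agreement", "non-disclosure agreement"],
    "hr-docs":   ["vacation policy", "expense policy"],
}

def search_pod(pod_name, query):
    pod = memory_pods[pod_name]  # scope search to the selected pod only
    terms = set(query.lower().split())
    # Naive relevance: count query terms appearing in each document.
    return max(pod, key=lambda doc: len(terms & set(doc.split())))

print(search_pod("hr-docs", "what is the vacation policy"))
# → vacation policy
```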

AI Model Configuration

Select the AI model for processing queries:

  • Model Dropdown: Shows current model (e.g., "OpenAI: gpt-4")

  • Model Options: Choose from available AI models for query processing

  • Performance: Different models offer varying capabilities for document analysis

Advanced Settings

Access detailed configuration through the Settings panel:

Search Configuration

Return Raw Toggle

  • Purpose: Control output format of search results

  • Options: Enable to receive raw data, disable for formatted responses

Search Type Selection

  • Similarity Search: Find documents most similar to your query

  • Similarity Search (NER): Enhanced search with Named Entity Recognition
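FluxPrompt does not document how the NER-enhanced variant works internally. One common pattern, shown as a sketch below, is to extract named entities from both query and documents and give entity matches extra weight on top of the base similarity score; the capitalized-word "entity extractor" here is a deliberate stand-in for a real NER model.

```python
def naive_entities(text):
    # Stand-in for a real NER model: treat capitalized words as entities.
    return {w for w in text.split() if w[:1].isupper()}

def score(query, doc):
    # Base similarity: number of shared lowercase terms.
    base = len(set(query.lower().split()) & set(doc.lower().split()))
    # NER enhancement: shared named entities count double.
    boost = 2 * len(naive_entities(query) & naive_entities(doc))
    return base + boost

docs = ["Acme Corp quarterly revenue report",
        "General guide to writing revenue reports"]
query = "Acme Corp revenue figures"
print(max(docs, key=lambda d: score(query, d)))
# → Acme Corp quarterly revenue report
```

The entity boost helps when a query names a specific person, company, or product: documents mentioning that exact entity outrank documents that merely share generic vocabulary.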

  • OpenAI Models: Full range including GPT-5, GPT-4.1, the GPT-4o series, GPT-3.5 variants, and reasoning models (o3-mini, o1)

Search Parameters

Top K Setting

  • Purpose: Control number of search results returned

  • Default: 10 results

  • Range: Adjustable based on query requirements
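Top K simply keeps the K highest-scoring results from the similarity search. A minimal sketch using Python's standard library (the scores and chunk names are invented for illustration):

```python
import heapq

# Each result is a (similarity_score, chunk) pair from the vector search.
results = [
    (0.91, "refund policy section"),
    (0.34, "shipping overview"),
    (0.78, "return window details"),
    (0.15, "company history"),
    (0.66, "warranty terms"),
]

def top_k(results, k=10):
    # Keep only the k most similar chunks; the node defaults k to 10.
    return heapq.nlargest(k, results)

print(top_k(results, k=3))
# → [(0.91, 'refund policy section'), (0.78, 'return window details'),
#    (0.66, 'warranty terms')]
```

A smaller K gives tighter, more focused answers; a larger K trades precision for broader coverage of the pod.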

Search Execution

Search in Memory Pod Button

Execute queries against your selected memory pod:

  • Function: Processes natural language queries against stored documents

  • Processing: Uses vector similarity to find relevant document sections

  • Results: Returns contextually relevant information from your documents

Output Display

Search Results

View query results in the Output section:

  • Document Excerpts: Relevant sections from your stored documents

  • Contextual Answers: AI-generated responses based on document content

  • Source Attribution: Information about which documents provided the answers

  • Connection Point: Results can be passed to other workflow nodes
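Contextual answers of this kind are typically produced by passing the retrieved excerpts to the selected AI model along with the question (the retrieval-augmented generation pattern). The sketch below shows only the prompt-assembly step; the filenames and instruction wording are illustrative, and the actual model call is omitted.

```python
def build_answer_prompt(query, excerpts):
    # Assemble retrieved excerpts into a prompt for the selected AI model.
    # Each excerpt is a (source, text) pair, so the model can attribute
    # its answer to specific documents.
    context = "\n".join(f"[{src}] {text}" for src, text in excerpts)
    return (
        "Answer using only the excerpts below. Cite sources in brackets.\n\n"
        f"Excerpts:\n{context}\n\nQuestion: {query}\nAnswer:"
    )

excerpts = [
    ("handbook.pdf", "Employees accrue 1.5 vacation days per month."),
    ("faq.txt", "Unused vacation days roll over for one year."),
]
print(build_answer_prompt("How many vacation days do I get?", excerpts))
```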

Use Cases and Applications

Document Analysis

  • Research Assistance: Query large document collections for specific information

  • Content Discovery: Find relevant sections across multiple documents

  • Knowledge Extraction: Extract insights from extensive document libraries

Knowledge Base Creation

  • Internal Documentation: Create searchable company knowledge bases

  • Reference Materials: Store and query technical documentation, manuals, and guides

  • Educational Resources: Build searchable collections of learning materials

Workflow Integration

  • Dynamic Queries: Receive search terms from other workflow nodes

  • Automated Research: Integrate document search into larger automation workflows

  • Content Processing: Combine with other nodes for comprehensive document workflows

Best Practices

Memory Pod Organization

  • Descriptive Names: Use clear, descriptive names for easy identification

  • Thematic Grouping: Group related documents in the same memory pod

  • Size Management: Balance pod size with search performance and relevance

Query Optimization

  • Specific Questions: Use clear, specific queries for better results

  • Natural Language: Write queries as natural questions rather than keywords

  • Context Consideration: Frame queries with appropriate context for your documents

Model Selection

  • Task Matching: Choose AI models based on your specific document analysis needs

  • Performance Balance: Consider speed vs. accuracy requirements

  • Cost Management: Select models appropriate for your usage patterns

The Large Memory node transforms static document collections into dynamic, queryable knowledge bases, enabling sophisticated document analysis and information retrieval within FluxPrompt workflows.