How FluxPrompt Handles Your Data

Understand how your prompts, queries, and content are processed across AI providers, search engines, and within the FluxPrompt platform.

Overview

When you build and run agents in FluxPrompt, your inputs — including prompts, context, instructions, and search queries — are transmitted to third-party AI providers and search engines in order to generate results. This article explains exactly what data is sent, to whom, and how session history and memory are managed within the platform.

FluxPrompt acts as an orchestration layer. It does not train its own AI models. The intelligence behind text generation, image creation, and search is provided by external services, each governed by their own data and privacy policies.


How Prompts and Data Are Sent to AI Providers

What Gets Transmitted

When you run an AI Text Generator node (or any AI-powered node), the following inputs are packaged and sent to the selected AI provider's API:

  • Prompt: Your main instruction or question

  • Content and Context: Any background data or connected node output you've provided

  • Instructions: Formatting or structural guidance

  • Persona: Voice or tone definitions

  • Model Parameters: Temperature, Top-P, Top-K, token limits, and any provider-specific settings

These are combined into a single API request and sent to the provider you have selected.
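As an illustration, the combined request can be sketched as a single JSON body. This is a hypothetical sketch: the field names (`messages`, `temperature`, `top_p`, `max_tokens`) follow common chat-completion API conventions, not FluxPrompt's actual internal format.

```python
import json

def build_provider_request(prompt, context="", instructions="", persona="",
                           temperature=0.7, top_p=1.0, max_tokens=1024):
    """Combine a node's inputs into one request body (illustrative field names)."""
    # Persona and instructions typically become the system message;
    # content/context is prepended to the user prompt.
    system = "\n".join(part for part in (persona, instructions) if part)
    user = f"{context}\n\n{prompt}" if context else prompt
    return {
        "messages": [
            {"role": "system", "content": system},
            {"role": "user", "content": user},
        ],
        "temperature": temperature,
        "top_p": top_p,
        "max_tokens": max_tokens,
    }

body = build_provider_request(
    prompt="Summarize the quarterly report.",
    context="Revenue grew 12% year over year.",
    instructions="Respond in bullet points.",
    persona="You are a concise financial analyst.",
)
print(json.dumps(body, indent=2))
```

Everything in that body, including the context and persona text, leaves FluxPrompt in a single call to the provider you selected.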

AI Provider Data Policies

| Provider | Model Examples | Data Retention |
| --- | --- | --- |
| Anthropic | Claude 3, Claude 3.5 | ✅ 0-day retention — API inputs and outputs are not stored or used for training |
| OpenAI | GPT-4, GPT-4o | Refer to OpenAI's API policy |
| Google Gemini | Gemini 1.5 Pro, Flash | ✅ 0-day retention — API inputs are not used to train Google's models by default |
| Groq | Llama, Mixtral via Groq | ✅ 0-day retention — API data is not stored or used for training by default |
| Perplexity | Sonar, Sonar Pro | ✅ 0-day retention — API data is not retained or used for model training |
| Open Source | Various community models | Varies by model host |

Important: FluxPrompt does not control how third-party AI providers store, log, or use data submitted through their APIs. We strongly recommend reviewing each provider's API data usage policy before sending sensitive or personally identifiable information through any node.

API vs. Consumer Products

Data sent through FluxPrompt uses each provider's API endpoint, not their consumer-facing products (e.g., ChatGPT, Claude.ai). API data handling policies are often different — and in many cases more restrictive — than consumer product policies. Refer to each provider's API-specific terms for accurate details.


Web Search Data Flow and Third-Party Search Engines

What Gets Transmitted

When you run a Web Search node, your search query — including any text passed in from connected nodes — is sent to the selected search engine's API. The following data leaves FluxPrompt:

  • Search query: The exact terms entered or passed into the node

  • Configuration parameters: Country/region setting, language preference, results count, and search type (web, news, images, patents, etc.)
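For illustration, the outbound search request reduces to a small set of query parameters. The parameter names below (`q`, `gl`, `hl`, `num`, `type`) are borrowed from common search-API conventions and are assumptions, not FluxPrompt's actual wire format.

```python
def build_search_request(query, country="us", language="en",
                         num_results=10, search_type="web"):
    """Package a Web Search node's inputs as API query parameters (illustrative names)."""
    return {
        "q": query,            # exact search terms, including connected-node text
        "gl": country,         # country/region setting
        "hl": language,        # language preference
        "num": num_results,    # results count
        "type": search_type,   # web, news, images, patents, ...
    }

params = build_search_request("FluxPrompt data policy", country="de", language="de")
```

Note that any text a connected node feeds into the query field is included verbatim in `q` and therefore leaves FluxPrompt.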

Search Engine Data Policies

Each search engine applies its own data policy to queries sent through its API. Review the published API data policy for the engine you select:

  • Google

  • Bing

  • DuckDuckGo

Privacy Tip: If privacy is a concern for your use case, select DuckDuckGo as your search engine. DuckDuckGo does not store personal search histories or track users across queries.

Search Results

Results returned by search engines — including titles, URLs, snippets, and metadata — are passed back into your agent's data pipeline. These results may subsequently be sent to an AI provider if connected to a downstream node such as AI Text Generator.
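A sketch of that hand-off: search results are flattened into text and attached as context for the next node. The result keys (`title`, `url`, `snippet`) mirror the fields listed above; the helper itself is hypothetical.

```python
def results_to_context(results, limit=3):
    """Flatten search results into a text block for a downstream AI node's context."""
    lines = [f"- {r['title']} ({r['url']}): {r['snippet']}" for r in results[:limit]]
    return "\n".join(lines)

results = [
    {"title": "Example Domain", "url": "https://example.com",
     "snippet": "This domain is for use in illustrative examples."},
]
context = results_to_context(results)
```

If this text block is wired into an AI Text Generator node, the search results themselves are then transmitted to the selected AI provider as part of the prompt.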


Memory, Session History, and Data Retention

Session Behavior

FluxPrompt agents operate on a per-run basis. By default:

  • Each time you click Run Prompt, a new request is constructed and sent to the AI provider

  • No conversation history or prior run context is automatically carried forward between separate runs

  • The AI model receives only what is present in the node's input fields at the time of execution

Passing Memory Between Runs

If you need the AI to retain context across multiple interactions, you must explicitly pass prior outputs as inputs:

  • Connect the output of a previous node to the Content and Context input of the next node

  • Use a memory or variable node to store and retrieve session state within the agent workflow

  • Chain nodes together so that context accumulates through the agent pipeline

Without this explicit configuration, each run is stateless from the AI provider's perspective.
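The stateless behavior and the explicit hand-off can be sketched as follows. Here `model_call` stands in for any AI provider request; the chaining logic mirrors wiring one node's output into the next node's Content and Context input.

```python
def run_with_memory(model_call, prompts):
    """Run a sequence of prompts, carrying each output forward as context."""
    context = ""
    outputs = []
    for prompt in prompts:
        # Explicitly prepend the prior output; omit this and every run is stateless.
        full_input = f"{context}\n\n{prompt}" if context else prompt
        output = model_call(full_input)
        outputs.append(output)
        context = output
    return outputs

# Demo with a stand-in model that reports the size of what it received.
echo = lambda text: f"seen {len(text)} chars"
print(run_with_memory(echo, ["first", "second"]))
# → ['seen 5 chars', 'seen 20 chars']
```

The second run receives 20 characters rather than 6 because the first run's output was explicitly carried forward; the provider itself remembers nothing between the two calls.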

Token Usage and Logs

  • Token usage is displayed per run and tracked within the node interface

  • Logs are available in the expandable Logs section of each node and contain execution details, response times, and error information

  • Logs are session-scoped and intended for debugging purposes

FluxPrompt Data Retention

FluxPrompt retains data necessary to operate the platform, including agent configurations, run logs, and account information. For full details on what FluxPrompt stores, how long it is retained, and your rights regarding that data, refer to the FluxPrompt Privacy Policy.


General Privacy Best Practices

Avoid Sending Sensitive Data

  • Do not include personally identifiable information (PII), financial data, health records, or confidential business information in prompts unless you have verified the AI provider's enterprise data handling terms

  • Use placeholder or anonymized data when testing agent workflows

Choose Providers Intentionally

  • Review the data policy of each AI provider before selecting them for sensitive workflows

  • Enterprise agreements with providers like OpenAI or Anthropic may offer stronger data protections — consult your organization's agreements if applicable

Limit Data Scope

  • Only pass the context and content strictly necessary for the task

  • Avoid connecting nodes in ways that inadvertently expose more data than the AI model needs

Monitor Token Logs

  • Regularly review the Logs section to understand what data is being processed

  • Use token usage metrics to identify unexpectedly large payloads that may indicate over-sharing of context
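One rough way to spot oversized payloads before a run is to estimate the token count of the text being sent. The four-characters-per-token figure is a common heuristic for English prose, not an exact provider count; the budget value is an arbitrary example.

```python
def approx_tokens(text: str) -> int:
    """Rough token estimate: ~4 characters per token for English prose."""
    return max(1, len(text) // 4)

def is_oversized(text: str, budget: int = 4000) -> bool:
    """Flag context that likely exceeds the chosen token budget before sending it."""
    return approx_tokens(text) > budget
```

Comparing this estimate against the actual token usage shown per run can reveal nodes that are forwarding far more context than the task requires.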


Related Articles

  • AI Text Generator

  • Web Search

  • Agent Architecture

  • Settings Overview

