Overview
The AI Text Generator node lets you create text content using AI models from multiple providers: Anthropic, OpenAI, Gemini, Open Source models, Groq, and Perplexity. This gives you the flexibility to choose the best model for your specific text generation needs.
Input Configuration
Each input section can be expanded or collapsed by clicking the arrow icon next to the section name, allowing you to organize your workspace and focus on the fields you need.
Prompt Section
The primary input for your text generation request:
Purpose: Enter your main question or request for the AI to process
Connection Point: Can receive prompts from other workflow nodes
Best Practices: Be clear and specific about what you want the AI to generate
Content and Context Section
Provide additional background information to influence the AI's response:
Purpose: Add relevant context, background information, or data that should inform the AI's response
Connection Point: Can receive contextual data from other workflow nodes
Instructions Section
Specify how you want the AI to structure or format its response:
Purpose: Guide the tone, style, format, or specific requirements for the generated content
Connection Point: Can receive formatting instructions from other workflow nodes
Persona Section
Define the voice, style, or perspective for the AI's response:
Purpose: Customize the AI's voice and tone to match your needs (professional, casual, expert, etc.)
Connection Point: Can receive persona definitions from other workflow nodes
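Taken together, the four sections shape a single request. As a hedged sketch (the node's actual merge logic isn't documented here, so the function and field handling below are hypothetical), this is one way prompt, context, instructions, and persona could combine into a chat payload:

```python
# Hypothetical composition of the four input sections into one chat payload.
def build_messages(prompt: str, context: str = "", instructions: str = "",
                   persona: str = "") -> list[dict]:
    system_parts = []
    if persona:
        system_parts.append(f"Persona: {persona}")
    if instructions:
        system_parts.append(f"Instructions: {instructions}")
    messages = []
    if system_parts:
        messages.append({"role": "system", "content": "\n".join(system_parts)})
    # Context is prepended to the prompt so the model reads background first.
    user_content = f"{context}\n\n{prompt}" if context else prompt
    messages.append({"role": "user", "content": user_content})
    return messages

messages = build_messages(
    prompt="Summarize the attached report in three bullet points.",
    context="Q3 revenue grew 12% while costs fell 4%.",
    instructions="Use plain language; no jargon.",
    persona="A concise financial analyst",
)
```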
AI Model Selection
Choose from multiple AI providers:
Anthropic: Claude models for reasoning and analysis
Gemini: Google's AI models for various tasks
OpenAI: GPT models including GPT-4 and GPT-3.5
Open Source: Community-developed models
Groq: High-performance inference models
Perplexity: Search-enhanced AI models
Model-Specific Options
Anthropic Models:
claude-3-7-sonnet-thinking
claude-3-haiku-20240307
claude-3-sonnet-20240229
claude-3.5-sonnet
claude-4-sonnet-thinking
claude-4-opus-latest
claude-4-opus-thinking
claude-4-sonnet-latest
claude-3-7-sonnet-latest
OpenAI Models:
GPT-5: Flagship model for coding and reasoning (400,000-token context window)
GPT-5-Mini: Cost-efficient version of GPT-5 (400,000-token context window)
GPT-5-Nano: Fastest, cheapest version, suited to summarization (400,000-token context window)
GPT-4.1: Model for complex tasks (1,047,576 input tokens, 32,768 output tokens)
GPT-4o: Advanced multimodal model (128,000 input tokens, 4,096 output tokens, training up to Oct 2023)
GPT-4o-Mini: Cost-efficient multimodal model (128,000 input tokens, 4,096 output tokens, training up to Oct 2023)
GPT-4-Turbo-Preview: Latest GPT-4 model with reduced "laziness" (128,000 input tokens, 4,096 output tokens, training up to Dec 2023)
GPT-4-0125-Preview: New GPT-4 Turbo variant (128,000 input tokens, 4,096 output tokens, training up to Dec 2023)
GPT-4-1106-Preview: GPT-4 Turbo with improved instruction following and JSON mode (128,000 input tokens, 4,096 output tokens, training up to Apr 2023)
GPT-4: Standard GPT-4 model (8,192 input tokens, 4,096 output tokens, training up to Sep 2021)
GPT-4-0613: GPT-4 snapshot with improved function calling support (8,192 input tokens, 4,096 output tokens, training up to Sep 2021)
GPT-3.5-Turbo: Updated GPT-3.5 with higher accuracy (16,385 input tokens, 4,096 output tokens, training up to Sep 2021)
GPT-3.5-Turbo-0125: Latest GPT-3.5 with bug fixes for non-English languages (16,385 input tokens, 4,096 output tokens, training up to Sep 2021)
GPT-3.5-Turbo-1106: GPT-3.5 with improved instruction following and JSON mode (16,385 input tokens, 4,096 output tokens, training up to Sep 2021)
GPT-3.5-Turbo-16k: Legacy model with extended context (16,385 input tokens, 4,096 output tokens, training up to Sep 2021)
GPT-3.5-Turbo-Instruct: Legacy-compatible GPT-3.5 model (4,096-token context window shared between input and output, training up to Sep 2021)
O3-Mini-2025-01-31: Reasoning model (200,000-token context window, 100,000 output tokens, training up to Sep 2023)
O1: Reasoning model (200,000-token context window, 100,000 output tokens, training up to Sep 2023)
Gemini Models:
gemini-1.5-pro
gemini-1.0-pro
gemini-flash
gemini-2.5-flash-lite
gemini-2.5-flash
gemini-2.5-pro
Open Source Models:
code-llama-34b-instruct
llama-2-70b-chat
mistral-7b
falcon-40b-instruct
llava-13b
Groq Models:
llama3-70b-8192
llama3-1-70b-versatile
llama3-groq-70b-8192-tool-use-preview
gemma2-9b-it
mixtral-8x7b-32768
llama-3-1-8b-instant
Perplexity Models:
sonar-pro
sonar
Advanced Settings
Access detailed configuration options by clicking on the model dropdown at the bottom of the node interface. This opens the Advanced Settings panel where you can fine-tune various parameters for your selected AI model.
Output Configuration
Auto Setting Toggle
Purpose: Automatically optimize output token limits
Manual Override: Disable to set custom token limits
Token Display: Shows current token limit (e.g., 8192)
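A minimal sketch of what the Auto toggle likely does, assuming it falls back to the model's maximum output tokens when enabled; the limits table below is illustrative, not the node's actual data:

```python
# Illustrative output-limit resolution: Auto picks the model's maximum;
# manual mode uses your value, capped at the model's ceiling.
MODEL_OUTPUT_LIMITS = {"gpt-4o": 4096, "gpt-4.1": 32768, "claude-3.5-sonnet": 8192}

def effective_max_tokens(model: str, auto: bool, manual_limit: int | None = None) -> int:
    ceiling = MODEL_OUTPUT_LIMITS.get(model, 4096)
    if auto or manual_limit is None:
        return ceiling
    return min(manual_limit, ceiling)

print(effective_max_tokens("gpt-4o", auto=True))                       # 4096
print(effective_max_tokens("gpt-4.1", auto=False, manual_limit=8000))  # 8000
```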
Model Parameters
Temperature (Creativity Control)
Range: 0.0 to 1.0
Default: 0.2 - 0.5 depending on model
Purpose: Controls randomness and creativity in responses
Low Values: More focused, deterministic output
High Values: More creative, varied responses
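To see the effect directly, a short comparison using the OpenAI Python SDK (assumes OPENAI_API_KEY is set); the same prompt is run at a low and a high temperature:

```python
from openai import OpenAI

client = OpenAI()
prompt = [{"role": "user", "content": "Name a color and one word that evokes it."}]

for temperature in (0.2, 0.9):
    # Low temperature -> focused, repeatable phrasing; high -> varied wording.
    response = client.chat.completions.create(
        model="gpt-4o-mini", messages=prompt, temperature=temperature
    )
    print(temperature, "->", response.choices[0].message.content)
```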
Top-P (Nucleus Sampling)
Range: 0.0 to 1.0
Default: 0.8 - 1.0 depending on model
Purpose: Controls the diversity of word choices
Implementation: Samples only from the smallest set of tokens whose cumulative probability reaches Top-P (see the sampling sketch after the Top-K section)
Top-K (Token Selection)
Range: 1 and up (typical values are 40-50)
Purpose: Limits the number of token choices considered
Effect: Lower values create more focused responses
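To make the Top-P and Top-K descriptions concrete, here is an educational sketch of how both filters shrink the sampling pool. Real inference engines apply these to the model's logits; the five-token distribution below is a toy example:

```python
import numpy as np

def filter_top_k_top_p(probs: np.ndarray, top_k: int, top_p: float) -> np.ndarray:
    order = np.argsort(probs)[::-1]       # tokens sorted by probability, descending
    keep = order[:top_k]                  # Top-K: keep only the k most likely tokens
    cumulative = np.cumsum(probs[keep])
    # Top-P (nucleus): keep the smallest prefix whose cumulative mass reaches top_p
    keep = keep[: np.searchsorted(cumulative, top_p) + 1]
    filtered = np.zeros_like(probs)
    filtered[keep] = probs[keep]
    return filtered / filtered.sum()      # renormalize over the surviving tokens

probs = np.array([0.45, 0.25, 0.15, 0.10, 0.05])
print(filter_top_k_top_p(probs, top_k=3, top_p=0.8))
```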
System-Specific Parameters
OpenAI Advanced Settings:
F-Penalty: Frequency penalty to reduce repetition
P-Penalty: Presence penalty to encourage topic diversity
Value Range: Typically 0.0 to 2.0; this node defaults both penalties to 0.5
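A hedged example of setting both penalties through the OpenAI Python SDK; note the underlying API accepts -2.0 to 2.0 and defaults both to 0, while the 0.5 values mirror this node's defaults:

```python
from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Write a product blurb for a hiking boot."}],
    frequency_penalty=0.5,  # F-Penalty: discourages repeating the same tokens
    presence_penalty=0.5,   # P-Penalty: encourages introducing new topics
)
print(response.choices[0].message.content)
```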
Groq Settings:
Seed Control: Set seed value for reproducible outputs
Max Tokens: Configure maximum response length
Groq Stop: Custom stop sequences
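A minimal sketch using the Groq Python SDK (assumes GROQ_API_KEY is set); seed, max_tokens, and stop correspond to the node's Seed Control, Max Tokens, and Groq Stop fields:

```python
from groq import Groq

client = Groq()
response = client.chat.completions.create(
    model="llama3-70b-8192",
    messages=[{"role": "user", "content": "List three uses for a paperclip."}],
    seed=42,          # Seed Control: best-effort reproducible sampling
    max_tokens=256,   # Max Tokens: cap on response length
    stop=["\n\n"],    # Groq Stop: custom stop sequence
)
print(response.choices[0].message.content)
```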
Perplexity Settings:
Type Selection: Choose between sonar and sonar-pro
Search Context Size: Low, Medium, or High
Integration: Web search capabilities built-in
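A hedged sketch of a Perplexity call through its OpenAI-compatible API (assumes PERPLEXITY_API_KEY is set); the web_search_options parameter and its low/medium/high values follow Perplexity's API documentation:

```python
import os
from openai import OpenAI

client = OpenAI(api_key=os.environ["PERPLEXITY_API_KEY"],
                base_url="https://api.perplexity.ai")
response = client.chat.completions.create(
    model="sonar-pro",  # Type Selection: sonar or sonar-pro
    messages=[{"role": "user", "content": "What changed in EU AI policy this month?"}],
    # Search Context Size: low, medium, or high
    extra_body={"web_search_options": {"search_context_size": "high"}},
)
print(response.choices[0].message.content)
```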
Stream Configuration
Stream Enabled: Toggle real-time response streaming
Stream Type Options:
Chat: Standard conversational streaming
Form: Structured form-based streaming
API Callback: Integration with external systems
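For the Chat stream type, the node surfaces chunks as they are generated. An illustrative streaming loop with the OpenAI Python SDK shows the underlying pattern:

```python
from openai import OpenAI

client = OpenAI()
stream = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Explain streaming in one paragraph."}],
    stream=True,  # deliver tokens as they are generated
)
for chunk in stream:
    delta = chunk.choices[0].delta.content if chunk.choices else None
    if delta:
        print(delta, end="", flush=True)
```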
Output Display
Generated Content
Main Output: Displays the AI-generated text response
Scrollable Area: Long-form content scrolls within the output pane
Copy Functionality: Easy copying of generated content
Connection Point: Output can feed into other workflow nodes
Response Management
Pagination: Navigate through multiple response segments using the page indicator (e.g., "1 - 1")
Token Usage: Consumption is shown at the top of the output (e.g., "Tokens used: 0.975")
Model Indicator: Shows selected model and estimated word count
Logs Section
Expandable Interface: Click to view detailed execution logs
Error Tracking: Monitor any issues during generation
Performance Metrics: Review response times and token usage
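A minimal sketch of the kind of metrics the Logs section reports, reading latency and token usage from an OpenAI SDK response:

```python
import time
from openai import OpenAI

client = OpenAI()
start = time.perf_counter()
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Say hello."}],
)
elapsed = time.perf_counter() - start
usage = response.usage  # token counts reported by the API
print(f"latency: {elapsed:.2f}s, prompt tokens: {usage.prompt_tokens}, "
      f"completion tokens: {usage.completion_tokens}")
```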
Execution Control
Run Prompt Button
Location: Top-right corner of the interface
Function: Initiates AI text generation
Visual Feedback: The button responds immediately when clicked
Processing: Shows generation progress and completion
Best Practices
Prompt Optimization
Be Specific: Clear, detailed prompts yield better results
Provide Context: Use Content and Context field for background information
Set Expectations: Use Instructions field to specify format and style
Define Voice: Use Persona field to establish tone and perspective
Model Selection Guidelines
Anthropic Claude: Best for reasoning, analysis, and thoughtful responses
OpenAI GPT: Excellent for creative writing and general tasks
Gemini: Strong performance across varied applications
Open Source: Cost-effective options for specific use cases
Groq: High-speed processing for real-time applications
Perplexity: Best when current web information is needed
Parameter Tuning
Start Conservative: Begin with default settings and adjust gradually
Temperature Adjustment: Lower for factual content, higher for creative tasks
Token Management: Balance response length with processing efficiency
Streaming Benefits: Enable streaming for long-form content generation
Integration Considerations
Workflow Integration
Input Connections: Connect prompts, context, and instructions from other nodes
Output Usage: Generated text can feed into other processing nodes
Multi-Model Workflows: Use different models for different types of content
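As a hypothetical illustration of multi-model routing (the task-to-model mapping below simply echoes the selection guidelines above):

```python
# Hypothetical task-to-model routing for a multi-model workflow.
ROUTES = {
    "analysis": "claude-3.5-sonnet",   # reasoning-heavy tasks
    "creative": "gpt-4o",              # creative writing
    "realtime": "llama3-70b-8192",     # low-latency Groq inference
    "research": "sonar-pro",           # web-grounded answers
}

def pick_model(task_type: str) -> str:
    return ROUTES.get(task_type, "gpt-4o-mini")  # cheap general-purpose fallback

print(pick_model("analysis"))  # claude-3.5-sonnet
```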
Performance Optimization
Model Efficiency: Choose appropriate models for task complexity
Token Management: Monitor usage to optimize costs
Streaming: Use streaming for better user experience with long responses
Caching: Consider response caching for repeated similar prompts
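Expanding on the caching note above, a minimal sketch of prompt-keyed response caching; make_request is a placeholder for whatever function issues the actual model call:

```python
import hashlib
import json

_cache: dict[str, str] = {}

def cached_generate(prompt: str, params: dict, make_request) -> str:
    # Key the cache on the exact prompt plus generation parameters.
    key = hashlib.sha256(
        json.dumps({"prompt": prompt, **params}, sort_keys=True).encode()
    ).hexdigest()
    if key not in _cache:
        _cache[key] = make_request(prompt, **params)  # only call on a cache miss
    return _cache[key]

# Usage with a stubbed request function:
result = cached_generate("Define entropy.", {"model": "gpt-4o-mini"},
                         lambda p, **kw: f"(response for: {p})")
```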
The AI Text Generator provides comprehensive text generation capabilities with extensive model choices and fine-tuned control options, enabling sophisticated content creation workflows.