The GenX Pipeline
GenX works in four stages. Understanding this pipeline helps you get the most out of the platform.
Monitor
Your campaigns define what to look for on Twitter. GenX continuously scans for tweets matching your keyword rules via TwitterAPI.io’s real-time stream. When a matching tweet is found, it appears in your Conversations queue with a “Pending” status.
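The actual filtering happens inside TwitterAPI.io’s stream, but an OR-of-phrases keyword rule can be pictured as a simple text check. A minimal sketch (the function name is hypothetical, not part of GenX):

```python
# Rough illustration of OR-of-phrases keyword matching. The real filtering
# runs server-side in TwitterAPI.io's streaming backend, not in your code.
def matches_keyword_rule(tweet_text: str, rule: str) -> bool:
    """Treat the rule as quoted phrases joined by OR; match case-insensitively."""
    phrases = [p.strip().strip('"') for p in rule.split(" OR ")]
    text = tweet_text.lower()
    return any(p.lower() in text for p in phrases)

print(matches_keyword_rule("Anyone looking for CRM recommendations?",
                           '"best CRM" OR "looking for CRM"'))  # True
```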
Analyze
GenX gathers context around the tweet — the author’s profile, any thread context, and the conversation it belongs to. This context helps the AI generate better, more relevant replies.
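As a rough sketch, the gathered context might be modeled like this (field names are illustrative, not GenX’s actual schema):

```python
from dataclasses import dataclass, field

# Hypothetical shape of the context GenX assembles in the Analyze stage.
@dataclass
class TweetContext:
    tweet_text: str
    author_bio: str                                   # from the author's profile
    thread: list[str] = field(default_factory=list)   # earlier tweets in the thread

    def as_prompt_block(self) -> str:
        """Flatten the gathered context into a block an LLM prompt can embed."""
        lines = [f"Author bio: {self.author_bio}"]
        lines += [f"Earlier in thread: {t}" for t in self.thread]
        lines.append(f"Tweet to reply to: {self.tweet_text}")
        return "\n".join(lines)
```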
Generate
Using your chosen LLM provider (OpenAI, Anthropic, etc.), GenX crafts a reply that:
- Matches your selected tone (friendly, witty, bold, etc.)
- Stays under 280 characters (Twitter’s limit)
- Feels natural and human — not like a bot
- Addresses the specific content of the original tweet
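The hard constraints in that list can be pictured as a post-generation check. A minimal sketch, assuming a hypothetical `validate_reply` helper (GenX’s internal validation may differ):

```python
# Sketch of post-generation checks implied by the reply constraints.
TWITTER_LIMIT = 280  # Twitter's character limit

def validate_reply(reply: str) -> list[str]:
    """Return a list of violated constraints; an empty list means the reply passes."""
    problems = []
    if not reply.strip():
        problems.append("empty reply")
    if len(reply) > TWITTER_LIMIT:
        problems.append(f"too long: {len(reply)} > {TWITTER_LIMIT} characters")
    return problems
```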
Review & Post
Drafted replies wait in your Conversations queue for your approval. Once approved, GenX posts the reply to Twitter through your bot account.
Campaign → Conversation → Reply
Here’s how the data flows:
Campaigns
A campaign is your search filter. It defines:
- Keywords — Twitter search query (e.g., "best CRM" OR "looking for CRM")
- Interval — How often to check for new tweets (30 seconds to 24 hours)
- Status — Active (monitoring) or Stopped
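In code terms, a campaign boils down to those three fields. A minimal sketch (you configure these through the GenX UI; the class is purely illustrative):

```python
from dataclasses import dataclass

# Hypothetical in-code view of a campaign's three fields.
@dataclass
class Campaign:
    keywords: str          # Twitter search query, e.g. '"best CRM" OR "looking for CRM"'
    interval_seconds: int  # 30 seconds up to 86_400 seconds (24 hours)
    active: bool = True    # Active (monitoring) or Stopped

    def __post_init__(self) -> None:
        if not 30 <= self.interval_seconds <= 86_400:
            raise ValueError("interval must be between 30 seconds and 24 hours")
```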
Conversations
Every matching tweet becomes a conversation in your queue. A conversation contains:
- The original tweet (text, author, media)
- Context tweets (if it’s part of a thread)
- Draft replies (AI-generated, waiting for approval)
- Posted replies (sent to Twitter)
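A rough sketch of that record, with illustrative field names (not GenX’s actual schema):

```python
from dataclasses import dataclass, field

# Hypothetical shape of a conversation record.
@dataclass
class Conversation:
    original_tweet: dict                                      # text, author, media
    context_tweets: list[dict] = field(default_factory=list)  # thread context, if any
    draft_replies: list[str] = field(default_factory=list)    # awaiting approval
    posted_replies: list[str] = field(default_factory=list)   # already sent to Twitter

    def awaiting_approval(self) -> bool:
        """True when at least one AI draft is waiting for a human decision."""
        return bool(self.draft_replies)
```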
Statuses
| Status | Meaning |
|---|---|
| Pending | New tweet, no reply generated yet |
| Draft | AI reply generated, waiting for approval |
| Posted | Reply successfully posted to Twitter |
| Failed | Reply couldn’t be posted (auth expired, rate limit, etc.) |
| Ignored | Manually skipped — won’t generate a reply |
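The table implies a simple lifecycle. A sketch of the transitions, under the assumption that Pending leads to Draft or Ignored and Draft leads to Posted or Failed (this is an inference from the table, not GenX’s documented state machine):

```python
from enum import Enum

class Status(Enum):
    PENDING = "pending"
    DRAFT = "draft"
    POSTED = "posted"
    FAILED = "failed"
    IGNORED = "ignored"

# Assumed transitions; whether Failed replies can be retried is not
# specified by the table, so Failed is modeled as terminal here.
ALLOWED = {
    Status.PENDING: {Status.DRAFT, Status.IGNORED},  # generate a draft, or skip
    Status.DRAFT: {Status.POSTED, Status.FAILED},    # posting succeeds or fails
    Status.POSTED: set(),
    Status.FAILED: set(),
    Status.IGNORED: set(),
}

def can_transition(current: Status, nxt: Status) -> bool:
    """Check whether a status change is allowed in this sketch of the lifecycle."""
    return nxt in ALLOWED[current]
```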
Bot Accounts
GenX posts replies through bot accounts — Twitter accounts that you control. Every new user gets a free bot account to start with. Bot accounts need to be logged in to post replies. Login sessions last 24 hours and auto-renew when active.
LLM Providers
GenX supports 5 AI providers for reply generation:
| Provider | Models | Best For |
|---|---|---|
| OpenRouter | 200+ models | Flexibility — switch models anytime |
| OpenAI | GPT-4o, GPT-4o Mini | Reliability and speed |
| Anthropic | Claude Sonnet, Haiku | Tone matching and nuance |
| Groq | Llama 3.3, Mixtral | Ultra-fast generation |
| Google | Gemini 2.5 Flash | Cost-effective with large context |