Documentation / AI Features

AI Features

Luminal CMS has AI woven into its core. This guide covers provider setup, in-editor content generation, podcast creation, automated tasks, and MCP server integration.

AI Resources Module

The AI Resources module is the central hub for all AI configuration. Access it from AI Tools > AI Resources in the admin navigation.

Key Features

  • Multi-provider management with a card-based UI
  • Add, edit, test, and remove AI providers
  • Set a default active provider
  • MCP server configuration
  • Image provider settings

Setting Up AI Providers

Luminal supports four provider types:

Anthropic Claude

Models: Claude Opus 4.6, Sonnet 4.5, Haiku 4.5

  1. In AI Resources, click + Add Provider.
  2. Select Anthropic Claude as the type.
  3. Enter a friendly label (e.g., "Claude Opus").
  4. Paste your Anthropic API key.
  5. Select the model from the dropdown.
  6. Click Save, then Test Connection to verify.

OpenAI GPT

Models: GPT-4o, GPT-4o-mini, GPT-4-turbo, o1

Same setup flow — enter your OpenAI API key and select the model.

Google Gemini

Google’s Gemini family is a fully supported provider in Luminal CMS. Setup is straightforward once you know where to go; the main source of confusion is that Google has two different consoles and multiple ways to get an API key. This section walks you through both paths.

Available Models

  • Gemini 2.5 Flash: best for general use, content generation, and fast responses. Max output: 65,536 tokens. Recommended default (fast, capable, cost-effective).
  • Gemini 2.5 Pro: best for complex reasoning, analysis, and long-form writing. Max output: 65,536 tokens. Most capable Gemini model, at higher cost.
  • Gemini 2.0 Flash: legacy compatibility only. Max output: 8,192 tokens. Retiring June 2026.
  • Gemini 2.0 Flash Lite: quick tasks at low cost. Max output: 8,192 tokens. Retiring June 2026.

Path A: Google AI Studio (Easiest — Recommended)

This is the fastest way to get a Gemini API key. No billing account required for the free tier. Takes about 2 minutes.

  1. Go to aistudio.google.com/apikey.
  2. Sign in with your Google account (any Gmail or Google Workspace account works).
  3. Click “Create API key”.
  4. Google will either create a new Google Cloud project automatically, or ask you to select an existing one. If prompted, click “Create API key in new project” — this creates a project behind the scenes with the Gemini API already enabled. You don’t need to configure anything in Google Cloud Console.
  5. Your API key appears on screen. Copy it now and store it somewhere safe; you can return to this page to view it again later.
  6. That’s it. Head to Luminal CMS and add it as a provider (steps below).

Free tier: Google AI Studio API keys include a free tier with rate limits (typically 15 requests per minute and 1 million tokens per day for Flash models). This is generous enough for content generation, AI Assist, and Agent Scheduler tasks; you only need billing if you exceed these limits.
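If you want to confirm a new key works before adding it to the CMS, you can call Google's generateContent REST endpoint directly. A minimal sketch using only the Python standard library — the model name and prompt are illustrative choices, not values Luminal requires:

```python
import json
import os
import urllib.error
import urllib.request

API_HOST = "https://generativelanguage.googleapis.com"

def build_request(api_key: str, model: str = "gemini-2.5-flash",
                  prompt: str = "Reply with the word OK."):
    """Build the URL and JSON body for the generateContent REST endpoint."""
    url = f"{API_HOST}/v1beta/models/{model}:generateContent?key={api_key}"
    body = {"contents": [{"parts": [{"text": prompt}]}]}
    return url, json.dumps(body).encode("utf-8")

def verify_key(api_key: str) -> bool:
    """Return True if the key is accepted (HTTP 200), False otherwise."""
    url, data = build_request(api_key)
    req = urllib.request.Request(
        url, data=data, headers={"Content-Type": "application/json"}
    )
    try:
        with urllib.request.urlopen(req, timeout=15) as resp:
            return resp.status == 200
    except (urllib.error.HTTPError, urllib.error.URLError):
        return False

if __name__ == "__main__":
    key = os.environ.get("GEMINI_API_KEY", "")
    print("Key valid:", verify_key(key) if key else "set GEMINI_API_KEY first")
```

A 400 "API key not valid" response here means the same fix applies as in the troubleshooting section below: regenerate the key in AI Studio.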

Path B: Google Cloud Console (Advanced)

If you already have a Google Cloud project (e.g., for NotebookLM, YouTube Data API, or other Google services), you can create the key there instead. This path gives you more control over billing, quotas, and API restrictions.

  1. Go to console.cloud.google.com and sign in.
  2. Select or create a project:
    • Click the project dropdown at the top of the page (next to “Google Cloud”).
    • Either select an existing project or click “New Project”, give it a name (e.g., “Luminal CMS”), and click Create.
  3. Enable the Gemini API:
    • In the search bar at the top, type “Generative Language API” and select it from the results (it may also appear as “Gemini API” or “Gemini for Google Cloud”).
    • Click “Enable”. If it says “Manage” instead, the API is already enabled.
    • Alternative: Navigate to APIs & Services → Library, search for “Generative Language”, and enable it from there.
  4. Create an API key:
    • Go to APIs & Services → Credentials (or search “Credentials” in the top bar).
    • Click “+ Create Credentials” at the top, then select “API key”.
    • Your new key appears in a dialog. Copy it.
  5. Optional — Restrict the key (recommended for production):
    • Click “Edit API key” (or click the key name in the Credentials list).
    • Under “API restrictions”, select “Restrict key” and check only “Generative Language API”.
    • Under “Application restrictions”, you can restrict by IP address if your server has a static IP. This prevents the key from being used elsewhere if it leaks.
    • Click Save.

Billing: Google Cloud Console projects may require a billing account for API access beyond the free tier. If you see “PERMISSION_DENIED” or “billing account required” errors, go to Billing in the Cloud Console sidebar and link a billing account to your project. You will not be charged unless you exceed the free tier limits.

Adding Google Gemini to Luminal CMS

Once you have your API key (from either path above):

  1. In the admin panel, navigate to AI Tools → AI Resources.
  2. Click + Add Provider.
  3. Set Type to “Google”.
  4. Enter a Label (e.g., “Gemini Flash” or “Google Gemini”).
  5. Select a Model from the dropdown. Gemini 2.5 Flash is recommended for most use cases — it’s fast, capable, and cost-effective.
  6. Paste your API key into the API Key field.
  7. Click Save.
  8. Click the “Test” button on the provider card. You should see a green “Connection successful” message with the model name confirmed.
  9. If you want Gemini as your default provider, click “Set Active” on the card. The active provider is used by AI Assist, Agent Scheduler, and all content generation features.

Troubleshooting Google Gemini

  • "API key not valid": the key is malformed or was deleted. Fix: generate a new key at aistudio.google.com/apikey.
  • "PERMISSION_DENIED": the Generative Language API is not enabled, or billing is required. Fix: enable the API in Cloud Console → APIs & Services → Library, and link a billing account if prompted.
  • "RESOURCE_EXHAUSTED": rate limit or daily quota exceeded. Fix: wait for the daily quota reset, or upgrade to a paid plan in Cloud Console → Billing.
  • "Model not found": the model ID doesn't exist or was retired. Fix: switch to a current model (Gemini 2.5 Flash or 2.5 Pro); the 2.0 models retire June 2026.
  • Connection timeout: the server can't reach Google's API. Fix: check server firewall rules; the CMS needs outbound HTTPS access to generativelanguage.googleapis.com.
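For the connection-timeout case, a quick way to check outbound reachability from the CMS host is to open a TLS connection to Google's API endpoint. A small sketch — the hostname and port are the real Gemini API endpoint; the helper itself is illustrative:

```python
import socket
import ssl

def can_reach(host: str = "generativelanguage.googleapis.com",
              port: int = 443, timeout: float = 5.0) -> bool:
    """Return True if an outbound TLS connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            ctx = ssl.create_default_context()
            with ctx.wrap_socket(sock, server_hostname=host):
                return True
    except OSError:  # covers DNS failure, refused/timed-out connections, TLS errors
        return False

if __name__ == "__main__":
    ok = can_reach()
    print("Gemini API reachable" if ok else "Blocked - check firewall/proxy rules")
```

If this fails but general internet access works, look for an egress firewall or proxy rule blocking the googleapis.com domain specifically.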

Google Gemini vs. AI Studio vs. Cloud Console — What’s the Difference?

Google AI Studio (aistudio.google.com) is Google’s lightweight developer tool for experimenting with Gemini. It can generate API keys with one click. Behind the scenes, it creates a Google Cloud project for you — you just don’t have to navigate the Cloud Console to do it.

Google Cloud Console (console.cloud.google.com) is the full cloud management platform for all Google services. It’s where you manage billing, quotas, service accounts, and API restrictions. You need it for NotebookLM setup (service accounts), but for basic Gemini API access, AI Studio is simpler.

Bottom line: Use AI Studio to get your key quickly. Use Cloud Console if you want fine-grained control, IP restrictions, or if you need other Google Cloud services alongside Gemini.

Custom (OpenAI-Compatible)

For self-hosted or alternative providers: Groq, Together AI, Mistral, Ollama, LM Studio, vLLM.

  • Enter the Base URL of the API endpoint.
  • Enter the API key (if required).
  • Type the model name manually (freeform text input).
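All of these providers speak the OpenAI chat-completions wire format, so a quick smoke test looks the same regardless of backend. A stdlib-only sketch — the base URL and model in the example are Ollama defaults used for illustration, not values from Luminal:

```python
import json
import urllib.request

def build_chat_request(base_url: str, model: str, prompt: str,
                       api_key: str = "") -> urllib.request.Request:
    """Build a single-turn request for an OpenAI-compatible /chat/completions endpoint."""
    body = {"model": model, "messages": [{"role": "user", "content": prompt}]}
    headers = {"Content-Type": "application/json"}
    if api_key:  # local servers such as Ollama usually need no key
        headers["Authorization"] = f"Bearer {api_key}"
    return urllib.request.Request(
        f"{base_url.rstrip('/')}/chat/completions",
        data=json.dumps(body).encode("utf-8"),
        headers=headers,
    )

def chat(base_url: str, model: str, prompt: str, api_key: str = "") -> str:
    """Send the request and return the assistant's reply text."""
    req = build_chat_request(base_url, model, prompt, api_key)
    with urllib.request.urlopen(req, timeout=30) as resp:
        reply = json.loads(resp.read())
    return reply["choices"][0]["message"]["content"]

# Example against a local Ollama instance:
# print(chat("http://localhost:11434/v1", "llama3.1", "Say hello"))
```

Note the Base URL you enter in Luminal should be the API root (ending in /v1 for most of these services); the /chat/completions path is appended by the client.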

AI Assist in Page Manager

The most frequently used AI feature. See the Page Manager documentation for detailed usage. Quick summary:

  1. Open a page in the editor.
  2. Click the purple AI Assist button.
  3. Write a prompt, set tone and content type.
  4. Select provider, click Generate.
  5. Insert the result into your page.

Provider Info Bar

The AI Assist panel shows the active provider type, model, and a masked API key at the top, so you always know which provider is handling your request.

Token Stats and Cost

After generation, you see input tokens, output tokens, and estimated cost in USD. Cost estimates cover Claude, GPT, and Gemini models.
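The estimate is simply token counts multiplied by per-token rates for each direction. A sketch of the arithmetic — the rates below are hypothetical placeholders for illustration, not Luminal's actual pricing table:

```python
# Hypothetical per-million-token rates, for illustration only;
# real provider pricing varies by model and changes over time.
RATES_PER_MTOK = {
    "example-model": {"input": 0.30, "output": 2.50},  # USD per 1M tokens
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimated cost in USD: tokens x per-token rate, summed over both directions."""
    r = RATES_PER_MTOK[model]
    return (input_tokens * r["input"] + output_tokens * r["output"]) / 1_000_000

print(f"${estimate_cost('example-model', 1200, 800):.6f}")
```

Output tokens are typically priced several times higher than input tokens, which is why long generations cost more than long prompts.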

NotebookLM Podcast Generation

Turn page content into AI-generated podcasts via Google NotebookLM:

  1. Open a page in the Page Manager editor.
  2. Use the NotebookLM section in the AI panel.
  3. Select a format: Conversation, Deep Dive, or Debate.
  4. Select a length: Short, Standard, or Long.
  5. Click Generate Podcast.
  6. The generation runs asynchronously — progress polling resumes automatically (survives page navigation).
  7. When complete, play the audio directly or click Add to Podcasts to export to Podcast Manager.

Setup Requirements

NotebookLM requires a Google Cloud service account with the NotebookLM API enabled. Configure the project ID and service account key in AI Resources under the NotebookLM section.

Prompt Commons

Build a library of reusable prompts:

  • Save — Store the current prompt with a descriptive name.
  • Load — Select a saved prompt from the dropdown to populate the prompt field.
  • Delete — Remove prompts you no longer need.

Prompts are stored in admin/data/AIResources/prompt_commons.json.
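Because the library is a flat JSON file, you can inspect or back it up outside the admin UI. The schema in this sketch is an assumption (a simple name-to-prompt map), not a documented format — inspect the file on your own install before scripting against it:

```python
import json
from pathlib import Path

# Path from the docs above. The name -> prompt map is an ASSUMED schema,
# not documented by Luminal; check your actual file first.
PROMPTS_FILE = Path("admin/data/AIResources/prompt_commons.json")

def list_prompts(path: Path = PROMPTS_FILE) -> list[str]:
    """Return saved prompt names sorted, or an empty list if the file is absent."""
    if not path.exists():
        return []
    return sorted(json.loads(path.read_text(encoding="utf-8")))
```

Keeping this file under version control alongside your site config is a cheap way to share a prompt library across environments.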

Agent Scheduler

The Agent Scheduler is a server-only module that runs automated AI tasks on a cron schedule:

  • Tasks — Define recurring AI operations with name, schedule, and pipeline selection.
  • Pipelines — PHP scripts that define the work to be done (e.g., fetch data, generate content, publish).
  • Schedules — Manual, hourly, daily, weekly, or monthly execution.
  • History — View execution logs for each task.

Plan/Approve Workflow

Some pipelines support a two-phase workflow: Phase 1 generates a plan (marked as "awaiting approval"), and Phase 2 executes after admin approval. The admin navigation rail shows a notification badge when tasks await approval.

MCP Server Configuration

Model Context Protocol (MCP) servers provide AI providers with contextual information about your site:

  • site-context — Exposes site configuration, theme settings, and page data.
  • vhost-scanner — Discovers all hosted sites on the server.

Custom MCP servers can be added for specialized context. Servers can be stdio-based (local process) or HTTP-based (remote endpoint).
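In most MCP configurations, the two transport types are distinguished by whether an entry supplies a command to launch (stdio) or an endpoint to call (HTTP). The JSON below is a generic sketch of that shape; the server names match the built-ins above, but the field names, command, and URL are illustrative and not Luminal's exact schema:

```json
{
  "mcpServers": {
    "site-context": {
      "command": "php",
      "args": ["mcp/site-context.php"]
    },
    "vhost-scanner": {
      "url": "https://example.com/mcp/vhost-scanner"
    }
  }
}
```

Stdio servers run as child processes on the CMS host, so they can read local files directly; HTTP servers are better suited to context that lives on another machine.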