bluefly / llm
LLM Platform Core Module - Comprehensive AI/LLM integration for Drupal with token optimization, analytics, and provider management
Requires
- php: >=8.1
- drupal/core: ^10.2 || ^11
README
Comprehensive AI/LLM integration for Drupal with token optimization, analytics, and provider management.
Table of Contents
- Overview
- Features
- Requirements
- Installation
- Configuration
- Usage
- Architecture
- Submodules
- API Documentation
- Testing
- Security
- Performance
- Troubleshooting
- Contributing
- Maintainers
- License
Overview
The LLM Platform Core module provides enterprise-grade AI and Large Language Model (LLM) integration for Drupal. It enables seamless interaction with multiple AI providers, intelligent token management, real-time analytics, and comprehensive security features.
This module is designed for:
- Content Editors: AI-powered content creation and editing assistance
- Developers: Comprehensive API for AI integration in custom modules
- Site Builders: No-code AI workflow creation with visual tools
- Administrators: Full control over AI usage, costs, and security
Features
Core Features
- Multi-Provider Support: OpenAI, Anthropic, local models (Ollama), and custom providers
- Token Management: Advanced token counting, optimization, and usage tracking
- Conversation Management: Persistent chat history with context management
- Cost Tracking: Real-time cost calculation and budget controls
- Provider Health Monitoring: Automatic failover and load balancing
- Security: Enterprise-grade security with encryption, rate limiting, and audit logging
- Analytics Dashboard: Comprehensive usage analytics and visualizations
- GraphQL API: Full GraphQL support for headless applications
- Real-time Updates: WebSocket support for live conversations (experimental)
- Multi-tenant Support: Isolated environments for different user groups
Advanced Features
- AI Agent Management: Lifecycle management for autonomous AI agents
- Workflow Designer: Visual workflow builder with ECA integration
- Queue Management: Asynchronous processing for large-scale operations
- RAG Integration: Retrieval-Augmented Generation with Qdrant vector database
- Model Fine-tuning: Tools for training custom models
- Voice Integration: Text-to-speech and speech-to-text (Echo Voice submodule)
- Contextual Chat: Context-aware conversations with entity integration
- Embeddings Service: Generate and manage text embeddings
Developer Features
- Service Architecture: 100+ services with dependency injection
- Plugin System: Extensible provider plugin system
- Event System: Comprehensive event dispatching
- Drush Commands: CLI tools for all major operations
- Test Coverage: 94%+ test coverage (Unit, Kernel, Functional)
- API Documentation: Full API docs and code examples
- Code Quality: PHPStan Level 8, PHPCS compliant
Requirements
System Requirements
- Drupal: 10.3+ or 11.0+
- PHP: 8.1 or higher
- Database: MySQL 5.7.8+, MariaDB 10.3.7+, or PostgreSQL 10+
- Memory: Minimum 256MB, recommended 512MB+
Required Drupal Modules
- system (Drupal core)
- user (Drupal core)
- field (Drupal core)
- node (Drupal core)
Recommended Modules
- ECK (Entity Construction Kit): For AI model and conversation entities
- ECA (Event-Condition-Action): For no-code AI workflows
- GraphQL: For GraphQL API support
- AI: For integration with Drupal AI module
- Key: For secure API key management
- Webform: For AI-powered form enhancements
PHP Libraries
Automatically installed via Composer:
- guzzlehttp/guzzle ^7.0 - HTTP client
- monolog/monolog ^3.0 - Logging
- symfony/http-foundation ^6.0 || ^7.0 - HTTP foundation
External Services (Optional)
- OpenAI API: For GPT models (API key required)
- Anthropic API: For Claude models (API key required)
- Ollama: For local model hosting (self-hosted)
- Qdrant: For vector database (optional, for RAG)
- Langflow: For workflow integration (optional)
Installation
Via Composer (Recommended)
# Add GitLab Composer repository (if not already added)
composer config repositories.bluefly-llm vcs https://gitlab.bluefly.io/llm/drupal/modules/llm.git
# Install the module
composer require bluefly/llm:^0.1
# Enable the module and dependencies
drush en llm llm_core llm_chat llm_dashboard -y
Via Drupal.org
# Download from Drupal.org
composer require drupal/llm
# Enable the module
drush en llm -y
Manual Installation
- Download the latest release from the Drupal.org project page
- Extract to web/modules/contrib/llm/
- Enable via the admin interface (/admin/modules) or via Drush: drush en llm -y
Post-Installation
# Clear caches
drush cr
# Run database updates
drush updb -y
# Import default configuration (optional)
drush cim -y
# Verify the module is enabled
drush pm:list --filter=llm --status=enabled
Configuration
Initial Setup
- Navigate to Configuration → AI Platform → LLM Settings (/admin/config/llm/settings)
- Configure AI providers:
  - Add API keys for OpenAI, Anthropic, or other providers
  - Set default model preferences
  - Configure rate limits and quotas
- Set up permissions at People → Permissions (/admin/people/permissions):
  - Assign appropriate permissions for different user roles
  - Review security settings
Provider Configuration
OpenAI Provider
// Navigate to /admin/config/llm/providers/openai
Settings:
- API Key: [Your OpenAI API key]
- Organization ID: [Optional]
- Default Model: gpt-4
- Max Tokens: 2000
- Temperature: 0.7
Anthropic Provider
// Navigate to /admin/config/llm/providers/anthropic
Settings:
- API Key: [Your Anthropic API key]
- Default Model: claude-3-sonnet-20240229
- Max Tokens: 4000
- Temperature: 1.0
Local Provider (Ollama)
// Navigate to /admin/config/llm/providers/ollama
Settings:
- Endpoint: http://localhost:11434
- Default Model: llama2
- Timeout: 60
Security Configuration
# Configure API key storage (recommended: use Key module)
drush config:set llm.settings api_key_storage 'key_module'
# Enable rate limiting
drush config:set llm.settings rate_limiting.enabled TRUE
drush config:set llm.settings rate_limiting.requests_per_minute 60
# Enable audit logging
drush config:set llm.settings audit_logging.enabled TRUE
Advanced Configuration
Token Management:
# /admin/config/llm/tokens
token_optimization:
enabled: true
cache_responses: true
compress_history: true
max_context_tokens: 4000
Cost Controls:
# /admin/config/llm/costs
budget_controls:
monthly_limit: 1000.00
daily_limit: 50.00
per_user_limit: 10.00
alert_threshold: 80
Usage
For Content Editors
Using the Chat Interface
- Navigate to Content → AI Chat (/llm/chat)
- Select an AI model from the dropdown
- Type your message and click "Send"
- View conversation history in the sidebar
- Export conversations as needed
AI-Powered Content Creation
- Create or edit content (/node/add/article)
- Use the AI assistance button in the editor toolbar
- Generate content suggestions, summaries, or rewrites
- Accept or modify AI-generated content
For Developers
Using the Service API
// Get conversation manager service
$conversationManager = \Drupal::service('llm.conversation_manager');
// Create a new conversation
$conversation = $conversationManager->createConversation([
'model' => 'gpt-4',
'system_prompt' => 'You are a helpful assistant.',
]);
// Send a message
$response = $conversationManager->sendMessage($conversation, 'Hello, AI!');
// Get response text
$text = $response->getText();
// Calculate cost
$cost = $conversationManager->calculateCost($conversation);
Using the Provider Manager
// Get provider manager
$providerManager = \Drupal::service('llm.provider_manager');
// Get available providers
$providers = $providerManager->getAvailableProviders();
// Get specific provider
$openai = $providerManager->getProvider('openai');
// Send request to provider
$response = $openai->complete([
'prompt' => 'Tell me a joke',
'max_tokens' => 100,
'temperature' => 0.9,
]);
Using Token Management
// Get token manager
$tokenManager = \Drupal::service('llm.token_manager');
// Count tokens in text
$count = $tokenManager->countTokens('Hello world', 'gpt-4');
// Optimize prompt for token limits
$optimized = $tokenManager->optimizePrompt($prompt, 2000);
// Get token usage statistics
$stats = $tokenManager->getUsageStats($user);
For Site Builders
Creating AI Workflows with ECA
- Install and enable the ECA module: drush en eca -y
- Navigate to Configuration → Workflow → ECA (/admin/config/workflow/eca)
- Create a new model: "AI Content Review"
- Add event: "Node insert"
- Add condition: "Content type is Article"
- Add action: "LLM: Analyze content"
- Save and test workflow
Using Drush Commands
# List all LLM Drush commands
drush list --filter=llm
# Analyze conversation token usage
drush llm:analyze-tokens
# Clean up old conversations
drush llm:cleanup-conversations --days=90
# Check provider health
drush llm:provider-health
# Export analytics
drush llm:export-analytics --format=csv --output=/tmp/analytics.csv
# Test provider connection
drush llm:test-provider openai
# Generate test data
drush llm:generate-test-data --conversations=100
Architecture
Service Layer
The module uses a comprehensive service architecture with 100+ services:
llm/
├── Core Services
│ ├── llm.conversation_manager - Conversation lifecycle
│ ├── llm.token_manager - Token counting and optimization
│ ├── llm.provider_manager - Provider management
│ ├── llm.cost_calculator - Cost calculation
│ └── llm.health_service - Service health monitoring
├── Analytics Services
│ ├── llm.token_analytics - Token usage analytics
│ ├── llm.metrics_collector - Metrics collection
│ └── llm.dashboard_service - Dashboard data
├── Security Services
│ ├── llm.audit_logger - Audit logging
│ ├── llm.key_service - API key management
│ └── llm.access_control - Access control
└── Integration Services
├── llm.eca_integration - ECA workflow integration
├── llm.graphql_service - GraphQL API
└── llm.websocket_service - Real-time updates
Entity Types
Custom entity types for AI data:
- ai_conversation: Stores conversation history
- ai_model: Defines available AI models
- token_usage: Tracks token consumption
- dashboard_share: Manages shared dashboards
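For illustration, these entity types can be queried with Drupal's standard Entity API. A minimal sketch, assuming the ai_conversation entity type ID above and a conventional created field:

```php
<?php

// Query the ten most recent AI conversations. Only the entity type ID
// ('ai_conversation') comes from this module; the 'created' field name
// is an assumption for illustration.
$storage = \Drupal::entityTypeManager()->getStorage('ai_conversation');

$ids = $storage->getQuery()
  ->accessCheck(TRUE)
  ->sort('created', 'DESC')
  ->range(0, 10)
  ->execute();

foreach ($storage->loadMultiple($ids) as $conversation) {
  // Each entity stores a piece of conversation history.
  print $conversation->label() . PHP_EOL;
}
```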
Plugin System
Extensible provider plugin architecture:
/**
* @LlmProvider(
* id = "custom_provider",
* label = @Translation("Custom Provider"),
* description = @Translation("Custom AI provider integration")
* )
*/
class CustomProvider extends LlmProviderBase {
// Implementation
}
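A fuller provider might override a completion method along these lines. This is a hedged sketch: the base-class namespace and the complete()/getHttpClient() method names are assumptions, not confirmed API.

```php
<?php

namespace Drupal\my_module\Plugin\LlmProvider;

use Drupal\llm\Plugin\LlmProviderBase;

/**
 * Hypothetical provider talking to a custom HTTP backend.
 *
 * @LlmProvider(
 *   id = "custom_provider",
 *   label = @Translation("Custom Provider"),
 *   description = @Translation("Custom AI provider integration")
 * )
 */
class CustomProvider extends LlmProviderBase {

  /**
   * Sends a completion request to the backend and decodes the response.
   */
  public function complete(array $options): array {
    $response = $this->getHttpClient()->post('https://example.com/v1/complete', [
      'json' => [
        'prompt' => $options['prompt'],
        'max_tokens' => $options['max_tokens'] ?? 100,
        'temperature' => $options['temperature'] ?? 0.7,
      ],
    ]);
    return json_decode((string) $response->getBody(), TRUE);
  }

}
```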
Event System
Dispatch and subscribe to LLM events:
// Available events
- ConversationCreateEvent
- ConversationUpdateEvent
- TokenUsageEvent
- ProviderHealthEvent
- CostLimitEvent
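A subscriber for these events could look like the following sketch; the event name string and the getter method on the event object are placeholders, not confirmed API.

```php
<?php

namespace Drupal\my_module\EventSubscriber;

use Symfony\Component\EventDispatcher\EventSubscriberInterface;

/**
 * Hypothetical subscriber that watches token usage.
 */
class TokenUsageSubscriber implements EventSubscriberInterface {

  /**
   * {@inheritdoc}
   */
  public static function getSubscribedEvents(): array {
    // 'llm.token_usage' is a placeholder; use the real event name
    // exported by the module.
    return ['llm.token_usage' => 'onTokenUsage'];
  }

  /**
   * Logs unusually large token usage.
   */
  public function onTokenUsage(object $event): void {
    // getTotalTokens() and the 10000 threshold are illustrative.
    if ($event->getTotalTokens() > 10000) {
      \Drupal::logger('llm')->warning('Large token usage recorded.');
    }
  }

}
```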
Submodules
The LLM Platform Core includes 15 submodules for extended functionality:
Core Submodules
- LLM Core (llm_core): Essential core services and APIs
- LLM Chat (llm_chat): Chat interface and conversation UI
- LLM Dashboard (llm_dashboard): Analytics dashboard and visualizations
- LLM Security (llm_security): Enterprise security features
Feature Submodules
- LLM Analytics (llm_analytics): Advanced usage analytics and reporting
- LLM AI Agents (llm_ai_agents): Agent lifecycle management and orchestration
- LLM Gateway (llm_gateway): API gateway for external integrations
- LLM UI (llm_ui): User interface components and widgets
- LLM API (llm_api): REST API endpoints
- LLM Docs (llm_docs): Interactive API documentation
Integration Submodules
- AI Provider LLM Platform (ai_provider_llm_platform): Drupal AI module integration
- AI Contextual Chat (ai_contextual_chat): Context-aware chat with entities
- LLM Echo Voice (llm_echo_voice): Voice input/output capabilities
- LLM OpenAI Gateway (llm_openai_gateway): OpenAI-specific gateway
- Platform Analytics (platform_analytics): Platform-wide analytics
Enabling Submodules
# Enable specific submodules
drush en llm_dashboard llm_analytics llm_security -y
# Enable all recommended submodules
drush en llm llm_core llm_chat llm_dashboard llm_security llm_analytics -y
API Documentation
REST API
GET /api/llm/conversations - List conversations
POST /api/llm/conversations - Create conversation
GET /api/llm/conversations/{id} - Get conversation
PATCH /api/llm/conversations/{id} - Update conversation
DELETE /api/llm/conversations/{id} - Delete conversation
GET /api/llm/models - List available models
POST /api/llm/chat - Send chat message
GET /api/llm/usage - Get usage statistics
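Calling the chat endpoint from PHP might look like this sketch using Guzzle (already a dependency). The request body fields and the Bearer authentication scheme are assumptions; consult the interactive API docs (llm_docs) for the real contract.

```php
<?php

use GuzzleHttp\Client;

// Illustrative call to POST /api/llm/chat; payload and auth header
// are assumptions for this example.
$client = new Client(['base_uri' => 'https://example.com']);

$response = $client->post('/api/llm/chat', [
  'headers' => ['Authorization' => 'Bearer YOUR_TOKEN'],
  'json' => [
    'conversation_id' => 123,
    'message' => 'Hello!',
  ],
]);

$data = json_decode((string) $response->getBody(), TRUE);
```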
GraphQL API
query {
conversations(limit: 10) {
id
title
model
messages {
role
content
timestamp
}
tokenUsage {
prompt
completion
total
}
}
}
mutation {
createConversation(input: {
model: "gpt-4"
systemPrompt: "You are helpful"
}) {
conversation {
id
title
}
}
}
JavaScript API
// Initialize LLM client
const llm = new Drupal.llm.Client({
apiKey: 'your-api-key',
baseUrl: '/api/llm'
});
// Create conversation
const conversation = await llm.createConversation({
model: 'gpt-4'
});
// Send message
const response = await llm.sendMessage(conversation.id, 'Hello!');
// Stream responses
llm.streamMessage(conversation.id, 'Tell me a story', (chunk) => {
console.log(chunk);
});
Testing
Comprehensive test suite with 94%+ coverage. See TESTING.md for detailed testing documentation.
Quick Test Commands
# Run all tests
vendor/bin/phpunit
# Run specific test types
vendor/bin/phpunit --group Unit
vendor/bin/phpunit --group Kernel
vendor/bin/phpunit --group Functional
# Generate coverage report
vendor/bin/phpunit --coverage-html coverage/
# Run code quality checks
vendor/bin/phpcs
vendor/bin/phpstan analyze
Security
Security Features
- API Key Encryption: All API keys encrypted at rest
- Rate Limiting: Configurable rate limits per user/role
- Audit Logging: Comprehensive audit trail for all operations
- Input Validation: Strict input validation and sanitization
- Output Sanitization: XSS protection on all outputs
- CSRF Protection: CSRF tokens on all state-changing operations
- Permission System: Granular permissions for all features
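Custom code can apply the same kind of per-user rate limiting with Drupal core's flood service. A sketch, where the flood event name and the limits are illustrative choices:

```php
<?php

// Enforce at most 60 requests per 60-second window per client, in the
// spirit of the rate-limiting feature above. The event name
// 'llm.request' is a placeholder.
$flood = \Drupal::flood();
$limit = 60;   // Requests allowed per window.
$window = 60;  // Window length in seconds.

if (!$flood->isAllowed('llm.request', $limit, $window)) {
  throw new \RuntimeException('Rate limit exceeded; try again later.');
}
$flood->register('llm.request', $window);
// ...proceed with the provider request.
```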
Security Best Practices
- Store API Keys Securely: Use the Key module for API key management
- Enable Audit Logging: Track all AI interactions
- Configure Rate Limits: Prevent abuse and control costs
- Review Permissions: Regularly audit user permissions
- Monitor Usage: Set up alerts for unusual activity
- Update Regularly: Keep module updated for security patches
Security Reporting
To report security issues:
- Email: security@bluefly.io
- Security Policy: https://gitlab.bluefly.io/llm/drupal/modules/llm/-/security/policy
Do not report security issues in public issue queues.
Performance
Performance Optimizations
- Response Caching: Cache AI responses for repeated queries
- Token Compression: Automatically compress conversation context
- Lazy Loading: Services loaded on-demand
- Database Optimization: Indexed queries for fast retrieval
- CDN Integration: Static assets via CDN
- Queue Processing: Async processing for expensive operations
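Response caching, for example, can be sketched with the core cache API. The cache key scheme and the use of the 'default' bin are illustrative choices, and $provider stands for any provider plugin; this is not the module's actual implementation.

```php
<?php

// Return a cached completion for an identical prompt, or fetch and
// cache it for one hour.
$cid = 'llm:response:' . hash('sha256', $prompt);
$cache = \Drupal::cache('default');

if ($cached = $cache->get($cid)) {
  $text = $cached->data;
}
else {
  $result = $provider->complete(['prompt' => $prompt, 'max_tokens' => 200]);
  $text = $result['text'] ?? '';
  $cache->set($cid, $text, time() + 3600);
}
```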
Performance Benchmarks
| Operation | Average Time | Notes |
|---|---|---|
| Token Count | <1ms | Local calculation |
| Provider Request | 500-2000ms | Depends on model |
| Conversation Load | <50ms | With cache |
| Analytics Query | <100ms | Optimized queries |
Performance Configuration
# /admin/config/llm/performance
performance:
cache:
enabled: true
ttl: 3600
queue:
enabled: true
workers: 4
optimization:
compress_context: true
cache_embeddings: true
Troubleshooting
Common Issues
"Provider not responding"
Cause: API key invalid or provider service down
Solution:
# Test provider connection
drush llm:test-provider openai
# Check logs
drush watchdog:show --type=llm
# Verify API key
drush config:get llm.providers.openai api_key
"Token limit exceeded"
Cause: Conversation context too large
Solution:
# Enable token optimization
drush config:set llm.settings token_optimization.enabled TRUE
# Or compress the conversation context programmatically (PHP):
$conversationManager->compressContext($conversation);
"Permission denied"
Cause: User lacks required permissions
Solution:
# List roles and their permissions
drush role:list
# Grant permission
drush role:perm:add authenticated 'use llm chat'
Debug Mode
Enable debug mode for verbose logging:
# Enable debug mode
drush config:set llm.settings debug_mode TRUE
# View debug logs
drush watchdog:show --type=llm --severity=Debug
Getting Help
- Issue Queue: https://www.drupal.org/project/issues/llm
- GitLab Issues: https://gitlab.bluefly.io/llm/drupal/modules/llm/-/issues
- Documentation: https://gitlab.bluefly.io/llm/documentation/-/wikis/home
- Community Support: Join #llm-platform on Drupal Slack
Contributing
We welcome contributions! See CONTRIBUTING.md for guidelines.
Quick Start for Contributors
# Clone repository
git clone https://gitlab.bluefly.io/llm/drupal/modules/llm.git
cd llm
# Install dependencies
composer install
npm install
# Create feature branch
git checkout -b feature/my-feature
# Make changes and test
vendor/bin/phpunit
vendor/bin/phpcs
# Commit and push
git commit -m "feat: Add new feature"
git push origin feature/my-feature
# Create merge request on GitLab
Development Standards
- Follow Drupal coding standards (PHPCS)
- Maintain PHPStan Level 8 compliance
- Write tests for all new features (>90% coverage)
- Document all public APIs with PHPDoc
- Update CHANGELOG.md for all changes
Maintainers
- Bluefly Development Team - dev@bluefly.io
License
This project is licensed under the GNU General Public License v2.0 or later.
See LICENSE.txt for full license text.
Additional Resources
- Project Page: https://www.drupal.org/project/llm
- Documentation: https://gitlab.bluefly.io/llm/documentation/-/wikis/home
- Issue Tracker: https://www.drupal.org/project/issues/llm
- GitLab Repository: https://gitlab.bluefly.io/llm/drupal/modules/llm
- Changelog: CHANGELOG.md
- Testing Guide: TESTING.md
Supported by: Bluefly.io
Version: 0.1.1
Last Updated: 2025-11-03