HeliosDB EXPLAIN - Complete User Guide¶
Table of Contents¶
- Introduction
- Quick Start
- EXPLAIN Modes
- Output Formats
- Understanding Query Plans
- AI-Powered Explanations
- Optimizer Features
- Why-Not Analysis
- Transaction Analysis
- Distributed Query Analysis
- Interactive Tuning
- Production Deployment
- Monitoring and Metrics
- Advanced Features
- Web UI
- Best Practices
- Troubleshooting
Introduction¶
HeliosDB EXPLAIN is the most advanced query plan analysis tool in the industry. It provides:
- AI-Powered Natural Language Explanations: Understand query plans in plain English
- Visual Plan Trees: Interactive, beautiful visualizations
- Why-Not Analysis: Discover why optimizations weren't applied
- Real-Time Tracking: See actual vs estimated metrics
- Transaction Analysis: Understand isolation levels and locking
- Distributed Query Analysis: Analyze cross-node execution
- Production Hardening: Load tested for 1000+ concurrent requests
- External Integrations: Prometheus, Grafana, Datadog, New Relic
- Plan Versioning: Track query plan evolution over time
- Web UI: Interactive visualization with dark mode
What Makes HeliosDB EXPLAIN Special?¶
Unlike PostgreSQL's EXPLAIN, Oracle's execution plans, or MySQL's EXPLAIN, HeliosDB EXPLAIN:
- Speaks Your Language: AI translates technical plans into plain English
- Explains Failures: Why-Not analysis tells you why indexes weren't used
- Tracks Real Performance: Compare estimates vs actual execution
- Production Ready: Battle-tested with 1000+ concurrent requests
- Developer Friendly: Interactive tuning with instant feedback
Quick Start¶
Basic EXPLAIN¶
```sql
-- Standard EXPLAIN
EXPLAIN SELECT * FROM users WHERE age > 25;

-- Verbose mode with costs
EXPLAIN VERBOSE SELECT * FROM users WHERE age > 25;

-- AI-powered explanation
EXPLAIN AI SELECT * FROM users WHERE age > 25;

-- Full analysis with Why-Not
EXPLAIN ANALYZE SELECT * FROM users WHERE age > 25;
```
Output Example¶
═══════════════════════════════════════════════════════════════
EXPLAIN PLAN ANALYSIS
═══════════════════════════════════════════════════════════════
Total Cost: 1250.45
Estimated Rows: 15000
Planning Time: 12.34ms
───────────────────────────────────────────────────────────────
AI-POWERED EXPLANATION
───────────────────────────────────────────────────────────────
Summary:
This query performs a full table scan on 'users', filtering rows
where age > 25. Approximately 15,000 rows match this condition.
Step-by-Step Execution:
1. Read 50,000 rows from table users (Cost: 1000.00)
2. Filter rows using predicate age > 25, keeping 15,000 rows (Cost: 250.45)
Performance Prediction:
Category: Moderate
Estimated Time: 250.09ms
This query is predicted to be 'Moderate', taking approximately
250ms to execute. It will process approximately 15,000 rows.
AI Suggestions:
• Consider adding an index on 'age' column to speed up lookups
• If you frequently query by age range, a B-tree index would help
───────────────────────────────────────────────────────────────
ACTIVE OPTIMIZER FEATURES (2)
───────────────────────────────────────────────────────────────
✓ Predicate Pushdown
Category: Pushdown
Trigger: Filter condition detected before scan
Benefit: Reduces rows read from storage
Savings: 70.0%
✓ SIMD Vectorization
Category: Vectorization
Trigger: Simple numeric predicate detected
Benefit: Process multiple rows simultaneously
Savings: 40.0%
═══════════════════════════════════════════════════════════════
EXPLAIN Modes¶
Standard Mode¶
Basic query plan with cost and row estimates.
Use When:
- Quick plan overview needed
- Debugging join order
- Checking if an index is used
Verbose Mode¶
Includes detailed cost breakdown and cardinality estimates.
Use When:
- Need detailed cost information
- Analyzing complex joins
- Comparing plan alternatives
AI Mode¶
Natural language explanation of the query plan.
Output:
Summary:
This query searches the orders table for all pending orders.
The database will use an index scan on the status column,
which is very efficient.
Performance: Fast (estimated 15ms)
Use When:
- Explaining plans to non-technical stakeholders
- Learning query optimization
- Quick performance assessment
Analyze Mode¶
Full analysis with Why-Not insights and optimization suggestions.
Includes:
- All optimizer decisions with reasoning
- Unused indexes and why
- Stale statistics warnings
- Configuration recommendations
- Cardinality estimation issues

Use When:
- Query is slower than expected
- Understanding why an optimization didn't apply
- Comprehensive performance debugging
Output Formats¶
Text Format (Default)¶
Human-readable tree structure with ANSI colors.
JSON Format¶
Structured output for programmatic analysis.
```json
{
  "plan": {
    "node_type": "Scan",
    "operation": "Scan users",
    "cost": 100.0,
    "rows": 1000,
    "details": {
      "table": "users",
      "columns": "5"
    }
  },
  "total_cost": 100.0,
  "total_rows": 1000,
  "planning_time_ms": 5.23
}
```
YAML Format¶
Readable structured output.
HTML Format¶
Interactive web visualization.
SVG Format¶
Scalable vector graphics for presentations.
Understanding Query Plans¶
Plan Node Types¶
Sequential Scan¶
When Used:
- No suitable index available
- Table is small (<10k rows)
- Query needs most/all rows

Performance:
- Fast for small tables
- Slow for large tables
- Linear time O(n)
Index Scan¶
When Used:
- Suitable index exists
- Selective predicate
- Fetching a small portion of the table

Performance:
- Very fast for selective queries
- Logarithmic lookup O(log n)
- May still scan the table for actual row data
Hash Join¶
```
→ Hash Join [cost=1500.00, rows=5000]
    type: Inner
    condition: orders.customer_id = customers.id
    → Scan customers [cost=200.00, rows=1000]
    → Scan orders [cost=500.00, rows=10000]
```

When Used:
- Equijoin condition
- Unsorted inputs
- Memory available for hash table

Performance:
- O(n + m) time complexity
- Good for large joins
- Requires memory
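The build and probe phases can be sketched with a standard `HashMap` (an illustrative toy, not HeliosDB's executor; the `hash_join` helper and its row tuples are invented for this example):

```rust
use std::collections::HashMap;

// Toy hash join: customers = (id, name), orders = (order_id, customer_id).
fn hash_join(customers: &[(u32, &str)], orders: &[(u32, u32)]) -> Vec<(u32, String)> {
    // Build phase: hash the smaller input on the join key -> O(m).
    let built: HashMap<u32, &str> = customers.iter().cloned().collect();
    // Probe phase: stream the larger input and look up each key -> O(n).
    orders
        .iter()
        .filter_map(|&(order_id, customer_id)| {
            built.get(&customer_id).map(|name| (order_id, name.to_string()))
        })
        .collect()
}

fn main() {
    let customers = [(1, "Alice"), (2, "Bob")];
    let orders = [(100, 1), (101, 2), (102, 9)]; // order 102 has no matching customer
    let joined = hash_join(&customers, &orders);
    assert_eq!(joined, vec![(100, "Alice".to_string()), (101, "Bob".to_string())]);
}
```

The hash table holds the entire build side in memory, which is why EXPLAIN output warns about spilling to disk when the build input outgrows `work_mem`.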
Nested Loop Join¶
```
→ Nested Loop Join [cost=10000.00, rows=1000]
    type: Inner
    condition: a.id = b.a_id
    → Scan a [cost=100.00, rows=10]
    → Index Scan b_a_id_idx [cost=5.00, rows=100]
```

When Used:
- Small outer input
- Inner input has index
- No equijoin condition

Performance:
- O(n * m) worst case
- Fast when outer is small
- Slow for large cartesian products
Sort¶
When Used:
- ORDER BY clause
- Merge join preparation
- DISTINCT elimination

Performance:
- O(n log n)
- Memory intensive
- Can spill to disk
Aggregate¶
```
→ HashAggregate [cost=1200.00, rows=10]
    group_by: 1
    aggregates: 2
    → Scan orders [cost=500.00, rows=10000]
```

When Used:
- GROUP BY clause
- Aggregate functions (SUM, COUNT, etc.)
- DISTINCT

Performance:
- O(n) with hash table
- Memory proportional to groups
- Can spill to disk
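A HashAggregate behaves like a single pass over the input that keeps one accumulator per group. A minimal sketch (the `hash_aggregate` helper and sample rows are invented):

```rust
use std::collections::HashMap;

// Toy GROUP BY key, SUM(amount): one hash-table entry per distinct group.
fn hash_aggregate(rows: &[(&str, f64)]) -> HashMap<String, f64> {
    let mut groups: HashMap<String, f64> = HashMap::new();
    for &(key, amount) in rows {
        *groups.entry(key.to_string()).or_insert(0.0) += amount;
    }
    groups
}

fn main() {
    let rows = [("pending", 10.0), ("completed", 5.0), ("pending", 2.5)];
    let sums = hash_aggregate(&rows);
    assert_eq!(sums["pending"], 12.5);
    assert_eq!(sums["completed"], 5.0);
}
```

Memory grows with the number of distinct groups, not with the row count, which is why aggregates with few groups stay cheap even over large scans.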
AI-Powered Explanations¶
How It Works¶
- Plan Analysis: AI analyzes the query plan structure
- Cost Assessment: Evaluates estimated costs and row counts
- Pattern Matching: Identifies common patterns and anti-patterns
- Natural Language: Generates human-readable explanation
- Recommendations: Suggests optimizations based on best practices
Example: Complex Join¶
```sql
EXPLAIN AI
SELECT o.id, c.name, SUM(oi.amount)
FROM orders o
JOIN customers c ON o.customer_id = c.id
JOIN order_items oi ON oi.order_id = o.id
WHERE o.status = 'completed'
GROUP BY o.id, c.name;
```
AI Output:
Summary:
This query calculates the total order amount for each completed
order, along with customer information. The database will:
1. Filter completed orders using an index on the status column
2. Join with customers table using a hash join
3. Join with order_items using another hash join
4. Group the results to sum amounts per order
Step-by-Step Execution:
1. Scan orders table with index on 'status' column (Fast: 50ms)
2. Build hash table for customers (Memory: 10MB)
3. Probe hash table to join orders with customers (Fast: 20ms)
4. Build hash table for order_items (Memory: 50MB)
5. Probe to join with order_items (Moderate: 100ms)
6. Group and aggregate results (Fast: 30ms)
Performance Prediction:
Category: Moderate
Estimated Time: 200ms
This query should complete in about 200ms. The main cost comes
from joining order_items which is a large table. The hash joins
are efficient, but will use about 60MB of memory.
Suggestions:
• Ensure indexes exist on order_items.order_id for better performance
• If you frequently query by status, the current index is optimal
• Consider partitioning order_items if it grows beyond 10M rows
Warnings:
⚠ Hash join on order_items may spill to disk if work_mem < 50MB
Optimizer Features¶
Predicate Pushdown¶
What It Is: Move WHERE conditions as close to the data source as possible.
Example:
```sql
-- Without pushdown (bad):
SELECT * FROM (SELECT * FROM users) AS u WHERE age > 25;
-- → Reads all rows, then filters

-- With pushdown (good):
SELECT * FROM users WHERE age > 25;
-- → Filters while reading
```
Benefit: 70% fewer rows read from storage
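One common way engines realize pushdown is with per-page min/max statistics (zone maps): a pushed-down `age > 25` can skip whole pages before reading a single row. The `Page` layout and counters below are hypothetical, purely to show where the filter runs:

```rust
// Zone-map sketch of predicate pushdown: each page stores min/max for a column,
// so a pushed-down `value > threshold` can skip whole pages unread.
struct Page {
    min: i64, // kept for realism; a `value < threshold` filter would use it
    max: i64,
    values: Vec<i64>,
}

fn scan_with_pushdown(pages: &[Page], threshold: i64) -> (Vec<i64>, usize) {
    let mut matches = Vec::new();
    let mut rows_read = 0;
    for page in pages {
        if page.max <= threshold {
            continue; // pushdown: no row in this page can satisfy value > threshold
        }
        for &v in &page.values {
            rows_read += 1; // row actually read from storage
            if v > threshold {
                matches.push(v);
            }
        }
    }
    (matches, rows_read)
}

fn main() {
    let pages = vec![
        Page { min: 1,  max: 20, values: vec![5, 12, 20] },  // skipped entirely
        Page { min: 18, max: 40, values: vec![18, 30, 40] }, // must be read
    ];
    let (matches, rows_read) = scan_with_pushdown(&pages, 25);
    assert_eq!(matches, vec![30, 40]);
    assert_eq!(rows_read, 3); // only the second page touched storage
}
```

Without pushdown, all six rows would be read and filtered afterwards; here only three rows touch storage.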
Projection Pushdown¶
What It Is: Read only needed columns, not all columns.
Example:
```sql
-- Without pushdown (bad)
SELECT id FROM users; -- but reads all columns internally

-- With pushdown (good)
SELECT id FROM users; -- only reads 'id' column
```
Benefit: 50-90% less I/O depending on column count
Join Reordering¶
What It Is: Execute joins in the optimal order, keeping intermediate results as small as possible.
Example:
```sql
SELECT * FROM a JOIN b JOIN c WHERE a.id = 1;
-- Bad order:  (a JOIN b) JOIN c → 1M rows intermediate
-- Good order: (a JOIN c) JOIN b → 100 rows intermediate
```
Benefit: 10-100x faster for multi-way joins
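The optimizer's choice boils down to comparing estimated intermediate cardinalities. A back-of-the-envelope version (the order names and row estimates are made up; real planners also weigh sort orders, memory, and available indexes):

```rust
use std::collections::HashMap;

// Given estimated intermediate row counts per candidate join order,
// pick the order with the smallest intermediate result.
fn pick_order<'a>(estimates: &HashMap<&'a str, u64>) -> &'a str {
    estimates
        .iter()
        .min_by_key(|(_, &rows)| rows)
        .map(|(&order, _)| order)
        .expect("at least one candidate order")
}

fn main() {
    let mut estimates = HashMap::new();
    estimates.insert("(a JOIN b) JOIN c", 1_000_000_u64); // big intermediate
    estimates.insert("(a JOIN c) JOIN b", 100);           // tiny intermediate
    assert_eq!(pick_order(&estimates), "(a JOIN c) JOIN b");
}
```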
SIMD Vectorization¶
What It Is: Process multiple rows simultaneously using CPU vector instructions.
Example:
Benefit: 2-8x faster for numeric operations
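A filter like `age > 25` vectorizes well because it is one simple comparison over contiguous numeric data. The sketch below leans on the compiler's auto-vectorizer rather than explicit SIMD intrinsics, and `count_matching` is our own illustrative name:

```rust
// Branch-light numeric predicate over a contiguous slice: the shape that
// auto-vectorizers (and hand-written SIMD kernels) speed up several-fold
// compared with row-at-a-time interpretation.
fn count_matching(ages: &[i32], threshold: i32) -> usize {
    ages.iter().filter(|&&age| age > threshold).count()
}

fn main() {
    let ages: Vec<i32> = (0..100).collect();
    assert_eq!(count_matching(&ages, 25), 74); // ages 26..=99 match
}
```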
Why-Not Analysis¶
Understanding Why-Not¶
Why-Not analysis answers: "Why didn't the optimizer do X?"
Common questions:
- Why wasn't my index used?
- Why did it choose a nested loop instead of a hash join?
- Why is the estimate so wrong?
Unused Index Analysis¶
Output:
═══════════════════════════════════════════════════════════════
WHY-NOT ANALYSIS
═══════════════════════════════════════════════════════════════
Unused Indexes:
• idx_users_age on users
Reason: Query has no WHERE clause on indexed column
Cost Impact: 900.0
Suggestion: Add WHERE clause on 'age' column to utilize index
Stale Statistics:
• Table: users
Age: 15 days old
Changed: 25.0%
Impact: Cardinality estimates may be inaccurate, affecting
join order
Configuration Issues:
• Parameter: work_mem
Current: 256MB
Suggested: 512MB
Reason: Hash table for join may spill to disk with current
work_mem
Common Reasons for Unused Indexes¶
- Query Selects Too Many Rows
- No Matching Predicate
- Function Applied to Column
- OR Condition Without All Columns Indexed
- Data Type Mismatch
Transaction Analysis¶
EXPLAIN for Transactions¶
```sql
BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE;

EXPLAIN TRANSACTION
UPDATE accounts SET balance = balance - 100 WHERE id = 1;

COMMIT;
```
Output:
═══════════════════════════════════════════════════════════════
TRANSACTION EXPLAIN ANALYSIS
═══════════════════════════════════════════════════════════════
Isolation Level: Serializable
Transaction Mode: Read-Write
Lock Analysis:
• Row Lock: accounts (id=1)
Type: WRITE LOCK
Duration: Until COMMIT
Conflicts: May block other UPDATE/DELETE on same row
• Table Lock: accounts
Type: ROW EXCLUSIVE
Duration: Until COMMIT
Conflicts: Prevents LOCK TABLE EXCLUSIVE
MVCC Behavior:
Visibility Snapshot: Created at BEGIN
Sees: All committed data as of transaction start
Write Conflicts: Will abort if concurrent updates detected
Potential Deadlocks:
⚠ Risk: MEDIUM
Scenario: Another transaction updates accounts(id=1) then
accounts(id=2), while this transaction does reverse
Suggestions:
• Consider READ COMMITTED isolation if SERIALIZABLE not required
• Always acquire locks in same order to prevent deadlocks
• Keep transactions short to minimize lock contention
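The "acquire locks in the same order" suggestion can be made mechanical: sort (and dedup) the row ids a transaction intends to touch before locking any of them. A sketch (`lock_order` is illustrative, not a HeliosDB API):

```rust
// Canonical lock acquisition order: every transaction that sorts its ids first
// acquires row locks in the same global order, so circular waits cannot form.
fn lock_order(mut ids: Vec<u64>) -> Vec<u64> {
    ids.sort_unstable();
    ids.dedup();
    ids
}

fn main() {
    // Two transactions touching accounts 1 and 2 in opposite textual order
    // still lock them in the same sequence, so they cannot deadlock on each other.
    assert_eq!(lock_order(vec![2, 1]), vec![1, 2]);
    assert_eq!(lock_order(vec![1, 2]), vec![1, 2]);
}
```

This is the same idea as the `ORDER BY id FOR UPDATE` pattern shown in the Troubleshooting section, applied at the application level.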
Understanding Isolation Levels¶
| Level | Dirty Reads | Non-Repeatable Reads | Phantom Reads |
|---|---|---|---|
| Read Uncommitted | Yes | Yes | Yes |
| Read Committed | No | Yes | Yes |
| Repeatable Read | No | No | Yes |
| Serializable | No | No | No |
Distributed Query Analysis¶
Cross-Node Execution¶
```sql
EXPLAIN DISTRIBUTED
SELECT o.*, c.name
FROM orders o
JOIN customers c ON o.customer_id = c.id
WHERE o.created_at > NOW() - INTERVAL '7 days';
```
Output:
═══════════════════════════════════════════════════════════════
DISTRIBUTED QUERY ANALYSIS
═══════════════════════════════════════════════════════════════
Cluster: 4 nodes
Parallelism: 8 workers
Execution Plan:
Coordinator (node-1):
→ Gather Results
Estimated: 5000 rows from 4 nodes
Worker Nodes (node-1 to node-4):
→ Parallel Hash Join [local]
→ Scan orders (partition filter applied)
Partitions: 4
Each node processes: ~2500 rows
→ Broadcast customers table
Size: 1000 rows → 4KB per node
Network: 16KB total
Network Analysis:
Data Shuffled: 16KB (customers broadcast)
Result Transferred: 250KB (5000 rows)
Estimated Network Time: 5ms
Performance Prediction:
Local Execution: 50ms per node
Network Overhead: 5ms
Total: 55ms
Optimization Applied:
✓ Partition Pruning: 75% of partitions skipped
✓ Broadcast Join: Small table broadcast vs shuffle
✓ Parallel Execution: 4x speedup
Interactive Tuning¶
What-If Analysis¶
Output:
═══════════════════════════════════════════════════════════════
WHAT-IF ANALYSIS
═══════════════════════════════════════════════════════════════
Scenario: With index on users(age) and work_mem=512MB
Current Plan:
→ Sequential Scan users [cost=1000.00, rows=15000]
With Changes:
→ Index Scan users_age_idx [cost=150.00, rows=15000]
Impact:
Cost Reduction: 85% faster
Memory Usage: Unchanged (index scan doesn't need work_mem)
Recommendation:
✓ Create index: Significant improvement
✗ Increase work_mem: No impact for this query
SQL to Apply:
CREATE INDEX idx_users_age ON users(age);
Index Advisor¶
```sql
EXPLAIN ADVISOR
SELECT o.* FROM orders o
WHERE o.status = 'pending'
  AND o.created_at > NOW() - INTERVAL '7 days';
```
Output:
═══════════════════════════════════════════════════════════════
INDEX ADVISOR
═══════════════════════════════════════════════════════════════
Current Performance: SLOW (cost=5000.00)
Recommended Indexes:
1. Composite Index on (status, created_at)
CREATE INDEX idx_orders_status_created
ON orders(status, created_at);
Impact: 95% cost reduction
Size: ~50MB
Maintenance: Low
Why: Covers both WHERE conditions, enables index-only scan
2. Partial Index on pending orders
CREATE INDEX idx_orders_pending
ON orders(created_at)
WHERE status = 'pending';
Impact: 90% cost reduction
Size: ~10MB (smaller!)
Maintenance: Very Low
Why: If 'pending' is common filter, partial index is more
efficient
Recommendation: Create composite index (#1) if you frequently
query by both status and date. Use partial index (#2) if you
mostly query pending orders.
Production Deployment¶
Load Testing¶
HeliosDB EXPLAIN is load tested for production use:
Load Test Results:
Concurrent Requests: 1000
Duration: 60 seconds
Total Requests: 58,432
Success Rate: 99.8%
Performance:
Requests/sec: 974
P50 Latency: 45ms
P95 Latency: 98ms
P99 Latency: 156ms
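For context, P50/P95/P99 figures like those above are typically nearest-rank percentiles over the observed latencies. A stdlib sketch with invented sample values:

```rust
// Nearest-rank percentile: p in [0, 100] over an ascending-sorted sample.
fn percentile(sorted_ms: &[u64], p: f64) -> u64 {
    assert!(!sorted_ms.is_empty() && (0.0..=100.0).contains(&p));
    let rank = ((p / 100.0) * sorted_ms.len() as f64).ceil() as usize;
    sorted_ms[rank.saturating_sub(1).min(sorted_ms.len() - 1)]
}

fn main() {
    let mut ms = vec![45, 12, 98, 50, 45, 156, 44, 46, 45, 47];
    ms.sort_unstable();
    assert_eq!(percentile(&ms, 50.0), 45);
    assert_eq!(percentile(&ms, 95.0), 156);
}
```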
Performance Benchmarks¶
| Operation | Target | Actual | Status |
|---|---|---|---|
| Standard EXPLAIN | <100ms | 12ms | Pass |
| Verbose EXPLAIN | <150ms | 45ms | Pass |
| AI EXPLAIN | <500ms | 234ms | Pass |
| Analyze EXPLAIN | <1000ms | 567ms | Pass |
| HTML Generation | <500ms | 123ms | Pass |
| SVG Generation | <300ms | 78ms | Pass |
Production Configuration¶
```rust
use heliosdb::sql::explain_production::{
    ProductionExplainService,
    ProductionConfig,
};

let config = ProductionConfig {
    max_concurrent_requests: 1000,
    timeout_ms: 5000,
    enable_caching: true,
    cache_ttl_seconds: 300,
    max_memory_mb: 512,
    enable_monitoring: true,
    enable_detailed_errors: true,
};

let service = ProductionExplainService::new(config);

// Execute EXPLAIN with production safeguards
let result = service.explain(&plan, ExplainMode::AI)?;
```
Monitoring and Metrics¶
Prometheus Metrics¶
HeliosDB EXPLAIN exports Prometheus metrics:
```
# HELP explain_requests_total Total EXPLAIN requests
# TYPE explain_requests_total counter
explain_requests_total 1234

# HELP explain_duration_ms EXPLAIN operation duration
# TYPE explain_duration_ms histogram
explain_duration_ms_bucket{le="50"} 800
explain_duration_ms_bucket{le="100"} 950
explain_duration_ms_bucket{le="500"} 1200

# HELP explain_cache_hits_total Cache hits
# TYPE explain_cache_hits_total counter
explain_cache_hits_total 890
```
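A dashboard turns cumulative buckets like these into a latency estimate the way Prometheus' `histogram_quantile` does: find the bucket containing the target observation and interpolate linearly inside it. The helper below is our own stdlib-only sketch, using the sample bucket values:

```rust
// Estimate a quantile from cumulative histogram buckets (le, count),
// interpolating linearly within the target bucket.
fn histogram_quantile(buckets: &[(f64, u64)], q: f64) -> f64 {
    let total = buckets.last().expect("non-empty buckets").1 as f64;
    let target = q * total;
    let mut prev_le = 0.0;
    let mut prev_count = 0.0;
    for &(le, cumulative) in buckets {
        let count = cumulative as f64;
        if count >= target {
            // Interpolate between the bucket's lower and upper bound.
            return prev_le + (le - prev_le) * (target - prev_count) / (count - prev_count);
        }
        prev_le = le;
        prev_count = count;
    }
    buckets.last().unwrap().0
}

fn main() {
    let buckets = [(50.0, 800), (100.0, 950), (500.0, 1200)];
    // Median request: 0.5 * 1200 = 600 observations, all inside the <=50ms bucket.
    let p50 = histogram_quantile(&buckets, 0.5);
    assert!((p50 - 37.5).abs() < 1e-9);
}
```

Because the value is interpolated from bucket bounds, its precision is limited by the bucket layout.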
Grafana Dashboards¶
Pre-built dashboards included:
- EXPLAIN Analytics Dashboard
  - Requests per second
  - Latency percentiles (P50, P95, P99)
  - Cache hit rate
  - Plan complexity trends
- Performance Monitoring Dashboard
  - Query cost distribution
  - Feature activation frequency
  - Error rate trends
  - Memory usage
- Optimizer Insights Dashboard
  - Most common plan patterns
  - Index usage statistics
  - Join strategy distribution
  - Why-Not analysis trends
APM Integration¶
Datadog¶
```rust
use heliosdb::sql::explain_integrations::DatadogIntegration;

let datadog = DatadogIntegration::new("your-api-key".to_string());

// Send trace to Datadog APM
datadog.send_trace(&explain_output, duration_ms)?;
```
New Relic¶
```rust
use heliosdb::sql::explain_integrations::NewRelicIntegration;

let newrelic = NewRelicIntegration::new("your-license-key".to_string());

// Send custom event
newrelic.send_event(&explain_output, duration_ms)?;
```
Advanced Features¶
Plan Versioning¶
Track query plan evolution over time:
```rust
use heliosdb::sql::explain_advanced::PlanVersionManager;

let mut manager = PlanVersionManager::new();

// Store plan version
let version = manager.store_version(
    "query-hash-123".to_string(),
    explain_output,
    "alice".to_string(),
    "production".to_string(),
);

// Get all versions
let versions = manager.get_versions("query-hash-123");

// Compare versions
let diff = manager.compare_versions("v1", "v2")?;
println!("{}", diff.visualize_diff());
```
Output:
╔════════════════════════════════════════════════════════════════╗
║ PLAN DIFF ANALYSIS ║
╠════════════════════════════════════════════════════════════════╣
║ SUMMARY ║
╠════════════════════════════════════════════════════════════════╣
║ Plan improved: Cost reduced by 45.2% (1000.00 → 547.80) ║
║ ║
║ 2 optimizer features changed: ║
║ + Index Scan ║
║ + Predicate Pushdown ║
╠════════════════════════════════════════════════════════════════╣
║ COST COMPARISON ║
╠════════════════════════════════════════════════════════════════╣
║ Old Cost: 1000.00 ║
║ New Cost: 547.80 ║
║ Change: -45.2% ║
╚════════════════════════════════════════════════════════════════╝
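The "-45.2%" figure in the diff is simply the relative cost change, which is handy to recompute when scripting against the JSON output (the `cost_change_pct` helper name is ours):

```rust
// Relative plan-cost change; negative means the plan got cheaper.
fn cost_change_pct(old_cost: f64, new_cost: f64) -> f64 {
    (new_cost - old_cost) / old_cost * 100.0
}

fn main() {
    let pct = cost_change_pct(1000.00, 547.80);
    assert!((pct - (-45.22)).abs() < 1e-6); // reported rounded to -45.2%
}
```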
Plan Library¶
Save and share common query patterns:
```rust
use heliosdb::sql::explain_advanced::{
    PlanLibrary,
    PlanCategory,
};

let mut library = PlanLibrary::new();

// Save a plan
let saved = library.save_plan(
    "Fast User Lookup".to_string(),
    "Efficient single-row fetch using primary key".to_string(),
    "SELECT * FROM users WHERE id = ?".to_string(),
    explain_output,
    PlanCategory::BestPractice,
    vec!["select".to_string(), "indexed".to_string()],
    "bob".to_string(),
);

// Search by category
let best_practices = library.search_by_category(&PlanCategory::BestPractice);

// Get most used patterns
let popular = library.get_most_used(10);
```
Historical Analysis¶
Analyze plan evolution:
```rust
use heliosdb::sql::explain_advanced::HistoricalAnalysis;

let analysis_engine = HistoricalAnalysis::new(30); // 30-day retention
let evolution = analysis_engine.analyze_evolution(&versions);

println!("Total versions: {}", evolution.total_versions);
println!("Time span: {} days", evolution.time_span_days);

// Cost trend over time
for (timestamp, cost) in evolution.cost_trend {
    println!("{}: {:.2}", timestamp, cost);
}

// Feature adoption
for (feature, count) in evolution.feature_adoption {
    println!("{}: {} times", feature, count);
}
```
Web UI¶
Interactive Visualization¶
Access the web UI:
Features:
- Dark Mode: Automatic or manual toggle
- Interactive Plan Tree: Hover for details, click to expand
- 📥 Export: PNG, SVG, PDF, HTML formats
- 📋 Copy-Paste Friendly: One-click copy to clipboard
- ♿ Accessible: WCAG 2.1 AA compliant
- 📱 Responsive: Works on mobile devices
Dark Mode¶
```javascript
// Auto-detect system preference
if (window.matchMedia('(prefers-color-scheme: dark)').matches) {
    // Dark mode enabled
}

// Manual toggle
toggleTheme(); // Button in UI
```
Export Formats¶
PNG Export¶
SVG Export¶
PDF Export¶
HTML Export¶
Best Practices¶
1. Use Appropriate EXPLAIN Mode¶
```sql
-- Development: use AI mode for learning
EXPLAIN AI SELECT ...;

-- Production debugging: use ANALYZE for a deep dive
EXPLAIN ANALYZE SELECT ...;

-- Quick checks: use standard mode
EXPLAIN SELECT ...;

-- Before deploying: use ADVISOR
EXPLAIN ADVISOR SELECT ...;
```
2. Regular Statistics Updates¶
```sql
-- Update statistics weekly (or after major data changes)
ANALYZE users;

-- Check statistics age
SELECT table_name, stats_age_days
FROM pg_stat_user_tables
WHERE stats_age_days > 7;
```
3. Monitor Plan Changes¶
4. Cache Frequently Analyzed Plans¶
```rust
// Enable caching in production
let config = ProductionConfig {
    enable_caching: true,
    cache_ttl_seconds: 300, // 5 minutes
    ..Default::default()
};
```
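Under the hood, a TTL cache is little more than a map of `(stored_at, value)` entries checked against the TTL on read. A minimal stdlib sketch of the idea (`ExplainCache` is illustrative, not the service's actual cache type):

```rust
use std::collections::HashMap;
use std::time::{Duration, Instant};

// Minimal TTL cache for EXPLAIN output keyed by query text.
struct ExplainCache {
    ttl: Duration,
    entries: HashMap<String, (Instant, String)>,
}

impl ExplainCache {
    fn new(ttl_seconds: u64) -> Self {
        Self { ttl: Duration::from_secs(ttl_seconds), entries: HashMap::new() }
    }

    fn put(&mut self, query: &str, plan: String) {
        self.entries.insert(query.to_string(), (Instant::now(), plan));
    }

    // Entries older than the TTL are treated as misses.
    fn get(&self, query: &str) -> Option<&String> {
        self.entries.get(query).and_then(|(stored_at, plan)| {
            (stored_at.elapsed() < self.ttl).then_some(plan)
        })
    }
}

fn main() {
    let mut cache = ExplainCache::new(300); // 5 minutes, matching cache_ttl_seconds
    cache.put("SELECT 1", "Result [cost=0.01]".to_string());
    assert!(cache.get("SELECT 1").is_some());
    assert!(cache.get("SELECT 2").is_none());
}
```

Expired entries are simply treated as misses here; a real cache would also evict them to bound memory.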
5. Use What-If Analysis Before Index Creation¶
```sql
-- Don't create an index blindly
EXPLAIN WHAT-IF (INDEX users(email))
SELECT * FROM users WHERE email = 'test@test.com';
-- Check the impact first!
```
Troubleshooting¶
Problem: EXPLAIN is slow¶
Possible Causes:
1. Complex query with many joins
2. Statistics are stale
3. Large plan tree
Solutions:
```sql
-- Use caching
SET explain_cache_enabled = true;

-- Update statistics
ANALYZE;

-- Use standard mode instead of ANALYZE
EXPLAIN SELECT ...; -- instead of EXPLAIN ANALYZE
```
Problem: Cost estimates are very wrong¶
Possible Causes:
1. Stale statistics
2. Correlated columns
3. Data distribution changed
Solutions:
```sql
-- Update statistics
ANALYZE users;

-- Check statistics age
SELECT last_analyzed FROM pg_stat_user_tables WHERE table_name = 'users';

-- For correlated columns, create extended statistics
CREATE STATISTICS users_age_country_stats (dependencies)
ON age, country FROM users;
```
Problem: Index not being used¶
Diagnosis:
Common Reasons:
1. Too many rows match (>10% of table)
2. Function applied to column
3. Data type mismatch
4. Statistics outdated
Solutions:
```sql
-- Update statistics
ANALYZE users;

-- Check selectivity
SELECT COUNT(*) * 100.0 / (SELECT COUNT(*) FROM users)
FROM users WHERE age > 25;
-- If > 10%, a seq scan may be faster!

-- For functions, use a functional index
CREATE INDEX idx_users_lower_email ON users (LOWER(email));
```
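The 10% rule of thumb can be checked in code before reaching for `CREATE INDEX`. The threshold here is this guide's heuristic, not a fixed constant of the engine, and `index_scan_worthwhile` is our own illustrative name:

```rust
// Selectivity heuristic: index scans pay off when only a small fraction
// of the table matches; above ~10%, a sequential scan is often cheaper.
fn index_scan_worthwhile(matching_rows: u64, total_rows: u64) -> bool {
    (matching_rows as f64) / (total_rows as f64) <= 0.10
}

fn main() {
    // The intro example: 15,000 of 50,000 rows match age > 25 (30% selectivity),
    // so the sequential scan the planner chose is the expected outcome.
    assert!(!index_scan_worthwhile(15_000, 50_000));
    assert!(index_scan_worthwhile(400, 50_000)); // 0.8%: an index would help
}
```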
Problem: Out of memory during EXPLAIN¶
Possible Causes:
1. Very complex query
2. Cache filled up
3. Memory leak (rare)
Solutions:
```sql
-- Clear cache
SELECT pg_explain_clear_cache();

-- Increase memory limit
ALTER SYSTEM SET explain_max_memory = '1GB';

-- Disable caching temporarily
SET explain_cache_enabled = false;
```
Problem: Transaction EXPLAIN shows high deadlock risk¶
Diagnosis:
Solutions:
```sql
-- Acquire locks in a consistent order
BEGIN;
SELECT * FROM accounts WHERE id IN (1, 2) ORDER BY id FOR UPDATE;
UPDATE accounts SET balance = balance - 100 WHERE id = 1;
UPDATE accounts SET balance = balance + 100 WHERE id = 2;
COMMIT;

-- Use a lower isolation level if possible
BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED;

-- Keep transactions short
-- Avoid user interaction during transactions
```
Appendix: Comparison with Other Databases¶
vs PostgreSQL EXPLAIN¶
| Feature | PostgreSQL | HeliosDB |
|---|---|---|
| AI Explanations | ❌ No | Yes |
| Why-Not Analysis | ❌ No | Yes |
| Transaction Analysis | ⚠ Limited | Full |
| Distributed Query | ❌ No | Yes |
| Web UI | ❌ No | Yes |
| Plan Versioning | ❌ No | Yes |
| Real-Time Tracking | ANALYZE | Enhanced |
| Index Advisor | ⚠ Extension | Built-in |
| Export Formats | ⚠ TEXT/JSON | 5+ formats |
vs MySQL EXPLAIN¶
| Feature | MySQL | HeliosDB |
|---|---|---|
| Cost Estimates | ⚠ Limited | Detailed |
| AI Explanations | ❌ No | Yes |
| JSON Output | Yes | Enhanced |
| Optimizer Trace | Yes | Enhanced |
| Visual Plans | ❌ No | Yes |
vs Oracle Execution Plans¶
| Feature | Oracle | HeliosDB |
|---|---|---|
| Detailed Plans | Yes | Yes |
| Cost Model | Advanced | Advanced |
| AI Insights | ❌ No | Yes |
| Plan Management | SPM | Versioning |
| Distributed | RAC | Native |
| Price | 💰💰💰 | 🆓 Free |
Further Reading¶
- Tutorial Series - 10+ step-by-step tutorials
- Best Practices Guide - 50+ pages of optimization tips
- Troubleshooting Guide - 40+ common scenarios
- API Reference - Complete API documentation
HeliosDB EXPLAIN - The most developer-friendly query plan analyzer in the industry.