QuantumLayers User Guide

Version 1.3 | Last Updated: March 2026


Table of Contents

  • Creating Your Account
  • Signing In
  • Managing Your Profile
  • Account Security
  • Your Dashboard
  • Uploading Data
  • Connecting Data Sources
  • Merging Datasets
  • Managing Your Datasets
  • Best Practices
  • Understanding Your Data
  • Creating Visualizations
  • Date Range Filtering

Creating Your Account

Get started with QuantumLayers by creating your account. Choose between Google Sign-In for quick setup or email registration for more control.

Registration Options

Option A: Google Sign-In (Fastest)

  • Click “Continue with Google”
  • Select your Google account
  • Authorize QuantumLayers access
  • Your account is created instantly

Option B: Email Registration

Required Information:

  • First & Last Name: Used to personalize your experience
  • Email Address: This becomes your login username
  • Password: Must be at least 8 characters (use the “Show” button to verify what you typed)
  • Company: Optional – add your organization name if relevant

Email Verification

After registering, check your email for a verification link. Click it to confirm your email address and you’ll be automatically logged into your new account.


Signing In

Access your QuantumLayers account using the same method you used to register.

Google Sign-In

Simply click “Continue with Google,” select your account, and you’re in. Your session remains active until you log out or after 30 days of inactivity.

Email Login

  1. Enter your email address
  2. Enter your password
  3. Optionally check “Remember me for 30 days” (only on your personal devices)
  4. Click “Sign In”

Tip: Use the password visibility toggle to make sure you’ve typed your password correctly.


Managing Your Profile

Update your personal information, change your password, and manage your account settings in your profile page.

Updating Your Information

You can change your first name, last name, company name, email address, communication preferences and UI mode at any time. Your changes are saved when you click “Update Profile.”

Important: If you change your email address, you’ll need to verify the new email before it becomes active.

Changing Your Password

For Email Accounts:

Enter a new password (minimum 8 characters) and confirm it. The system will show you the password strength in real-time:

  • Weak: Short or simple password
  • Medium: 8-12 characters with some variety
  • Strong: 12+ characters with uppercase, lowercase, numbers, and symbols
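
Roughly speaking, the tiers map to rules like these (a sketch only; the exact checks QuantumLayers runs may differ):

```python
import string

# Rough sketch of the strength tiers described above.
# The exact rules QuantumLayers applies may differ.
def strength(password):
    # Count how many character classes the password uses
    classes = sum([
        any(c.islower() for c in password),
        any(c.isupper() for c in password),
        any(c.isdigit() for c in password),
        any(c in string.punctuation for c in password),
    ])
    if len(password) >= 12 and classes == 4:
        return "Strong"
    if len(password) >= 8 and classes >= 2:
        return "Medium"
    return "Weak"

print(strength("hello"))            # Weak: short and simple
print(strength("blueSky2024"))      # Medium: 11 chars with some variety
print(strength("Blue!Sky#2024ok"))  # Strong: 12+ chars, all four classes
```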

For Google Accounts:

Password changes must be done through your Google account settings.


Account Security

Logging Out

Click the “Logout” button in the dashboard header. You’ll be asked to confirm, then securely logged out of your account.

Deleting Your Account

⚠️ Warning: This permanently deletes everything!

Deleting your account removes all your datasets, connections, insights, and analysis history. This cannot be undone.

Deletion Process:

  1. Scroll to the “Danger Zone” at the bottom of your profile page
  2. Click “Delete My Account”
  3. Confirm you understand everything will be deleted
  4. Enter your password to verify
  5. Your account and all data are permanently removed

Your Dashboard

Your dashboard is mission control for all your data analysis. Here you’ll see all your datasets and quickly access the tools you need.

Quick Actions

The header buttons give you instant access to key features:

  • Upload: Add a CSV file from your computer
  • Connect: Link to live data sources (databases, APIs, cloud storage)
  • Merge: Combine multiple datasets into one
  • Report: Create a new scheduled report to automatically deliver insights to your inbox
  • QL-Agent: Open the AI-powered conversational agent to interact with your data using natural language
  • Logout: Sign out of your account

Your Datasets

Each dataset shows:

  • Name: The name you gave it
  • Source: Where the data came from (upload, database, API, etc.)
  • Status: Processing state (🟢 Ready, 🟡 Processing, 🔴 Failed)
  • Size: Number of rows
  • Created: When you added it

Click View Details to see column statistics and data quality, Analyze to create visualizations, or Insights to generate AI insights.

Your Scheduled Reports

Below your datasets, the dashboard displays a My Scheduled Reports section listing all reports you’ve configured. From here you can see each report’s name, frequency, and recipient list at a glance. Click + Report to create a new scheduled report (see the Scheduled Reports section below for full details).


Uploading Data

You can load datasets by uploading CSV files from your computer for instant analysis. Uploaded files are stored securely on the QuantumLayers server.

Upload Requirements

  • File Type: CSV only
  • Maximum Size: 50MB for free tier
  • Headers: First row must contain column names
  • Recommended Limit: Up to 1 million rows

How to Upload

  1. Click the Upload button in your dashboard
  2. Give your dataset a descriptive name
  3. Optionally add a description
  4. Choose privacy: Private (only you) or Public (shareable)
  5. Either click to browse for your file or drag and drop it
  6. Click “Upload & Analyze”

You’ll see a progress bar as your file uploads. Once complete, your dataset will appear in the dashboard with “Processing” status while our AI analyzes it. This usually takes just a few minutes.


Connecting Data Sources

Instead of manually uploading files, you can connect directly to your live data sources. QuantumLayers automatically synchronizes your data every hour, keeping your analysis up to date. For SQL and API connections, the data itself is accessed live, while the dataset structure is synchronized hourly.

SQL Databases

Connect directly to your MySQL, PostgreSQL, or SQL Server databases to analyze live data without any manual export process. This is perfect for tracking real-time metrics from your production systems, sales databases, or customer relationship management platforms. Once connected, your data is queried live, ensuring your analysis always reflects the current state of your business.

Connection Requirements:

  • Database Server Address: The hostname or IP address where your database is hosted (for example, “db.yourcompany.com” or “192.168.1.100”)
  • Port Number: Usually 3306 for MySQL or 5432 for PostgreSQL, though your database administrator can tell you if it’s different
  • Database Name: The specific database you want to query (your database may contain multiple databases, so make sure you have the exact name)
  • Username and Password: Login credentials with read access to the database (never use admin credentials – create a read-only user specifically for QuantumLayers)
  • SQL Query: A SELECT statement that defines exactly what data you want to analyze

SQL Query Templates

QuantumLayers now includes pre-built SQL query templates to help you get started quickly. When connecting to a SQL database, you can select your service type and choose from a variety of common query patterns.

How to Use Templates:

  1. Select your database type (MySQL, PostgreSQL, or SQL Server)
  2. Choose a template from the dropdown menu (examples include “Recent Sales Orders,” “Customer Summary,” “Product Performance,” etc.)
  3. The template query automatically loads into the SQL query box
  4. Customize the query with your actual table and column names
  5. Test and save your connection

Templates save you time by providing properly structured queries that follow best practices. You simply replace the placeholder table and column names with your actual database schema. This is particularly helpful if you’re new to SQL or want to ensure you’re using efficient query patterns.

Example SQL Query:

Let’s say you want to analyze your sales data. Your query might look like this:
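
For instance, with illustrative table and column names (replace them with your own schema; `DATE_SUB`/`CURDATE()` is MySQL syntax, while PostgreSQL would use `CURRENT_DATE - INTERVAL '12 months'`):

```sql
-- Illustrative only: adjust table and column names to your schema.
SELECT
    order_id,
    customer_id,
    product_name,
    region,
    order_total,
    order_date,
    status
FROM orders
WHERE status = 'completed'
  AND order_date >= DATE_SUB(CURDATE(), INTERVAL 12 MONTH)
ORDER BY order_date;
```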

This query pulls the last 12 months of completed orders, giving you all the columns you need to analyze sales patterns, customer behavior, product performance, and regional trends. As new orders come in, QuantumLayers will automatically refresh the data to its newest state.

Security Best Practices:

Always create a dedicated read-only database user for QuantumLayers. This user should have SELECT permissions only – no INSERT, UPDATE, or DELETE capabilities. This ensures that even if your connection details were somehow compromised, nobody could modify or delete your data. All credentials are encrypted both in transit and at rest, providing an additional layer of security for your sensitive database information.

REST APIs

Pull data from any HTTP/REST API that returns JSON-formatted data. This is ideal for connecting to third-party services like payment processors, marketing platforms, social media analytics, or any custom API your development team has built. Many modern business tools provide API access, allowing you to centralize all your data analysis in QuantumLayers without manual exports or complicated integrations.

What You Need:

  • API Endpoint URL: The complete web address that returns your data (for example, “https://api.yourservice.com/v1/transactions”)
  • API Key: An authentication token that proves you have permission to access the data (most APIs provide this in their settings or developer portal)

Automatic JSON Format Detection

QuantumLayers automatically detects and converts various JSON response formats into tabular data. You do not need to worry about whether your API returns data in a specific structure – the system intelligently analyzes the response and extracts the data table.

How Auto-Detection Works:

When you connect to an API, QuantumLayers analyzes the JSON response using a priority-based detection system. The system checks for different JSON patterns in this specific order:

  1. Direct Array of Objects – Checks if the response is immediately an array of data records
  2. Nested Data Wrappers – Looks for common wrapper keys like “data”, “results”, “items”, “records”, “rows”, “entries”, “content”, or “payload”
  3. Column-Oriented Formats – Detects formats with separate arrays for column names and data values
  4. Pure Columnar Format – Identifies when each property contains an array of values for that column
  5. Pagination Wrappers – Searches nested structures under keys like “response”, “result”, “body”, or “output”
  6. Single Object – Converts a single data record into a one-row dataset

This priority order ensures that common API response formats are correctly identified, even when they contain multiple levels of nesting or use different structural patterns.
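
If you're curious how the priority order plays out, here is a rough Python sketch of the detection logic (an illustration only; the key lists come from the steps above, but QuantumLayers' actual implementation may differ):

```python
# Sketch of the priority-based detection described above -- for
# illustration only, not QuantumLayers' actual implementation.
WRAPPER_KEYS = ["data", "results", "items", "records",
                "rows", "entries", "content", "payload"]
OUTER_KEYS = ["response", "result", "body", "output"]

def extract_records(payload):
    # 1. Direct array of objects
    if isinstance(payload, list) and all(isinstance(r, dict) for r in payload):
        return payload
    if isinstance(payload, dict):
        # 2. Nested data wrappers containing an array of records
        for key in WRAPPER_KEYS:
            value = payload.get(key)
            if isinstance(value, list) and value and all(isinstance(r, dict) for r in value):
                return value
        # 3. Column-oriented format (the real system also handles
        #    "headers"/"rows" and "fields"/"records" combinations)
        if isinstance(payload.get("columns"), list) and isinstance(payload.get("data"), list):
            return [dict(zip(payload["columns"], row)) for row in payload["data"]]
        # 4. Pure columnar: every property is an array of column values
        if payload and all(isinstance(v, list) for v in payload.values()):
            names = list(payload)
            return [dict(zip(names, row)) for row in zip(*payload.values())]
        # 5. Pagination wrappers: recurse one level down
        for key in OUTER_KEYS:
            if isinstance(payload.get(key), (dict, list)):
                return extract_records(payload[key])
        # 6. Single object becomes a one-row dataset
        return [payload]
    raise ValueError("Unrecognized JSON structure")

print(extract_records({"results": [{"order_id": 5001}]}))  # [{'order_id': 5001}]
```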

Supported JSON Formats:

1. Direct Array of Objects (Most Common)
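
For example (field names are illustrative):

```json
[
  {"order_id": 5001, "amount": 2500.0, "status": "completed"},
  {"order_id": 5002, "amount": 1200.0, "status": "completed"}
]
```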

2. Nested Data Wrapper

Works with wrapper keys: “data”, “results”, “items”, “records”, “rows”, “entries”, “content”, or “payload”
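
An illustrative response in this shape:

```json
{
  "data": [
    {"order_id": 5001, "amount": 2500.0},
    {"order_id": 5002, "amount": 1200.0}
  ]
}
```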

3. Column-Oriented Format

Also supports “headers”/”rows” and “fields”/”records” combinations
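
For example, in the “columns”/“data” form (field names illustrative):

```json
{
  "columns": ["order_id", "amount"],
  "data": [[5001, 2500.0], [5002, 1200.0]]
}
```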

4. Pure Columnar Format
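
For example (field names illustrative):

```json
{
  "order_id": [5001, 5002],
  "amount": [2500.0, 1200.0]
}
```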

5. Nested Pagination Wrapper

Works with outer wrappers: “response”, “result”, “body”, or “output”
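
An illustrative response in this shape:

```json
{
  "response": {
    "page": 1,
    "data": [
      {"order_id": 5001, "amount": 2500.0}
    ]
  }
}
```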

6. Single Object Response

Converted into a single-row dataset
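
For example (field names illustrative):

```json
{"order_id": 5001, "amount": 2500.0, "status": "completed"}
```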

What Happens Automatically:

  • QuantumLayers identifies the structure of your JSON response
  • Extracts the tabular data regardless of wrapper format
  • Automatically detects column names and data types
  • Converts timestamps, numbers, and text appropriately
  • Creates a properly structured dataset ready for analysis

API connections are queried live each time your data is accessed, so the information stays current. This is perfect for dashboards and reports that need to stay up to date throughout the day without any manual intervention.

SFTP Servers

You can automatically sync CSV files from your SFTP (Secure File Transfer Protocol) server. This is particularly useful when your organization exports data files to an SFTP location on a regular schedule, such as nightly database exports, automated report generation, or data feeds from other systems. Instead of manually downloading these files and uploading them to QuantumLayers, the system connects directly to your SFTP server and pulls the latest version automatically.

Connection Details Required:

  • SFTP Server Address: The hostname of your SFTP server (for example, “sftp.yourcompany.com” or “files.example.org”)
  • Port: Usually port 22 for SFTP connections, though your IT department can confirm if a custom port is configured
  • Username: Your SFTP account username
  • Authentication: Either a password or an SSH private key (SSH keys are more secure and recommended for automated connections)
  • Remote Directory Path: The full path to the folder containing your CSV file
  • CSV Filename or Pattern: The exact name of the file to synchronize, or a wildcard pattern to match multiple files

Wildcard Filename Patterns

QuantumLayers now supports wildcard patterns in SFTP filenames using the asterisk (*) character. This powerful feature allows you to automatically sync the most recent file from a set of files that follow a naming pattern, eliminating the need to update your connection configuration every time a new file is generated.

How Wildcard Matching Works:

When you include an asterisk (*) in your filename, QuantumLayers searches the specified directory for all files matching that pattern. It then automatically selects and syncs the most recently modified file. This means your dataset always reflects the latest available data, even when new files are added to the SFTP server.

Wildcard Pattern Examples:

Example 1: Daily Export Files

Perfect for daily exports where each file includes the date in its name. QuantumLayers automatically picks up today’s file without any manual intervention.
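
For instance, with a pattern like this (the filename convention is illustrative):

```
Pattern:  daily_sales_*.csv
Matches:  daily_sales_2024_12_15.csv, daily_sales_2024_12_16.csv, ...
```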

Example 2: Monthly Reports

Ideal for monthly reporting cycles. As each new month’s report is generated, QuantumLayers automatically switches to analyzing the latest month’s data.
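
For instance (illustrative names):

```
Pattern:  monthly_report_*.csv
Matches:  monthly_report_2024_11.csv, monthly_report_2024_12.csv, ...
```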

Example 3: Timestamped Exports

Great for quarterly or year-based exports. You can filter to specific years while still automatically picking up the latest file within that timeframe.
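
For instance, to stay within a single year (names illustrative):

```
Pattern:  export_2025_*.csv
Matches:  export_2025_Q1.csv, export_2025_Q2.csv, ...
```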

Wildcard Best Practices:

  • Make your patterns specific enough to avoid matching unintended files
  • Use consistent naming conventions in your export processes
  • Remember that QuantumLayers selects files based on modification time, not filename alphabetical order
  • Test your pattern by checking which files exist on your SFTP server
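
The selection behavior described above can be sketched as follows (an illustration, not QuantumLayers' actual code; the filenames and timestamps are made up):

```python
import fnmatch

# Pick the most recently modified file matching a wildcard pattern.
# Illustration only -- not QuantumLayers' actual implementation.
def pick_latest(files, pattern):
    """files: list of (filename, modified_timestamp) tuples."""
    matches = [f for f in files if fnmatch.fnmatch(f[0], pattern)]
    if not matches:
        return None
    # Selection is by modification time, NOT alphabetical filename order.
    return max(matches, key=lambda f: f[1])[0]

files = [
    ("sales_2024_12_14.csv", 1734130000),
    ("sales_2024_12_15.csv", 1734216400),
    ("notes.txt",            1734300000),
]
print(pick_latest(files, "sales_*.csv"))  # sales_2024_12_15.csv
```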

Example Configuration:

Let’s say your accounting system exports a daily sales report to your SFTP server. Your configuration might look like this:

  • Server: sftp.yourcompany.com
  • Port: 22
  • Username: data_reports
  • Remote Directory Path: /exports/sales/daily
  • CSV Filename Pattern: sales_*.csv

With this setup, QuantumLayers connects to your SFTP server at sftp.yourcompany.com, navigates to the /exports/sales/daily directory, and downloads the file matching sales_*.csv that was most recently modified. The system checks for updates every hour, but it’s smart enough to only download the file if it has been modified since the last sync. This is determined by checking the file’s last modification timestamp on the SFTP server, so you’re not wasting bandwidth re-downloading identical files.

If your organization overwrites the same filename each day (like “daily_sales_report.csv” being replaced every night), this connection type is perfect. If your files have different names each day (like “sales_2024_12_15.csv”, “sales_2024_12_16.csv”), the wildcard pattern feature automatically picks up the newest file without any configuration changes.

Google Sheets

Connect directly to any Google Spreadsheet you have access to, making it easy to analyze data that your team maintains in Google Sheets. This is perfect for situations where non-technical team members update data in a familiar spreadsheet interface, and you want to perform advanced analysis on that data without requiring them to export files or learn new tools. Marketing campaign results, inventory tracking, customer feedback, and budget planning spreadsheets can all become live datasets in QuantumLayers.

Initial Setup – Google Authorization:

The first time you connect a Google Sheet, you’ll need to authorize QuantumLayers to access your Google account. A popup window will appear asking you to sign in to Google and grant read-only access to your spreadsheets. This is a one-time setup – after initial authorization, you can connect as many Google Sheets as you want without going through the authorization process again. QuantumLayers only requests read-only permissions, so your spreadsheets can never be modified through this connection.

Finding Your Spreadsheet ID:

Every Google Spreadsheet has a unique identifier embedded in its URL. When you open any Google Sheet, look at the address bar in your browser. The URL will look something like this:
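
For example (this is the same illustrative spreadsheet used in the Complete Example below):

```
https://docs.google.com/spreadsheets/d/1a2B3c4D5e6F7g8H9i0J1k2L3m4N5o6P7q8R9s0T1u2V3w4X/edit#gid=0
```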

The spreadsheet ID is the long string of letters and numbers between “/d/” and “/edit”. In this example, the ID is:
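
```
1a2B3c4D5e6F7g8H9i0J1k2L3m4N5o6P7q8R9s0T1u2V3w4X
```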

You can copy either the entire URL or just the ID portion into QuantumLayers – the system will automatically extract the ID if you paste the full URL.

Specifying Sheet and Range:

Most Google Spreadsheets contain multiple sheets (tabs) within a single file. You’ll need to tell QuantumLayers which specific sheet to import:

  • Sheet Name: The name of the tab you want to import (for example, “Sales Data” or “Q4 Results”). If you leave this blank, it defaults to “Sheet1”
  • Cell Range: Optionally specify which cells to import using A1 notation (for example, “A1:Z1000” to import the first 1000 rows across columns A through Z). If you leave this blank, the entire sheet is imported

Complete Example:

Imagine your marketing team maintains a spreadsheet tracking campaign performance. The spreadsheet has multiple sheets for different quarters, and you want to analyze Q4 data:

  • Spreadsheet URL: https://docs.google.com/spreadsheets/d/1a2B3c4D5e6F7g8H9i0J1k2L3m4N5o6P7q8R9s0T1u2V3w4X/edit#gid=0
  • Sheet Name: Q4 Campaigns
  • Range: A1:H500 (imports columns A through H, rows 1 through 500)

Once connected, QuantumLayers syncs this data every hour. When your marketing team updates the spreadsheet with new campaign results or adjusts existing data, those changes automatically flow into your dataset within the hour. This means your analysis and visualizations always reflect the current state of the spreadsheet without any manual intervention.


Merging Datasets

Merging allows you to combine data from multiple sources into a single, unified dataset for comprehensive analysis. This is one of the most powerful features in QuantumLayers, enabling you to answer questions that require information from different systems. For example, you might merge customer data with purchase history to understand buying patterns, or combine product information with sales figures to analyze performance by category.

Understanding How Merging Works

Think of merging as matching puzzle pieces. You have two or more datasets, each containing related information, and you want to combine them based on something they have in common. This common element is called the “join column” – a column that exists in all the datasets you want to merge and contains matching values that link the records together.

For merging to work successfully, you need at least two datasets that share a column with the exact same name and contain related values in that column. The most common examples are ID columns like customer_id, product_id, order_id, or employee_id – unique identifiers that appear in multiple systems to link related information.

Practical Example: Merging Customer and Order Data

Let’s walk through a real-world example. Imagine you have two datasets:

Dataset 1: Customers

| customer_id | customer_name   | email                 | region |
|-------------|-----------------|-----------------------|--------|
| 101         | Acme Corp       | contact@acme.com      | North  |
| 102         | TechFlow Inc    | info@techflow.com     | South  |
| 103         | GlobalTrade Ltd | sales@globaltrade.com | East   |
| 104         | Innovate Co     | hello@innovate.co     | West   |

Dataset 2: Orders

| order_id | customer_id | order_date | order_total | status    |
|----------|-------------|------------|-------------|-----------|
| 5001     | 101         | 2024-12-10 | 2500.00     | completed |
| 5002     | 102         | 2024-12-12 | 1200.00     | completed |
| 5003     | 101         | 2024-12-15 | 3400.00     | pending   |
| 5004     | 105         | 2024-12-16 | 800.00      | completed |

Notice that both datasets have a column called “customer_id”. This is your join column – the common element that links customers to their orders. Customer 101 (Acme Corp) appears in both datasets, as does customer 102 (TechFlow Inc). However, customer 103 (GlobalTrade Ltd) exists in the customers dataset but has no orders, and customer 105 appears in the orders dataset but isn’t in the customers table.

Join Types: Choosing How to Combine Your Data

Different join types determine which records are included in your merged dataset. Here’s how each type would combine our customer and order data:

Inner Join – Only Matching Records

An inner join includes only the rows where the customer_id exists in both datasets. This gives you only customers who have placed orders.

Result of Inner Join:

| customer_id | customer_name | region | order_id | order_total | status    |
|-------------|---------------|--------|----------|-------------|-----------|
| 101         | Acme Corp     | North  | 5001     | 2500.00     | completed |
| 101         | Acme Corp     | North  | 5003     | 3400.00     | pending   |
| 102         | TechFlow Inc  | South  | 5002     | 1200.00     | completed |

Notice that customer 103 (GlobalTrade Ltd) doesn’t appear because they have no orders, and order 5004 from customer 105 is excluded because that customer isn’t in the customers dataset. This is the most restrictive join type but ensures every row has complete information from both sources.

Left Join – Keep All Records from First Dataset

A left join keeps all customers and adds order information where it exists. Customers without orders will appear with empty values in the order columns.

Result of Left Join:

| customer_id | customer_name   | region | order_id | order_total | status    |
|-------------|-----------------|--------|----------|-------------|-----------|
| 101         | Acme Corp       | North  | 5001     | 2500.00     | completed |
| 101         | Acme Corp       | North  | 5003     | 3400.00     | pending   |
| 102         | TechFlow Inc    | South  | 5002     | 1200.00     | completed |
| 103         | GlobalTrade Ltd | East   | NULL     | NULL        | NULL      |
| 104         | Innovate Co     | West   | NULL     | NULL        | NULL      |

This is useful when you want to analyze all customers, including those who haven’t made purchases yet. You could use this to identify which customers need follow-up or to calculate what percentage of your customer base is actively purchasing.

Right Join – Keep All Records from Second Dataset

A right join keeps all orders and adds customer information where it exists. Orders from unknown customers will appear with empty values in the customer columns.

Result of Right Join:

| customer_id | customer_name | region | order_id | order_total | status    |
|-------------|---------------|--------|----------|-------------|-----------|
| 101         | Acme Corp     | North  | 5001     | 2500.00     | completed |
| 102         | TechFlow Inc  | South  | 5002     | 1200.00     | completed |
| 101         | Acme Corp     | North  | 5003     | 3400.00     | pending   |
| 105         | NULL          | NULL   | 5004     | 800.00      | completed |

This might reveal data quality issues – order 5004 is from customer 105, but we don’t have that customer’s information. This could indicate a problem in your systems that needs investigation.

Outer Join – Keep Everything from All Datasets

An outer join includes all customers and all orders, regardless of whether they match. This gives you the complete picture of everything in both datasets.

Result of Outer Join:

| customer_id | customer_name   | region | order_id | order_total | status    |
|-------------|-----------------|--------|----------|-------------|-----------|
| 101         | Acme Corp       | North  | 5001     | 2500.00     | completed |
| 101         | Acme Corp       | North  | 5003     | 3400.00     | pending   |
| 102         | TechFlow Inc    | South  | 5002     | 1200.00     | completed |
| 103         | GlobalTrade Ltd | East   | NULL     | NULL        | NULL      |
| 104         | Innovate Co     | West   | NULL     | NULL        | NULL      |
| 105         | NULL            | NULL   | 5004     | 800.00      | completed |

This is the most comprehensive view, showing you customers without orders (103, 104) and orders without customer records (105). It’s excellent for data auditing and understanding the complete landscape of your information.
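
For readers who think in code, the four join results can be reproduced with a short plain-Python sketch of the example datasets (this illustrates join semantics only, not anything QuantumLayers runs internally):

```python
# Plain-Python sketch of the four join types, using the example
# customer and order data from this section. Illustration only.
customers = [
    {"customer_id": 101, "customer_name": "Acme Corp",       "region": "North"},
    {"customer_id": 102, "customer_name": "TechFlow Inc",    "region": "South"},
    {"customer_id": 103, "customer_name": "GlobalTrade Ltd", "region": "East"},
    {"customer_id": 104, "customer_name": "Innovate Co",     "region": "West"},
]
orders = [
    {"order_id": 5001, "customer_id": 101, "order_total": 2500.0, "status": "completed"},
    {"order_id": 5002, "customer_id": 102, "order_total": 1200.0, "status": "completed"},
    {"order_id": 5003, "customer_id": 101, "order_total": 3400.0, "status": "pending"},
    {"order_id": 5004, "customer_id": 105, "order_total":  800.0, "status": "completed"},
]

def merge(left, right, key, how):
    rows, matched_right = [], set()
    for l in left:
        hits = [r for r in right if r[key] == l[key]]
        matched_right.update(id(r) for r in hits)
        if hits:
            rows += [{**l, **r} for r in hits]
        elif how in ("left", "outer"):
            rows.append(dict(l))  # right-side columns stay missing (NULL)
    if how in ("right", "outer"):
        rows += [dict(r) for r in right if id(r) not in matched_right]
    return rows

for how in ("inner", "left", "right", "outer"):
    print(how, len(merge(customers, orders, "customer_id", how)))
```

Note how the row counts (3, 5, 4, and 6) line up with the result tables shown in this section.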

Creating Your Merge

To merge datasets in QuantumLayers:

  1. Click the Merge button in your dashboard header to open the merge interface
  2. Name your merged dataset with something descriptive like “Customers with Order History” or “Product Sales Analysis”
  3. Select the datasets to merge by checking the boxes next to each one (you need at least two datasets selected)
  4. Choose your join column from the dropdown – this must be a column that exists in all selected datasets with the exact same name
  5. Select the join type for each dataset (inner, left, right, or outer) based on which records you want to include
  6. Click “Merge Datasets” to create your new combined dataset

The system processes your merge and creates a new dataset that appears in your dashboard alongside your original datasets. This merged dataset can be analyzed, visualized, and used just like any other dataset – you can even merge it with additional datasets for even more complex analysis.

Tips for Successful Merging

  • Ensure your join columns have matching values: If one dataset uses “customer_id: 101” and another uses “customer_id: CUST-101”, they won’t match even though they refer to the same customer
  • Use the same column names: The join column must be spelled identically in all datasets, including capitalization and underscores
  • Start simple: If you’re merging for the first time, start with just two datasets to understand how it works before attempting more complex multi-dataset merges
  • Handle duplicate column names: If both datasets have a column called “name,” the system automatically renames them to “name_dataset1” and “name_dataset2” so you can distinguish between them
  • Check your results: After merging, review the column analysis to ensure the merge produced the expected number of rows and columns

Managing Your Datasets

Editing Your Dataset

Every dataset can be edited from the dashboard by clicking its “Edit” button, which opens the original dataset creation screen (Upload for CSV datasets, Merge for merged datasets, or Connect for database, API, SFTP, or Google Sheet datasets). The screen opens in edit mode, pre-populated with your dataset’s parameters. Click “Update” to save your changes.

Automatic Synchronization

Connected datasets (databases, APIs, SFTP, Google Sheets) automatically refresh every hour. The system checks for changes and only updates if your data has changed, keeping everything current without any effort from you.

Manual Refresh: Need the latest data right now? Click the “Refresh” button on any connected dataset to sync immediately.

Privacy Settings

Private: Only you can access this dataset (recommended for all sensitive data)

Public: Anyone with the link can view and analyze this dataset

Note: Privacy settings cannot be changed after creation. Choose carefully when creating your dataset.

Deleting Datasets

Click the red “Delete” button on any dataset page. You’ll be asked to confirm, then the dataset and all associated data, insights, and visualizations are permanently removed.

⚠️ This cannot be undone! Make sure you have any needed data exported before deleting.


Best Practices

Before Uploading Data

  • Clean your data (remove obvious errors and duplicates)
  • Use clear, descriptive column names
  • Make sure your first row contains headers
  • Consider data privacy and choose appropriate privacy settings

Organizing Your Datasets

  • Use descriptive names that you’ll understand later
  • Include dates or versions in names when relevant
  • Be consistent with naming conventions
  • Delete obsolete datasets regularly

Security Tips

  • Always use “Private” for sensitive business or personal data
  • Use read-only database credentials when connecting to databases
  • Use strong passwords for your QuantumLayers account
  • Log out when using shared computers
  • Regularly review which datasets you have and delete what you no longer need

Understanding Your Data

The dataset details page shows you everything about your data’s structure and quality before you start analyzing.

Overview Statistics

At the top, you’ll see:

  • Total number of rows (records)
  • Number of columns (fields)
  • Processing status
  • When the dataset was created

Data Diagnostics

Column Types Chart: Shows how many columns are numbers, text, dates, etc. This helps you understand what kind of analysis you can do.

Missing Data Chart: Reveals which columns have empty values and how many. High numbers of missing values might indicate data quality issues.

Column Analysis

Each column shows detailed statistics:

For All Columns:

  • Distinct Values: How many unique values (low = categorical, high = unique IDs)
  • Null Count: How many empty/missing values
  • Sample Values: Examples of actual data in this column

For Numeric Columns:

  • Min/Max: Smallest and largest values (helps spot outliers)
  • Mean: Average value
  • Standard Deviation: How spread out the values are (high = lots of variation)
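
As a quick illustration of what these statistics mean (computed here with Python's standard library on made-up sample values):

```python
import statistics

# Made-up sample of a numeric column, to illustrate the statistics
# shown on the dataset details page.
values = [120, 135, 150, 150, 980]   # note the outlier at 980

print("Min/Max:", min(values), max(values))            # outliers show at the extremes
print("Mean:", statistics.mean(values))                # pulled upward by the outlier
print("Std dev:", round(statistics.stdev(values), 1))  # large = lots of variation
print("Distinct:", len(set(values)))                   # 150 appears twice
```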

Creating Visualizations

Turn your data into interactive charts and graphs. All visualizations are fully interactive – hover to see details, zoom, pan, and download as images.

Available Chart Types

Distribution & Statistical Charts

Histogram: See how numeric values are distributed. Great for understanding the range and frequency of data like sales amounts or ages. Shows you if data clusters in certain ranges or spreads evenly.

Box Plot: Visualize data distribution with statistical precision. Shows median, quartiles, and outliers in one chart. Perfect for comparing distributions across groups (like sales performance by region) or identifying unusual values that need attention. Complements your AI’s outlier detection by making those outliers visible.

Violin Plot: Understand the full shape of your data’s distribution. Like a box plot but shows the entire probability curve, revealing patterns like multiple peaks or skewness. Excellent for seeing why your AI recommends using median instead of mean, or for comparing how different groups truly differ in their distributions.

Time-Based Charts

Time Series (Line Chart): Track how values change over time. Perfect for spotting trends in revenue, website traffic, or any time-based metric. Shows patterns, seasonality, and growth trajectories clearly.

Area Chart: Emphasize magnitude over time with filled areas under the line. Better than line charts when you want to show the volume or scale of change. Helps viewers immediately grasp the size of your metrics.

Stacked Area Chart: Show both total values and composition over time. See how different categories (like product lines or regions) contribute to your overall totals, and watch how their proportions shift. Answers “What’s driving our growth?” at a glance.

Categorical Charts

Bar Chart: Compare values across categories. Excellent for ranking regions, products, or any categorical comparison. Makes it easy to see which categories are winning or lagging.

Horizontal Bar Chart: Like a bar chart but optimized for long category names or large numbers of items. Better readability when you have many categories or when category labels are lengthy (like product names or customer segments). Natural format for displaying rankings and leaderboards.

Pie Chart: Show how categories divide up your total. Best for comparing proportions like market share or customer segments when you have a few clear categories (3-6 works best).

Doughnut Chart: Modern alternative to pie charts with space in the center for displaying totals or key metrics. Shows proportions while allowing you to highlight the overall number (like “670 Total Customers” or “$125K Revenue”). Cleaner aesthetic for dashboards.

Relationship & Correlation Charts

Scatter Plot: Explore relationships between two numeric variables. Helps you see if things are correlated (like advertising spend vs. sales). Each point represents one observation, making patterns and outliers visible.

Bubble Chart: Extend scatter plots to show three variables at once. Like a scatter plot where the size of each bubble represents a third dimension (like market size, profit, or volume). Perfect for portfolio analysis, risk-return comparisons, or any situation where you need to compare items across three metrics simultaneously.

Heatmap: Instantly see patterns in your correlation matrix. Color intensity shows relationship strength—darker colors mean stronger correlations. Makes it easy to spot which variables move together and which insights from your AI’s correlation analysis deserve deeper investigation.

Regression: Understand how variables predict each other. Shows the mathematical relationship and helps forecast outcomes. Displays the trend line and lets you see how well variables fit the predicted pattern.

Building Charts

  1. Select chart type from the dropdown menu
  2. Choose which columns to visualize (typically an X column and one or more Y columns, plus a Z column for charts that need one)
  3. Optionally filter to specific data (like one region or category) by selecting the column to filter on, and entering a specific value to include in the visualization
  4. Optionally filter to specific dates by selecting the date column to filter on, and selecting a date range (see date filtering section below)
  5. For charts that aggregate data points, select whether those points should be summed or averaged
  6. For time series and stacked area charts, select whether the series should be presented cumulatively
  7. The chart appears instantly and updates as you change your selections
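
The aggregation in step 5 behaves like a standard group-by. Here is a minimal pandas sketch of the idea (the data and column names are invented for illustration):

```python
import pandas as pd

# Invented example data; "region" plays the X role, "revenue" the Y role
df = pd.DataFrame({
    "region": ["North", "North", "South", "South"],
    "revenue": [100, 200, 50, 150],
})

# When several rows share an X value, they are aggregated by sum or by average
by_sum = df.groupby("region")["revenue"].sum()
by_avg = df.groupby("region")["revenue"].mean()
```

Summing suits totals like revenue; averaging suits rates like conversion percentage.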

Date Range Filtering

QuantumLayers allows you to filter your visualizations and AI insights by date, giving you precise control over the time period you want to analyze.

How Date Filtering Works:

When creating charts or generating AI insights, you can optionally specify:

  • Date Column: Select which column in your dataset contains the date/time information
  • Start Date: The beginning of your analysis period
  • End Date: The end of your analysis period (or “today” for current data)

Date Input Options:

You can specify dates in two ways:

Absolute Dates:

Enter specific dates like “2024-01-01” or “2025-12-31” to define exact boundaries for your analysis.

Relative Dates:

Use relative terms for dynamic date ranges that automatically adjust:

  • “today” – Current date
  • “yesterday” – One day ago
  • “7 days ago” – One week back
  • “30 days ago” – One month back
  • “1 year ago” – Twelve months back
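
What makes these ranges dynamic is that a relative date is resolved against the current date each time the chart or insight runs. A minimal Python sketch of the idea (the function is hypothetical, not QuantumLayers code):

```python
from datetime import date, timedelta

def resolve_relative_date(text: str, today: date) -> date:
    """Resolve a relative date expression into an absolute date.

    Hypothetical helper for illustration; it only covers the
    expressions listed above.
    """
    text = text.strip().lower()
    if text == "today":
        return today
    if text == "yesterday":
        return today - timedelta(days=1)
    amount, unit, _ago = text.split()            # e.g. "30 days ago"
    days_per_unit = {"day": 1, "days": 1, "year": 365, "years": 365}
    return today - timedelta(days=int(amount) * days_per_unit[unit])
```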

Example Use Cases:

  • Recent Performance: Start Date: “30 days ago”, End Date: “today” to analyze the last month
  • Year-to-Date: Start Date: “2025-01-01”, End Date: “today” to see this year’s data
  • Specific Quarter: Start Date: “2024-10-01”, End Date: “2024-12-31” for Q4 2024
  • Rolling Window: Start Date: “90 days ago”, End Date: “today” for a 3-month rolling analysis

This date filtering applies to both chart generation and AI insights generation, ensuring your analysis focuses on exactly the time period you need.

Saving Charts

Advanced Statistical Analysis

You can also run sophisticated analyses with one click to gain deeper insight into your data:

Correlation Matrix

A correlation matrix shows which variables are related to each other. The values range from -1 (inverse relationship) to +1 (direct relationship). This allows you to quickly identify which factors influence each other. For example, you might discover that customer satisfaction strongly correlates with repeat purchases, or that temperature affects sales.

The color-coded heatmap shows strength – darker colors mean stronger relationships. Look for values with high absolute magnitude (above 0.7 or below -0.7) to find meaningful connections.
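
To see what a correlation matrix looks like in code, here is a small self-contained pandas sketch that computes the matrix and flags strong pairs using the same 0.7 threshold (the data and column names are invented):

```python
import pandas as pd

# Invented dataset mirroring the examples above
df = pd.DataFrame({
    "satisfaction": [1, 2, 3, 4, 5],
    "repeat_purchases": [2, 4, 5, 8, 10],
    "temperature": [20, 15, 25, 10, 30],
})

corr = df.corr()  # Pearson correlation matrix; values range from -1 to +1

# Flag strongly related pairs (|r| > 0.7), as suggested above
strong_pairs = [
    (a, b, corr.loc[a, b])
    for i, a in enumerate(corr.columns)
    for b in corr.columns[i + 1:]
    if abs(corr.loc[a, b]) > 0.7
]
```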

ANOVA Analysis

The ANOVA analysis tests whether different groups have significantly different average values. For instance, do different marketing campaigns lead to different conversion rates? It allows you to determine if differences between groups are real or just random variation. This helps you make data-driven decisions about which strategies work best.

A p-value below 0.05 typically means the groups are genuinely different (statistically significant). The F-statistic measures how large the differences between groups are relative to the variation within them.
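
A one-way ANOVA of the marketing-campaign example is a one-liner with SciPy. The conversion-rate data below is invented for illustration:

```python
from scipy import stats

# Invented conversion rates (%) from three hypothetical marketing campaigns
campaign_a = [2.1, 2.4, 2.2, 2.5, 2.3]
campaign_b = [3.0, 3.2, 2.9, 3.1, 3.3]
campaign_c = [2.2, 2.3, 2.1, 2.4, 2.2]

f_stat, p_value = stats.f_oneway(campaign_a, campaign_b, campaign_c)

# p < 0.05 suggests the campaigns genuinely differ in conversion rate
significant = p_value < 0.05
```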

PCA (Principal Component Analysis)

A principal component analysis simplifies complex data by finding the most important patterns. If you have 20 variables, PCA might reveal that 2-3 “components” explain most of the variation. The PCA allows you to reduce complexity and visualize high-dimensional data. It is great for discovering hidden patterns or groupings in your data.

The scree plot shows how much variance each component explains. Usually, the first 2-3 components capture the most important patterns.
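
The "few components explain most of the variance" effect is easy to demonstrate with scikit-learn. In this invented dataset, four columns hide only two underlying patterns:

```python
import numpy as np
from sklearn.decomposition import PCA

# Invented dataset: 4 columns, but the last two are linear mixtures of the
# first two, so only 2 underlying patterns exist
rng = np.random.default_rng(0)
base = rng.normal(size=(100, 2))
X = np.column_stack([base, base @ rng.normal(size=(2, 2))])

pca = PCA().fit(X)
cumulative = pca.explained_variance_ratio_.cumsum()
# The first two components capture essentially all of the variance,
# which is exactly what a scree plot would show as a sharp "elbow"
```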

Statistical Summary

The statistical summary provides comprehensive statistics for every numeric column, including mean, median, standard deviation, percentiles, skewness, and kurtosis. It gives a complete picture of your data’s distribution, helps you determine whether your data is normally distributed, makes outliers easier to identify, and allows you to validate data quality.

You can compare the mean vs. the median to detect skewed data. Check the standard deviation to see the data spread. Use percentiles to understand the range of typical values.
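
The mean-vs-median check is easy to see in a small pandas example. Here a single extreme outlier drags the mean well above the median, signalling right skew (the data is invented):

```python
import pandas as pd

values = pd.Series([10, 12, 11, 13, 12, 11, 14, 100])  # one extreme outlier

summary = {
    "mean": values.mean(),
    "median": values.median(),
    "std": values.std(),
    "skewness": values.skew(),
    "p25": values.quantile(0.25),
    "p75": values.quantile(0.75),
}

# A mean well above the median signals right-skewed data:
# the outlier pulls the mean up while the median stays put
right_skewed = summary["mean"] > summary["median"]
```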


AI-Powered Insights

For large datasets that are difficult to navigate or interpret, you can let AI do the heavy lifting. QuantumLayers automatically analyzes your data, generates insights in plain English, and creates relevant visualizations.

How AI Insights Work

When you upload or connect a dataset, our AI runs a rigorous statistical testing pipeline on your data before generating insights. This ensures that the findings you see are backed by real statistical evidence — not just surface-level patterns.

The pipeline automatically:

  1. Analyzes Structure: Understands your column types, data quality, and the relationships between columns
  2. Runs Statistical Tests: Executes a 9-step testing pipeline covering correlations, group effects, distributions, time series, regressions, and more (see The Statistical Pipeline below)
  3. Corrects for Multiple Testing: Applies the Benjamini-Hochberg false discovery rate correction so you only see findings that remain significant after accounting for the number of tests performed
  4. Ranks & Generates Insights: Filters candidate insights by statistical significance, ranks them by importance, and produces human-readable explanations
  5. Suggests Actions: Recommends next steps based on validated findings

The Statistical Pipeline

Each dataset passes through a 9-step statistical testing pipeline. The tests are designed to work together — upstream results automatically inform how downstream tests behave, preventing redundant or misleading findings.

  1. Pearson Correlation: Detects linear relationships between numeric columns (e.g., “marketing spend and revenue move together”).
  2. ANOVA / Kruskal-Wallis: Tests whether a numeric value differs significantly across groups defined by a categorical column. If the standard ANOVA doesn’t find significance, a non-parametric fallback (Kruskal-Wallis) runs to catch effects in skewed or outlier-heavy data. Reports effect sizes like eta-squared and Cohen’s d so you can gauge practical significance, not just statistical significance.
  3. Distribution Analysis: Identifies outliers, skewness, and kurtosis anomalies in your numeric columns — helping you spot unusual data shapes that could affect other analyses.
  4. Temporal Analysis: A multi-stage pipeline for time series data. It first checks whether a series is stationary (ADF test), automatically differencing non-stationary data before proceeding. It then tests for structural breaks (CUSUM), average levels, autocorrelation (momentum or oscillation), and trend direction. Each stage’s result shapes how subsequent stages behave — for example, if a structural break is detected, the average-level test is skipped since the overall mean is no longer meaningful.
  5. Regression: Fits predictive linear models to uncover which variables best explain variation in your data.
  6. Categorical Entropy: Flags imbalanced or dominated category distributions (e.g., a “Region” column where 95% of records are from one region).
  7. Chi-Square: Tests whether two categorical columns are associated. Uses Cramér’s V to measure association strength independently of sample size, and filters out associations that are statistically significant but trivially small (V < 0.1).
  8. Cross-Correlation: Discovers leading/lagging relationships between time series (e.g., “changes in marketing spend predict revenue changes 2 periods later”). Non-stationary series are automatically differenced before testing to prevent spurious correlations from shared trends.
  9. VIF Multicollinearity: Identifies redundant numeric columns that are highly correlated with each other, which can inflate regression estimates and make models unreliable.
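
To make step 7 concrete, the following sketch shows how a Chi-Square test and Cramér's V can be combined, including the V < 0.1 triviality filter described above (the contingency table is invented):

```python
import numpy as np
from scipy import stats

# Invented contingency table: customer segment (rows) × preferred channel (cols)
table = np.array([
    [90, 10],
    [15, 85],
])

chi2, p, dof, _expected = stats.chi2_contingency(table)

# Cramér's V measures association strength independently of sample size
n = table.sum()
v = np.sqrt(chi2 / (n * (min(table.shape) - 1)))

# Keep only associations that are both significant and non-trivial (V >= 0.1)
keep = (p < 0.05) and (v >= 0.1)
```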

After all tests complete, the Benjamini-Hochberg FDR correction filters out insights that are no longer statistically significant after adjusting for the total number of tests performed. This means the final insight list only contains findings that have been validated against the statistical properties of your data — a much stronger guarantee than raw p-values alone.
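
The Benjamini-Hochberg procedure itself is short enough to sketch. This illustrative implementation flags which p-values survive the correction at a given false discovery rate (QuantumLayers' internal version may differ in detail):

```python
def benjamini_hochberg(p_values, alpha=0.05):
    """Return a keep/discard flag for each p-value under BH FDR control.

    Illustrative sketch of the correction described above.
    """
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    keep = [False] * m
    # Find the largest rank k with p_(k) <= (k/m) * alpha ...
    threshold_rank = 0
    for rank, i in enumerate(order, start=1):
        if p_values[i] <= (rank / m) * alpha:
            threshold_rank = rank
    # ... then keep every p-value at or below that rank
    for rank, i in enumerate(order, start=1):
        if rank <= threshold_rank:
            keep[i] = True
    return keep
```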

Generating Insights

  • By default, AI Insights are generated on the entire dataset, and up to 30 insights are produced.
  • To reduce the number of insights generated (to increase speed or reduce computational cost), you can limit which columns are analyzed by using the first column selector.
  • The second column selector lets you restrict insights to rows where a chosen categorical column matches the value entered in the category value textbox.
  • You can also limit insights to a specific time period by selecting a date column and specifying start and end dates. This allows you to focus AI analysis on recent trends, specific quarters, or any custom date range. Use absolute dates (like “2025-01-01”) or relative dates (like “30 days ago” or “today”) for dynamic analysis.
  • The “Max Insights” numerical box also limits the number of insights generated, this time by producing only the N most relevant and statistically significant insights.
  • You can focus the scope of the generated AI Insights by clearly defining, in the custom prompt text area, the most important questions you’d like addressed. You can also give more detail about the context of your data to increase the relevance of the insights. For reference, the default prompt is:

Holistic Analysis

The top section provides an executive summary of your data as generated by AI, relying on the most relevant insights:

  • Overall dataset characteristics
  • Key findings and patterns
  • Main correlations
  • Important trends
  • Data quality observations
  • Recommended next steps

Detailed Insights

Each insight card shows:

  • Finding: What the AI discovered
  • Importance Score: How significant this insight is (0-100)
  • Recommendation: What you should do about it
  • Type: Category of insight (correlation, trend, outlier, etc.)

Insight Types

Correlations: Two variables that move together. Example: “Strong positive correlation between marketing spend and revenue.” Detected via Pearson correlation testing across all numeric column pairs.

Temporal: Values changing over time. Example: “Sales showing 15% annual growth.” Detected through the multi-stage temporal pipeline, which accounts for stationarity, structural breaks, and autocorrelation before reporting trends.

Outliers: Unusual values that don’t fit the pattern. Example: “15 transactions exceed $10,000 (3 standard deviations above mean).” Detected during distribution analysis of each numeric column.

Categorical Effects: How categories differ. Example: “Product category significantly affects revenue (p < 0.01).” Detected via ANOVA with a non-parametric Kruskal-Wallis fallback, reporting effect sizes to distinguish practically meaningful differences from merely statistically significant ones.

Distributions: Shape of your data. Example: “Income data is right-skewed, consider using median instead of mean.” Includes skewness and kurtosis analysis to flag unusual data shapes.

Categorical Associations: Relationships between pairs of categorical columns. Example: “Customer segment and preferred channel are strongly associated (Cramér’s V = 0.45).” Detected via Chi-Square testing, with weak associations filtered out automatically.

Leading/Lagging Relationships: One time series predicting another with a delay. Example: “Changes in ad spend lead changes in website traffic by 3 periods.” Detected via cross-correlation analysis with automatic preprocessing to prevent false signals from shared trends.

Multicollinearity Warnings: Redundant columns that may distort analysis. Example: “Cost and price are highly correlated (VIF = 8.2) — 88% of the variance in one is explained by the other.” Helps you identify which columns to consolidate or remove before building models.
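
The VIF figure translates directly into shared variance as 1 - 1/VIF, so VIF = 8.2 corresponds to roughly 88%, consistent with the example. Here is a self-contained sketch of the computation (the function and data are invented for illustration):

```python
import numpy as np

def vif(X, j):
    """Variance inflation factor of column j: 1 / (1 - R^2) when
    regressing column j on all remaining columns. Illustrative sketch."""
    y = X[:, j]
    others = np.delete(X, j, axis=1)
    others = np.column_stack([np.ones(len(X)), others])  # add intercept
    beta, *_ = np.linalg.lstsq(others, y, rcond=None)
    residuals = y - others @ beta
    r2 = 1 - (residuals ** 2).sum() / ((y - y.mean()) ** 2).sum()
    return 1.0 / (1.0 - r2)

# Invented data: "cost" and "price" nearly duplicate each other,
# while "volume" is independent of both
rng = np.random.default_rng(1)
cost = rng.normal(size=200)
price = cost * 1.5 + rng.normal(scale=0.1, size=200)
volume = rng.normal(size=200)
X = np.column_stack([cost, price, volume])
```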

Importance Levels

  • Critical (80-100): Act on this immediately
  • High (60-79): Important finding, address soon
  • Medium (40-59): Worth considering in planning
  • Low (0-39): Informational, optional follow-up
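
If you post-process insights programmatically (for example, routing only critical findings to an alert channel), the bands map mechanically from the score. A trivial sketch:

```python
def importance_level(score: int) -> str:
    """Map a 0-100 importance score to its level, per the bands above."""
    if score >= 80:
        return "Critical"
    if score >= 60:
        return "High"
    if score >= 40:
        return "Medium"
    return "Low"
```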

Statistical Rigor

QuantumLayers applies several safeguards to ensure the insights you see are trustworthy:

  • False Discovery Rate Correction: When running many tests across all column combinations, some will produce significant results by chance. The Benjamini-Hochberg procedure adjusts p-values to control the expected proportion of false discoveries. Only insights that remain significant after this correction are shown.
  • Cascading Test Logic: Tests are designed to inform each other. ANOVA gates Kruskal-Wallis so you don’t get duplicate findings. The ADF stationarity test automatically determines whether time series data needs differencing before trend or cross-correlation analysis. CUSUM structural break detection suppresses misleading average-level results when a regime change is present.
  • Effect Size Filtering: Statistical significance alone isn’t enough. The engine also measures practical significance — for example, Cramér’s V for categorical associations and Cohen’s d for group comparisons — and filters out findings that are statistically significant but too small to matter.
  • Non-Parametric Fallbacks: When your data doesn’t meet the assumptions of standard tests (e.g., it’s heavily skewed or contains outliers), the engine automatically falls back to rank-based tests like Kruskal-Wallis and Mann-Whitney U that make no distributional assumptions.
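
The fallback idea can be illustrated for the two-group case: try the parametric test first, switch to the rank-based Mann-Whitney U test when it comes up empty, and always report Cohen's d for practical significance. This is a sketch of the pattern, not QuantumLayers' exact logic:

```python
import numpy as np
from scipy import stats

def compare_groups(a, b, alpha=0.05):
    """Compare two groups with a parametric test and a rank-based fallback.

    Returns (p_value, cohens_d). Illustrative sketch only.
    """
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)

    # Cohen's d: mean difference scaled by the pooled standard deviation
    pooled = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
    d = (a.mean() - b.mean()) / pooled

    _t, p = stats.ttest_ind(a, b)
    if p >= alpha:
        # Rank-based fallback; makes no distributional assumptions
        _u, p = stats.mannwhitneyu(a, b, alternative="two-sided")
    return p, d
```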

AI-Generated Charts

Based on the insights, the AI automatically creates relevant visualizations. These might include:

  • Correlation heatmaps
  • Trend lines over time
  • Distribution histograms
  • Category comparisons
  • Scatter plots showing relationships

All charts are interactive – hover, zoom, and download them for presentations.

Making the Most of AI Insights

  1. Start with the holistic analysis to understand the big picture
  2. Focus on high-importance insights first
  3. Review the recommendations and plan your actions
  4. Use AI charts to visually confirm findings
  5. Create custom visualizations for deeper analysis

Scheduled Reports

Scheduled Reports allow you to automate the delivery of AI-generated insights and visualizations directly to your inbox (or your team’s inboxes) on a recurring basis. Instead of manually running analysis each time, you can configure a report once and receive it automatically on the schedule you define.

Accessing the Report Scheduler

From your dashboard, click the + Report button in the “My Scheduled Reports” section. This opens the Report Scheduler where you can configure all aspects of your automated report.

Report Settings

When creating a scheduled report, you’ll configure the following:

Report Name: Give your report a descriptive name that makes it easy to identify in your list, such as “Weekly Sales Summary” or “Monthly Customer Analysis.”

Frequency: Choose how often the report should be generated and delivered:

  • Daily: A new report is generated and sent every day at your specified time
  • Weekly: A new report is generated and sent once per week at your specified time
  • Monthly: A new report is generated and sent once per month at your specified time

Time: Select the time of day when the report should be generated and delivered. This is the time at which the system will run the analysis, generate the insights, and email the results.

Timezone: Select your timezone to ensure reports are delivered at the correct local time.
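
Behind the scenes, the delivery instant has to be computed in your local zone rather than the server's. A hypothetical sketch of the idea using Python's zoneinfo (the function and field names are invented):

```python
from datetime import datetime, time, timedelta
from zoneinfo import ZoneInfo

def next_daily_delivery(now_utc: datetime, send_at: time, tz_name: str) -> datetime:
    """Find the next daily delivery instant in the user's timezone.

    Illustrative sketch; not QuantumLayers' actual scheduler.
    """
    tz = ZoneInfo(tz_name)
    local_now = now_utc.astimezone(tz)
    candidate = local_now.replace(hour=send_at.hour, minute=send_at.minute,
                                  second=0, microsecond=0)
    if candidate <= local_now:
        candidate += timedelta(days=1)  # today's slot already passed
    return candidate
```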

Recipients: Enter one or more email addresses (comma-separated) that should receive the report. You can send reports to yourself, your team, your manager, or any stakeholder who needs regular access to the analysis. Recipients do not need a QuantumLayers account to receive and view reports.

Format: Choose how the report is delivered:

  • PDF Only: Recipients receive an email with the report attached as a downloadable PDF document, ideal for archiving or printing
  • HTML Email Only: The report content is embedded directly in the email body, making it easy to read without opening attachments
  • Both PDF & HTML: Recipients get the best of both worlds – the report rendered in the email body for quick reading, plus an attached PDF for filing or sharing

Adding Datasets

Each scheduled report can include one or more datasets. Click + Add Dataset to include a dataset in your report. For each dataset you add, the system will automatically generate fresh AI insights and relevant visualizations at the time of each scheduled delivery, using the most current data available (including any automatic synchronization updates from connected sources).

You can add multiple datasets to a single report, which is useful for cross-functional summaries – for example, combining sales data, customer feedback data, and marketing campaign data into one comprehensive weekly report.

Managing Scheduled Reports

All your configured reports appear in the My Scheduled Reports section of your dashboard. From there you can:

  • Edit a report to change its name, schedule, recipients, format, or included datasets
  • Delete a report to stop future deliveries

Tips for Effective Scheduled Reports

  • Match frequency to data change rate: If your data updates daily, a daily report makes sense. If it changes slowly, a weekly or monthly report avoids unnecessary noise.
  • Be selective with datasets: Including too many datasets in a single report can make it long. Consider creating separate reports for different audiences or topics.
  • Use descriptive report names: When you have multiple scheduled reports, clear names help you quickly find and manage the right one.
  • Consider your recipients: Only include people who will benefit from the report. Each recipient receives every scheduled delivery, so keep the distribution list relevant.

QL-Agent

QL-Agent is an AI-powered conversational assistant built into QuantumLayers that lets you interact with your data using plain language. Instead of navigating menus and configuring options manually, you can simply tell QL-Agent what you need and it will take care of the rest.

Opening QL-Agent

Click the QL-Agent button in your dashboard header. QL-Agent opens in a dedicated window where you can type your requests and receive responses in real time.

What QL-Agent Can Do

QL-Agent has access to all your datasets and QuantumLayers features, and can perform actions on your behalf through natural language conversation. You can ask it to:

Work with Datasets:

  • List your available datasets and their details
  • Describe the structure, columns, and statistics of a specific dataset
  • Check the processing status or last synchronization time of connected sources

Generate Insights:

  • Run AI-powered analysis on any of your datasets
  • Focus insights on specific columns, categories, or date ranges
  • Ask follow-up questions about findings, such as “Why is revenue declining in Q3?” or “Which region has the most outliers?”

Create Visualizations:

  • Build charts by describing what you want to see, for example “Show me a bar chart of sales by region” or “Plot revenue over the last 12 months”
  • Save the best charts to your dataset’s Saved Charts section

Schedule Reports:

  • Create new scheduled reports by describing the desired frequency, recipients, and datasets to include, for example “Create a weekly PDF report of my sales data emailed to team@company.com every Monday at 9 AM”
  • Modify or review existing scheduled reports

How to Use QL-Agent

Simply type your request in the message box and press Send (or Ctrl+Enter). QL-Agent understands conversational language, so you don’t need to use any special syntax or commands. You can be specific (“Generate insights for dataset #3 filtered to the North region”) or general (“What’s interesting in my most recent dataset?”).

QL-Agent remembers context within a conversation, so you can ask follow-up questions naturally. For example, you might start with “List my datasets,” then follow up with “Generate insights for the second one,” and then “Now create a weekly report with those insights.”

Starting a New Conversation

Click New conversation at the top of the QL-Agent window to clear the current conversation history and start fresh. This is useful when switching topics or working with a different dataset.

Example Requests

Here are some examples to help you get started:

  • “List my available datasets”
  • “Generate insights for my most recent dataset”
  • “Save the 3 best charts for dataset #1”
  • “Create a weekly PDF report emailed to me”
  • “Show me a time series of revenue over the past 6 months from my sales data”
  • “What are the strongest correlations in my customer dataset?”
  • “Create a monthly report with my sales and marketing datasets, sent to the leadership team on the 1st of each month”

Getting Help

Need assistance? Contact the QuantumLayers team at contact@quantumlayers.com

Version 1.3 | Last Updated: March 2026