SpotGuide

Your complete guide to going live with ThoughtSpot Analytics

Everything you need to go from account activation to production — data connections, semantic models, security, user adoption, and AI-powered search — in one place.

📖 Read Docs 💬 Join Community 🎓 ThoughtSpot University
Recommended rollout · 6–10 weeks

01 Foundation — Weeks 1–2 (start here)
02 Connect — Weeks 2–3
03 Build — Weeks 3–7
04 Launch — Weeks 7–9
05 Adopt — Week 9+

Start Here

Essential first steps before you do anything else

💡 Before you connect data or build anything: activate your account, create a ThoughtSpot Community account, and get the right people involved. Decisions made in the first two weeks — especially around data modeling and security — are the hardest to undo later.
✉️
Admin

Activate Your Account

Open the administrator activation link from your welcome email and set up your ThoughtSpot Cloud instance. This is your starting point.

Get started →
🛠️
Admin

Explore the Admin Section

Familiarise yourself with the Admin panel — this is where you manage users, groups, privileges, authentication, and data connections.

View Admin docs →
💬
Everyone

Join ThoughtSpot Community

Your Community account is separate from your main ThoughtSpot login. It's where you get answers, share ideas, and connect with other practitioners.

Join Community →
🎓
Everyone

ThoughtSpot University

Free eLearning for every role — Admin, Analyst, and End User. Start with the role-specific learning paths for your team before going live.

Start learning →
🎯
Everyone

Define Your Use Case

Align with stakeholders on which business question you're answering first. Start with one high-value, well-defined use case — not everything at once.

Use case template →
🤝
Everyone

ThoughtSpot Support

Know how to submit a support case before you need to. Add your team's support contacts so the right people can raise tickets.

Support guide →

👥 Who Should Be Involved

Get the right people engaged from day one — data modeling, security, and adoption all require different skills.

🔧 Admin / IT
  • Account activation & admin setup
  • Data connection configuration
  • Authentication & SSO setup
  • User & group provisioning
  • Row-level security configuration
📊 Data / Analyst
  • Semantic model design & build
  • Column naming & synonym definitions
  • Default aggregation configuration
  • Liveboard & Answer creation
  • Data quality validation
💼 Business / Product
  • Define use cases & success metrics
  • Prioritise MVP content with stakeholders
  • Champion adoption internally
  • Executive communication & buy-in
  • Feedback collection post-launch
👤 End Users
  • Participate in UAT & training
  • Validate Liveboards match real needs
  • Surface feedback during pilot
  • Become Spotter early adopters

Phase 1 — Foundation Setup

Weeks 1–2 · Account, roles, and privileges

🕐 4–6 hours total effort · 🔧 Admin: ~4h · 📊 Data/Analyst: ~1h · 💼 Business: ~1h
1

Activate your ThoughtSpot Cloud account

Use the administrator activation link received via email. Set a strong password, confirm your instance URL, and log in to the Admin portal.

2

Assign roles & privileges to team members

ThoughtSpot uses role-based access control (RBAC). Assign the correct roles: Admin, Analyst, and Viewer. Only give data-upload privileges to those who need them.

3

Create groups aligned to your security model

Groups are the foundation of both permissions and Row-Level Security. Design your group structure before you invite users — it's much harder to restructure later.

4

Confirm data is accessible in your CDW

Before connecting ThoughtSpot, verify your data exists in your Cloud Data Warehouse (Snowflake, BigQuery, Redshift, etc.) and that a service account with appropriate read access is ready.

5

Prioritise your first use case

Agree with stakeholders on one high-value business question to answer first. A narrow, well-defined MVP is better than broad coverage — you can expand after validation.

⚠️ Common Mistake: Don't try to model all your data at once. Start with the tables and metrics that serve your first use case. A focused semantic model delivers value faster and is easier to validate.

Phase 2 — Data Connection & Authentication

Weeks 2–3 · Connect your CDW and configure user access

🕐 4–8 hours total effort · 🔧 Admin: ~6h · 📊 Data: ~2h

🔌 CDW-Specific Connection Guide

Select your data warehouse for step-by-step setup instructions.

Snowflake

1

Create a dedicated service role: CREATE ROLE thoughtspot_role;

2

Grant usage on warehouse, database, schema, and tables to the role.

GRANT USAGE ON WAREHOUSE my_wh TO ROLE thoughtspot_role;
GRANT USAGE ON DATABASE my_db TO ROLE thoughtspot_role;
GRANT USAGE ON ALL SCHEMAS IN DATABASE my_db TO ROLE thoughtspot_role;
GRANT SELECT ON ALL TABLES IN SCHEMA my_db.my_schema TO ROLE thoughtspot_role;

3

Create a service user, set its default role, and grant the role to it (DEFAULT_ROLE alone does not grant membership): CREATE USER thoughtspot_svc PASSWORD = 'strong_password' DEFAULT_ROLE = thoughtspot_role; GRANT ROLE thoughtspot_role TO USER thoughtspot_svc;

4

In ThoughtSpot: use username/password or key-pair auth. Set DEFAULT_WAREHOUSE on the service user to avoid connection errors.

5

Allowlist ThoughtSpot's IP ranges in your Snowflake network policy. IP list → Snowflake connection docs
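One gap worth noting in the grants above: GRANT SELECT ON ALL TABLES covers only tables that exist at grant time. If new tables will appear in the schema, also grant future privileges. A sketch using the same illustrative names as the steps above:

```sql
-- Cover tables created after the initial grant (illustrative names from the steps above)
GRANT SELECT ON FUTURE TABLES IN SCHEMA my_db.my_schema TO ROLE thoughtspot_role;
-- Repeat for views if ThoughtSpot will query them
GRANT SELECT ON FUTURE VIEWS IN SCHEMA my_db.my_schema TO ROLE thoughtspot_role;
```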

Google BigQuery

1

Create a dedicated GCP Service Account in IAM with a descriptive name (e.g. thoughtspot-svc@your-project.iam.gserviceaccount.com).

2

Grant the following roles to the service account: BigQuery Data Viewer and BigQuery Job User.

3

Download the JSON key file from IAM → Service Accounts → Keys → Add Key → JSON.

4

In ThoughtSpot: upload the JSON key file in the connection form. BigQuery connections use project-level scoping — ensure the service account has access to the correct project.

Amazon Redshift

1

Create a dedicated DB user with SELECT on target schemas:

CREATE USER thoughtspot_svc WITH PASSWORD 'strong_password';
GRANT SELECT ON ALL TABLES IN SCHEMA public TO thoughtspot_svc;

2

Ensure ThoughtSpot's IP ranges are added to your Redshift security group inbound rules (port 5439).

3

Use Redshift's native auth (username/password). IAM auth is not currently supported by ThoughtSpot.

4

For Redshift Serverless: use the serverless endpoint URL format (e.g. workgroup.account.region.redshift-serverless.amazonaws.com).
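The GRANT SELECT ON ALL TABLES in step 1 covers only tables that exist today. Redshift (like Postgres) can auto-grant SELECT on future tables via default privileges; a sketch, noting that default privileges apply to objects created by the user who runs the command (use FOR USER to cover other creators):

```sql
-- Auto-grant SELECT on tables created in this schema from now on
ALTER DEFAULT PRIVILEGES IN SCHEMA public
  GRANT SELECT ON TABLES TO thoughtspot_svc;
```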

Databricks

1

Generate a personal access token in Databricks (User Settings → Developer → Access Tokens) or use service principal OAuth for production.

2

Use the SQL Warehouse HTTP path (not the all-purpose cluster path). Find it in SQL Warehouses → your warehouse → Connection Details.

3

Ensure the token/principal has CAN USE permission on the target SQL Warehouse.

4

Grant SELECT on the target catalog/schema to the service principal or token user.
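Step 4's grant can be written in Unity Catalog SQL. A minimal sketch with hypothetical catalog, schema, and principal names; on Unity Catalog the principal needs the full privilege chain (catalog, schema, table), and SELECT granted at schema level is inherited by the schema's tables. Legacy Hive-metastore workspaces use different syntax:

```sql
-- Unity Catalog: grant the full privilege chain to the service principal (names are hypothetical)
GRANT USE CATALOG ON CATALOG my_catalog TO `thoughtspot-sp`;
GRANT USE SCHEMA ON SCHEMA my_catalog.my_schema TO `thoughtspot-sp`;
GRANT SELECT ON SCHEMA my_catalog.my_schema TO `thoughtspot-sp`;  -- inherited by tables in the schema
```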

🔐
Admin

Configure Authentication

Choose your auth method: SAML SSO (recommended for enterprise), Local Authentication, or MFA. SAML configuration requires your IdP metadata and ThoughtSpot's SP metadata.

Auth guide →
👥
Admin

User & Group Management

Create users manually, sync via IdP, or automate via REST API. Group membership drives both content sharing permissions and row-level security rules.

Groups & Privileges →
💡 SAML SSO Tip: If you're using Okta, Azure AD, or Google Workspace, configure SAML SSO before inviting end users. It makes the login experience seamless and enables automated group/role assignment via IdP attributes.

⚡ Common Issues & Fixes

Something not working? Find your issue below — each entry includes an actionable fix.

🔌 Data Connection Issues
  • Check that your CDW service account has SELECT privileges on the target schema.
  • Verify ThoughtSpot's IP ranges are allowlisted in your CDW firewall — see the CDW guide above for your platform.
  • For Snowflake: confirm the warehouse is not suspended and the role is active. Run SELECT CURRENT_ROLE(); to verify.
  • The service account may lack privileges on specific schemas or tables.
  • For Snowflake, run: GRANT SELECT ON ALL TABLES IN SCHEMA <schema> TO ROLE <role>;
  • In BigQuery: ensure the service account has BigQuery Data Viewer at the dataset level, not just project level.
  • Ensure the redirect URI in your IdP matches exactly what ThoughtSpot expects (case-sensitive, no trailing slashes).
  • Check token expiry settings — short-lived tokens cause silent re-auth failures. Increase token lifetime or enable refresh tokens.
🔍 Semantic Model / Search Issues
  • Check if the column exists in the Model (not just the raw table) — only columns added to the Model are searchable.
  • Add the term as a synonym on the relevant column in the Model editor.
  • Check the column's default aggregation in the Model editor — it may be set to AVG or COUNT instead of SUM.
  • Verify no row-level filters are applied at the Model level that would change the expected output.
  • Test the same query as an Admin (no RLS) vs a test user — if the numbers differ, RLS is filtering data you didn't expect.
🔑 Authentication Issues
  • SAML: check that the SP Entity ID in your IdP matches ThoughtSpot's metadata exactly — this comparison is case-sensitive.
  • Verify the ACS URL in your IdP uses the correct ThoughtSpot instance URL (check for trailing slashes or HTTP vs HTTPS).
  • JIT provisioning may not be auto-assigning the correct group. Manually assign the user to the correct group post-login, then investigate your IdP's attribute mapping to ensure the group claim is sent correctly in the SAML assertion.

Still stuck? Submit a Support Case →

Phase 3 — Build Your Semantic Model, Security & Insights

Weeks 3–7 · The most important phase — get the foundation right before creating Liveboards

🕐 20–40 hours total effort · 📊 Data/Analyst: ~30h · 🔧 Admin: ~4h · 💼 Business: ~6h
🔴 Critical: Build a Great Semantic Model First. The semantic model is what powers ThoughtSpot's AI-driven search and Spotter. A poorly built model — with raw database column names, missing synonyms, and wrong aggregations — is the #1 cause of a bad end-user experience. Users may blame the product when the real issue is the model. Invest time here before building any Liveboards.

🧱 Semantic Model Best Practices

1

Use business-friendly column names

Rename database columns to business-friendly terms. "cust_id" → "Customer ID". "rev_ytd_usd" → "Revenue YTD". Your users will search using natural language — make sure the column names match how they speak.

2

Define synonyms for common business terms

Add synonyms so search understands that "sales" = "revenue", "client" = "customer", "headcount" = "employees". This dramatically improves search hit rates and Spotter accuracy.

3

Configure default aggregations correctly

Every measure needs the right default aggregation (SUM, COUNT, AVG, etc.). Wrong defaults mean users get unexpected results without realising it. Validate each measure against known data.

4

Join tables thoughtfully — star schema is best

Keep joins simple. A clean star schema (one fact table, multiple dimension tables) outperforms complex multi-hop joins for search performance and accuracy.

5

Test end-to-end with real business questions

Before inviting users, ask 10–15 realistic business questions in ThoughtSpot Search. If the answers look wrong or Search doesn't understand the question, fix the model — not the query.
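A concrete way to run the validation in steps 3 and 5: compute the expected number directly in your CDW, then compare it with what ThoughtSpot returns for the equivalent search. Table and column names here are hypothetical:

```sql
-- Expected value straight from the warehouse (hypothetical names)
SELECT SUM(revenue) AS total_revenue_2024
FROM analytics.fact_sales
WHERE order_date >= '2024-01-01';
-- Search "total revenue 2024" in ThoughtSpot against the same model.
-- If the numbers differ (e.g. you get an average instead), the measure's
-- default aggregation is wrong in the Model editor.
```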

⚠️ Model Anti-Patterns to Avoid: Raw database column names · Missing synonyms · Unexpected default aggregations · Too many tables joined (keep it focused) · Building Liveboards before validating the model end-to-end
💬 AI Context & Model Instructions: Before testing with Spotter, consider adding AI Context to your model — a plain-language description of what the data represents, key metrics, and how they relate. This helps Spotter interpret ambiguous questions correctly and return more relevant answers. You can add AI Context in the Model editor under Settings → AI Instructions. Think of it as briefing your AI analyst on your business before their first day.
🧪 Try It Now with Spotter

Pressure-test your model with Spotter before go-live

Once your model is ready, open Spotter and ask it questions using your own business vocabulary — not generic terms. The goal isn't just to get an answer; it's to verify that Spotter understands your data and returns numbers you'd trust. Use question patterns like these as a template, adapted to your model:

  1. Which regions are underperforming against their sales targets this quarter, and what's driving the gap?
  2. Show me the top 10 products by revenue over the last 12 months — and flag any that had a month-over-month decline in the last 90 days
  3. How has average order value changed across customer segments since we launched the new pricing in January?
  4. Which sales reps closed the most new logo deals this year, and how does their deal size compare to the team average?
  5. What percentage of last quarter's revenue came from customers acquired in the past 6 months versus existing accounts?

🔒 Data Security — RLS & RBAC

🔑
Critical

Row-Level Security (RLS)

RLS restricts which rows of data each user or group can see. It works via a rule referencing ThoughtSpot group membership or username. If RLS is appropriate or required for your use case, define and test it for every persona before go-live.

RLS guide →
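To make this concrete: ThoughtSpot row-security rules are boolean expressions evaluated per row, and they can reference the system variables ts_username and ts_groups. A minimal sketch, assuming a region column and ThoughtSpot groups named after regions (the 'all-regions' group name is hypothetical):

```sql
-- Row is visible when its region matches one of the user's group names,
-- or the user belongs to a catch-all 'all-regions' group
region = ts_groups or ts_groups = 'all-regions'
```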
🏷️
Data

Column Security Rules (CSR)

Restrict visibility of sensitive columns (PII, cost, margin) to specific groups where appropriate or required for your use case. Apply CSR at the table level and verify it propagates correctly to Models and Liveboards.

CSR docs →
🛡️
Admin

Role-Based Access Control

Use RBAC to control who can edit, view, or download content. Assign the least privilege necessary. Analyst role for builders, Viewer role for consumers.

RBAC guide →
💡 Test security before inviting users, not after: Log in as a test user in each persona and verify they see exactly the data they should — no more, no less. RLS misconfigurations are invisible to admins but obvious to users.

📊 Create Liveboards & Answers

📋
Analyst

Start with 1–2 Liveboards

Build your MVP around the one use case you defined in Phase 1. Limit each Liveboard to 5–8 visualisations. Focused Liveboards load faster and get higher adoption.

Liveboard guide →
🔍
Analyst

Validate Search Works Well

Run 20+ representative search queries against your Model. If search doesn't return expected results, refine column names, synonyms, or aggregations — don't move on until this works.

Search docs →
📐
Analyst

Use Formulas Thoughtfully

ThoughtSpot formulas let you create custom metrics in the semantic layer. Define key business metrics (e.g. "Gross Margin %") once in the model — don't recreate them in every Liveboard.

Formula guide →
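As an illustration of defining a metric once in the model: a Gross Margin % formula in the formula editor might look like the sketch below, where gross profit and revenue are hypothetical column names (check the formula reference for exact function names and syntax):

```
( sum ( gross profit ) / sum ( revenue ) ) * 100
```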
📄 When you're ready — manage content as code

ThoughtSpot Modeling Language (TML)

Once your Model and Liveboards are validated, you can export them as YAML files using TML. You don't need TML on day one — but if you have multiple environments or a CI/CD pipeline, explore it before your second use case.

  • Version control: commit your model to Git alongside your dbt or SQL code
  • Environment promotion: move content from dev → staging → prod without manual rebuilding
  • Bulk editing: modify multiple objects at once using find-and-replace in YAML
TML Documentation →
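To make TML concrete, here is an abbreviated, illustrative sketch of what an exported Model might look like as YAML. Field names and structure are indicative only; the authoritative schema is in the TML documentation:

```yaml
worksheet:
  name: Sales Model
  tables:
  - name: fact_sales
  worksheet_columns:
  - name: Customer ID        # business-friendly rename of raw cust_id
    properties:
      column_type: ATTRIBUTE
      synonyms:
      - client id
  - name: Revenue
    properties:
      column_type: MEASURE
      aggregation: SUM
```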

Phase 4 — Launch & User Adoption

Weeks 7–9 · Training, go-live, and making it stick

🕐 8–12 hours total effort · 👥 Everyone: ~2–3h each
✅ Go-live is the start line, not the finish line. The most successful ThoughtSpot deployments treat launch as a beginning, not an end. Plan your training, feedback loop, and adoption metrics before you flip the switch.

✅ Go-Live Readiness Checklist

  • Foundation
  • Security
  • Content
  • Adoption

📢 Announce to Your Team

📈 What Good Adoption Looks Like

Track these milestones at 30, 60, and 90 days post-launch

📅 30 Days
  • Pilot users logging in weekly
  • Liveboards viewed regularly
  • First Search/Spotter queries running
  • Initial feedback collected
📅 60 Days
  • Broader team rollout complete
  • 2nd use case scoped
  • Model refined based on feedback
  • Failed searches reviewed & fixed
📅 90 Days
  • Active weekly users growing
  • Self-service searches increasing
  • Executive dashboard in use
  • Expansion plan agreed
📊 Always Monitor
  • Weekly Active Users (WAU)
  • Search query volume
  • Failed / zero-result searches
  • Liveboard load times
⭐ AI-Powered Analytics

Meet Spotter — Your AI Analyst

Spotter lets your users ask questions in plain English and get instant visual answers. No SQL, no formulas — just natural language. The better your semantic model, the better Spotter works.

🤖 What Spotter Does

Understands natural language questions and generates charts, tables, and narrative explanations from your data.

🧱 What It Needs

A well-built semantic model with business-friendly names, synonyms, and correct aggregations. Spotter is only as good as your model.

⚙️ Admin Setup

Enable Spotter in the Admin panel, configure which data sources it can access, and set up user permissions per group.

🎓 Training Users

Run a 30-minute "first questions" session with end users. Show them 5 example queries — adoption spikes when users have a concrete starting point.

✅ Example Prompts
The types of questions you can ask Spotter
  • 💬 "What were total sales by region last quarter?"
  • 💬 "Top 10 customers by gross margin % in EMEA this year"
  • 💬 "Average order value by sales rep where deals closed after Jan 2024"
  • 💬 "What metrics do we track for customer retention?"
  • 💬 "What dimensions can I break revenue down by?"
  • 💬 "Which data sources are available in this model?"
  • 💬 "Why did revenue drop in March?"
  • 💬 "How did EMEA perform vs target last half?"
  • 💬 "Which product categories had declining revenue month over month?"
  • 💬 "What is projected revenue for Q3 based on current trends?"
  • 💬 "If churn continues at this rate, how many customers do we lose by year end?"
  • 💬 "Forecast sales for the next 6 months by region"
🔎 Diagnostics
Spotter gave an unexpected result — why?
  • The Model column has an incorrect default aggregation (e.g. AVG instead of SUM). Fix: open the Model editor, find the column, and update the Default Aggregation field.
  • Spotter didn't understand the term you used — it's not in the model's vocabulary. Fix: add the missing synonym in the Model column settings under "Synonyms".
  • Multiple columns could match the query term, causing Spotter to pick the wrong one. Fix: rename columns to be unambiguous — e.g. "Order Date" not just "Date", "Sales Revenue" not just "Revenue".
  • The result is correct — there's just no data matching the filter applied. Fix: verify data exists in the CDW for that time range or dimension value before assuming Spotter is wrong.
  • The tables needed for that query aren't joined in the Model, so Spotter can't traverse the relationship. Fix: add the missing join in the data model between the relevant tables.
ℹ️ Honest Expectations
What Spotter can't do (yet)
  • Analyses a single connection by default — questions spanning multiple data sources rely on Auto-mode, which enables cross-connection analysis without needing to specify a model first
  • Does not write back to your data warehouse — read-only
  • Results depend entirely on model quality — garbage in, garbage out
📖 Spotter Setup Guide 🎓 Spotter eLearning 🌐 Spotter Resource Hub

Subscribe to Regular Events

Stay connected to the ThoughtSpot community and keep learning after go-live

Resources Hub

Everything you need, in one place

📚

ThoughtSpot Documentation

Full product docs — admin guide, data modeling, security, APIs, and release notes.

🎓

ThoughtSpot University

Free eLearning paths for Admins, Analysts, and End Users. Certifications available.

💬

ThoughtSpot Community

Ask questions, share ideas, and connect with thousands of ThoughtSpot practitioners.

🔌

Data Connection Guide

Step-by-step instructions for connecting Snowflake, BigQuery, Redshift, and more.

🔑

RLS Design Patterns

Best practices and design patterns for implementing row-level security in ThoughtSpot.

📊

Liveboard Creation Guide

How to build, share, and schedule Liveboards for your organisation.

📄

ThoughtSpot Modeling Language (TML)

Export, version-control, and migrate your Models and Liveboards as code.

⚙️

REST API Reference

Automate user provisioning, content promotion, and usage reporting via REST API v2.0.

🆘

Submit a Support Case

How to raise and track support tickets with the ThoughtSpot support team.