Everything you need to go from account activation to production — data connections, semantic models, security, user adoption, and AI-powered search — in one place.
Essential first steps before you do anything else
Open the administrator activation link from your welcome email and set up your ThoughtSpot Cloud instance. This is your starting point.
Get started →
Familiarise yourself with the Admin panel — this is where you manage users, groups, privileges, authentication, and data connections.
View Admin docs →
Your Community account is separate from your main ThoughtSpot login. It's where you get answers, share ideas, and connect with other practitioners.
Join Community →
Free eLearning for every role — Admin, Analyst, and End User. Start with the role-specific learning paths for your team before going live.
Start learning →
Align with stakeholders on which business question you're answering first. Start with one high-value, well-defined use case — not everything at once.
Use case template →
Know how to submit a support case before you need to. Add your team's support contacts so the right people can raise tickets.
Support guide →
Get the right people engaged from day one — data modeling, security, and adoption all require different skills.
Weeks 1–2 · Account, roles, and privileges
Use the administrator activation link received via email. Set a strong password, confirm your instance URL, and log in to the Admin portal.
ThoughtSpot uses role-based access control (RBAC). Assign the correct roles: Admin, Analyst, and Viewer. Only give data-upload privileges to those who need them.
Groups are the foundation of both permissions and Row-Level Security. Design your group structure before you invite users — it's much harder to restructure later.
Before connecting ThoughtSpot, verify that your data exists in your Cloud Data Warehouse (Snowflake, BigQuery, Redshift, etc.) and that a service account with appropriate read access is ready.
Agree with stakeholders on one high-value business question to answer first. A narrow, well-defined MVP is better than broad coverage — you can expand after validation.
Weeks 2–3 · Connect your CDW and configure user access
Select your data warehouse for step-by-step setup instructions.
Create a dedicated service role: CREATE ROLE thoughtspot_role;
Grant usage on warehouse, database, schema, and tables to the role.
GRANT USAGE ON WAREHOUSE my_wh TO ROLE thoughtspot_role;
GRANT USAGE ON DATABASE my_db TO ROLE thoughtspot_role;
GRANT USAGE ON ALL SCHEMAS IN DATABASE my_db TO ROLE thoughtspot_role;
GRANT SELECT ON ALL TABLES IN SCHEMA my_db.my_schema TO ROLE thoughtspot_role;
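To confirm the grants were applied, an optional check: SHOW GRANTS TO ROLE thoughtspot_role;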
Create a service user and assign the role: CREATE USER thoughtspot_svc PASSWORD = '<strong_password>' DEFAULT_ROLE = thoughtspot_role; GRANT ROLE thoughtspot_role TO USER thoughtspot_svc;
In ThoughtSpot: use username/password or key-pair auth. Set DEFAULT_WAREHOUSE on the service user to avoid connection errors.
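If you prefer to set this in SQL rather than in the Snowflake UI, a minimal example using the placeholder names above: ALTER USER thoughtspot_svc SET DEFAULT_WAREHOUSE = my_wh;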
Allowlist ThoughtSpot's IP ranges in your Snowflake network policy. IP list → Snowflake connection docs
Create a dedicated GCP Service Account in IAM with a descriptive name (e.g. thoughtspot-svc@your-project.iam.gserviceaccount.com).
Grant the following roles to the service account: BigQuery Data Viewer + BigQuery Job User.
Download the JSON key file from IAM → Service Accounts → Keys → Add Key → JSON.
In ThoughtSpot: upload the JSON key file in the connection form. BigQuery connections use project-level scoping — ensure the service account has access to the correct project.
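If you prefer SQL over the console, dataset-level access can also be granted with BigQuery's DCL statements. A sketch using the example service account above and placeholder project/dataset names; the Job User role still needs to be granted at the project level:
GRANT `roles/bigquery.dataViewer` ON SCHEMA `my_project.my_dataset` TO "serviceAccount:thoughtspot-svc@your-project.iam.gserviceaccount.com";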
Create a dedicated DB user with SELECT on target schemas:
CREATE USER thoughtspot_svc WITH PASSWORD 'strong_password';
GRANT USAGE ON SCHEMA public TO thoughtspot_svc;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO thoughtspot_svc;
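To confirm the grants took effect, you can check a specific table with Redshift's privilege function (the table name here is a placeholder): SELECT HAS_TABLE_PRIVILEGE('thoughtspot_svc', 'public.my_table', 'select');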
Ensure ThoughtSpot's IP ranges are added to your Redshift security group inbound rules (port 5439).
Use Redshift's native auth (username/password). IAM auth is not currently supported by ThoughtSpot.
For Redshift Serverless: use the serverless endpoint URL format (e.g. workgroup.account.region.redshift-serverless.amazonaws.com).
Generate a personal access token in Databricks (User Settings → Developer → Access Tokens) or use service principal OAuth for production.
Use the SQL Warehouse HTTP path (not the all-purpose cluster path). Find it in SQL Warehouses → your warehouse → Connection Details.
Ensure the token/principal has CAN USE permission on the target SQL Warehouse.
Grant SELECT on the target catalog/schema to the service principal or token user.
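If your workspace uses Unity Catalog, these grants can also be issued in SQL. A sketch with placeholder catalog and schema names and a hypothetical principal; substitute whichever user or service principal your token belongs to:
GRANT USE CATALOG ON CATALOG my_catalog TO `thoughtspot-svc`;
GRANT USE SCHEMA ON SCHEMA my_catalog.my_schema TO `thoughtspot-svc`;
GRANT SELECT ON SCHEMA my_catalog.my_schema TO `thoughtspot-svc`;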
Choose your auth method: SAML SSO (recommended for enterprise), Local Authentication, or MFA. SAML configuration requires your IdP metadata and ThoughtSpot's SP metadata.
Auth guide →
Create users manually, sync via IdP, or automate via REST API. Group membership drives both content sharing permissions and row-level security rules.
Groups & Privileges →
Something not working? Find your issue below — each entry includes an actionable fix.
Connection using the wrong role? Run SELECT CURRENT_ROLE(); to verify.
Tables missing from the connection? GRANT SELECT ON ALL TABLES IN SCHEMA <schema> TO ROLE <role>;
BigQuery access errors? Grant BigQuery Data Viewer at the dataset level, not just project level.
Still stuck? Submit a Support Case →
Weeks 3–7 · The most important phase — get the foundation right before creating Liveboards
Rename database columns to business-friendly terms. "cust_id" → "Customer ID". "rev_ytd_usd" → "Revenue YTD". Your users will search using natural language — make sure the column names match how they speak.
Add synonyms so search understands that "sales" = "revenue", "client" = "customer", "headcount" = "employees". This dramatically improves search hit rates and Spotter accuracy.
Every measure needs the right default aggregation (SUM, COUNT, AVG, etc.). Wrong defaults mean users get unexpected results without realising it. Validate each measure against known data.
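One way to validate a default aggregation is to compare a known total in the warehouse against what Search returns. A sketch, assuming a hypothetical sales fact table that uses the renamed columns above:
SELECT SUM(rev_ytd_usd) AS revenue_ytd, COUNT(DISTINCT cust_id) AS customer_count FROM my_db.my_schema.sales_fact;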
Keep joins simple. A clean star schema (one fact table, multiple dimension tables) outperforms complex multi-hop joins for search performance and accuracy.
Before inviting users, ask 10–15 realistic business questions in ThoughtSpot Search. If the answers look wrong or Search doesn't understand the question, fix the model — not the query.
Once your model is ready, open Spotter and ask it questions using your own business vocabulary — not generic terms. The goal isn't just to get an answer; it's to verify that Spotter understands your data and returns numbers you'd trust. Use question patterns like these as a template, adapted to your model: "What was revenue by region last quarter?", "Which customers grew the most year over year?", "Show the monthly revenue trend for our top 10 customers."
RLS restricts which rows of data each user or group can see — if appropriate or required for your use case. It works via a rule referencing ThoughtSpot group membership or username. Where needed, define and test RLS for every persona before go-live.
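Conceptually, an RLS rule acts like a per-user filter appended to every query. A rough SQL illustration only (hypothetical table and column names, not ThoughtSpot's rule syntax): for a user whose group maps to the EMEA region, the engine effectively runs
SELECT * FROM my_db.my_schema.sales_fact WHERE region = 'EMEA';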
RLS guide →
Restrict visibility of sensitive columns (PII, cost, margin) to specific groups — if appropriate or required for your use case. Apply CSR at the table level and verify it propagates correctly to Models and Liveboards.
CSR docs →
Use RBAC to control who can edit, view, or download content. Assign the least privilege necessary. Analyst role for builders, Viewer role for consumers.
RBAC guide →
Build your MVP around the one use case you defined in Phase 1. Limit each Liveboard to 5–8 visualisations. Focused Liveboards load faster and get higher adoption.
Liveboard guide →
Run 20+ representative search queries against your Model. If search doesn't return expected results, refine column names, synonyms, or aggregations — don't move on until this works.
Search docs →
ThoughtSpot formulas let you create custom metrics in the semantic layer. Define key business metrics (e.g. "Gross Margin %") once in the model — don't recreate them in every Liveboard.
Formula guide →Once your Model and Liveboards are validated, you can export them as YAML files using TML. You don't need TML on day one — but if you have multiple environments or a CI/CD pipeline, explore it before your second use case.
Weeks 7–9 · Training, go-live, and making it stick
Track these milestones at 30, 60, and 90 days post-launch
Spotter lets your users ask questions in plain English and get instant visual answers. No SQL, no formulas — just natural language. The better your semantic model, the better Spotter works.
Spotter understands natural language questions and generates charts, tables, and narrative explanations from your data.
It needs a well-built semantic model with business-friendly names, synonyms, and correct aggregations. Spotter is only as good as your model.
Enable Spotter in the Admin panel, configure which data sources it can access, and set up user permissions per group.
Run a 30-minute "first questions" session with end users. Show them 5 example queries — adoption spikes when users have a concrete starting point.
Stay connected to the ThoughtSpot community and keep learning after go-live
ThoughtSpot's roadmap webinar series. See what's coming next, ask the product team questions directly, and align your rollout with upcoming features.
Register →
Regular live sessions featuring customer stories, best practices, and hands-on demos. A great way to pick up tips your implementation team may have missed.
Register →
Conversations with CDOs and data leaders on building data culture, driving analytics adoption, and making self-service analytics stick across organisations.
Listen →
Everything you need, in one place
Full product docs — admin guide, data modeling, security, APIs, and release notes.
Free eLearning paths for Admins, Analysts, and End Users. Certifications available.
Ask questions, share ideas, and connect with thousands of ThoughtSpot practitioners.
Step-by-step instructions for connecting Snowflake, BigQuery, Redshift, and more.
Best practices and design patterns for implementing row-level security in ThoughtSpot.
How to build, share, and schedule Liveboards for your organisation.
Export, version-control, and migrate your Models and Liveboards as code.
Automate user provisioning, content promotion, and usage reporting via REST API v2.0.
How to raise and track support tickets with the ThoughtSpot support team.