Mastra AI: The Complete Guide to the TypeScript Agent Framework (2026)

Stan Sedberry

If you've spent any time building AI applications in JavaScript or TypeScript, you've probably felt the friction. Most of the serious AI tooling lives in the Python ecosystem, and the TypeScript options have historically been either ports of Python libraries or thin wrappers that leave you wiring together a dozen packages by hand. Mastra changes that equation entirely.

Mastra is an open-source TypeScript framework for building AI agents, workflows, and retrieval-augmented generation (RAG) pipelines. It launched in October 2024, reached 22,000+ GitHub stars within 15 months, and hit 300,000+ weekly npm downloads by its 1.0 release in January 2026. Those are not vanity metrics. They reflect a genuine gap in the market that Mastra fills: a production-grade, batteries-included agent framework designed for the language most web developers already know.

This guide covers everything you need to know about Mastra, from what it does and who built it to how it compares against LangChain, how its agent and workflow systems work, and whether it's the right fit for your next project.

What Is Mastra AI?

Mastra is a framework for building AI-powered applications using TypeScript. Think of it as the full toolkit for going from "I want an AI agent that does X" to actually running that agent in production with memory, tool access, observability, and evaluation built in.

The framework is built on top of Vercel's AI SDK, which handles the low-level model interactions and streaming. Mastra adds the higher-level abstractions that production applications need: autonomous agents that can reason and use tools, deterministic workflows for multi-step orchestration, RAG pipelines for knowledge retrieval, persistent memory systems, evaluation frameworks for testing agent quality, and comprehensive Model Context Protocol (MCP) support for connecting to external tool ecosystems.

The key distinction is that Mastra is TypeScript-native from scratch. It was not ported from Python. Every API, every pattern, and every convention feels natural to JavaScript developers. You define tool schemas with Zod, you compose agents with familiar functional patterns, and you deploy on any Node.js runtime.

The core package is @mastra/core, and it ships with six primary building blocks:

Agents handle open-ended tasks. You give them instructions, a model, and access to tools. They decide what to call, when to stop, and how to respond. They support both complete responses (.generate()) and real-time token streaming (.stream()).

Workflows handle structured, multi-step operations where you need deterministic control. They give you explicit branching, parallel execution, loops, and the ability to pause mid-execution for human approval before resuming.

RAG provides the full retrieval pipeline: document chunking, embedding generation, vector storage, similarity search, and reranking. It works with a wide range of vector databases including Pinecone, Qdrant, ChromaDB, pgvector, and many others.

Memory gives agents persistent context across conversations. This includes conversation history, semantic recall (finding relevant past messages via embeddings), working memory (structured facts and preferences), and observational memory (a compression system that condenses old conversations into dense observations).

Evals provide model-graded, rule-based, and statistical evaluation for testing agent quality. You can assess relevance, faithfulness, toxicity, tone consistency, and custom metrics.

MCP support lets Mastra agents connect to external tools through the Model Context Protocol standard, and also lets you expose your Mastra tools and agents to external MCP-compatible clients like Cursor, Claude Desktop, and VS Code.

Who Owns Mastra AI? The Gatsby Founders' Second Act

Mastra was founded in October 2024 by three people who had already built one of the most successful JavaScript frameworks in recent memory: Gatsby.js. After Gatsby was acquired by Netlify, the core team went their separate ways. Then they came back together with a new thesis: the AI agent stack for TypeScript developers is broken, and we can fix it.

Sam Bhagwat is the CEO. He co-founded Gatsby around 2015 and scaled it to $5 million in annual recurring revenue before the Netlify acquisition. Before Gatsby, he was an early engineer at Zenefits (Y Combinator W13, where he was among the first 20 engineers) and PlanGrid (YC W12). He's a Stanford graduate (class of 2011) and has since authored two books on building AI agents: Principles of Building AI Agents and Patterns for Building AI Agents, which together have surpassed 170,000 copies.

Abhi Aiyer is the CTO. He studied Management Science at UC San Diego (2009 to 2013) and served as Principal Engineer at both Gatsby and Netlify, where he led engineering organizations of 100+ people. He built the cloud infrastructure that handled tens of thousands of build nodes and billions of files.

Shane Thomas is the CPO. With over 15 years in open source, Thomas was Staff Engineer and Head of Product at Gatsby, then Staff Product Manager at Netlify. Between Netlify and Mastra, he founded Audiofeed.ai, an AI podcasting tool. He's based in Sioux Falls, South Dakota.

The origin story has an interesting twist. The team didn't set out to build a framework. They started building an AI-powered CRM called "Kepler" (the legal entity is still Kepler Software Inc.). While building it, they found the existing TypeScript AI frameworks so inadequate that they ended up building the framework instead. The CRM became the proof-of-concept for the tooling, and the tooling became the product.

They joined Y Combinator's Winter 2025 batch in January 2025. The timing was fortunate. In mid-February 2025, Mastra hit the front page of Hacker News and exploded from 1,500 to 7,500 GitHub stars in a single week. That momentum carried into a $13 million seed round announced in October 2025, which they describe as "the largest post-YC cap table in several years."

The investor list reads like a who's-who of developer tools and AI infrastructure: Y Combinator, Gradient Ventures (Google's AI fund), basecase capital, Paul Graham (YC founder), Guillermo Rauch (Vercel CEO), Amjad Masad (Replit CEO), Shay Banon (Elastic founder), Arash Ferdowsi (Dropbox co-founder), Balaji Srinivasan, and the entire PlanGrid founding team. Over 120 investors participated in total.

As of March 2026, the team has grown to roughly 26 employees, many of whom are former Gatsby colleagues. The founders describe the hiring process as "getting the band back together."

Is Mastra AI Free? Pricing, Licensing, and the Open-Source Model

The short answer: yes, Mastra is free and open source. The longer answer involves understanding what exactly is covered.

Mastra's core framework is released under the Apache License 2.0, one of the most permissive open-source licenses available. You can use it, modify it, distribute it, and build commercial products on top of it without paying Mastra anything. The full source code lives at github.com/mastra-ai/mastra.

The Apache 2.0 license also includes explicit patent protections, which is a meaningful detail for companies that care about IP risk. You get a patent grant from all contributors, and the license spells out the attribution and change-tracking requirements clearly.

There are two exceptions to the "everything is free" story.

First, enterprise features located in the ee/ directories of the codebase (including authentication with SSO, role-based access control, and access control lists for Mastra Studio) are covered by a separate Mastra Enterprise License. These features work without a license during development and testing, but require a commercial license for production use.

Second, Mastra Cloud is a hosted platform that adds cloud-based Studio, GitHub-connected deployments with autoscaling and instant rollbacks, centralized observability (structured logs, AI-aware tracing, eval dashboards), and managed infrastructure. The cloud platform's pricing page states it is "free to start" with pricing "launching Q1 2026." As of March 2026, exact pricing details have not been publicly published. An enterprise tier offering on-premises deployment, custom SLAs, and dedicated Slack support is available by contacting sales.

For most developers and startups, the open-source framework provides everything needed to build and deploy AI agents. You only hit the enterprise boundary when you need features like RBAC in a production Studio deployment or want managed cloud infrastructure.


What Is Mastra Used For? Real-World Use Cases and Production Deployments

The question of what Mastra is "used for" has a deceptively simple answer: anything involving AI agents in a TypeScript environment. But the real picture is better understood through the companies and developers actually building with it.

On the enterprise side, the production deployments are substantial. Replit uses Mastra for its Agent 3 product. PayPal and Sanity run Mastra in production environments. Brex's CTO mentioned Mastra as part of their AI engineering stack on the Latent Space podcast. Marsh McLennan, a company with 75,000 employees, built an agentic search system with Mastra. SoftBank built "Satto Workspace" for document creation, reportedly transforming hours-long processes into minutes. 11x uses Mastra to power "Alice," an AI SDR agent that generates 50,000 AI-driven emails per day.

Factorial, an HR software company, built an internal platform agent that respects employee permissions, positioning Mastra as a way to keep sensitive data inside the product rather than pasting it into third-party chat tools. WorkOS uses Mastra in production and has published quickstart guides and conducted workshops around it.

During Y Combinator's W25 batch, startups built diverse applications with Mastra: automated customer support systems, CAD diagram generation from aerospace PDFs, web scraping for contact extraction, medical transcription automation, financial document generation, and code generation products.

Individual developers have built WhatsApp bots, production bug monitors (running on Cloudflare Workers with Telegram alerts), Reddit sentiment analysis bots, hotel booking assistants, GitHub insights agents, and video transcript RAG agents.

The MASTRA.BUILD hackathon attracted over 300 participants who submitted roughly 100 projects. A developer documented how they built a prize-winning RAG agent for video transcription in a detailed LogRocket blog post. WorkOS published a guide for building a GitHub insights agent in five minutes.

The common thread across these use cases is that Mastra shines when you need more than a simple chatbot. Its sweet spot is applications that combine LLM reasoning with structured workflows, external tool access, persistent memory, and production observability. Customer support agents that look up orders, multi-step document processing pipelines, internal assistants that integrate with company databases while respecting access controls, and orchestration systems that coordinate multiple AI agents: these are where Mastra's integrated stack pays for itself.

What Models Are Supported by Mastra AI?

Mastra's model support is one of its most compelling features, and it's broader than almost any competing framework. Through its unified model router, Mastra provides access to thousands of models from nearly 100 providers through a single, consistent API.

The Mastra team's Model Router announcement in October 2025 initially described "600+ models from 40+ providers." By March 2026, the models index page on mastra.ai lists over 3,300 models from 94 providers. The catalog is dynamic and continues to expand.

Models are specified as simple strings in the format provider/model-name. For example:

  • 'openai/gpt-4o' or 'openai/gpt-4.5-preview'
  • 'anthropic/claude-sonnet-4-6' or 'anthropic/claude-opus-4-6'
  • 'google/gemini-2.5-flash'
  • 'deepseek/deepseek-chat'
  • 'groq/llama-3.3-70b-versatile'
  • 'mistral/mistral-large-latest'
  • 'xai/grok-4'

The major providers include OpenAI, Anthropic, Google (Gemini), DeepSeek, Groq, Mistral, xAI (Grok), Amazon Bedrock, Azure OpenAI, Nvidia, Ollama (for running local models), Together AI, Fireworks AI, Hugging Face, Cohere, Perplexity, and Cerebras. Gateway support covers OpenRouter, Netlify, Vercel, and Azure.

The model router provides full IDE autocomplete for model names, so you get type-safe model selection with IntelliSense in VS Code or any TypeScript-aware editor. It also supports model fallbacks (automatic switching to a backup provider during outages), dynamic model selection at runtime, and provider-specific options like OpenAI's reasoningEffort and Anthropic's cacheControl.

Under the hood, Mastra delegates LLM interactions to Vercel's AI SDK. This means Mastra inherits AI SDK's mature streaming infrastructure without building a redundant abstraction layer. You can also use AI SDK provider packages directly (like @ai-sdk/groq) if you need provider-specific functionality.

For embeddings, Mastra supports embedding models through the same router interface and through dedicated packages like @mastra/fastembed for local embedding generation. This matters for RAG pipelines and semantic memory, where embedding quality and cost are important considerations.

Is Mastra Easy to Use? Developer Experience and Getting Started

The consensus across independent reviews, benchmark studies, and community feedback is that Mastra's developer experience is genuinely good, and meaningfully better than the Python-dominant alternatives for JavaScript developers.

A production benchmark from NextBuild (December 2025) scored Mastra 9 out of 10 for developer experience, compared to 5 out of 10 for LangChain. Setup time is consistently cited as a strength. Multiple developers report having working agents within minutes using the CLI scaffolding tool.

Getting started requires Node.js 22.13.0 or later. You run:

npm create mastra@latest

This launches an interactive CLI wizard that asks which components you want (agents, workflows, RAG, memory, etc.) and scaffolds a complete project with the right package structure, configuration files, and example code. You can also install manually by adding @mastra/core@latest alongside zod@^4.

The framework uses Zod for type-safe schemas throughout, which means your tool inputs, workflow step schemas, and structured outputs all benefit from TypeScript's type system. If you've used Zod before (and most TypeScript developers have by now), Mastra's patterns will feel immediately familiar.

One feature that gets consistently praised is Mastra Studio, a local development UI that runs at localhost:4111. It lets you chat with your agents, inspect every tool call (inputs and outputs), view memory state, visualize workflow execution step by step, and iterate on prompts, all without building any frontend code. For developers who've struggled with the "black box" problem of AI agent debugging, Studio is a significant quality-of-life improvement.

TypeScript community leader Matt Pocock, who initially approached Mastra with skepticism, shifted to endorsement during a live workshop. His assessment: the framework sells tools you can debug, extend, and trust. Another developer, writing on Medium, noted that the first time he replayed an agent run and actually understood why it failed, he realized how rare that experience is in AI development. Multiple reviewers highlight the responsiveness and helpfulness of the Mastra team on their Discord community (5,500+ members).

Documentation is bolstered by several innovations beyond standard reference docs. The MCP Docs Server (@mastra/mcp-docs-server) installs into coding assistants like Cursor and Windsurf, giving your AI pair programmer real-time access to Mastra's complete documentation. Mastra 101 is an interactive course delivered inside code editors via MCP. And Bhagwat's two books (Principles of Building AI Agents and Patterns for Building AI Agents) provide conceptual foundations that go deeper than any framework documentation can.

That said, Mastra has real limitations worth acknowledging. The workflow API's fluent chaining syntax was called unintuitive for complex branching logic in early Hacker News discussions (though the team has iterated on it since). Peer dependency conflicts with AI SDK versions have caused friction for some users. The framework is younger than LangChain, which means fewer copy-paste examples, fewer Stack Overflow answers, and less accumulated community knowledge. And being TypeScript-only means Python-first teams or data science-heavy organizations need to factor in a language boundary.

How Mastra Agents Work: Architecture and Capabilities

Agents are the centerpiece of Mastra's architecture. An agent is an autonomous entity backed by an LLM that can reason about goals, decide which tools to use, maintain memory across conversations, and iterate until it reaches a satisfactory answer or hits a stop condition.

Creating an agent is straightforward:

import { Agent } from '@mastra/core/agent'

const agent = new Agent({
  id: 'support-agent',
  name: 'Customer Support Agent',
  instructions: 'You are a helpful customer support assistant for Acme Corp.',
  model: 'anthropic/claude-sonnet-4-6',
  tools: { ticketLookup, orderStatus, refundProcessor },
})

The agent takes an ID, a name, a system prompt (instructions), a model specification, and a set of tools. Tools are created with Mastra's createTool() function using Zod schemas for input validation:

import { createTool } from '@mastra/core/tools'
import { z } from 'zod'

const ticketLookup = createTool({
  id: 'ticket-lookup',
  description: 'Look up a support ticket by ID',
  inputSchema: z.object({ ticketId: z.string() }),
  execute: async ({ context }) => {
    // `context` holds the validated input; `db` stands in for your data layer.
    const ticket = await db.tickets.find(context.ticketId)
    return ticket
  },
})

When the agent receives a query, it enters a reasoning loop. It reads the instructions, considers the available tools, decides whether to call one (or several), processes the results, and either continues reasoning or returns a final response. This is the same agentic loop pattern used by all major agent frameworks, but Mastra's implementation handles the tool-calling mechanics, result parsing, and iteration automatically.

Agents expose two primary methods. .generate() waits for the complete response before returning. .stream() emits tokens in real-time as the model produces them, which is essential for chat interfaces where users expect to see responses forming progressively.

Structured output lets agents return typed objects instead of plain text. You define a Zod schema for the expected output format, and Mastra ensures the response conforms to that schema:

const result = await agent.generate('Summarize this ticket', {
  output: z.object({
    summary: z.string(),
    priority: z.enum(['low', 'medium', 'high']),
    actionItems: z.array(z.string()),
  }),
})

The memory system is where Mastra gets particularly interesting. Agents can maintain context across conversations through four complementary mechanisms:

Conversation history stores the raw message sequence. This is the simplest form of memory but consumes context window tokens quickly.

Working memory persists structured data (names, preferences, ongoing context) as a Markdown block that gets injected into the system prompt. It's essentially a scratchpad the agent can update between turns.

Semantic recall uses embedding-based similarity search over past messages. When enabled, Mastra embeds new messages and queries the vector store for relevant past context before generating a response. This lets agents recall relevant information from conversations that happened days or weeks ago.

Observational memory is Mastra's most technically novel feature, launched in February 2026. It uses two background agents (an Observer and a Reflector) that compress old conversation messages into dense, structured observations. As conversations grow long, the raw message history gets replaced by these compressed observations, keeping the context window stable while preserving important information. The system achieved 94.87% on the LongMemEval benchmark, which represents state-of-the-art performance. Notably, it requires no vector database and is prompt-cacheable.

For more complex scenarios, Mastra supports multi-agent systems through a supervisor pattern, where a coordinator agent delegates tasks to specialized sub-agents. Processors can intercept and transform messages before or after generation. Guardrails provide input/output safety checks, including prompt injection detection and PII redaction. And human-in-the-loop patterns let you pause agent execution to wait for human approval before proceeding.

How Mastra Workflows Work: Deterministic Orchestration for Complex Tasks

While agents handle open-ended reasoning, workflows handle structured, multi-step operations where you need predictable control flow. The two complement each other: agents decide what to do, workflows decide in what order things happen.

Mastra's workflow engine provides graph-based state machines. You define individual steps with createStep(), specifying input schemas, output schemas, and execute functions. Then you compose those steps into a workflow using createWorkflow() with a fluent API.

The composition API offers six control flow methods:

  • .then() for sequential execution (do A, then B, then C)
  • .parallel() for simultaneous execution (do A and B at the same time)
  • .branch() for conditional routing (if condition X, do A; otherwise, do B)
  • .foreach() for iterating over arrays with configurable concurrency
  • .dountil() and .dowhile() for looping patterns

One of the most powerful workflow features is suspend and resume. Workflows can pause at any point, serialize their state to storage, and resume later when triggered by an external event. This is critical for production use cases like approval workflows, where an agent's output needs human review before the next step executes. The state persists across restarts, so you can build genuinely durable workflows that survive server deployments.

Mastra also provides time travel for workflows, letting developers replay and inspect execution states for debugging. This pairs well with the observability tracing to give you a complete picture of what happened during a complex multi-step process.

Workflows can embed agents as steps, which is where the two systems combine most naturally. You might have a workflow that: (1) fetches data from an API, (2) passes it to an agent for analysis, (3) branches based on the agent's assessment, (4) suspends for human approval, and (5) executes a final action. Each step is deterministic and inspectable, even though the agent step involves probabilistic LLM reasoning.

Mastra AI and MCP: Connecting to the Tool Ecosystem

Model Context Protocol (MCP) support is a major piece of Mastra's integration story. MCP is an open standard, originally proposed by Anthropic, for connecting AI agents to external tools and data sources. Mastra implements both sides of the protocol.

The MCPClient (via the @mastra/mcp package, currently at v1.3.1 with roughly 138,000 weekly npm downloads) connects to external MCP servers to discover and use their tools. Servers can be local packages invoked via npx (using stdio transport) or remote HTTP endpoints (using streamable HTTP transport). Once connected, the tools from MCP servers can be passed directly to agents:

import { MCPClient } from '@mastra/mcp'

const mcp = new MCPClient({
  servers: {
    filesystem: {
      command: 'npx',
      args: ['-y', '@modelcontextprotocol/server-filesystem', '/tmp'],
    },
    github: {
      url: new URL('https://mcp-github.example.com/mcp'),
      requestInit: { headers: { Authorization: 'Bearer ...' } },
    },
  },
})

const agent = new Agent({
  tools: { ...await mcp.getTools() },
  // ... other config
})

The MCPServer capability goes the other direction. It lets you expose your Mastra tools, agents, workflows, prompts, and resources to external MCP-compatible clients. This means any Mastra application can be consumed by Cursor, Windsurf, Claude Desktop, VS Code, Cline, Claude Code, and OpenAI's Codex. Both stdio and HTTP transports are supported.

Mastra also supports connecting to MCP registries like Klavis AI, mcp.run, Composio, and Smithery.ai for discovering and connecting to a broader ecosystem of pre-built tool servers.

The MCP Docs Server deserves special mention. It's a clever innovation that provides Mastra's complete knowledge base to AI coding assistants via MCP. You install it once in your IDE configuration, and your AI coding assistant gains accurate, real-time access to Mastra documentation. This meaningfully reduces hallucinations when using AI to write Mastra code.
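In practice, installing it is a one-time entry in your editor's MCP configuration. A hedged example using the standard mcpServers JSON shape that clients like Cursor and Claude Desktop read (the file location varies by editor):

```json
{
  "mcpServers": {
    "mastra": {
      "command": "npx",
      "args": ["-y", "@mastra/mcp-docs-server"]
    }
  }
}
```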

Mastra AI vs LangChain: The Core Differences

The comparison between Mastra and LangChain is the most searched competitive query in this space, and the differences are real enough to matter for your technology choice.

Language and ecosystem. LangChain started as a Python framework and later added a JavaScript/TypeScript port (LangChain.js). Mastra was built for TypeScript from the ground up. This isn't just a marketing distinction. LangChain's TypeScript SDK has historically lagged behind the Python version in features and documentation. The 11x team's developer publicly stated they chose Mastra after finding LangGraph's TypeScript SDK frustrating to work with.

Architecture philosophy. LangChain uses deep abstraction layers and class inheritance patterns. Mastra uses functional patterns with Zod schemas and delegates LLM interactions to the lightweight Vercel AI SDK. In practice, this means LangChain code tends to involve more boilerplate and indirection, while Mastra code reads more like standard TypeScript.

Scope and integration. LangChain has an ecosystem of 1,000+ integrations and over 100,000 GitHub stars built up over years. Mastra has 22,300+ stars and is growing faster, but has a smaller integration surface. LangChain's orchestration layer, LangGraph, is a separate product with its own learning curve. Mastra bundles agents, workflows, memory, evals, and observability into a single cohesive framework.

Developer experience. The NextBuild production benchmark (a 90-day test building equivalent customer support agents) found meaningful differences. Development time was 18 hours with Mastra versus 41 hours with LangChain. Task completion rates were 94.2% versus 87.4%. P95 latency was 1,240ms versus 2,450ms. Error rates were 5.8% versus 8.9%. These numbers come from a consultancy (not a peer-reviewed study), so take them directionally rather than as gospel, but they align with the broader sentiment across developer reviews.

Observability and debugging. LangChain relies on LangSmith, a separate commercial product, for tracing and debugging. Mastra includes Studio and AI-native tracing as first-class framework features, with the option to export to LangSmith, Langfuse, Braintrust, or any OpenTelemetry-compatible backend.

Community maturity. LangChain has more Stack Overflow answers, more tutorials, more community "lore," and a larger base of developers who've already solved common problems. Mastra's community is smaller but growing rapidly (5,500+ Discord members) and tends to be more responsive for direct help.

The honest recommendation: if your team is Python-first, LangChain (or its newer competitor PydanticAI) is the natural choice. If your team works in TypeScript and you want a cohesive, integrated framework rather than assembling pieces, Mastra is the stronger option. If you're starting fresh with no language preference, the decision comes down to whether you value Python's broader AI/ML ecosystem or TypeScript's web development strengths.

How Mastra Compares to Other Alternatives

CrewAI (44,000+ GitHub stars) is Python-based and excels at multi-agent role-based collaboration using YAML configuration. It's best for teams that want to define agent "crews" with distinct roles and have them collaborate. Mastra provides a more unified single-agent experience with integrated workflows and memory, and is better suited for TypeScript teams.

AutoGen from Microsoft was merged with Semantic Kernel into the unified Microsoft Agent Framework in October 2025, placing AutoGen effectively in maintenance mode. If you're deeply embedded in the Microsoft/Azure ecosystem, Semantic Kernel may fit better organizationally. But Mastra offers more active development and better built-in developer tooling.

Vercel AI SDK is not a competitor but a foundation. Mastra is built on top of AI SDK. The relationship is library versus framework. AI SDK provides low-level primitives for model routing, streaming, and tool calling. Mastra adds agents with memory, workflow orchestration, RAG, evals, and observability. Many production systems use both: Mastra for backend agent logic, AI SDK for frontend React and Next.js UI components.

LlamaIndex (TypeScript) is strongest for RAG-heavy applications. If your primary need is document indexing, retrieval, and question answering over a knowledge base, LlamaIndex provides excellent retrieval abstractions. Mastra's RAG capabilities are solid but less specialized. However, Mastra offers a broader agent and workflow story that LlamaIndex does not.

The Mastra SDK: Packages, Integrations, and Framework Support

Mastra's SDK is TypeScript/JavaScript only. There is no Python SDK, and the team has been explicit that this is by design. They believe in building the best possible experience for one ecosystem rather than a mediocre experience across two.

The core package uses sub-path imports to keep bundles lean: @mastra/core/agent, @mastra/core/workflows, @mastra/core/mastra, @mastra/core/llm. The ecosystem extends through a collection of purpose-built packages for different backends and capabilities:

For storage and databases, you have: @mastra/pg (PostgreSQL), @mastra/libsql (LibSQL/Turso, the default), @mastra/mongodb, @mastra/upstash, @mastra/dynamodb, @mastra/mssql, and Cloudflare integrations (D1, KV, Durable Objects). Mastra also supports composite storage, routing different domains (memory, workflows, observability) to different backing stores. The docs recommend ClickHouse as the observability store for high-traffic production workloads.

For vector databases, the options include: @mastra/pinecone, @mastra/qdrant, @mastra/chroma, @mastra/astra (DataStax), @mastra/couchbase, @mastra/cloudflare-vectorize, @mastra/convex, @mastra/duckdb, @mastra/elasticsearch, and @mastra/lancedb.

For web frameworks, Mastra provides server adapters for Express, Hono, Fastify, and Koa, plus direct integration guides for Next.js and Astro. You can build Mastra as a standalone server with mastra build or embed it into an existing application.

For observability, Mastra integrates with Langfuse, Braintrust, Arize, LangSmith, Sentry, and any OpenTelemetry-compatible backend. It also includes an OpenTelemetry bridge for bidirectional context propagation (marked experimental as of March 2026).

The @mastra/evals package (63,000+ weekly downloads) provides evaluation primitives. The @mastra/ai-sdk package (67,000+ weekly downloads) bridges Mastra with AI SDK frontend utilities. A client SDK (@mastra/client-js) provides type-safe API calls from frontend applications.

Mastra on GitHub: Growth Metrics and Project Health

The main repository at github.com/mastra-ai/mastra tells a clear story of explosive, sustained growth. As of March 24, 2026:

  • 22,276+ stars (growing at approximately 30 to 35 per day)
  • 1,779 forks
  • 300+ contributors
  • 185 to 202 open issues, with 203 to 269 open pull requests at any given time
  • 993 versions of the CLI package published on npm

The npm download trajectory is perhaps the most telling metric. Downloads grew from roughly 60,000 per month in March 2025 to 1.8 million per month by February 2026. At the 1.0 launch in January 2026, weekly downloads exceeded 300,000. The Mastra team claims this makes it the third-fastest-growing JavaScript framework ever, as measured by the time taken to go from 10,000 to 150,000 weekly downloads, outpacing even Gatsby's growth during its first five years.


The release cadence is intense. Multiple updates ship per week, and the team maintains detailed changelogs and blog posts for significant releases. Key milestones since launch include:

  • October 2024: Initial open-source launch
  • February 2025: Moved into beta; viral Hacker News moment
  • June 2025: Mastra Cloud public beta
  • October 2025: Model Router launch; $13M seed round announced
  • January 2026: Mastra 1.0 stable release
  • February 2026: Observational memory system; Mastra Code (AI coding agent) launch; supervisor pattern for multi-agent orchestration
  • March 2026: Enterprise RBAC with pluggable auth; remote sandbox support (Daytona, E2B, Blaxel); Studio Auth

The project's health indicators are strong. Active daily commits, responsive issue triage, a growing contributor base, and the financial backing to sustain development all point to a project with staying power rather than a flash-in-the-pan framework.

Workspaces and Sandboxes: Giving Agents a Computer

One of Mastra's newer and most distinctive capabilities is Workspaces, which give agents access to filesystems, command execution, and isolated sandbox environments. This is relevant for use cases where agents need to read and write files, run shell commands, execute code, or interact with development environments.

The workspace system combines filesystem tools with sandbox execution. As of March 2026, Mastra supports three remote sandbox providers: Daytona, E2B, and Blaxel. The key design principle is that untrusted agent code should never run on your application server. Remote sandboxes provide isolated filesystem, network, and process spaces to contain the blast radius of agent actions.

Each provider offers different isolation properties. Daytona supports network blocking and allowlisting. E2B provides ephemeral environments that are automatically destroyed. Blaxel focuses on agent-specific sandbox patterns. You choose the provider whose isolation model matches your threat model.

This capability matters because it's the difference between an agent that can only answer questions and an agent that can actually do work in the world. An agent with workspace access can write code, run tests, analyze data files, generate reports, and interact with development tools, all within a controlled environment.

Security, Guardrails, and Production Readiness

Mastra takes a "buildable security" approach. Rather than prescribing a single security model, it provides the primitives you need to construct guardrails appropriate to your risk profile.

On the compliance side, Mastra's Trust Center documents SOC 2 Type II attainment as of October 2025. The trust page references encryption at rest for datastores with sensitive customer data, secure data transmission protocols for encrypting data in transit, and formal retention and disposal procedures.

For agent safety, Mastra provides several guardrail mechanisms. Input processors can normalize input, detect prompt injection using an LLM classifier, moderate content, and detect or redact PII before it reaches the model. The team has documented optimizing these processors from roughly 5,000ms down to under 500ms per request, which matters if you're running multiple guardrail processors on every query.
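To illustrate the PII-redaction step, here is a deliberately simple, self-contained sketch of the kind of transformation an input processor performs before text reaches the model. This is plain TypeScript for illustration, not Mastra's processor API; the `redactPII` name and the regex patterns are assumptions, and production redaction would use far more robust detection.

```typescript
// Illustration only: replace emails and phone-like numbers with placeholder
// tokens before the text reaches the model. Real processors use more robust
// detection (including LLM classifiers), per the Mastra docs.
const EMAIL = /[\w.+-]+@[\w-]+\.[\w.]+/g;
const PHONE = /\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b/g;

export function redactPII(input: string): string {
  return input.replace(EMAIL, "[EMAIL]").replace(PHONE, "[PHONE]");
}
```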

Stream data redaction operates at the HTTP layer, stripping system prompts, tool definitions, API keys, and similar sensitive data from streaming responses before they reach clients. This is enabled by default in server adapters.

Observability redaction processes trace spans before export, replacing detected sensitive values with [REDACTED]. This prevents accidental leakage of customer data into logs and external observability platforms.

For multi-tenant applications, Mastra's RequestContext system provides authorization boundaries. Server-validated resource and thread IDs ensure that users can only access their own conversation threads, returning 403 errors for unauthorized access attempts.
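The ownership rule behind that boundary is easy to express. The sketch below is plain TypeScript illustrating the check, not Mastra's RequestContext API; the `Thread` shape and `authorizeThread` name are assumptions.

```typescript
// Server-validated thread ownership: a request may only touch threads whose
// resourceId matches the authenticated user. Illustration only -- not the
// actual RequestContext API.
interface Thread {
  id: string;
  resourceId: string; // owner of the conversation thread
}

export function authorizeThread(
  authenticatedUserId: string,
  thread: Thread,
): { status: 200 | 403 } {
  return thread.resourceId === authenticatedUserId
    ? { status: 200 }
    : { status: 403 }; // unauthorized access attempt
}
```

The important property is that the comparison happens server-side against a validated identity, so a client cannot reach another tenant's thread simply by guessing its ID.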

A practical consideration: data retention automation is not yet a fully solved first-class capability in the framework itself. A public GitHub issue from January 2026 proposed a storage-agnostic retention manager, noting that while you can enforce retention at the database layer, application-level automation is still being developed.

Frequently Asked Questions About Mastra

Is Mastra AI open source?
Yes. The core framework is open source under the Apache License 2.0. Enterprise features (RBAC, SSO, ACL) require a commercial license for production use. The full source code is available on GitHub.

Who are the founders of Mastra AI?
Sam Bhagwat (CEO), Abhi Aiyer (CTO), and Shane Thomas (CPO). All three previously built Gatsby.js, the popular React-based static site generator that was acquired by Netlify.

Does Mastra support Python?
No. Mastra is TypeScript/JavaScript only by design. The team views this as a feature, not a limitation: it allows them to build the best possible experience for one ecosystem.

Can Mastra run locally?
Yes. Mastra runs as a standalone server on Node.js and on other JavaScript runtimes such as Bun and Deno, and can be embedded into existing servers via adapters. Mastra Cloud is optional.

What vector databases does Mastra support?
Pinecone, Qdrant, ChromaDB, pgvector (via PostgreSQL), Astra (DataStax), Couchbase, Cloudflare Vectorize, Convex, DuckDB, Elasticsearch, LanceDB, and others.

Is Mastra production-ready?
Yes. Mastra reached 1.0 stable in January 2026 and is used in production by companies including Replit, PayPal, Sanity, Brex, SoftBank, and Marsh McLennan.

How does Mastra handle rate limiting and model outages?
The model router supports automatic fallbacks, switching to backup providers when the primary provider is unavailable. Provider-specific configuration options let you set timeouts and retry behavior.
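The failover pattern itself is straightforward. Here is a generic, self-contained sketch of trying providers in order with a per-attempt timeout; this is illustrative TypeScript, not the model router's actual configuration surface, which handles fallbacks declaratively.

```typescript
// Generic provider failover: try each provider in order, moving to the next
// on error or timeout. Illustration only -- Mastra's model router configures
// fallbacks declaratively rather than via a helper like this.
type Provider = (prompt: string) => Promise<string>;

async function withTimeout<T>(p: Promise<T>, ms: number): Promise<T> {
  let timer!: ReturnType<typeof setTimeout>;
  const timeout = new Promise<never>((_, reject) => {
    timer = setTimeout(() => reject(new Error("timeout")), ms);
  });
  return Promise.race([p, timeout]).finally(() => clearTimeout(timer));
}

export async function completeWithFallback(
  providers: Provider[],
  prompt: string,
  timeoutMs = 10_000,
): Promise<string> {
  let lastError: unknown;
  for (const call of providers) {
    try {
      return await withTimeout(call(prompt), timeoutMs);
    } catch (err) {
      lastError = err; // fall through to the next provider
    }
  }
  throw lastError;
}
```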

Does Mastra work with Next.js?
Yes. Mastra provides detailed integration guides for Next.js, both as a separate backend service and embedded directly within a Next.js application. The @mastra/ai-sdk package bridges Mastra with AI SDK frontend components for React and Next.js UIs.

The Bottom Line: Should You Use Mastra?

Mastra is the most complete AI agent framework available for TypeScript developers today. Its growth metrics, enterprise adoption, and technical capabilities all support that claim. But whether you should use it depends on your specific situation.

Choose Mastra if you're building AI agents or agentic applications in TypeScript, you want an integrated framework rather than a collection of loosely coupled libraries, you need production features like observability and evaluation from day one, or you want the developer experience advantages of TypeScript-native tooling (type safety, IDE autocomplete, Zod schemas, familiar patterns).

Look elsewhere if your team is Python-first and you'd prefer to stay in that ecosystem (consider LangChain, PydanticAI, or CrewAI), your needs are limited to simple LLM interactions without agents or workflows (Vercel AI SDK alone may suffice), or you need a framework with years of accumulated community answers and tutorials (LangChain has a significant head start here).

Adopt carefully if you need enterprise RBAC in production (budget for commercial licensing), you're building applications with code execution capabilities (treat sandbox configuration as a first-class security concern), or you need predictable cloud platform costs (wait for published pricing before committing to Mastra Cloud).

The Gatsby founders have done this before. They built a developer tool that scaled to millions of users, attracted institutional investment, and eventually earned an acquisition. With Mastra, they're applying the same playbook to a bigger market: the infrastructure layer for AI-powered applications. The early results suggest the playbook is working.
