
Learn FastroAI

Build AI applications by understanding how they actually work.

This isn't a feature tour. We start from how language models work, build up to agents and tools, and end with production patterns. Each section builds on the previous, and by the end you'll understand not just how to use FastroAI, but why it's designed the way it is.

Choose Your Starting Point

New to LLMs? Start from the beginning. We cover tokenization, transformers, and what's actually happening when you call an API.

Already understand LLMs? Jump to Section 2 (Your First Agent) to start building.

Know PydanticAI? Skip to Section 3 (Letting Agents Do Things) for tools and what FastroAI adds.

Just want code? Check the Quick Start for the 2-minute setup.

What You'll Learn

Understand what you're building on

  • How language models actually work (not just API calls)
  • Tokenization and why it matters for costs
  • What LLMs do well and where they fail
  • The mechanics behind "AI agents"

Create real AI features

  • Agents with system prompts
  • Tools that let agents interact with the world
  • Structured output for type-safe responses
  • Cost tracking and error handling

Ship with confidence

  • Multi-step workflows with pipelines
  • Tracing and observability
  • RAG for when the model doesn't know enough

The Learning Path

  • 0. How Language Models Work

    Understanding what you're actually calling


    From early NLP to modern transformers. Tokenization, attention, and why "predict the next token" leads to surprisingly capable systems. You'll understand what happens between your API call and the response.

    Start here

  • 1. What LLMs Can and Can't Do

    Capabilities and limitations


    LLMs are good at specific things and terrible at others. Understanding this gap is what separates working applications from impressive demos that break in production.

    Continue

  • 2. Your First Agent

    From API calls to agents


    Creating a FastroAgent, writing system prompts that work, running queries, and understanding what comes back. The foundation for everything else. A minimal sketch appears after this list.

    Continue

  • 3. Letting Agents Do Things

    Tools and function calling


    An agent that can only talk isn't very useful. Tools let agents call APIs, query databases, and interact with the world. @safe_tool makes them production-safe. A tool sketch appears after this list.

    Continue

  • 4. Getting Data Back, Not Just Text

    Structured output


    Your agent returns text, but your application needs data. Using Pydantic models to get type-safe responses you can actually work with. A structured-output sketch appears after this list.

    Continue

  • 5. Tracking What You Spend

    Tokens cost money


    Remember tokens from Section 0? Each one costs money. Why microcents matter for billing, how to track usage across calls, and setting budgets before you get surprised.

    Continue

  • 6. When Things Go Wrong

    Error handling


    APIs time out. External services fail. The model returns something unexpected. Building AI applications that handle failures gracefully instead of crashing.

    Continue

  • 7. Multi-Step Workflows

    Pipelines


    Real tasks need multiple steps: classify, then research, then write. Pipelines handle dependencies, run independent steps in parallel, and aggregate costs across the workflow. The shape of that pattern is sketched after this list.

    Continue

  • 8. Finding Problems in Production

    Tracing and observability


    Something's slow. Something's expensive. But what? Tracing lets you see inside your AI calls and correlate them with the rest of your application.

    (Coming soon)

  • 9. Testing LLMs

    Evals, not just unit tests


    You can't assert on LLM outputs the way you test deterministic code. Unit tests, integration tests, and evals - different tools for different problems.

    (Coming soon)

  • 10. Retrieval Augmented Generation

    When the model doesn't know enough


    LLMs have knowledge cutoffs and don't know your data. RAG combines retrieval (finding relevant documents) with generation (answering based on them). Embeddings, vector search, and grounding responses in real data. A bare-bones sketch appears after this list.

    (Coming soon)
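
Code Previews

Before you pick a path, here is roughly what the code from the middle of the path looks like. These are minimal sketches, not documentation: aside from the names this page already mentions (FastroAgent, @safe_tool), the import paths, parameter names, and result attributes below are assumptions, and the sections themselves show the real API. First, the Section 2 agent: a model, a system prompt, and a query.

```python
# Sketch only: the import path, constructor arguments, and result
# attributes are assumptions, not the documented FastroAI API.
import asyncio

from fastroai import FastroAgent  # assumed import path

agent = FastroAgent(
    model="openai:gpt-4o-mini",                      # assumed model-id format
    system_prompt="You are a concise support assistant.",
)

async def main() -> None:
    result = await agent.run("How do I reset my password?")
    print(result.output)  # assumed: the model's reply
    print(result.usage)   # assumed: token counts and cost (see Section 5)

asyncio.run(main())
```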
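
Section 3 adds tools. @safe_tool is named on this page; its import path and the way tools are attached to the agent are assumptions in this sketch.

```python
# Sketch only: the imports and the tools= parameter are assumptions.
from fastroai import FastroAgent, safe_tool  # assumed imports

@safe_tool
async def get_order_status(order_id: str) -> str:
    """Look up an order (stubbed here instead of a real database call)."""
    return f"Order {order_id}: shipped"

agent = FastroAgent(
    model="openai:gpt-4o-mini",
    system_prompt="Answer order questions using the available tools.",
    tools=[get_order_status],  # assumed registration mechanism
)
```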
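
Section 4 swaps free text for structured output. The idea is the point here: declare a Pydantic model and get a validated instance back. The output_type parameter name is an assumption.

```python
# Sketch only: output_type and result.output are assumed names.
from pydantic import BaseModel

from fastroai import FastroAgent  # assumed import path

class TicketTriage(BaseModel):
    category: str
    priority: int
    needs_human: bool

agent = FastroAgent(
    model="openai:gpt-4o-mini",
    system_prompt="Triage incoming support tickets.",
    output_type=TicketTriage,  # assumed parameter name
)

async def triage(text: str) -> TicketTriage:
    result = await agent.run(text)
    return result.output  # a validated TicketTriage, not raw text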
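
Section 7's pipelines manage the dependency pattern sketched below: one step that must run first, independent steps that can run concurrently, and a final step that needs both. The sketch uses plain asyncio with stub functions rather than FastroAI's pipeline API.

```python
# The dependency/parallelism pattern that pipelines manage, shown with
# plain asyncio and stub steps (no FastroAI API used here).
import asyncio

async def classify(text: str) -> str:
    return "billing"  # stub: the real step would call an agent

async def research(topic: str) -> str:
    return f"notes about {topic}"  # stub

async def fetch_examples(topic: str) -> str:
    return f"examples for {topic}"  # stub

async def write(notes: str, examples: str) -> str:
    return f"draft using {notes} and {examples}"  # stub

async def run_workflow(text: str) -> str:
    topic = await classify(text)                 # must run first
    notes, examples = await asyncio.gather(      # independent: run in parallel
        research(topic), fetch_examples(topic)
    )
    return await write(notes, examples)          # depends on both results

print(asyncio.run(run_workflow("Why was I charged twice?")))
```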
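
Finally, the idea behind Section 10's RAG, reduced to a skeleton: embed the documents, embed the question, retrieve the closest match, and answer from it. The embed function below is a toy stand-in; real systems use an embedding model and a vector store.

```python
# RAG skeleton with a toy embedding function. Real systems use an
# embedding model and a vector database instead of this stand-in.
from math import sqrt

def embed(text: str) -> list[float]:
    # Stand-in for a real embedding model: a character-frequency vector.
    counts = [0.0] * 26
    for ch in text.lower():
        if "a" <= ch <= "z":
            counts[ord(ch) - ord("a")] += 1.0
    return counts

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

docs = [
    "Refunds are processed within 5 business days.",
    "Password resets are done from the account settings page.",
]
doc_vectors = [embed(d) for d in docs]  # in a real system: a vector store

def retrieve(question: str, k: int = 1) -> list[str]:
    q = embed(question)
    scored = sorted(zip(docs, doc_vectors), key=lambda dv: cosine(q, dv[1]), reverse=True)
    return [doc for doc, _ in scored[:k]]

question = "How do I reset my password?"
context = "\n".join(retrieve(question))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
print(prompt)  # generation step: send this grounded prompt to the model
```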


Alternative Learning Paths

By time available:

  • 1 hour: Sections 0-2 → Understand LLMs and build your first agent
  • Half day: Sections 0-6 → Build a complete AI feature with tools
  • Full day: All sections → Production-ready with pipelines, tracing, and RAG

By background:

  • New to AI: Start from Section 0; don't skip the foundations
  • Know ML, new to LLMs: Skim Section 0, focus on Sections 1-2
  • Know LLMs, new to PydanticAI: Start at Section 2
  • Know PydanticAI: Jump to Section 3 for tools and @safe_tool

By goal:

  • "I want to understand how LLMs work" → Sections 0-1
  • "I want to build an AI feature" → Sections 2-6
  • "I want to build complex workflows" → Sections 2-5, then 7
  • "I want to add RAG to my app" → Sections 0-2, then 10

Prerequisites

You should be comfortable with:

  • Python async/await syntax
  • Basic Pydantic models
  • Environment variables and API keys

You don't need prior experience with:

  • Machine learning or NLP (we start from the beginning)
  • PydanticAI (we cover what you need)
  • Production infrastructure (we build up to it)

Start Learning → Browse Guides →