Join this practical, hands-on workshop to learn how to create a Retrieval-Augmented Generation (RAG) workflow using command-line tooling and modern database and vector search capabilities. You’ll begin with a simple AI chat interface and progressively enhance it into a full RAG system that enriches model outputs with your own domain knowledge. The session covers environment setup, connecting to a managed database service, preparing a knowledge base with embeddings, and orchestrating semantic search with generative models. By the end, you’ll be equipped to build efficient AI-driven applications that combine structured data, vector search, and large-language-model reasoning.

Learning Outcomes

  • Set Up Your Environment: Install the required command-line tools, configure access to a cloud-hosted database, and prepare your workspace for AI workflows.
  • Create an AI Chat Interface: Connect to the LLM provider of your choice and run conversational queries directly from the CLI.
  • Implement a RAG Pipeline: Generate embeddings, construct a searchable knowledge base, perform vector queries, and integrate retrieved context into model responses.
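The retrieve-then-generate flow in the last outcome can be sketched in a few lines. This is a toy illustration, not the workshop's actual code: the bag-of-words `embed` function stands in for a real embedding model, and the assembled prompt would be sent to an LLM provider rather than printed.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; a real pipeline would call an
    # embedding model via your LLM provider or database service.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    # Rank knowledge-base documents by similarity to the query
    # (a vector-search index would do this step at scale).
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    # Prepend the retrieved context so the model answers from it.
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return f"Answer using this context:\n{context}\n\nQuestion: {query}"

docs = [
    "Couchbase supports vector search over document fields.",
    "Embeddings map text to high-dimensional vectors.",
    "The CLI can run conversational queries against an LLM.",
]
print(build_prompt("How does vector search work?", docs))
```

In the workshop itself, the embedding and generation steps are handled by real models and the similarity search by the database's vector index; the control flow, however, is the same retrieve-augment-generate loop shown here.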
Mr Laurent Doguin

Director DevRel & Strategy, Couchbase