Literal is an all-in-one observability, evaluation, and analytics platform for building production-grade LLM apps. It covers a wide range of use cases, from conversational applications to task automation. Literal works with any LLM framework, such as Chainlit and LangChain, and integrates with many LLM providers, including OpenAI, Anthropic, and Mistral. Literal is developed by the builders of Chainlit, the open-source conversational AI Python framework.
[Image: Literal platform — example thread]

Key Features

Observability

Monitor your LLM app (including steps, feedback, prompts, token consumption) in a few minutes with our SDKs. Literal provides a unified view of all your data in one place.
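Conceptually, this kind of instrumentation wraps each LLM call and records a step with its name, latency, and token usage. The sketch below is a minimal, self-contained illustration of that pattern — it is not the Literal SDK, and all class and field names here are hypothetical:

```python
import time
from dataclasses import dataclass, field

@dataclass
class StepRecord:
    # One monitored step: illustrative fields an observability SDK
    # typically captures for each LLM call.
    name: str
    duration_s: float
    prompt_tokens: int
    completion_tokens: int

@dataclass
class Monitor:
    # Collects step records; a real platform would ship these to a backend.
    steps: list = field(default_factory=list)

    def step(self, name):
        # Decorator that times the wrapped call and records its token usage.
        def decorator(fn):
            def wrapper(*args, **kwargs):
                start = time.perf_counter()
                result = fn(*args, **kwargs)
                usage = result.get("usage", {})
                self.steps.append(StepRecord(
                    name=name,
                    duration_s=time.perf_counter() - start,
                    prompt_tokens=usage.get("prompt_tokens", 0),
                    completion_tokens=usage.get("completion_tokens", 0),
                ))
                return result
            return wrapper
        return decorator

monitor = Monitor()

@monitor.step("generate-answer")
def generate_answer(question):
    # Stand-in for a real LLM call; returns a response with token usage.
    return {"text": f"Answer to: {question}",
            "usage": {"prompt_tokens": 12, "completion_tokens": 34}}

generate_answer("What is observability?")
print(monitor.steps[0].name)               # generate-answer
print(monitor.steps[0].completion_tokens)  # 34
```

In the real SDK, decorating or instrumenting your functions like this is what produces the steps, prompts, and token counts you then browse in the unified view.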

Dataset

Create datasets that mix production data and hand-written examples to run non-regression tests.

Online Evaluations

Evaluate your threads and runs in real time using off-the-shelf and custom evaluators.

Prompt Collaboration

Safely design, try, debug, version and deploy prompts directly from Literal.

Next up

Get Started

Install the Literal SDK and get your API key.
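For the Python SDK, setup typically looks like the following; the package name `literalai` and the `LITERAL_API_KEY` environment variable are assumptions here — verify both against the current Get Started guide:

```shell
# Install the Literal Python SDK (package name assumed: literalai)
pip install literalai

# Expose your API key to the SDK (variable name assumed)
export LITERAL_API_KEY="your-api-key"
```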

Learn more about integrations

Learn how to use Literal with OpenAI, LangChain, and Chainlit.
