Quickstart
Install, connect, benchmark, and evaluate in 5 minutes
Last updated: August 20, 2025
Category: getting-started
Let's say you've built a RAG pipeline:
```python
def my_rag(query: str) -> tuple[list[str], str]:
    # ...
    return retrieved_chunk_ids, answer
```
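For illustration, here is a toy version of such a function. The hard-coded `CHUNKS` store and the keyword-overlap scoring are purely hypothetical stand-ins for your real retriever and generator:

```python
# A toy stand-in for a real RAG pipeline: retrieves the best-matching
# chunk by keyword overlap and "generates" by returning its text.
CHUNKS = {
    "c1": "Vecta benchmarks are generated from your own documents.",
    "c2": "The dashboard lets you compare evaluation runs side by side.",
}

def my_rag(query: str) -> tuple[list[str], str]:
    words = set(query.lower().split())
    # Score each chunk by how many query words appear in its text.
    scored = sorted(
        CHUNKS.items(),
        key=lambda kv: len(words & set(kv[1].lower().split())),
        reverse=True,
    )
    retrieved_chunk_ids = [chunk_id for chunk_id, _ in scored[:1]]
    answer = CHUNKS[retrieved_chunk_ids[0]]
    return retrieved_chunk_ids, answer
```

Any function with this signature works: Vecta only needs the retrieved chunk IDs and the generated answer back.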
Install Vecta with `pip install vecta`, and set your API key in the `VECTA_API_KEY` environment variable.
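It can help to fail fast when the key is missing before constructing a client. A minimal sketch — `require_api_key` is a hypothetical helper, not part of Vecta's API:

```python
import os

def require_api_key(var: str = "VECTA_API_KEY") -> str:
    """Return the API key from the environment, or raise a helpful error."""
    key = os.environ.get(var)
    if not key:
        raise RuntimeError(f"Set {var} before creating a VectaAPIClient.")
    return key
```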
Now you can evaluate it in just a few lines:
```python
from vecta import VectaAPIClient

client = VectaAPIClient()

# You can also connect your vector DB for more granular results.
data_source = client.upload_local_files(
    file_paths=["knowledge_base.pdf", "faq.docx"],
)

# You can also load custom benchmarks, including from Hugging Face.
benchmark = client.create_benchmark(
    data_source_id=data_source["id"],
    questions_count=10,
)

results = client.evaluate_retrieval_and_generation(
    benchmark_id=benchmark["id"],
    retrieval_generation_function=my_rag,
)

print(f"Retriever F1: {results.document_level.f1_score}")
print(f"Response Accuracy: {results.generation_metrics.accuracy}")
print(f"Groundedness: {results.generation_metrics.groundedness}")
```
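For reference, document-level F1 is the harmonic mean of retrieval precision and recall over chunk IDs. A sketch of that computation — illustrative only, not necessarily Vecta's exact implementation:

```python
def document_f1(retrieved: set[str], relevant: set[str]) -> float:
    """Harmonic mean of retrieval precision and recall over chunk IDs."""
    if not retrieved or not relevant:
        return 0.0
    true_positives = len(retrieved & relevant)
    if true_positives == 0:
        return 0.0
    precision = true_positives / len(retrieved)  # fraction retrieved that is relevant
    recall = true_positives / len(relevant)      # fraction of relevant that was retrieved
    return 2 * precision * recall / (precision + recall)
```

For example, retrieving `{"c1", "c2"}` when only `c1` is relevant gives precision 0.5 and recall 1.0, hence F1 of 2/3.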
View Results
Results are automatically uploaded to the Evaluations dashboard where you can:
- Compare runs side by side
- Drill into per-question detailed results
- Export PDF certification reports
What's Next
- Accessor Syntax — Understand how schemas map your data
- Vector DB Connectors — Connect ChromaDB, Weaviate, pgvector, and more
- Experiments — Group evaluations and compare configurations