Every change to the SEER API and dashboard, documented plainly. No surprises. We never make breaking changes within a major version.
The prescribe engine got a major upgrade this release. Recommendations are now more specific, ranked by financial impact, and include an estimated time to fix. We also added native CrewAI support and a new cost.spike webhook event.
cost.spike webhook event. SEER now fires a webhook when the cost per call for a context doubles compared to its 7-day average. Configure the sensitivity multiplier in your dashboard.

Session-level quality scores. When calls share a seer_session_id, SEER now calculates and returns a quality score for the full session, not just individual calls. Useful for evaluating multi-turn conversations end to end.

Reliability patch. Fixed two edge cases in the anomaly detector that were causing missed alerts under specific load patterns, and resolved a memory issue in the Node.js SDK that appeared after many hours of continuous use.
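The cost.spike trigger can be sketched as a simple threshold rule. This is an illustrative sketch only: the function name, the 2.0 default, and the multiplier parameter are assumptions standing in for the dashboard's sensitivity setting, not the SDK's actual internals.

```python
def is_cost_spike(cost_per_call, seven_day_avg, multiplier=2.0):
    """Illustrative sketch of the cost.spike rule: fire when the current
    cost per call reaches `multiplier` times the 7-day average.
    The 2.0 default mirrors "doubles"; the dashboard setting adjusts it."""
    if seven_day_avg <= 0:
        return False  # no baseline yet, nothing to compare against
    return cost_per_call >= multiplier * seven_day_avg

# A call costing $0.012 against a $0.005 baseline trips the default threshold.
```

Raising the multiplier makes the alert less sensitive; a context with no history never fires, since there is no baseline to double.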
Fixed a bug that caused the groundedness and task_completion keys to swap in the breakdown object. The scores themselves were always correct. Now the labels match.

This release focuses on teams building with multiple AI providers. SEER now supports AWS Bedrock and Azure OpenAI natively, and introduces cross-provider comparison reporting so you can see which provider is performing better for each context.
Wrap your Bedrock client with seer.observe() the same way you would with OpenAI. Claude, Titan, Llama, and Mistral on Bedrock are all supported.

Context-scoped API keys. A key scoped to "customer-support" cannot record or read data from any other context. Useful for giving different teams access to only their own data.

recommend() response shape change. The impact field now returns an object with impact.description (plain English) and impact.estimated_saving_usd (number, monthly estimate) instead of a plain string. Update your code if you read this field programmatically.

The evaluate() method graduates from beta. This release also adds native LangChain and LlamaIndex integrations, plus a new dashboard view that lets you track quality trends over time without writing any queries.
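Code that read the impact field as a string can be made tolerant of both the old and new shapes during migration. A minimal sketch, assuming each recommended action arrives as a plain dict from the SDK's JSON layer; the helper name is ours, not part of the SDK:

```python
def read_impact(action):
    """Return (description, estimated_monthly_saving_usd) for one
    recommended action, accepting both the old and new impact shapes."""
    impact = action["impact"]
    if isinstance(impact, str):
        # Old shape: impact was a plain string, with no dollar estimate.
        return impact, None
    # New shape: an object with a description and a monthly USD estimate.
    return impact["description"], impact["estimated_saving_usd"]
```

Once every caller goes through a helper like this, removing the legacy branch later is a one-line change.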
Import the SeerCallbackHandler from the Python SDK and pass it to any LangChain chain. SEER observes every LLM call in the chain without you wrapping each one individually.

Webhooks ship in this release, along with the recommend() method and the first version of the SEER dashboard. This is the release where SEER became a complete product rather than just an API.
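The callback-handler pattern behind SeerCallbackHandler can be simulated with stand-in classes. Both classes below are hypothetical illustrations, not the real SEER or LangChain APIs: the point is that one handler registered on the chain sees every LLM call without per-call wrapping.

```python
class RecordingHandler:
    """Stand-in for a SEER-style callback handler: the chain notifies it
    of every LLM call, so no call needs to be wrapped individually."""
    def __init__(self):
        self.calls = []

    def on_llm_call(self, prompt, response):
        self.calls.append({"prompt": prompt, "response": response})


class FakeChain:
    """Stand-in for a chain that invokes an LLM several times and
    notifies each registered callback handler on every call."""
    def __init__(self, callbacks):
        self.callbacks = callbacks

    def run(self, prompts):
        outputs = []
        for prompt in prompts:
            response = prompt.upper()  # pretend LLM call
            for handler in self.callbacks:
                handler.on_llm_call(prompt, response)
            outputs.append(response)
        return outputs


handler = RecordingHandler()
chain = FakeChain(callbacks=[handler])
chain.run(["hello", "world"])
# The single handler observed both calls without wrapping either one.
```

The design choice is the same one LangChain-style frameworks make: observability attaches once, at the chain boundary, instead of at every call site.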
Call seer.recommend(context="...") and get back a ranked list of specific actions to improve that context. Each action includes a plain-English explanation and an estimated impact.

Initial public release. SEER launches with observe(), evaluate() in beta, trace(), OpenAI and Anthropic support, and the core intelligence layer. This is the one API that replaces four to six separate monitoring tools.