Vega for auto-remediation
Overview
About Vega
Vega is LaunchDarkly’s AI-powered agent. For general information about Vega, including eligibility and pricing, read Vega.
Vega for auto-remediation helps you understand, debug, and fix observability issues directly within LaunchDarkly. Vega works inside observability views like logs, traces, errors, and sessions, gathering relevant context, such as recent flag changes and alerts, to suggest improvements and next steps.
Vega is paired with observability alerts and can analyze observability data automatically when thresholds are breached. It’s like having an on-call engineer available at all times, ready to triage error spikes and remediate issues.
Vega features
Vega for auto-remediation includes two primary features that work together inside LaunchDarkly:
Vega agent
Vega agent is an AI debugging assistant embedded in observability views. It investigates logs, traces, errors, and alerts, summarizing what happened and identifying causes. If you connect Vega agent to GitHub, it can suggest or open fixes.
Vega agent has two modes:
Investigate mode
In this mode, Vega focuses on understanding and diagnosing issues. It summarizes observability data, highlights anomalies, and identifies likely root causes, often correlating them with recent flag or code changes. This is the default mode when you launch Vega from an observability resource.
If you’ve connected Vega to GitHub, you can specify which repositories this mode can access for additional context, such as recent commits or deployments. However, Vega will never propose or modify code when it’s in investigate mode. It only reads your code to enhance its analysis. You can further improve Vega’s analysis by adding repository instructions. To learn more, read Customizing Vega with repository instructions.
Fix mode
When fix mode is enabled, Vega moves beyond diagnosing to suggest potential solutions. In this mode, Vega analyzes the relevant code paths, generates candidate changes, and can open a pull request with proposed edits and explanations.
Fix mode requires a connected GitHub account. To learn how to set this up, read Connecting Vega to GitHub.
Vega’s code suggestions are always visible and reviewable before any changes are merged, so you maintain complete control over your code.
Where to use Vega agent
There are two primary areas where Vega agent is useful:
- In observability views
- In alerts
You can launch Vega directly from logs, traces, errors, and session replays. When it opens, it automatically gathers the surrounding context, including related spans, recent flag changes, and correlated events, to explain what happened and why.
You can also enable Vega in your Alert Configuration settings by toggling on Vega Investigations. Once you enable it, alerts will include a “Run Vega Investigation” option. When you choose this option, Vega analyzes the triggering query, correlated telemetry, and recent flag or code changes to summarize what changed.
Vega currently operates on alerts in investigate mode, focusing on diagnosing and explaining the issue rather than performing remediation.
Connecting GitHub
GitHub authentication is optional for investigate mode but required for fix mode. To learn how to connect GitHub to Vega, read Connecting Vega to GitHub.
Vega search assistant
Vega search assistant is a natural-language search tool that lets you ask questions about your observability data in plain language, such as “Which traces increased error rates?”
Vega automatically converts your question into a structured observability query, runs it across the appropriate datasets, and presents the results within LaunchDarkly’s observability dashboards. Vega is already prompted with the product’s query language, so you can focus on asking meaningful questions instead of learning syntax.
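As an illustration of this translation, a question like the one above might map to a structured filter over trace data. The syntax below is purely hypothetical and is not LaunchDarkly’s actual query language; the real query Vega generates depends on the product’s own filter syntax:

```
Question:  "Which traces increased error rates?"

Possible structured query (hypothetical syntax):
  dataset:  traces
  filter:   status = error
  group_by: span_name
  compare:  error_rate over last 1h vs previous 1h
  sort:     error_rate_delta desc
```

Either way, Vega handles the mapping from intent to query structure, and you can review the interpreted query alongside the results.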
Where to use Vega search assistant
You can trigger the search assistant from any search bar in logs, traces, errors, or sessions. Type a natural-language question and Vega translates it into the equivalent structured query.
After you submit the question, Vega displays both the interpreted query and the search results, so you can learn the syntax while reviewing the relevant metrics, traces, or logs.