Changelog

Follow new updates and improvements to Currents.

January 29th, 2026

New

Introducing Test Suite Size, a new analytics view that helps you track how your test suite evolves over time. Whether you're expanding coverage, refactoring tests, or debugging unexpected changes, this view gives you complete visibility into your test suite's composition.

  • Track unique test cases and spec files discovered across all CI runs

  • See exactly when and what tests were added or removed from your suite

  • Monitor test growth by branch, tag, or team with flexible grouping

  • Use rolling presence mode to smooth daily fluctuations from partial runs

  • Drill down into any period to see the specific tests that changed

  • Available now for all customers

Unlike run-level metrics that show tests per individual execution, Suite Size aggregates unique tests across all runs in each period. This means you get an accurate picture of your actual test coverage—not just what ran in a single CI job.

The expandable metrics table lets you click into any time period and see exactly which tests were newly detected or no longer running. Combined with grouping by Playwright tags, Git branches, or test groups, you can track coverage evolution across different parts of your codebase.

For teams with tests that don't run every day (scheduled tests, branch-specific tests, or selective execution), the Rolling Presence mode provides a stable view by counting tests seen within a configurable window—helping you distinguish real changes from normal variance.
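To illustrate the idea, here's a minimal sketch of a rolling-presence count (the data shape and function are hypothetical; Currents computes this for you on the server side):

// Hypothetical sketch: a test counts as "present" on a day if it was seen
// in any run within the trailing window (e.g. the last 7 days).
type DailyTests = Map<string, Set<string>>; // ISO date -> unique test IDs seen that day

function rollingPresence(days: string[], seen: DailyTests, window: number): Map<string, number> {
  const counts = new Map<string, number>();
  days.forEach((day, i) => {
    const present = new Set<string>();
    // Union the unique tests observed over the trailing window.
    for (let j = Math.max(0, i - window + 1); j <= i; j++) {
      for (const id of seen.get(days[j]) ?? []) present.add(id);
    }
    counts.set(day, present.size);
  });
  return counts;
}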

January 29th, 2026

Improved

You can now delegate action management without granting full admin access. The new Actions Admin role gives team members the ability to create, edit, and delete Actions.

This role is ideal for QA leads, DevOps engineers, or team members who need to configure test actions without full administrative privileges. Admins can invite new members as Actions Admins or update existing member roles from the Organization Settings page.

Read more about permissions and roles

January 27th, 2026

New

Notify Team Members and Slack Groups about failures and flaky results

Integrate Currents with Slack to receive real-time notifications and failure alerts directly in your team's channels.

The new Slack App helps teams stay informed about test results without leaving their workflow.

  • Threaded run notifications that keep channels organized

  • Individual test notifications for failed or flaky tests

  • Annotation-based mentions to notify the right people from test code

  • UI-configured mention rules to route failures to the right teams

  • Flexible filtering by git branch, tags, test title, and file path

  • Up to 10 destinations per project with independent configuration

  • Available now for all customers

Configure exactly when and how you're notified. Set up run notifications to alert on:

  • All runs

  • Only failures

  • Only flaky tests

Add individual test notifications to get detailed failure context with error messages and direct links to the Currents dashboard.

Configure Test or Run Slack notifications

Route failures to the right people or teams using mention rules. Define conditions like test file path or tags, then specify who gets notified—by email, Slack handle, or user group.

Mention test owners or teams when a test fails

You can also add notify:slack annotations directly in your test code to mention specific users when tests fail.

Use Playwright Annotations to configure Slack notifications
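As a rough sketch, such an annotation can be pushed from the test body, similar to other Currents annotations (the exact payload format here is an assumption; check the documentation above):

import { test } from "@playwright/test";

test("checkout flow", async ({ page }) => {
  // Assumed shape: a "notify:slack" annotation whose description names who
  // to mention; consult the Currents Slack docs for the exact format.
  test.info().annotations.push({
    type: "notify:slack",
    description: "@checkout-team",
  });

  await page.goto("/checkout");
  // ... rest of the test
});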

November 4th, 2025

Improved

We’ve upgraded our automated reporting system to give you more flexibility and control! Previously, each project supported only a single automated report. Now you can create and manage multiple reports within the same project — each with its own settings, schedule, and recipients.

✨ What’s New

  • Multiple Reports: Create as many automated reports as you need per project.

  • Custom Labels: Give each report a clear name — it will also appear as the email subject line.

  • Report Management: Easily enable, disable, or archive reports directly from the new Reports dashboard.

October 14th, 2025

New

Improved

We’ve been quiet over the last three months, focusing on delivering major improvements to our platform. This update covers a lot — use the list below to jump to what matters most.

  • Analytics Engine - faster, more granular, scalable, and supporting upcoming features

  • Test Explorer - new columns, new API, new controls

  • Error Explorer - making sense of Playwright errors

  • Custom Metrics - report any custom metric to Currents

  • Data Redaction for Trace Files

  • Jira Integration Improvements

  • MCP

  • UI changes

New Analytics Engine

We’ve completed a major refactor of our analytics engine. Over 800 million rows were migrated to a new ClickHouse cluster — smoothly and without major incidents.

Results:

  • Sub-100ms P90 latency

  • More detailed, real-time insights (see Test Explorer and Error Explorer)

This overhaul was essential to meet growing demand for fast, accurate, and granular data. The new infrastructure also unlocks several upcoming capabilities:

  • AI-powered insights across Currents Dashboard, GitHub PRs, CLI, Slack, email, and reports

  • Enhanced context for AI workflows — test history, health metrics, and trends

  • Flexible, expressive querying for fully customizable reports

  • Real-time health data for powering Currents Actions, MCP, and REST API integrations

Test Explorer Improvements

Test Explorer remains one of our most-used features, helping teams quickly identify unstable or flaky tests. With the recent release, it now goes beyond snapshots to reveal trends and behavioural changes over time. You can now easily see:

  • Which tests recently started to flake or fail

  • Whether your fixes actually reduced failures

  • Failure Rate Change — shows how failure rates shifted between two periods (e.g., last 30 days vs. previous 30 days)

  • Flakiness Rate Change — compares test instability over time

Read more in Test Explorer documentation.

In addition, we’ve introduced a new /tests HTTP REST API resource that allows you to fetch Test Explorer data programmatically.
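As an illustration, fetching it might look like this (the endpoint path, auth scheme, and query parameters below are assumptions; refer to the API documentation for the actual contract):

// Hypothetical sketch of calling the /tests resource.
const res = await fetch("https://api.currents.dev/v1/tests?projectId=YOUR_PROJECT_ID", {
  headers: { Authorization: `Bearer ${process.env.CURRENTS_API_KEY}` },
});
if (!res.ok) throw new Error(`Request failed: ${res.status}`);
const tests = await res.json(); // Test Explorer data, ready for scripting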

Making Sense of Playwright Errors with Error Explorer

CI pipelines generate a flood of test results, and meaningful insights are often buried in noise. Error Explorer classifies and enriches Playwright errors, transforming raw messages into structured, searchable data.

For example, consider exploring the top errors affecting your runs and seeing this:

Error: expect(locator).toBeVisible() failed

This message is too generic — it can be associated with different CSS selectors and doesn’t tell the full story, nor does it allow you to take action to actually remove the errors. Wouldn’t it be better to have more precise, application-aware data? For example:

  • What CSS selectors or components cause the most CI failures?

  • Are login issues due to hidden or disabled buttons?

  • How many failures are infrastructure-related vs. actual test issues?

When a test fails, the raw error message and stack trace are parsed by Currents’ Error Classification Engine. It enriches every captured test error with structured fields that turn unstructured text into searchable, tokenized data.

  • Target (e.g. CSS selector, URL)

  • Action (e.g. click, toBeVisible)

  • Category (e.g. Assertion, Timeout)

This structured data enables exploration from multiple angles — similar to using GROUP BY in SQL.
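To make this concrete, an enriched error record might look roughly like the following (field names mirror the list above; the exact schema is an assumption):

// Illustrative shape only; the actual schema may differ.
interface ClassifiedError {
  message: string;  // raw error message
  target?: string;  // e.g. a CSS selector or URL
  action?: string;  // e.g. "click", "toBeVisible"
  category: string; // e.g. "Assertion", "Timeout"
}

const example: ClassifiedError = {
  message: "Error: expect(locator).toBeVisible() failed",
  target: "[data-testid=fg-loader]",
  action: "toBeVisible",
  category: "Assertion",
};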

Now, instead of guessing what hides behind Error: expect(locator).toBeVisible() failed, you can see that:

  • Most tests failed because [data-testid=fg-loader] was not visible

  • [data-testid=table-tbody….] visibility is the second most common failure reason

Explore Playwright Error Explorer documentation.

Custom Test Metrics in Currents

Playwright has evolved beyond testing — it’s now a platform for performance, accessibility, and coverage insights. Our new analytics engine lets you track custom numeric metrics tied to your tests, such as:

  • Accessibility score

  • Lighthouse web vitals

  • Performance metrics

  • Resource usage metrics

You can attach numeric metric values to a test (as an annotation) to track them in Currents. See Playwright Custom Test Metrics documentation.

Here’s an example of sending resource usage metrics to Currents:


// Add annotations with custom metrics.
// Note: push() takes the annotation objects directly; wrapping them in an
// array would push a single nested array instead of two annotations.
test.info().annotations.push(
  {
    type: "currents:metric",
    description: JSON.stringify({
      name: "memory_usage",
      value: getMemoryUsage(), // placeholder helper returning a number
      type: "float",
      unit: "mb",
    }),
  },
  {
    type: "currents:metric",
    description: JSON.stringify({
      name: "cpu_usage",
      value: getCPUUsage(), // placeholder helper returning a number
      type: "float",
      unit: "%",
    }),
  }
);

These metrics are automatically processed and surfaced in Currents charts — complete with filtering, aggregation, and trend analysis.

Data Redaction for Trace Files

To strengthen privacy and compliance, we’ve launched automatic removal of secrets (tokens, passwords, API keys) from trace files. This feature is available to customers on the Enterprise plan.

Benefits:

  • Prevents accidental exposure of credentials

  • Simplifies GDPR and SOC 2 compliance

  • Reduces risk during debugging and collaboration

Read more about Data Redaction.

Improved Jira Integration

A couple of months ago we rolled out our native integration with Jira. The recent release includes several improvements based on customer feedback:

  • Create or link an existing Jira Issue when browsing test results

  • Show previously linked Jira Issues

  • Show Jira Issue details when browsing a test

  • Add comment / failure details (with Markdown support) to previously linked issues

See the updated Jira Integration documentation.

Currents MCP Server 2.0

The new analytics platform enabled us to release MCP Server 2.0. See the Currents MCP Server documentation.

You can now ask your agent questions about your testing suite:

  • What were the top flaky tests in the last 30 days — analyze CI test history to identify and resolve flakiness

  • What were the slowest specs this week — find the slowest files across recent CI executions


Moreover, you can use AI Agents + Currents MCP server to perform intelligent tasks:

  • Fix all my flaky tests — investigate patterns in your CI pipeline failures, create a plan, and suggest fixes

  • Fix this test — pull the last CI test results and suggest fixes

UI Improvements

Beyond the major infrastructure and API updates, we’ve refreshed parts of the dashboard UI — addressing many of the annoyances reported by our users.

We’re deeply grateful for everyone who shares feedback, challenges, and ideas. Your input directly shapes Currents’ roadmap — and the next wave of features is already underway.

September 19th, 2025

We’ve released MCP 2.0, a major upgrade that makes agents truly autonomous when working with your Currents data.

Previously, MCP could only check specific run data if you manually provided an ID, meaning you often had to stop and grab info from the dashboard. It also had no access to historical data. With MCP 2.0, that friction is gone. Agents can now discover, analyze, and debug your CI test runs on their own.

What’s New

  • Seven new tools for exploring projects, CI test runs, specs, and results.

  • Smarter workflows like finding flaky or slow tests, without needing run IDs.

  • Simplified responses and a cleaner, more modular codebase under the hood.

New Abilities for Agents

With MCP 2.0, you can now ask your agent things like:

  • 🔍 “Fix this test” → pull the last CI test results and suggest fixes.

  • 🐞 “What were the top flaky tests in the last 30 days?” → analyze CI test history to identify and resolve flakiness.

  • ⚡“What were the slowest specs this week?” → find the slowest files across recent CI executions.

  • 🧪 “Fix all my flaky tests” → investigate patterns in your CI pipeline, create a plan, and suggest fixes.

In short, MCP 2.0 makes your AI agent behave more like a senior engineer with direct access to your test suite, diagnosing issues, spotting patterns, and guiding you to faster fixes.


We’re just getting started with MCP. Your feedback will help us shape where it goes next, and we’re excited to keep making it smarter, faster, and more helpful with every release.

July 24th, 2025

New

We’re excited to launch our new Jira Cloud Integration, designed to bridge the gap between test insights and issue tracking.

  • Create Jira issues directly from test failures in Currents

  • Link existing issues and add comments with one click

  • Include test context automatically: error messages, stack traces and links

  • Autocomplete support for issue search (up to 50 results)

  • Supports multiple Jira installations

  • Rich formatting with Markdown

  • This integration is available now for all customers.

With this update, teams using Currents and Jira can now easily trace failed tests by creating or linking Jira issues directly from the test execution view. No more switching tabs, manually copying logs, or losing context — everything happens where the failure is already visible.

Once enabled, a Jira icon will appear on every test execution page.

Clicking it opens a lightweight dialog that allows engineers to create new Jira tickets or comment on existing ones with rich test metadata (like stack traces, titles, durations, and links to the full run).

The integration is built for speed and minimal friction, with support for multiple Jira installations, project and issue type selectors, and auto-filled context to reduce redundant typing. It’s a simple but powerful step toward tighter alignment between engineering and quality.

P.S.

Give us a ⭐️!

July 2nd, 2025

Improved

Currents Actions Engine now supports two new features:

  • applying an action based on the error message

  • dynamically adding tags using the new Add Tag action.

Requires @currents/playwright@1.15.0

Why it’s useful

  • Conditionally apply an action based on the error message

    Quarantine tests when the error message (e.g. Error: net::ERR_CONNECTION_RESET) matches a pattern.

  • Automate your triage process

    Automatically tag tests that fail with specific error patterns — like timeouts, network errors, or assertion mismatches.

  • Create focused reports

    Filter and analyze test recordings by tag. This helps you understand how often a certain error occurs, where it’s happening, and how it’s trending over time.

  • No code changes needed

    Tags are applied via the action engine based on matching rules, so you don’t need to modify your tests or test runner config.

May 29th, 2025

Improved

We’ve made a few upgrades to the Currents Integrations

  • Revamped the Integrations screen layout to make room for upcoming additions

  • Added a Label field to help you distinguish between multiple integration items with the same destination

  • Improved handling of misconfigured GitHub and GitLab integrations for smoother troubleshooting

More enhancements coming soon!

May 23rd, 2025

Improved

Error Explorer highlights the errors impacting your CI executions, making it easier to identify patterns and root causes.

The updated Error Explorer view introduces the following improvements:

Error Distribution Timeline

The Error Explorer displays a timeline chart showing the daily distribution of error messages over the selected period. You can switch the metric and adjust how many top errors to display. Top errors are ranked by their total value for the selected metric across the period.

Error Metrics

  • Occurrences - how often an error has caused a failure or a flaky behaviour during the selected period, based on the active filters. This metric counts all occurrences — including repeated ones from the same test. For example, if the error message TimeoutError: Navigation timeout of 30000 ms exceeded occurred 5 times in test A and 10 more times across other tests, the total count will be 15.

  • Affected Tests - how many unique tests were impacted by this error during the selected period. Each test is counted once, even if the error occurred multiple times in it. For example, if the same error appears 5 times in one test and 3 times in another, the Affected Tests count will be 2.

  • Affected Branches - how many unique branches encountered this error during the selected period. Each branch is counted once, even if the error occurred multiple times on it. For example, if the error shows up 10 times on main and 3 times on feature/login, the Affected Branches count will be 2.
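Conceptually, the three metrics are simple aggregations over individual error occurrences. Here's a sketch with a hypothetical event shape (Currents computes these for you; the record format is an assumption):

// Hypothetical event shape: one record per error occurrence.
interface ErrorEvent {
  message: string;
  testId: string;
  branch: string;
}

function errorMetrics(events: ErrorEvent[], message: string) {
  const matching = events.filter((e) => e.message === message);
  return {
    occurrences: matching.length, // every occurrence counts, repeats included
    affectedTests: new Set(matching.map((e) => e.testId)).size, // unique tests
    affectedBranches: new Set(matching.map((e) => e.branch)).size, // unique branches
  };
}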

Individual Error Message Details

Clicking an error item reveals more details about that specific error:

  • Affected Tests – A list of tests impacted by the error, sorted by how often it occurred. These are tests that failed or flaked due to this error. Click a test title to view its details in the Test Explorer.

  • Recent Executions – A chronological list of the most recent test runs affected by this error. Clicking a test title reveals its details in the Test Explorer, and clicking the commit message opens the specific execution details.

  • Affected Branches – A list of branches where this error occurred, sorted by occurrence count.