
feat(core): Migrate Vercel AI event processor to span streaming #46661


Triggered via pull request: April 30, 2026 11:06
Status: Failure
Total duration: 16m 31s
Artifacts: 39

build.yml

on: pull_request
Get CI Metadata / Get Metadata (5s)
Check lockfile (3m 23s)
Check file formatting (41s)
Check PR branches (3s)
Build Lambda layer (1m 7s)
Matrix: job_node_core_integration_tests
Matrix: job_node_integration_tests
Matrix: job_node_unit_tests
Matrix: job_remix_integration_tests
Lint (50s)
Circular Dependency Check (1m 32s)
Browser Unit Tests (4m 19s)
Bun Unit Tests (1m 8s)
Deno Unit Tests (1m 2s)
Cloudflare Integration Tests (1m 49s)
Bun Integration Tests (42s)
Check for faulty .d.ts files (39s)
Matrix: job_browser_loader_tests
Matrix: job_browser_playwright_tests
Matrix: job_optional_e2e_tests
Upload Artifacts (0s)
Matrix: job_e2e_tests
All required jobs passed or were skipped (6s)

Annotations

56 errors, 14 warnings, and 233 notices
Check file formatting: Process completed with exit code 1.
Lint: Process completed with exit code 1.
typescript-eslint(prefer-optional-chain): packages/core/src/tracing/vercel-ai/index.ts#L352
Prefer using an optional chain expression instead, as it's more concise and easier to read.
eslint(no-console): packages/core/src/tracing/vercel-ai/index.ts#L451
Unexpected console statement.
eslint(no-console): packages/core/src/tracing/vercel-ai/index.ts#L449
Unexpected console statement.
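Both lint findings are mechanical fixes. A minimal sketch of the two patterns, using a hypothetical `SpanEvent` shape and `getOp`/`debugLog` helpers for illustration (the actual code at the flagged lines in `packages/core/src/tracing/vercel-ai/index.ts` is not shown here):

```typescript
// Hypothetical shape for illustration only.
interface SpanEvent {
  attributes?: { op?: string };
}

// prefer-optional-chain: `event.attributes && event.attributes.op`
// becomes a single optional-chain expression.
function getOp(event: SpanEvent): string | undefined {
  return event.attributes?.op;
}

// no-console: route debug output through an injectable sink
// instead of calling console.log directly.
const debugMessages: string[] = [];
function debugLog(message: string): void {
  debugMessages.push(message); // a real SDK would forward this to its logger
}
```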
[chromium] › tests/server/getServerSideProps.test.ts:4:5 › Should report an error event for errors thrown in getServerSideProps: ../../_temp/test-application/tests/server/getServerSideProps.test.ts#L21
1) [chromium] › tests/server/getServerSideProps.test.ts:4:5 › Should report an error event for errors thrown in getServerSideProps

Error: expect(received).toMatchObject(expected)
- Expected  - 5
+ Received  + 3

@@ -9,25 +9,23 @@
    "exception": Object {
      "values": Array [
        Object {
          "mechanism": Object {
            "handled": false,
-           "type": "auto.function.nextjs.wrapped",
+           "type": "auto.browser.browserapierrors.setTimeout",
          },
          "stacktrace": Object {
            "frames": ArrayContaining [],
          },
          "type": "Error",
          "value": "getServerSideProps Error",
        },
      ],
    },
-   "platform": "node",
+   "platform": "javascript",
    "request": Object {
-     "cookies": Any<Object>,
      "headers": Any<Object>,
-     "method": "GET",
      "url": StringMatching /^http.*\/error-getServerSideProps/,
    },
    "timestamp": Any<Number>,
-   "transaction": "getServerSideProps (/[param]/error-getServerSideProps)",
+   "transaction": "/[param]/error-getServerSideProps",
  }

  19 |   });
  20 |
> 21 |   expect(await errorEventPromise).toMatchObject({
     |                                   ^
  22 |     contexts: {
  23 |       trace: { span_id: expect.stringMatching(/[a-f0-9]{16}/), trace_id: expect.stringMatching(/[a-f0-9]{32}/) },
  24 |     },
    at /home/runner/work/_temp/test-application/tests/server/getServerSideProps.test.ts:21:35
[chromium] › tests/trpc-mutation.test.ts:4:1 › should create transaction with trpc input for mutation: ../../_temp/test-application/tests/trpc-mutation.test.ts#L0
2) [chromium] › tests/trpc-mutation.test.ts:4:1 › should create transaction with trpc input for mutation Test timeout of 30000ms exceeded.
[chromium] › tests/trpc-error.test.ts:4:1 › should capture error with trpc context: ../../_temp/test-application/tests/trpc-error.test.ts#L0
1) [chromium] › tests/trpc-error.test.ts:4:1 › should capture error with trpc context ──────────── Test timeout of 30000ms exceeded.
[chromium] › tests/orpc-error.test.ts:4:1 › should capture server-side orpc error: ../../_temp/test-application/tests/orpc-error.test.ts#L12
1) [chromium] › tests/orpc-error.test.ts:4:1 › should capture server-side orpc error ───────────── Error: page.goto: Test timeout of 30000ms exceeded. Call log: - navigating to "http://localhost:3030/", waiting until "load" 10 | }); 11 | > 12 | await page.goto('/'); | ^ 13 | await page.waitForTimeout(500); 14 | await page.getByRole('link', { name: 'Error' }).click(); 15 | at /home/runner/work/_temp/test-application/tests/orpc-error.test.ts:12:14
[chromium] › tests/orpc-error.test.ts:4:1 › should capture server-side orpc error: ../../_temp/test-application/tests/orpc-error.test.ts#L0
1) [chromium] › tests/orpc-error.test.ts:4:1 › should capture server-side orpc error ───────────── Test timeout of 30000ms exceeded.
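The trpc/orpc failures above all hit Playwright's default 30s per-test limit, which suggests the dev server never came up rather than a slow assertion. If the limit itself were the problem, it can be raised in the project config; a sketch (the 60s value is illustrative, not taken from this repo's config):

```typescript
// playwright.config.ts: raise the per-test timeout above the 30s default.
import { defineConfig } from '@playwright/test';

export default defineConfig({
  timeout: 60_000, // illustrative; the failures here exceeded the 30s default
});
```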
Node (24) Integration Tests: Process completed with exit code 1.
suites/tracing/vercelai/span-streaming-v6/test.ts > Vercel AI integration (streaming, V6) > esm/cjs > cjs > normalizes error status in streaming mode: dev-packages/node-integration-tests/utils/assertions.ts#L100
AssertionError: expected { version: 2, …(1) } to match object { items: ArrayContaining{…} } (1 matching property omitted from actual) - Expected + Received { - "items": ArrayContaining [ - ObjectContaining { - "attributes": ObjectContaining { - "gen_ai.operation.name": ObjectContaining { - "value": "invoke_agent", + "items": [ + { + "attributes": { + "gen_ai.operation.name": { + "type": "string", + "value": "generate_content", + }, + "gen_ai.request.model": { + "type": "string", + "value": "mock-model-id", + }, + "gen_ai.response.id": { + "type": "string", + "value": "aitxt-EFIjIJKfQ7DDvw6Lfu2eyHI4", }, - "gen_ai.request.model": ObjectContaining { + "gen_ai.response.model": { + "type": "string", "value": "mock-model-id", }, - "gen_ai.usage.input_tokens": ObjectContaining { + "gen_ai.system": { + "type": "string", + "value": "mock-provider", + }, + "gen_ai.usage.input_tokens": { + "type": "integer", "value": 15, }, - "gen_ai.usage.output_tokens": ObjectContaining { + "gen_ai.usage.output_tokens": { + "type": "integer", "value": 25, }, - "gen_ai.usage.total_tokens": ObjectContaining { + "gen_ai.usage.total_tokens": { + "type": "integer", "value": 40, }, - "sentry.op": ObjectContaining { - "value": "gen_ai.invoke_agent", + "sentry.op": { + "type": "string", + "value": "gen_ai.generate_content", }, - "sentry.origin": ObjectContaining { + "sentry.origin": { + "type": "string", "value": "auto.vercelai.otel", + }, + "sentry.release": { + "type": "string", + "value": "1.0", + }, + "sentry.sdk.name": { + "type": "string", + "value": "sentry.javascript.node", + }, + "sentry.sdk.version": { + "type": "string", + "value": "10.51.0", + }, + "sentry.segment.id": { + "type": "string", + "value": "ca6cdfce30605f3a", + }, + "sentry.segment.name": { + "type": "string", + "value": "main", + }, + "vercel.ai.model.provider": { + "type": "string", + "value": "mock-provider", + }, + "vercel.ai.operationId": { + "type": "string", + "value": "ai.generateText.doGenerate", + }, + 
"vercel.ai.pipeline.name": { + "type": "string", + "value": "generateText.doGenerate", }, + "vercel.ai.request.headers.user-agent": { + "type": "string", + "value": "ai/6.0.170", }, - "name": "invoke_agent", - "status": "error", + "vercel.ai.response.finishReason": { + "type": "string", + "value": "tool-calls", }, - ObjectContaining { - "attributes": ObjectContaining { - "gen_ai.operation.name": ObjectContaining { - "value": "generate_content", + "vercel.ai.response.id": { + "type": "string", + "value": "aitxt-EFIjIJKfQ7DDvw6Lfu2eyHI4", }, - "gen_ai.request.model": ObjectContaining { + "vercel.ai.response.model": { + "type": "string", "value": "mock-model-id", }, - "gen_ai.usage.input_tokens": ObjectContaining { - "value": 15, + "vercel.ai.response.timestamp": { + "type": "string", + "value": "2026-04-30T11:13:52.501Z", }, - "gen_ai.usage.output_tokens": ObjectContaining { - "value": 25, + "vercel.ai.settings.maxRetries": { + "type": "integer", + "value": 2, }, - "gen_ai.usage.total_tokens":
suites/tracing/vercelai/span-streaming-v6/test.ts > Vercel AI integration (streaming, V6) > esm/cjs > esm > normalizes error status in streaming mode: dev-packages/node-integration-tests/utils/assertions.ts#L100
AssertionError: expected { version: 2, …(1) } to match object { items: ArrayContaining{…} } (1 matching property omitted from actual) - Expected + Received { - "items": ArrayContaining [ - ObjectContaining { - "attributes": ObjectContaining { - "gen_ai.operation.name": ObjectContaining { - "value": "invoke_agent", + "items": [ + { + "attributes": { + "gen_ai.operation.name": { + "type": "string", + "value": "generate_content", + }, + "gen_ai.request.model": { + "type": "string", + "value": "mock-model-id", + }, + "gen_ai.response.id": { + "type": "string", + "value": "aitxt-J5C1cJ5SF3pJSAhfASv2sArq", }, - "gen_ai.request.model": ObjectContaining { + "gen_ai.response.model": { + "type": "string", "value": "mock-model-id", }, - "gen_ai.usage.input_tokens": ObjectContaining { + "gen_ai.system": { + "type": "string", + "value": "mock-provider", + }, + "gen_ai.usage.input_tokens": { + "type": "integer", "value": 15, }, - "gen_ai.usage.output_tokens": ObjectContaining { + "gen_ai.usage.output_tokens": { + "type": "integer", "value": 25, }, - "gen_ai.usage.total_tokens": ObjectContaining { + "gen_ai.usage.total_tokens": { + "type": "integer", "value": 40, }, - "sentry.op": ObjectContaining { - "value": "gen_ai.invoke_agent", + "sentry.op": { + "type": "string", + "value": "gen_ai.generate_content", }, - "sentry.origin": ObjectContaining { + "sentry.origin": { + "type": "string", "value": "auto.vercelai.otel", + }, + "sentry.release": { + "type": "string", + "value": "1.0", + }, + "sentry.sdk.name": { + "type": "string", + "value": "sentry.javascript.node", + }, + "sentry.sdk.version": { + "type": "string", + "value": "10.51.0", + }, + "sentry.segment.id": { + "type": "string", + "value": "79a9423dd9b6ca24", + }, + "sentry.segment.name": { + "type": "string", + "value": "main", + }, + "vercel.ai.model.provider": { + "type": "string", + "value": "mock-provider", + }, + "vercel.ai.operationId": { + "type": "string", + "value": "ai.generateText.doGenerate", + }, + 
"vercel.ai.pipeline.name": { + "type": "string", + "value": "generateText.doGenerate", }, + "vercel.ai.request.headers.user-agent": { + "type": "string", + "value": "ai/6.0.170", }, - "name": "invoke_agent", - "status": "error", + "vercel.ai.response.finishReason": { + "type": "string", + "value": "tool-calls", }, - ObjectContaining { - "attributes": ObjectContaining { - "gen_ai.operation.name": ObjectContaining { - "value": "generate_content", + "vercel.ai.response.id": { + "type": "string", + "value": "aitxt-J5C1cJ5SF3pJSAhfASv2sArq", }, - "gen_ai.request.model": ObjectContaining { + "vercel.ai.response.model": { + "type": "string", "value": "mock-model-id", }, - "gen_ai.usage.input_tokens": ObjectContaining { - "value": 15, + "vercel.ai.response.timestamp": { + "type": "string", + "value": "2026-04-30T11:13:51.923Z", }, - "gen_ai.usage.output_tokens": ObjectContaining { - "value": 25, + "vercel.ai.settings.maxRetries": { + "type": "integer", + "value": 2, }, - "gen_ai.usage.total_tokens":
suites/tracing/vercelai/span-streaming-v6/test.ts > Vercel AI integration (streaming, V6) > esm/cjs > cjs > creates ai related spans in streaming mode with sendDefaultPii: true: dev-packages/node-integration-tests/utils/assertions.ts#L100
AssertionError: expected { version: 2, …(1) } to match object { items: ArrayContaining{…} } (1 matching property omitted from actual) - Expected + Received { - "items": ArrayContaining [ - ObjectContaining { - "attributes": ObjectContaining { - "gen_ai.input.messages": ObjectContaining { + "items": [ + { + "attributes": { + "gen_ai.input.messages": { + "type": "string", + "value": "[{\"role\":\"user\",\"content\":[{\"type\":\"text\",\"text\":\"Where is the first span?\"}]}]", + }, + "gen_ai.operation.name": { + "type": "string", + "value": "generate_content", + }, + "gen_ai.output.messages": { + "type": "string", + "value": "[{\"role\":\"assistant\",\"parts\":[{\"type\":\"text\",\"content\":\"First span here!\"}],\"finish_reason\":\"stop\"}]", + }, + "gen_ai.request.model": { + "type": "string", + "value": "mock-model-id", + }, + "gen_ai.response.id": { + "type": "string", + "value": "aitxt-gWhHRArLREkJrVdzhssPrRaX", + }, + "gen_ai.response.model": { + "type": "string", + "value": "mock-model-id", + }, + "gen_ai.system": { + "type": "string", + "value": "mock-provider", + }, + "gen_ai.usage.input_tokens": { + "type": "integer", + "value": 10, + }, + "gen_ai.usage.output_tokens": { + "type": "integer", + "value": 20, + }, + "gen_ai.usage.total_tokens": { + "type": "integer", + "value": 30, + }, + "sentry.op": { + "type": "string", + "value": "gen_ai.generate_content", + }, + "sentry.origin": { + "type": "string", + "value": "auto.vercelai.otel", + }, + "sentry.release": { + "type": "string", + "value": "1.0", + }, + "sentry.sdk.name": { + "type": "string", + "value": "sentry.javascript.node", + }, + "sentry.sdk.version": { + "type": "string", + "value": "10.51.0", + }, + "sentry.sdk_meta.gen_ai.input.messages.original_length": { + "type": "integer", + "value": 1, + }, + "sentry.segment.id": { + "type": "string", + "value": "eee26d9ef6d099f0", + }, + "sentry.segment.name": { + "type": "string", + "value": "main", + }, + "vercel.ai.model.provider": { + "type": 
"string", + "value": "mock-provider", + }, + "vercel.ai.operationId": { + "type": "string", + "value": "ai.generateText.doGenerate", + }, + "vercel.ai.pipeline.name": { + "type": "string", + "value": "generateText.doGenerate", + }, + "vercel.ai.request.headers.user-agent": { + "type": "string", + "value": "ai/6.0.170", + }, + "vercel.ai.response.finishReason": { + "type": "string", + "value": "stop", + }, + "vercel.ai.response.id": { + "type": "string", + "value": "aitxt-gWhHRArLREkJrVdzhssPrRaX", + }, + "vercel.ai.response.model": { + "type": "string", + "value": "mock-model-id", + }, + "vercel.ai.response.timestamp": { + "type": "string", + "value": "2026-04-30T11:13:49.712Z", + }, + "vercel.ai.settings.maxRetries": { + "type": "integer", + "value": 2, + }, + "vercel.ai.streaming": { + "type": "boolean", + "value": false, + }, + "vercel.ai.usage.inputTokenDetails.noCacheTokens": { + "type": "integer", + "value": 10, + }, + "vercel.ai.usage.totalTokens": { + "type": "integer", +
suites/tracing/vercelai/span-streaming-v6/test.ts > Vercel AI integration (streaming, V6) > esm/cjs > esm > creates ai related spans in streaming mode with sendDefaultPii: true: dev-packages/node-integration-tests/utils/assertions.ts#L100
AssertionError: expected { version: 2, …(1) } to match object { items: ArrayContaining{…} } (1 matching property omitted from actual) - Expected + Received { - "items": ArrayContaining [ - ObjectContaining { - "attributes": ObjectContaining { - "gen_ai.input.messages": ObjectContaining { + "items": [ + { + "attributes": { + "gen_ai.input.messages": { + "type": "string", + "value": "[{\"role\":\"user\",\"content\":[{\"type\":\"text\",\"text\":\"Where is the first span?\"}]}]", + }, + "gen_ai.operation.name": { + "type": "string", + "value": "generate_content", + }, + "gen_ai.output.messages": { + "type": "string", + "value": "[{\"role\":\"assistant\",\"parts\":[{\"type\":\"text\",\"content\":\"First span here!\"}],\"finish_reason\":\"stop\"}]", + }, + "gen_ai.request.model": { + "type": "string", + "value": "mock-model-id", + }, + "gen_ai.response.id": { + "type": "string", + "value": "aitxt-XWzxDNGM7g9a0xCAMCVFyKYr", + }, + "gen_ai.response.model": { + "type": "string", + "value": "mock-model-id", + }, + "gen_ai.system": { + "type": "string", + "value": "mock-provider", + }, + "gen_ai.usage.input_tokens": { + "type": "integer", + "value": 10, + }, + "gen_ai.usage.output_tokens": { + "type": "integer", + "value": 20, + }, + "gen_ai.usage.total_tokens": { + "type": "integer", + "value": 30, + }, + "sentry.op": { + "type": "string", + "value": "gen_ai.generate_content", + }, + "sentry.origin": { + "type": "string", + "value": "auto.vercelai.otel", + }, + "sentry.release": { + "type": "string", + "value": "1.0", + }, + "sentry.sdk.name": { + "type": "string", + "value": "sentry.javascript.node", + }, + "sentry.sdk.version": { + "type": "string", + "value": "10.51.0", + }, + "sentry.sdk_meta.gen_ai.input.messages.original_length": { + "type": "integer", + "value": 1, + }, + "sentry.segment.id": { + "type": "string", + "value": "32195fb7d28cabc3", + }, + "sentry.segment.name": { + "type": "string", + "value": "main", + }, + "vercel.ai.model.provider": { + "type": 
"string", + "value": "mock-provider", + }, + "vercel.ai.operationId": { + "type": "string", + "value": "ai.generateText.doGenerate", + }, + "vercel.ai.pipeline.name": { + "type": "string", + "value": "generateText.doGenerate", + }, + "vercel.ai.request.headers.user-agent": { + "type": "string", + "value": "ai/6.0.170", + }, + "vercel.ai.response.finishReason": { + "type": "string", + "value": "stop", + }, + "vercel.ai.response.id": { + "type": "string", + "value": "aitxt-XWzxDNGM7g9a0xCAMCVFyKYr", + }, + "vercel.ai.response.model": { + "type": "string", + "value": "mock-model-id", + }, + "vercel.ai.response.timestamp": { + "type": "string", + "value": "2026-04-30T11:13:49.142Z", + }, + "vercel.ai.settings.maxRetries": { + "type": "integer", + "value": 2, + }, + "vercel.ai.streaming": { + "type": "boolean", + "value": false, + }, + "vercel.ai.usage.inputTokenDetails.noCacheTokens": { + "type": "integer", + "value": 10, + }, + "vercel.ai.usage.totalTokens": { + "type": "integer", +
suites/tracing/vercelai/span-streaming-v4/test.ts > Vercel AI integration (streaming) > esm/cjs > cjs > normalizes error status in streaming mode: dev-packages/node-integration-tests/suites/tracing/vercelai/span-streaming-v4/test.ts#L305
Error: Test timed out in 15000ms. If this is a long-running test, pass a timeout value as the last argument or configure it globally with "testTimeout". ❯ suites/tracing/vercelai/span-streaming-v4/test.ts:305:5 ❯ utils/runner.ts:312:7
suites/tracing/vercelai/span-streaming-v4/test.ts > Vercel AI integration (streaming) > esm/cjs > esm > normalizes error status in streaming mode: dev-packages/node-integration-tests/suites/tracing/vercelai/span-streaming-v4/test.ts#L305
Error: Test timed out in 15000ms. If this is a long-running test, pass a timeout value as the last argument or configure it globally with "testTimeout". ❯ suites/tracing/vercelai/span-streaming-v4/test.ts:305:5 ❯ utils/runner.ts:302:7
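As the vitest message suggests, the 15s limit can be raised either per test (timeout as the last argument) or globally via `testTimeout`. A sketch of both forms (the 30s value is illustrative, not taken from this repo's config):

```typescript
// vitest.config.ts: global override for slow streaming tests.
import { defineConfig } from 'vitest/config';

export default defineConfig({
  test: {
    testTimeout: 30_000, // illustrative; default here was 15s
  },
});

// Or per test, as the error message recommends:
// test('normalizes error status in streaming mode', async () => {
//   /* ... */
// }, 30_000);
```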
suites/tracing/vercelai/span-streaming-v4/test.ts > Vercel AI integration (streaming) > esm/cjs > cjs > creates ai related spans in streaming mode with sendDefaultPii: true: dev-packages/node-integration-tests/utils/assertions.ts#L100
AssertionError: expected { version: 2, …(1) } to match object { items: ArrayContaining{…} } (1 matching property omitted from actual) - Expected + Received { - "items": ArrayContaining [ - ObjectContaining { - "attributes": ObjectContaining { - "gen_ai.input.messages": ObjectContaining { + "items": [ + { + "attributes": { + "gen_ai.input.messages": { + "type": "string", + "value": "[{\"role\":\"user\",\"content\":[{\"type\":\"text\",\"text\":\"Where is the first span?\"}]}]", + }, + "gen_ai.operation.name": { + "type": "string", + "value": "generate_content", + }, + "gen_ai.output.messages": { + "type": "string", + "value": "[{\"role\":\"assistant\",\"parts\":[{\"type\":\"text\",\"content\":\"First span here!\"}],\"finish_reason\":\"stop\"}]", + }, + "gen_ai.request.model": { + "type": "string", + "value": "mock-model-id", + }, + "gen_ai.response.id": { + "type": "string", + "value": "aitxt-XqpNSR6MmGC57YnnQego24Cg", + }, + "gen_ai.response.model": { + "type": "string", + "value": "mock-model-id", + }, + "gen_ai.system": { + "type": "string", + "value": "mock-provider", + }, + "gen_ai.usage.input_tokens": { + "type": "integer", + "value": 10, + }, + "gen_ai.usage.output_tokens": { + "type": "integer", + "value": 20, + }, + "gen_ai.usage.total_tokens": { + "type": "integer", + "value": 30, + }, + "sentry.op": { + "type": "string", + "value": "gen_ai.generate_content", + }, + "sentry.origin": { + "type": "string", + "value": "auto.vercelai.otel", + }, + "sentry.release": { + "type": "string", + "value": "1.0", + }, + "sentry.sdk.name": { + "type": "string", + "value": "sentry.javascript.node", + }, + "sentry.sdk.version": { + "type": "string", + "value": "10.51.0", + }, + "sentry.sdk_meta.gen_ai.input.messages.original_length": { + "type": "integer", + "value": 1, + }, + "sentry.segment.id": { + "type": "string", + "value": "848f2b8c09ea2d0e", + }, + "sentry.segment.name": { + "type": "string", + "value": "main", + }, + "vercel.ai.model.provider": { + "type": 
"string", + "value": "mock-provider", + }, + "vercel.ai.operationId": { + "type": "string", + "value": "ai.generateText.doGenerate", + }, + "vercel.ai.pipeline.name": { + "type": "string", + "value": "generateText.doGenerate", + }, + "vercel.ai.prompt.format": { + "type": "string", + "value": "prompt", + }, + "vercel.ai.response.finishReason": { + "type": "string", + "value": "stop", + }, + "vercel.ai.response.id": { + "type": "string", + "value": "aitxt-XqpNSR6MmGC57YnnQego24Cg", + }, + "vercel.ai.response.model": { + "type": "string", + "value": "mock-model-id", + }, + "vercel.ai.response.timestamp": { + "type": "string", + "value": "2026-04-30T11:13:37.460Z", + }, + "vercel.ai.settings.maxRetries": { + "type": "integer", + "value": 2, + }, + "vercel.ai.streaming": { + "type": "boolean", + "value": false, + }, + }, + "end_timestamp": 1777547617.4614668, + "is_segment": false, + "name": "generate_content mock-model-id", + "parent_span_id": "b03854f3afad2a4b", + "span_id": "4dd3ce43c1eaec9f", + "sta
suites/tracing/vercelai/span-streaming-v4/test.ts > Vercel AI integration (streaming) > esm/cjs > esm > creates ai related spans in streaming mode with sendDefaultPii: true: dev-packages/node-integration-tests/utils/assertions.ts#L100
AssertionError: expected { version: 2, …(1) } to match object { items: ArrayContaining{…} } (1 matching property omitted from actual) - Expected + Received { - "items": ArrayContaining [ - ObjectContaining { - "attributes": ObjectContaining { - "gen_ai.input.messages": ObjectContaining { + "items": [ + { + "attributes": { + "gen_ai.input.messages": { + "type": "string", + "value": "[{\"role\":\"user\",\"content\":[{\"type\":\"text\",\"text\":\"Where is the first span?\"}]}]", + }, + "gen_ai.operation.name": { + "type": "string", + "value": "generate_content", + }, + "gen_ai.output.messages": { + "type": "string", + "value": "[{\"role\":\"assistant\",\"parts\":[{\"type\":\"text\",\"content\":\"First span here!\"}],\"finish_reason\":\"stop\"}]", + }, + "gen_ai.request.model": { + "type": "string", + "value": "mock-model-id", + }, + "gen_ai.response.id": { + "type": "string", + "value": "aitxt-s71BdMoSAKaWCiFq4XtefBd2", + }, + "gen_ai.response.model": { + "type": "string", + "value": "mock-model-id", + }, + "gen_ai.system": { + "type": "string", + "value": "mock-provider", + }, + "gen_ai.usage.input_tokens": { + "type": "integer", + "value": 10, + }, + "gen_ai.usage.output_tokens": { + "type": "integer", + "value": 20, + }, + "gen_ai.usage.total_tokens": { + "type": "integer", + "value": 30, + }, + "sentry.op": { + "type": "string", + "value": "gen_ai.generate_content", + }, + "sentry.origin": { + "type": "string", + "value": "auto.vercelai.otel", + }, + "sentry.release": { + "type": "string", + "value": "1.0", + }, + "sentry.sdk.name": { + "type": "string", + "value": "sentry.javascript.node", + }, + "sentry.sdk.version": { + "type": "string", + "value": "10.51.0", + }, + "sentry.sdk_meta.gen_ai.input.messages.original_length": { + "type": "integer", + "value": 1, + }, + "sentry.segment.id": { + "type": "string", + "value": "787e232d52ccd9f4", + }, + "sentry.segment.name": { + "type": "string", + "value": "main", + }, + "vercel.ai.model.provider": { + "type": 
"string", + "value": "mock-provider", + }, + "vercel.ai.operationId": { + "type": "string", + "value": "ai.generateText.doGenerate", + }, + "vercel.ai.pipeline.name": { + "type": "string", + "value": "generateText.doGenerate", + }, + "vercel.ai.prompt.format": { + "type": "string", + "value": "prompt", + }, + "vercel.ai.response.finishReason": { + "type": "string", + "value": "stop", + }, + "vercel.ai.response.id": { + "type": "string", + "value": "aitxt-s71BdMoSAKaWCiFq4XtefBd2", + }, + "vercel.ai.response.model": { + "type": "string", + "value": "mock-model-id", + }, + "vercel.ai.response.timestamp": { + "type": "string", + "value": "2026-04-30T11:13:36.745Z", + }, + "vercel.ai.settings.maxRetries": { + "type": "integer", + "value": 2, + }, + "vercel.ai.streaming": { + "type": "boolean", + "value": false, + }, + }, + "end_timestamp": 1777547616.7451239, + "is_segment": false, + "name": "generate_content mock-model-id", + "parent_span_id": "5550aee83da396ca", + "span_id": "28395906deada2c0", + "sta
Node (18) Integration Tests: Process completed with exit code 1.
suites/tracing/vercelai/span-streaming-v6/test.ts > Vercel AI integration (streaming, V6) > esm/cjs > cjs > normalizes error status in streaming mode: dev-packages/node-integration-tests/utils/assertions.ts#L100
AssertionError: expected { version: 2, …(1) } to match object { items: ArrayContaining{…} } (1 matching property omitted from actual) - Expected + Received { - "items": ArrayContaining [ - ObjectContaining { - "attributes": ObjectContaining { - "gen_ai.operation.name": ObjectContaining { - "value": "invoke_agent", + "items": [ + { + "attributes": { + "gen_ai.operation.name": { + "type": "string", + "value": "generate_content", + }, + "gen_ai.request.model": { + "type": "string", + "value": "mock-model-id", + }, + "gen_ai.response.id": { + "type": "string", + "value": "aitxt-L8M5P3uoXv2G3wX7d4SJWuFo", }, - "gen_ai.request.model": ObjectContaining { + "gen_ai.response.model": { + "type": "string", "value": "mock-model-id", }, - "gen_ai.usage.input_tokens": ObjectContaining { + "gen_ai.system": { + "type": "string", + "value": "mock-provider", + }, + "gen_ai.usage.input_tokens": { + "type": "integer", "value": 15, }, - "gen_ai.usage.output_tokens": ObjectContaining { + "gen_ai.usage.output_tokens": { + "type": "integer", "value": 25, }, - "gen_ai.usage.total_tokens": ObjectContaining { + "gen_ai.usage.total_tokens": { + "type": "integer", "value": 40, }, - "sentry.op": ObjectContaining { - "value": "gen_ai.invoke_agent", + "sentry.op": { + "type": "string", + "value": "gen_ai.generate_content", }, - "sentry.origin": ObjectContaining { + "sentry.origin": { + "type": "string", "value": "auto.vercelai.otel", + }, + "sentry.release": { + "type": "string", + "value": "1.0", + }, + "sentry.sdk.name": { + "type": "string", + "value": "sentry.javascript.node", + }, + "sentry.sdk.version": { + "type": "string", + "value": "10.51.0", + }, + "sentry.segment.id": { + "type": "string", + "value": "3eb07f5d4db783ca", + }, + "sentry.segment.name": { + "type": "string", + "value": "main", + }, + "vercel.ai.model.provider": { + "type": "string", + "value": "mock-provider", + }, + "vercel.ai.operationId": { + "type": "string", + "value": "ai.generateText.doGenerate", + }, + 
"vercel.ai.pipeline.name": { + "type": "string", + "value": "generateText.doGenerate", }, + "vercel.ai.request.headers.user-agent": { + "type": "string", + "value": "ai/6.0.170", }, - "name": "invoke_agent", - "status": "error", + "vercel.ai.response.finishReason": { + "type": "string", + "value": "tool-calls", }, - ObjectContaining { - "attributes": ObjectContaining { - "gen_ai.operation.name": ObjectContaining { - "value": "generate_content", + "vercel.ai.response.id": { + "type": "string", + "value": "aitxt-L8M5P3uoXv2G3wX7d4SJWuFo", }, - "gen_ai.request.model": ObjectContaining { + "vercel.ai.response.model": { + "type": "string", "value": "mock-model-id", }, - "gen_ai.usage.input_tokens": ObjectContaining { - "value": 15, + "vercel.ai.response.timestamp": { + "type": "string", + "value": "2026-04-30T11:14:03.970Z", }, - "gen_ai.usage.output_tokens": ObjectContaining { - "value": 25, + "vercel.ai.settings.maxRetries": { + "type": "integer", + "value": 2, }, - "gen_ai.usage.total_tokens":
suites/tracing/vercelai/span-streaming-v6/test.ts > Vercel AI integration (streaming, V6) > esm/cjs > esm > normalizes error status in streaming mode: dev-packages/node-integration-tests/utils/assertions.ts#L100
AssertionError: expected { version: 2, …(1) } to match object { items: ArrayContaining{…} } (1 matching property omitted from actual) - Expected + Received { - "items": ArrayContaining [ - ObjectContaining { - "attributes": ObjectContaining { - "gen_ai.operation.name": ObjectContaining { - "value": "invoke_agent", + "items": [ + { + "attributes": { + "gen_ai.operation.name": { + "type": "string", + "value": "generate_content", + }, + "gen_ai.request.model": { + "type": "string", + "value": "mock-model-id", + }, + "gen_ai.response.id": { + "type": "string", + "value": "aitxt-c4yfpIKGGwUDzqee7TJWzH0Z", }, - "gen_ai.request.model": ObjectContaining { + "gen_ai.response.model": { + "type": "string", "value": "mock-model-id", }, - "gen_ai.usage.input_tokens": ObjectContaining { + "gen_ai.system": { + "type": "string", + "value": "mock-provider", + }, + "gen_ai.usage.input_tokens": { + "type": "integer", "value": 15, }, - "gen_ai.usage.output_tokens": ObjectContaining { + "gen_ai.usage.output_tokens": { + "type": "integer", "value": 25, }, - "gen_ai.usage.total_tokens": ObjectContaining { + "gen_ai.usage.total_tokens": { + "type": "integer", "value": 40, }, - "sentry.op": ObjectContaining { - "value": "gen_ai.invoke_agent", + "sentry.op": { + "type": "string", + "value": "gen_ai.generate_content", }, - "sentry.origin": ObjectContaining { + "sentry.origin": { + "type": "string", "value": "auto.vercelai.otel", + }, + "sentry.release": { + "type": "string", + "value": "1.0", + }, + "sentry.sdk.name": { + "type": "string", + "value": "sentry.javascript.node", + }, + "sentry.sdk.version": { + "type": "string", + "value": "10.51.0", + }, + "sentry.segment.id": { + "type": "string", + "value": "da6db2932ca126b6", + }, + "sentry.segment.name": { + "type": "string", + "value": "main", + }, + "vercel.ai.model.provider": { + "type": "string", + "value": "mock-provider", + }, + "vercel.ai.operationId": { + "type": "string", + "value": "ai.generateText.doGenerate", + }, + 
"vercel.ai.pipeline.name": { + "type": "string", + "value": "generateText.doGenerate", }, + "vercel.ai.request.headers.user-agent": { + "type": "string", + "value": "ai/6.0.170", }, - "name": "invoke_agent", - "status": "error", + "vercel.ai.response.finishReason": { + "type": "string", + "value": "tool-calls", }, - ObjectContaining { - "attributes": ObjectContaining { - "gen_ai.operation.name": ObjectContaining { - "value": "generate_content", + "vercel.ai.response.id": { + "type": "string", + "value": "aitxt-c4yfpIKGGwUDzqee7TJWzH0Z", }, - "gen_ai.request.model": ObjectContaining { + "vercel.ai.response.model": { + "type": "string", "value": "mock-model-id", }, - "gen_ai.usage.input_tokens": ObjectContaining { - "value": 15, + "vercel.ai.response.timestamp": { + "type": "string", + "value": "2026-04-30T11:14:03.309Z", }, - "gen_ai.usage.output_tokens": ObjectContaining { - "value": 25, + "vercel.ai.settings.maxRetries": { + "type": "integer", + "value": 2, }, - "gen_ai.usage.total_tokens":
suites/tracing/vercelai/span-streaming-v6/test.ts > Vercel AI integration (streaming, V6) > esm/cjs > cjs > creates ai related spans in streaming mode with sendDefaultPii: true: dev-packages/node-integration-tests/utils/assertions.ts#L100
AssertionError: expected { version: 2, …(1) } to match object { items: ArrayContaining{…} } (1 matching property omitted from actual) - Expected + Received { - "items": ArrayContaining [ - ObjectContaining { - "attributes": ObjectContaining { - "gen_ai.input.messages": ObjectContaining { + "items": [ + { + "attributes": { + "gen_ai.input.messages": { + "type": "string", + "value": "[{\"role\":\"user\",\"content\":[{\"type\":\"text\",\"text\":\"Where is the first span?\"}]}]", + }, + "gen_ai.operation.name": { + "type": "string", + "value": "generate_content", + }, + "gen_ai.output.messages": { + "type": "string", + "value": "[{\"role\":\"assistant\",\"parts\":[{\"type\":\"text\",\"content\":\"First span here!\"}],\"finish_reason\":\"stop\"}]", + }, + "gen_ai.request.model": { + "type": "string", + "value": "mock-model-id", + }, + "gen_ai.response.id": { + "type": "string", + "value": "aitxt-iMCPsVXDx786WTTHvwlU8YsA", + }, + "gen_ai.response.model": { + "type": "string", + "value": "mock-model-id", + }, + "gen_ai.system": { + "type": "string", + "value": "mock-provider", + }, + "gen_ai.usage.input_tokens": { + "type": "integer", + "value": 10, + }, + "gen_ai.usage.output_tokens": { + "type": "integer", + "value": 20, + }, + "gen_ai.usage.total_tokens": { + "type": "integer", + "value": 30, + }, + "sentry.op": { + "type": "string", + "value": "gen_ai.generate_content", + }, + "sentry.origin": { + "type": "string", + "value": "auto.vercelai.otel", + }, + "sentry.release": { + "type": "string", + "value": "1.0", + }, + "sentry.sdk.name": { + "type": "string", + "value": "sentry.javascript.node", + }, + "sentry.sdk.version": { + "type": "string", + "value": "10.51.0", + }, + "sentry.sdk_meta.gen_ai.input.messages.original_length": { + "type": "integer", + "value": 1, + }, + "sentry.segment.id": { + "type": "string", + "value": "21224479f567a0fd", + }, + "sentry.segment.name": { + "type": "string", + "value": "main", + }, + "vercel.ai.model.provider": { + "type": 
"string", + "value": "mock-provider", + }, + "vercel.ai.operationId": { + "type": "string", + "value": "ai.generateText.doGenerate", + }, + "vercel.ai.pipeline.name": { + "type": "string", + "value": "generateText.doGenerate", + }, + "vercel.ai.request.headers.user-agent": { + "type": "string", + "value": "ai/6.0.170", + }, + "vercel.ai.response.finishReason": { + "type": "string", + "value": "stop", + }, + "vercel.ai.response.id": { + "type": "string", + "value": "aitxt-iMCPsVXDx786WTTHvwlU8YsA", + }, + "vercel.ai.response.model": { + "type": "string", + "value": "mock-model-id", + }, + "vercel.ai.response.timestamp": { + "type": "string", + "value": "2026-04-30T11:14:00.633Z", + }, + "vercel.ai.settings.maxRetries": { + "type": "integer", + "value": 2, + }, + "vercel.ai.streaming": { + "type": "boolean", + "value": false, + }, + "vercel.ai.usage.inputTokenDetails.noCacheTokens": { + "type": "integer", + "value": 10, + }, + "vercel.ai.usage.totalTokens": { + "type": "integer", +
suites/tracing/vercelai/span-streaming-v6/test.ts > Vercel AI integration (streaming, V6) > esm/cjs > esm > creates ai related spans in streaming mode with sendDefaultPii: true: dev-packages/node-integration-tests/utils/assertions.ts#L100
AssertionError: expected { version: 2, …(1) } to match object { items: ArrayContaining{…} } (1 matching property omitted from actual) - Expected + Received { - "items": ArrayContaining [ - ObjectContaining { - "attributes": ObjectContaining { - "gen_ai.input.messages": ObjectContaining { + "items": [ + { + "attributes": { + "gen_ai.input.messages": { + "type": "string", + "value": "[{\"role\":\"user\",\"content\":[{\"type\":\"text\",\"text\":\"Where is the first span?\"}]}]", + }, + "gen_ai.operation.name": { + "type": "string", + "value": "generate_content", + }, + "gen_ai.output.messages": { + "type": "string", + "value": "[{\"role\":\"assistant\",\"parts\":[{\"type\":\"text\",\"content\":\"First span here!\"}],\"finish_reason\":\"stop\"}]", + }, + "gen_ai.request.model": { + "type": "string", + "value": "mock-model-id", + }, + "gen_ai.response.id": { + "type": "string", + "value": "aitxt-4T55El9coWjrKZa0fAOSLZdK", + }, + "gen_ai.response.model": { + "type": "string", + "value": "mock-model-id", + }, + "gen_ai.system": { + "type": "string", + "value": "mock-provider", + }, + "gen_ai.usage.input_tokens": { + "type": "integer", + "value": 10, + }, + "gen_ai.usage.output_tokens": { + "type": "integer", + "value": 20, + }, + "gen_ai.usage.total_tokens": { + "type": "integer", + "value": 30, + }, + "sentry.op": { + "type": "string", + "value": "gen_ai.generate_content", + }, + "sentry.origin": { + "type": "string", + "value": "auto.vercelai.otel", + }, + "sentry.release": { + "type": "string", + "value": "1.0", + }, + "sentry.sdk.name": { + "type": "string", + "value": "sentry.javascript.node", + }, + "sentry.sdk.version": { + "type": "string", + "value": "10.51.0", + }, + "sentry.sdk_meta.gen_ai.input.messages.original_length": { + "type": "integer", + "value": 1, + }, + "sentry.segment.id": { + "type": "string", + "value": "ef8b87994bdb38d1", + }, + "sentry.segment.name": { + "type": "string", + "value": "main", + }, + "vercel.ai.model.provider": { + "type": 
"string", + "value": "mock-provider", + }, + "vercel.ai.operationId": { + "type": "string", + "value": "ai.generateText.doGenerate", + }, + "vercel.ai.pipeline.name": { + "type": "string", + "value": "generateText.doGenerate", + }, + "vercel.ai.request.headers.user-agent": { + "type": "string", + "value": "ai/6.0.170", + }, + "vercel.ai.response.finishReason": { + "type": "string", + "value": "stop", + }, + "vercel.ai.response.id": { + "type": "string", + "value": "aitxt-4T55El9coWjrKZa0fAOSLZdK", + }, + "vercel.ai.response.model": { + "type": "string", + "value": "mock-model-id", + }, + "vercel.ai.response.timestamp": { + "type": "string", + "value": "2026-04-30T11:13:59.770Z", + }, + "vercel.ai.settings.maxRetries": { + "type": "integer", + "value": 2, + }, + "vercel.ai.streaming": { + "type": "boolean", + "value": false, + }, + "vercel.ai.usage.inputTokenDetails.noCacheTokens": { + "type": "integer", + "value": 10, + }, + "vercel.ai.usage.totalTokens": { + "type": "integer", +
suites/tracing/vercelai/span-streaming-v4/test.ts > Vercel AI integration (streaming) > esm/cjs > cjs > normalizes error status in streaming mode: dev-packages/node-integration-tests/suites/tracing/vercelai/span-streaming-v4/test.ts#L305
Error: Test timed out in 15000ms. If this is a long-running test, pass a timeout value as the last argument or configure it globally with "testTimeout". ❯ suites/tracing/vercelai/span-streaming-v4/test.ts:305:5 ❯ utils/runner.ts:312:7
suites/tracing/vercelai/span-streaming-v4/test.ts > Vercel AI integration (streaming) > esm/cjs > esm > normalizes error status in streaming mode: dev-packages/node-integration-tests/suites/tracing/vercelai/span-streaming-v4/test.ts#L305
Error: Test timed out in 15000ms. If this is a long-running test, pass a timeout value as the last argument or configure it globally with "testTimeout". ❯ suites/tracing/vercelai/span-streaming-v4/test.ts:305:5 ❯ utils/runner.ts:302:7
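The timeout failures above come from vitest's default 15 s per-test limit. As the message says, the limit can be raised per test or globally; a minimal sketch (the 30 s value is illustrative, not a project convention):

```typescript
// Per-test: vitest's `test` accepts a timeout in milliseconds as the last argument.
import { test } from 'vitest';

test('normalizes error status in streaming mode', async () => {
  // ... long-running streaming assertions ...
}, 30_000);

// Global: set `testTimeout` in vitest.config.ts instead.
import { defineConfig } from 'vitest/config';

export default defineConfig({
  test: { testTimeout: 30_000 },
});
```

Raising the timeout only helps if the streamed spans eventually arrive; if the span envelope is never flushed, the test hangs until whatever limit is set.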
suites/tracing/vercelai/span-streaming-v4/test.ts > Vercel AI integration (streaming) > esm/cjs > cjs > creates ai related spans in streaming mode with sendDefaultPii: true: dev-packages/node-integration-tests/utils/assertions.ts#L100
AssertionError: expected { version: 2, …(1) } to match object { items: ArrayContaining{…} } (1 matching property omitted from actual) - Expected + Received { - "items": ArrayContaining [ - ObjectContaining { - "attributes": ObjectContaining { - "gen_ai.input.messages": ObjectContaining { + "items": [ + { + "attributes": { + "gen_ai.input.messages": { + "type": "string", + "value": "[{\"role\":\"user\",\"content\":[{\"type\":\"text\",\"text\":\"Where is the first span?\"}]}]", + }, + "gen_ai.operation.name": { + "type": "string", + "value": "generate_content", + }, + "gen_ai.output.messages": { + "type": "string", + "value": "[{\"role\":\"assistant\",\"parts\":[{\"type\":\"text\",\"content\":\"First span here!\"}],\"finish_reason\":\"stop\"}]", + }, + "gen_ai.request.model": { + "type": "string", + "value": "mock-model-id", + }, + "gen_ai.response.id": { + "type": "string", + "value": "aitxt-B6uZSsjZuvgDay1FizFNOj07", + }, + "gen_ai.response.model": { + "type": "string", + "value": "mock-model-id", + }, + "gen_ai.system": { + "type": "string", + "value": "mock-provider", + }, + "gen_ai.usage.input_tokens": { + "type": "integer", + "value": 10, + }, + "gen_ai.usage.output_tokens": { + "type": "integer", + "value": 20, + }, + "gen_ai.usage.total_tokens": { + "type": "integer", + "value": 30, + }, + "sentry.op": { + "type": "string", + "value": "gen_ai.generate_content", + }, + "sentry.origin": { + "type": "string", + "value": "auto.vercelai.otel", + }, + "sentry.release": { + "type": "string", + "value": "1.0", + }, + "sentry.sdk.name": { + "type": "string", + "value": "sentry.javascript.node", + }, + "sentry.sdk.version": { + "type": "string", + "value": "10.51.0", + }, + "sentry.sdk_meta.gen_ai.input.messages.original_length": { + "type": "integer", + "value": 1, + }, + "sentry.segment.id": { + "type": "string", + "value": "b116c467af23d4ea", + }, + "sentry.segment.name": { + "type": "string", + "value": "main", + }, + "vercel.ai.model.provider": { + "type": 
"string", + "value": "mock-provider", + }, + "vercel.ai.operationId": { + "type": "string", + "value": "ai.generateText.doGenerate", + }, + "vercel.ai.pipeline.name": { + "type": "string", + "value": "generateText.doGenerate", + }, + "vercel.ai.prompt.format": { + "type": "string", + "value": "prompt", + }, + "vercel.ai.response.finishReason": { + "type": "string", + "value": "stop", + }, + "vercel.ai.response.id": { + "type": "string", + "value": "aitxt-B6uZSsjZuvgDay1FizFNOj07", + }, + "vercel.ai.response.model": { + "type": "string", + "value": "mock-model-id", + }, + "vercel.ai.response.timestamp": { + "type": "string", + "value": "2026-04-30T11:13:54.851Z", + }, + "vercel.ai.settings.maxRetries": { + "type": "integer", + "value": 2, + }, + "vercel.ai.streaming": { + "type": "boolean", + "value": false, + }, + }, + "end_timestamp": 1777547634.8513653, + "is_segment": false, + "name": "generate_content mock-model-id", + "parent_span_id": "7635c11ae9dd06ba", + "span_id": "81f79641514af937", + "sta
suites/tracing/vercelai/span-streaming-v4/test.ts > Vercel AI integration (streaming) > esm/cjs > esm > creates ai related spans in streaming mode with sendDefaultPii: true: dev-packages/node-integration-tests/utils/assertions.ts#L100
AssertionError: expected { version: 2, …(1) } to match object { items: ArrayContaining{…} } (1 matching property omitted from actual) - Expected + Received { - "items": ArrayContaining [ - ObjectContaining { - "attributes": ObjectContaining { - "gen_ai.input.messages": ObjectContaining { + "items": [ + { + "attributes": { + "gen_ai.input.messages": { + "type": "string", + "value": "[{\"role\":\"user\",\"content\":[{\"type\":\"text\",\"text\":\"Where is the first span?\"}]}]", + }, + "gen_ai.operation.name": { + "type": "string", + "value": "generate_content", + }, + "gen_ai.output.messages": { + "type": "string", + "value": "[{\"role\":\"assistant\",\"parts\":[{\"type\":\"text\",\"content\":\"First span here!\"}],\"finish_reason\":\"stop\"}]", + }, + "gen_ai.request.model": { + "type": "string", + "value": "mock-model-id", + }, + "gen_ai.response.id": { + "type": "string", + "value": "aitxt-4DVeceXpaamTwYYSKOtxg980", + }, + "gen_ai.response.model": { + "type": "string", + "value": "mock-model-id", + }, + "gen_ai.system": { + "type": "string", + "value": "mock-provider", + }, + "gen_ai.usage.input_tokens": { + "type": "integer", + "value": 10, + }, + "gen_ai.usage.output_tokens": { + "type": "integer", + "value": 20, + }, + "gen_ai.usage.total_tokens": { + "type": "integer", + "value": 30, + }, + "sentry.op": { + "type": "string", + "value": "gen_ai.generate_content", + }, + "sentry.origin": { + "type": "string", + "value": "auto.vercelai.otel", + }, + "sentry.release": { + "type": "string", + "value": "1.0", + }, + "sentry.sdk.name": { + "type": "string", + "value": "sentry.javascript.node", + }, + "sentry.sdk.version": { + "type": "string", + "value": "10.51.0", + }, + "sentry.sdk_meta.gen_ai.input.messages.original_length": { + "type": "integer", + "value": 1, + }, + "sentry.segment.id": { + "type": "string", + "value": "40f336a41c0cf709", + }, + "sentry.segment.name": { + "type": "string", + "value": "main", + }, + "vercel.ai.model.provider": { + "type": 
"string", + "value": "mock-provider", + }, + "vercel.ai.operationId": { + "type": "string", + "value": "ai.generateText.doGenerate", + }, + "vercel.ai.pipeline.name": { + "type": "string", + "value": "generateText.doGenerate", + }, + "vercel.ai.prompt.format": { + "type": "string", + "value": "prompt", + }, + "vercel.ai.response.finishReason": { + "type": "string", + "value": "stop", + }, + "vercel.ai.response.id": { + "type": "string", + "value": "aitxt-4DVeceXpaamTwYYSKOtxg980", + }, + "vercel.ai.response.model": { + "type": "string", + "value": "mock-model-id", + }, + "vercel.ai.response.timestamp": { + "type": "string", + "value": "2026-04-30T11:13:53.881Z", + }, + "vercel.ai.settings.maxRetries": { + "type": "integer", + "value": 2, + }, + "vercel.ai.streaming": { + "type": "boolean", + "value": false, + }, + }, + "end_timestamp": 1777547633.8815486, + "is_segment": false, + "name": "generate_content mock-model-id", + "parent_span_id": "1f7674785f798fdc", + "span_id": "cf1e50ae5eb1b031", + "sta
Node (22) Integration Tests
Process completed with exit code 1.
suites/tracing/vercelai/span-streaming-v6/test.ts > Vercel AI integration (streaming, V6) > esm/cjs > cjs > normalizes error status in streaming mode: dev-packages/node-integration-tests/utils/assertions.ts#L100
AssertionError: expected { version: 2, …(1) } to match object { items: ArrayContaining{…} } (1 matching property omitted from actual) - Expected + Received { - "items": ArrayContaining [ - ObjectContaining { - "attributes": ObjectContaining { - "gen_ai.operation.name": ObjectContaining { - "value": "invoke_agent", + "items": [ + { + "attributes": { + "gen_ai.operation.name": { + "type": "string", + "value": "generate_content", + }, + "gen_ai.request.model": { + "type": "string", + "value": "mock-model-id", + }, + "gen_ai.response.id": { + "type": "string", + "value": "aitxt-9TWns3Kkr9V5q77GwWHgda6Z", }, - "gen_ai.request.model": ObjectContaining { + "gen_ai.response.model": { + "type": "string", "value": "mock-model-id", }, - "gen_ai.usage.input_tokens": ObjectContaining { + "gen_ai.system": { + "type": "string", + "value": "mock-provider", + }, + "gen_ai.usage.input_tokens": { + "type": "integer", "value": 15, }, - "gen_ai.usage.output_tokens": ObjectContaining { + "gen_ai.usage.output_tokens": { + "type": "integer", "value": 25, }, - "gen_ai.usage.total_tokens": ObjectContaining { + "gen_ai.usage.total_tokens": { + "type": "integer", "value": 40, }, - "sentry.op": ObjectContaining { - "value": "gen_ai.invoke_agent", + "sentry.op": { + "type": "string", + "value": "gen_ai.generate_content", }, - "sentry.origin": ObjectContaining { + "sentry.origin": { + "type": "string", "value": "auto.vercelai.otel", + }, + "sentry.release": { + "type": "string", + "value": "1.0", + }, + "sentry.sdk.name": { + "type": "string", + "value": "sentry.javascript.node", + }, + "sentry.sdk.version": { + "type": "string", + "value": "10.51.0", + }, + "sentry.segment.id": { + "type": "string", + "value": "100d511728738b10", + }, + "sentry.segment.name": { + "type": "string", + "value": "main", + }, + "vercel.ai.model.provider": { + "type": "string", + "value": "mock-provider", + }, + "vercel.ai.operationId": { + "type": "string", + "value": "ai.generateText.doGenerate", + }, + 
"vercel.ai.pipeline.name": { + "type": "string", + "value": "generateText.doGenerate", }, + "vercel.ai.request.headers.user-agent": { + "type": "string", + "value": "ai/6.0.170", }, - "name": "invoke_agent", - "status": "error", + "vercel.ai.response.finishReason": { + "type": "string", + "value": "tool-calls", }, - ObjectContaining { - "attributes": ObjectContaining { - "gen_ai.operation.name": ObjectContaining { - "value": "generate_content", + "vercel.ai.response.id": { + "type": "string", + "value": "aitxt-9TWns3Kkr9V5q77GwWHgda6Z", }, - "gen_ai.request.model": ObjectContaining { + "vercel.ai.response.model": { + "type": "string", "value": "mock-model-id", }, - "gen_ai.usage.input_tokens": ObjectContaining { - "value": 15, + "vercel.ai.response.timestamp": { + "type": "string", + "value": "2026-04-30T11:14:11.834Z", }, - "gen_ai.usage.output_tokens": ObjectContaining { - "value": 25, + "vercel.ai.settings.maxRetries": { + "type": "integer", + "value": 2, }, - "gen_ai.usage.total_tokens":
suites/tracing/vercelai/span-streaming-v6/test.ts > Vercel AI integration (streaming, V6) > esm/cjs > esm > normalizes error status in streaming mode: dev-packages/node-integration-tests/utils/assertions.ts#L100
AssertionError: expected { version: 2, …(1) } to match object { items: ArrayContaining{…} } (1 matching property omitted from actual) - Expected + Received { - "items": ArrayContaining [ - ObjectContaining { - "attributes": ObjectContaining { - "gen_ai.operation.name": ObjectContaining { - "value": "invoke_agent", + "items": [ + { + "attributes": { + "gen_ai.operation.name": { + "type": "string", + "value": "generate_content", + }, + "gen_ai.request.model": { + "type": "string", + "value": "mock-model-id", + }, + "gen_ai.response.id": { + "type": "string", + "value": "aitxt-TkrN6EO6qlW0tgaedlmd8z0P", }, - "gen_ai.request.model": ObjectContaining { + "gen_ai.response.model": { + "type": "string", "value": "mock-model-id", }, - "gen_ai.usage.input_tokens": ObjectContaining { + "gen_ai.system": { + "type": "string", + "value": "mock-provider", + }, + "gen_ai.usage.input_tokens": { + "type": "integer", "value": 15, }, - "gen_ai.usage.output_tokens": ObjectContaining { + "gen_ai.usage.output_tokens": { + "type": "integer", "value": 25, }, - "gen_ai.usage.total_tokens": ObjectContaining { + "gen_ai.usage.total_tokens": { + "type": "integer", "value": 40, }, - "sentry.op": ObjectContaining { - "value": "gen_ai.invoke_agent", + "sentry.op": { + "type": "string", + "value": "gen_ai.generate_content", }, - "sentry.origin": ObjectContaining { + "sentry.origin": { + "type": "string", "value": "auto.vercelai.otel", + }, + "sentry.release": { + "type": "string", + "value": "1.0", + }, + "sentry.sdk.name": { + "type": "string", + "value": "sentry.javascript.node", + }, + "sentry.sdk.version": { + "type": "string", + "value": "10.51.0", + }, + "sentry.segment.id": { + "type": "string", + "value": "b25c21cb88f2a31a", + }, + "sentry.segment.name": { + "type": "string", + "value": "main", + }, + "vercel.ai.model.provider": { + "type": "string", + "value": "mock-provider", + }, + "vercel.ai.operationId": { + "type": "string", + "value": "ai.generateText.doGenerate", + }, + 
"vercel.ai.pipeline.name": { + "type": "string", + "value": "generateText.doGenerate", }, + "vercel.ai.request.headers.user-agent": { + "type": "string", + "value": "ai/6.0.170", }, - "name": "invoke_agent", - "status": "error", + "vercel.ai.response.finishReason": { + "type": "string", + "value": "tool-calls", }, - ObjectContaining { - "attributes": ObjectContaining { - "gen_ai.operation.name": ObjectContaining { - "value": "generate_content", + "vercel.ai.response.id": { + "type": "string", + "value": "aitxt-TkrN6EO6qlW0tgaedlmd8z0P", }, - "gen_ai.request.model": ObjectContaining { + "vercel.ai.response.model": { + "type": "string", "value": "mock-model-id", }, - "gen_ai.usage.input_tokens": ObjectContaining { - "value": 15, + "vercel.ai.response.timestamp": { + "type": "string", + "value": "2026-04-30T11:14:11.149Z", }, - "gen_ai.usage.output_tokens": ObjectContaining { - "value": 25, + "vercel.ai.settings.maxRetries": { + "type": "integer", + "value": 2, }, - "gen_ai.usage.total_tokens":
suites/tracing/vercelai/span-streaming-v6/test.ts > Vercel AI integration (streaming, V6) > esm/cjs > cjs > creates ai related spans in streaming mode with sendDefaultPii: true: dev-packages/node-integration-tests/utils/assertions.ts#L100
AssertionError: expected { version: 2, …(1) } to match object { items: ArrayContaining{…} } (1 matching property omitted from actual) - Expected + Received { - "items": ArrayContaining [ - ObjectContaining { - "attributes": ObjectContaining { - "gen_ai.input.messages": ObjectContaining { + "items": [ + { + "attributes": { + "gen_ai.input.messages": { + "type": "string", + "value": "[{\"role\":\"user\",\"content\":[{\"type\":\"text\",\"text\":\"Where is the first span?\"}]}]", + }, + "gen_ai.operation.name": { + "type": "string", + "value": "generate_content", + }, + "gen_ai.output.messages": { + "type": "string", + "value": "[{\"role\":\"assistant\",\"parts\":[{\"type\":\"text\",\"content\":\"First span here!\"}],\"finish_reason\":\"stop\"}]", + }, + "gen_ai.request.model": { + "type": "string", + "value": "mock-model-id", + }, + "gen_ai.response.id": { + "type": "string", + "value": "aitxt-MUMo27sF5yS8TdeLiUQmprQv", + }, + "gen_ai.response.model": { + "type": "string", + "value": "mock-model-id", + }, + "gen_ai.system": { + "type": "string", + "value": "mock-provider", + }, + "gen_ai.usage.input_tokens": { + "type": "integer", + "value": 10, + }, + "gen_ai.usage.output_tokens": { + "type": "integer", + "value": 20, + }, + "gen_ai.usage.total_tokens": { + "type": "integer", + "value": 30, + }, + "sentry.op": { + "type": "string", + "value": "gen_ai.generate_content", + }, + "sentry.origin": { + "type": "string", + "value": "auto.vercelai.otel", + }, + "sentry.release": { + "type": "string", + "value": "1.0", + }, + "sentry.sdk.name": { + "type": "string", + "value": "sentry.javascript.node", + }, + "sentry.sdk.version": { + "type": "string", + "value": "10.51.0", + }, + "sentry.sdk_meta.gen_ai.input.messages.original_length": { + "type": "integer", + "value": 1, + }, + "sentry.segment.id": { + "type": "string", + "value": "692b07b2df0aaa95", + }, + "sentry.segment.name": { + "type": "string", + "value": "main", + }, + "vercel.ai.model.provider": { + "type": 
"string", + "value": "mock-provider", + }, + "vercel.ai.operationId": { + "type": "string", + "value": "ai.generateText.doGenerate", + }, + "vercel.ai.pipeline.name": { + "type": "string", + "value": "generateText.doGenerate", + }, + "vercel.ai.request.headers.user-agent": { + "type": "string", + "value": "ai/6.0.170", + }, + "vercel.ai.response.finishReason": { + "type": "string", + "value": "stop", + }, + "vercel.ai.response.id": { + "type": "string", + "value": "aitxt-MUMo27sF5yS8TdeLiUQmprQv", + }, + "vercel.ai.response.model": { + "type": "string", + "value": "mock-model-id", + }, + "vercel.ai.response.timestamp": { + "type": "string", + "value": "2026-04-30T11:14:08.428Z", + }, + "vercel.ai.settings.maxRetries": { + "type": "integer", + "value": 2, + }, + "vercel.ai.streaming": { + "type": "boolean", + "value": false, + }, + "vercel.ai.usage.inputTokenDetails.noCacheTokens": { + "type": "integer", + "value": 10, + }, + "vercel.ai.usage.totalTokens": { + "type": "integer", +
suites/tracing/vercelai/span-streaming-v6/test.ts > Vercel AI integration (streaming, V6) > esm/cjs > esm > creates ai related spans in streaming mode with sendDefaultPii: true: dev-packages/node-integration-tests/utils/assertions.ts#L100
AssertionError: expected { version: 2, …(1) } to match object { items: ArrayContaining{…} } (1 matching property omitted from actual) - Expected + Received { - "items": ArrayContaining [ - ObjectContaining { - "attributes": ObjectContaining { - "gen_ai.input.messages": ObjectContaining { + "items": [ + { + "attributes": { + "gen_ai.input.messages": { + "type": "string", + "value": "[{\"role\":\"user\",\"content\":[{\"type\":\"text\",\"text\":\"Where is the first span?\"}]}]", + }, + "gen_ai.operation.name": { + "type": "string", + "value": "generate_content", + }, + "gen_ai.output.messages": { + "type": "string", + "value": "[{\"role\":\"assistant\",\"parts\":[{\"type\":\"text\",\"content\":\"First span here!\"}],\"finish_reason\":\"stop\"}]", + }, + "gen_ai.request.model": { + "type": "string", + "value": "mock-model-id", + }, + "gen_ai.response.id": { + "type": "string", + "value": "aitxt-HDKgELvdz7DEuNevGXnu1C5J", + }, + "gen_ai.response.model": { + "type": "string", + "value": "mock-model-id", + }, + "gen_ai.system": { + "type": "string", + "value": "mock-provider", + }, + "gen_ai.usage.input_tokens": { + "type": "integer", + "value": 10, + }, + "gen_ai.usage.output_tokens": { + "type": "integer", + "value": 20, + }, + "gen_ai.usage.total_tokens": { + "type": "integer", + "value": 30, + }, + "sentry.op": { + "type": "string", + "value": "gen_ai.generate_content", + }, + "sentry.origin": { + "type": "string", + "value": "auto.vercelai.otel", + }, + "sentry.release": { + "type": "string", + "value": "1.0", + }, + "sentry.sdk.name": { + "type": "string", + "value": "sentry.javascript.node", + }, + "sentry.sdk.version": { + "type": "string", + "value": "10.51.0", + }, + "sentry.sdk_meta.gen_ai.input.messages.original_length": { + "type": "integer", + "value": 1, + }, + "sentry.segment.id": { + "type": "string", + "value": "0c5d9a344e1f49c7", + }, + "sentry.segment.name": { + "type": "string", + "value": "main", + }, + "vercel.ai.model.provider": { + "type": 
"string", + "value": "mock-provider", + }, + "vercel.ai.operationId": { + "type": "string", + "value": "ai.generateText.doGenerate", + }, + "vercel.ai.pipeline.name": { + "type": "string", + "value": "generateText.doGenerate", + }, + "vercel.ai.request.headers.user-agent": { + "type": "string", + "value": "ai/6.0.170", + }, + "vercel.ai.response.finishReason": { + "type": "string", + "value": "stop", + }, + "vercel.ai.response.id": { + "type": "string", + "value": "aitxt-HDKgELvdz7DEuNevGXnu1C5J", + }, + "vercel.ai.response.model": { + "type": "string", + "value": "mock-model-id", + }, + "vercel.ai.response.timestamp": { + "type": "string", + "value": "2026-04-30T11:14:07.669Z", + }, + "vercel.ai.settings.maxRetries": { + "type": "integer", + "value": 2, + }, + "vercel.ai.streaming": { + "type": "boolean", + "value": false, + }, + "vercel.ai.usage.inputTokenDetails.noCacheTokens": { + "type": "integer", + "value": 10, + }, + "vercel.ai.usage.totalTokens": { + "type": "integer", +
suites/tracing/vercelai/span-streaming-v4/test.ts > Vercel AI integration (streaming) > esm/cjs > cjs > normalizes error status in streaming mode: dev-packages/node-integration-tests/suites/tracing/vercelai/span-streaming-v4/test.ts#L305
Error: Test timed out in 15000ms. If this is a long-running test, pass a timeout value as the last argument or configure it globally with "testTimeout". ❯ suites/tracing/vercelai/span-streaming-v4/test.ts:305:5 ❯ utils/runner.ts:312:7
suites/tracing/vercelai/span-streaming-v4/test.ts > Vercel AI integration (streaming) > esm/cjs > esm > normalizes error status in streaming mode: dev-packages/node-integration-tests/suites/tracing/vercelai/span-streaming-v4/test.ts#L305
Error: Test timed out in 15000ms. If this is a long-running test, pass a timeout value as the last argument or configure it globally with "testTimeout". ❯ suites/tracing/vercelai/span-streaming-v4/test.ts:305:5 ❯ utils/runner.ts:302:7
suites/tracing/vercelai/span-streaming-v4/test.ts > Vercel AI integration (streaming) > esm/cjs > cjs > creates ai related spans in streaming mode with sendDefaultPii: true: dev-packages/node-integration-tests/utils/assertions.ts#L100
AssertionError: expected { version: 2, …(1) } to match object { items: ArrayContaining{…} } (1 matching property omitted from actual) - Expected + Received { - "items": ArrayContaining [ - ObjectContaining { - "attributes": ObjectContaining { - "gen_ai.input.messages": ObjectContaining { + "items": [ + { + "attributes": { + "gen_ai.input.messages": { + "type": "string", + "value": "[{\"role\":\"user\",\"content\":[{\"type\":\"text\",\"text\":\"Where is the first span?\"}]}]", + }, + "gen_ai.operation.name": { + "type": "string", + "value": "generate_content", + }, + "gen_ai.output.messages": { + "type": "string", + "value": "[{\"role\":\"assistant\",\"parts\":[{\"type\":\"text\",\"content\":\"First span here!\"}],\"finish_reason\":\"stop\"}]", + }, + "gen_ai.request.model": { + "type": "string", + "value": "mock-model-id", + }, + "gen_ai.response.id": { + "type": "string", + "value": "aitxt-QJI6iK8QE44PNrEn40xz9fbZ", + }, + "gen_ai.response.model": { + "type": "string", + "value": "mock-model-id", + }, + "gen_ai.system": { + "type": "string", + "value": "mock-provider", + }, + "gen_ai.usage.input_tokens": { + "type": "integer", + "value": 10, + }, + "gen_ai.usage.output_tokens": { + "type": "integer", + "value": 20, + }, + "gen_ai.usage.total_tokens": { + "type": "integer", + "value": 30, + }, + "sentry.op": { + "type": "string", + "value": "gen_ai.generate_content", + }, + "sentry.origin": { + "type": "string", + "value": "auto.vercelai.otel", + }, + "sentry.release": { + "type": "string", + "value": "1.0", + }, + "sentry.sdk.name": { + "type": "string", + "value": "sentry.javascript.node", + }, + "sentry.sdk.version": { + "type": "string", + "value": "10.51.0", + }, + "sentry.sdk_meta.gen_ai.input.messages.original_length": { + "type": "integer", + "value": 1, + }, + "sentry.segment.id": { + "type": "string", + "value": "955b9f3be8bf0360", + }, + "sentry.segment.name": { + "type": "string", + "value": "main", + }, + "vercel.ai.model.provider": { + "type": 
"string", + "value": "mock-provider", + }, + "vercel.ai.operationId": { + "type": "string", + "value": "ai.generateText.doGenerate", + }, + "vercel.ai.pipeline.name": { + "type": "string", + "value": "generateText.doGenerate", + }, + "vercel.ai.prompt.format": { + "type": "string", + "value": "prompt", + }, + "vercel.ai.response.finishReason": { + "type": "string", + "value": "stop", + }, + "vercel.ai.response.id": { + "type": "string", + "value": "aitxt-QJI6iK8QE44PNrEn40xz9fbZ", + }, + "vercel.ai.response.model": { + "type": "string", + "value": "mock-model-id", + }, + "vercel.ai.response.timestamp": { + "type": "string", + "value": "2026-04-30T11:13:54.301Z", + }, + "vercel.ai.settings.maxRetries": { + "type": "integer", + "value": 2, + }, + "vercel.ai.streaming": { + "type": "boolean", + "value": false, + }, + }, + "end_timestamp": 1777547634.3016617, + "is_segment": false, + "name": "generate_content mock-model-id", + "parent_span_id": "7ff3b25aa31c05b2", + "span_id": "097f9f937ba3bc22", + "sta
suites/tracing/vercelai/span-streaming-v4/test.ts > Vercel AI integration (streaming) > esm/cjs > esm > creates ai related spans in streaming mode with sendDefaultPii: true: dev-packages/node-integration-tests/utils/assertions.ts#L100
AssertionError: expected { version: 2, …(1) } to match object { items: ArrayContaining{…} } (1 matching property omitted from actual) - Expected + Received { - "items": ArrayContaining [ - ObjectContaining { - "attributes": ObjectContaining { - "gen_ai.input.messages": ObjectContaining { + "items": [ + { + "attributes": { + "gen_ai.input.messages": { + "type": "string", + "value": "[{\"role\":\"user\",\"content\":[{\"type\":\"text\",\"text\":\"Where is the first span?\"}]}]", + }, + "gen_ai.operation.name": { + "type": "string", + "value": "generate_content", + }, + "gen_ai.output.messages": { + "type": "string", + "value": "[{\"role\":\"assistant\",\"parts\":[{\"type\":\"text\",\"content\":\"First span here!\"}],\"finish_reason\":\"stop\"}]", + }, + "gen_ai.request.model": { + "type": "string", + "value": "mock-model-id", + }, + "gen_ai.response.id": { + "type": "string", + "value": "aitxt-skmEd40nVWNgCjYaR0zgQKM3", + }, + "gen_ai.response.model": { + "type": "string", + "value": "mock-model-id", + }, + "gen_ai.system": { + "type": "string", + "value": "mock-provider", + }, + "gen_ai.usage.input_tokens": { + "type": "integer", + "value": 10, + }, + "gen_ai.usage.output_tokens": { + "type": "integer", + "value": 20, + }, + "gen_ai.usage.total_tokens": { + "type": "integer", + "value": 30, + }, + "sentry.op": { + "type": "string", + "value": "gen_ai.generate_content", + }, + "sentry.origin": { + "type": "string", + "value": "auto.vercelai.otel", + }, + "sentry.release": { + "type": "string", + "value": "1.0", + }, + "sentry.sdk.name": { + "type": "string", + "value": "sentry.javascript.node", + }, + "sentry.sdk.version": { + "type": "string", + "value": "10.51.0", + }, + "sentry.sdk_meta.gen_ai.input.messages.original_length": { + "type": "integer", + "value": 1, + }, + "sentry.segment.id": { + "type": "string", + "value": "368062d375a4e423", + }, + "sentry.segment.name": { + "type": "string", + "value": "main", + }, + "vercel.ai.model.provider": { + "type": 
"string", + "value": "mock-provider", + }, + "vercel.ai.operationId": { + "type": "string", + "value": "ai.generateText.doGenerate", + }, + "vercel.ai.pipeline.name": { + "type": "string", + "value": "generateText.doGenerate", + }, + "vercel.ai.prompt.format": { + "type": "string", + "value": "prompt", + }, + "vercel.ai.response.finishReason": { + "type": "string", + "value": "stop", + }, + "vercel.ai.response.id": { + "type": "string", + "value": "aitxt-skmEd40nVWNgCjYaR0zgQKM3", + }, + "vercel.ai.response.model": { + "type": "string", + "value": "mock-model-id", + }, + "vercel.ai.response.timestamp": { + "type": "string", + "value": "2026-04-30T11:13:53.546Z", + }, + "vercel.ai.settings.maxRetries": { + "type": "integer", + "value": 2, + }, + "vercel.ai.streaming": { + "type": "boolean", + "value": false, + }, + }, + "end_timestamp": 1777547633.547378, + "is_segment": false, + "name": "generate_content mock-model-id", + "parent_span_id": "b9b0b88c725e3a3e", + "span_id": "cd0eba1c6c69ae20", + "star
Node (20) Integration Tests
Process completed with exit code 1.
suites/tracing/vercelai/span-streaming-v6/test.ts > Vercel AI integration (streaming, V6) > esm/cjs > cjs > normalizes error status in streaming mode: dev-packages/node-integration-tests/utils/assertions.ts#L100
AssertionError: expected { version: 2, …(1) } to match object { items: ArrayContaining{…} } (1 matching property omitted from actual)

- Expected
+ Received

{
- "items": ArrayContaining [ - ObjectContaining { - "attributes": ObjectContaining {
- "gen_ai.operation.name": ObjectContaining { - "value": "invoke_agent",
+ "items": [ + { + "attributes": {
+ "gen_ai.operation.name": { + "type": "string", + "value": "generate_content", + },
+ "gen_ai.request.model": { + "type": "string", + "value": "mock-model-id", + },
+ "gen_ai.response.id": { + "type": "string", + "value": "aitxt-A3JaxCWWPzQnFzB1b8shKpyi", },
- "gen_ai.request.model": ObjectContaining { + "gen_ai.response.model": { + "type": "string", "value": "mock-model-id", },
- "gen_ai.usage.input_tokens": ObjectContaining { + "gen_ai.system": { + "type": "string", + "value": "mock-provider", + },
+ "gen_ai.usage.input_tokens": { + "type": "integer", "value": 15, },
- "gen_ai.usage.output_tokens": ObjectContaining { + "gen_ai.usage.output_tokens": { + "type": "integer", "value": 25, },
- "gen_ai.usage.total_tokens": ObjectContaining { + "gen_ai.usage.total_tokens": { + "type": "integer", "value": 40, },
- "sentry.op": ObjectContaining { - "value": "gen_ai.invoke_agent", + "sentry.op": { + "type": "string", + "value": "gen_ai.generate_content", },
- "sentry.origin": ObjectContaining { + "sentry.origin": { + "type": "string", "value": "auto.vercelai.otel", + },
+ "sentry.release": { + "type": "string", + "value": "1.0", + },
+ "sentry.sdk.name": { + "type": "string", + "value": "sentry.javascript.node", + },
+ "sentry.sdk.version": { + "type": "string", + "value": "10.51.0", + },
+ "sentry.segment.id": { + "type": "string", + "value": "b2df35d76bc400f8", + },
+ "sentry.segment.name": { + "type": "string", + "value": "main", + },
+ "vercel.ai.model.provider": { + "type": "string", + "value": "mock-provider", + },
+ "vercel.ai.operationId": { + "type": "string", + "value": "ai.generateText.doGenerate", + },
+ "vercel.ai.pipeline.name": { + "type": "string", + "value": "generateText.doGenerate", },
+ "vercel.ai.request.headers.user-agent": { + "type": "string", + "value": "ai/6.0.170", },
- "name": "invoke_agent", - "status": "error",
+ "vercel.ai.response.finishReason": { + "type": "string", + "value": "tool-calls", },
- ObjectContaining { - "attributes": ObjectContaining { - "gen_ai.operation.name": ObjectContaining { - "value": "generate_content",
+ "vercel.ai.response.id": { + "type": "string", + "value": "aitxt-A3JaxCWWPzQnFzB1b8shKpyi", },
- "gen_ai.request.model": ObjectContaining { + "vercel.ai.response.model": { + "type": "string", "value": "mock-model-id", },
- "gen_ai.usage.input_tokens": ObjectContaining { - "value": 15, + "vercel.ai.response.timestamp": { + "type": "string", + "value": "2026-04-30T11:14:11.954Z", },
- "gen_ai.usage.output_tokens": ObjectContaining { - "value": 25, + "vercel.ai.settings.maxRetries": { + "type": "integer", + "value": 2, },
- "gen_ai.usage.total_tokens":
suites/tracing/vercelai/span-streaming-v6/test.ts > Vercel AI integration (streaming, V6) > esm/cjs > esm > normalizes error status in streaming mode: dev-packages/node-integration-tests/utils/assertions.ts#L100
AssertionError: expected { version: 2, …(1) } to match object { items: ArrayContaining{…} } (1 matching property omitted from actual)

- Expected
+ Received

{
- "items": ArrayContaining [ - ObjectContaining { - "attributes": ObjectContaining {
- "gen_ai.operation.name": ObjectContaining { - "value": "invoke_agent",
+ "items": [ + { + "attributes": {
+ "gen_ai.operation.name": { + "type": "string", + "value": "generate_content", + },
+ "gen_ai.request.model": { + "type": "string", + "value": "mock-model-id", + },
+ "gen_ai.response.id": { + "type": "string", + "value": "aitxt-7M0oKIReed5twMAKShUQLEZp", },
- "gen_ai.request.model": ObjectContaining { + "gen_ai.response.model": { + "type": "string", "value": "mock-model-id", },
- "gen_ai.usage.input_tokens": ObjectContaining { + "gen_ai.system": { + "type": "string", + "value": "mock-provider", + },
+ "gen_ai.usage.input_tokens": { + "type": "integer", "value": 15, },
- "gen_ai.usage.output_tokens": ObjectContaining { + "gen_ai.usage.output_tokens": { + "type": "integer", "value": 25, },
- "gen_ai.usage.total_tokens": ObjectContaining { + "gen_ai.usage.total_tokens": { + "type": "integer", "value": 40, },
- "sentry.op": ObjectContaining { - "value": "gen_ai.invoke_agent", + "sentry.op": { + "type": "string", + "value": "gen_ai.generate_content", },
- "sentry.origin": ObjectContaining { + "sentry.origin": { + "type": "string", "value": "auto.vercelai.otel", + },
+ "sentry.release": { + "type": "string", + "value": "1.0", + },
+ "sentry.sdk.name": { + "type": "string", + "value": "sentry.javascript.node", + },
+ "sentry.sdk.version": { + "type": "string", + "value": "10.51.0", + },
+ "sentry.segment.id": { + "type": "string", + "value": "bb623b3869d247e5", + },
+ "sentry.segment.name": { + "type": "string", + "value": "main", + },
+ "vercel.ai.model.provider": { + "type": "string", + "value": "mock-provider", + },
+ "vercel.ai.operationId": { + "type": "string", + "value": "ai.generateText.doGenerate", + },
+ "vercel.ai.pipeline.name": { + "type": "string", + "value": "generateText.doGenerate", },
+ "vercel.ai.request.headers.user-agent": { + "type": "string", + "value": "ai/6.0.170", },
- "name": "invoke_agent", - "status": "error",
+ "vercel.ai.response.finishReason": { + "type": "string", + "value": "tool-calls", },
- ObjectContaining { - "attributes": ObjectContaining { - "gen_ai.operation.name": ObjectContaining { - "value": "generate_content",
+ "vercel.ai.response.id": { + "type": "string", + "value": "aitxt-7M0oKIReed5twMAKShUQLEZp", },
- "gen_ai.request.model": ObjectContaining { + "vercel.ai.response.model": { + "type": "string", "value": "mock-model-id", },
- "gen_ai.usage.input_tokens": ObjectContaining { - "value": 15, + "vercel.ai.response.timestamp": { + "type": "string", + "value": "2026-04-30T11:14:11.335Z", },
- "gen_ai.usage.output_tokens": ObjectContaining { - "value": 25, + "vercel.ai.settings.maxRetries": { + "type": "integer", + "value": 2, },
- "gen_ai.usage.total_tokens":
suites/tracing/vercelai/span-streaming-v6/test.ts > Vercel AI integration (streaming, V6) > esm/cjs > cjs > creates ai related spans in streaming mode with sendDefaultPii: true: dev-packages/node-integration-tests/utils/assertions.ts#L100
AssertionError: expected { version: 2, …(1) } to match object { items: ArrayContaining{…} } (1 matching property omitted from actual)

- Expected
+ Received

{
- "items": ArrayContaining [ - ObjectContaining { - "attributes": ObjectContaining { - "gen_ai.input.messages": ObjectContaining {
+ "items": [ + { + "attributes": {
+ "gen_ai.input.messages": { + "type": "string", + "value": "[{\"role\":\"user\",\"content\":[{\"type\":\"text\",\"text\":\"Where is the first span?\"}]}]", + },
+ "gen_ai.operation.name": { + "type": "string", + "value": "generate_content", + },
+ "gen_ai.output.messages": { + "type": "string", + "value": "[{\"role\":\"assistant\",\"parts\":[{\"type\":\"text\",\"content\":\"First span here!\"}],\"finish_reason\":\"stop\"}]", + },
+ "gen_ai.request.model": { + "type": "string", + "value": "mock-model-id", + },
+ "gen_ai.response.id": { + "type": "string", + "value": "aitxt-V6weGQf9SqOMvn0j3DMpcy3D", + },
+ "gen_ai.response.model": { + "type": "string", + "value": "mock-model-id", + },
+ "gen_ai.system": { + "type": "string", + "value": "mock-provider", + },
+ "gen_ai.usage.input_tokens": { + "type": "integer", + "value": 10, + },
+ "gen_ai.usage.output_tokens": { + "type": "integer", + "value": 20, + },
+ "gen_ai.usage.total_tokens": { + "type": "integer", + "value": 30, + },
+ "sentry.op": { + "type": "string", + "value": "gen_ai.generate_content", + },
+ "sentry.origin": { + "type": "string", + "value": "auto.vercelai.otel", + },
+ "sentry.release": { + "type": "string", + "value": "1.0", + },
+ "sentry.sdk.name": { + "type": "string", + "value": "sentry.javascript.node", + },
+ "sentry.sdk.version": { + "type": "string", + "value": "10.51.0", + },
+ "sentry.sdk_meta.gen_ai.input.messages.original_length": { + "type": "integer", + "value": 1, + },
+ "sentry.segment.id": { + "type": "string", + "value": "07b948e6354b2968", + },
+ "sentry.segment.name": { + "type": "string", + "value": "main", + },
+ "vercel.ai.model.provider": { + "type": "string", + "value": "mock-provider", + },
+ "vercel.ai.operationId": { + "type": "string", + "value": "ai.generateText.doGenerate", + },
+ "vercel.ai.pipeline.name": { + "type": "string", + "value": "generateText.doGenerate", + },
+ "vercel.ai.request.headers.user-agent": { + "type": "string", + "value": "ai/6.0.170", + },
+ "vercel.ai.response.finishReason": { + "type": "string", + "value": "stop", + },
+ "vercel.ai.response.id": { + "type": "string", + "value": "aitxt-V6weGQf9SqOMvn0j3DMpcy3D", + },
+ "vercel.ai.response.model": { + "type": "string", + "value": "mock-model-id", + },
+ "vercel.ai.response.timestamp": { + "type": "string", + "value": "2026-04-30T11:14:08.834Z", + },
+ "vercel.ai.settings.maxRetries": { + "type": "integer", + "value": 2, + },
+ "vercel.ai.streaming": { + "type": "boolean", + "value": false, + },
+ "vercel.ai.usage.inputTokenDetails.noCacheTokens": { + "type": "integer", + "value": 10, + },
+ "vercel.ai.usage.totalTokens": { + "type": "integer", +
suites/tracing/vercelai/span-streaming-v6/test.ts > Vercel AI integration (streaming, V6) > esm/cjs > esm > creates ai related spans in streaming mode with sendDefaultPii: true: dev-packages/node-integration-tests/utils/assertions.ts#L100
AssertionError: expected { version: 2, …(1) } to match object { items: ArrayContaining{…} } (1 matching property omitted from actual)

- Expected
+ Received

{
- "items": ArrayContaining [ - ObjectContaining { - "attributes": ObjectContaining { - "gen_ai.input.messages": ObjectContaining {
+ "items": [ + { + "attributes": {
+ "gen_ai.input.messages": { + "type": "string", + "value": "[{\"role\":\"user\",\"content\":[{\"type\":\"text\",\"text\":\"Where is the first span?\"}]}]", + },
+ "gen_ai.operation.name": { + "type": "string", + "value": "generate_content", + },
+ "gen_ai.output.messages": { + "type": "string", + "value": "[{\"role\":\"assistant\",\"parts\":[{\"type\":\"text\",\"content\":\"First span here!\"}],\"finish_reason\":\"stop\"}]", + },
+ "gen_ai.request.model": { + "type": "string", + "value": "mock-model-id", + },
+ "gen_ai.response.id": { + "type": "string", + "value": "aitxt-SPr24IE4ZUiQ1owPJfGYnky2", + },
+ "gen_ai.response.model": { + "type": "string", + "value": "mock-model-id", + },
+ "gen_ai.system": { + "type": "string", + "value": "mock-provider", + },
+ "gen_ai.usage.input_tokens": { + "type": "integer", + "value": 10, + },
+ "gen_ai.usage.output_tokens": { + "type": "integer", + "value": 20, + },
+ "gen_ai.usage.total_tokens": { + "type": "integer", + "value": 30, + },
+ "sentry.op": { + "type": "string", + "value": "gen_ai.generate_content", + },
+ "sentry.origin": { + "type": "string", + "value": "auto.vercelai.otel", + },
+ "sentry.release": { + "type": "string", + "value": "1.0", + },
+ "sentry.sdk.name": { + "type": "string", + "value": "sentry.javascript.node", + },
+ "sentry.sdk.version": { + "type": "string", + "value": "10.51.0", + },
+ "sentry.sdk_meta.gen_ai.input.messages.original_length": { + "type": "integer", + "value": 1, + },
+ "sentry.segment.id": { + "type": "string", + "value": "6e0dec80fbedb023", + },
+ "sentry.segment.name": { + "type": "string", + "value": "main", + },
+ "vercel.ai.model.provider": { + "type": "string", + "value": "mock-provider", + },
+ "vercel.ai.operationId": { + "type": "string", + "value": "ai.generateText.doGenerate", + },
+ "vercel.ai.pipeline.name": { + "type": "string", + "value": "generateText.doGenerate", + },
+ "vercel.ai.request.headers.user-agent": { + "type": "string", + "value": "ai/6.0.170", + },
+ "vercel.ai.response.finishReason": { + "type": "string", + "value": "stop", + },
+ "vercel.ai.response.id": { + "type": "string", + "value": "aitxt-SPr24IE4ZUiQ1owPJfGYnky2", + },
+ "vercel.ai.response.model": { + "type": "string", + "value": "mock-model-id", + },
+ "vercel.ai.response.timestamp": { + "type": "string", + "value": "2026-04-30T11:14:08.094Z", + },
+ "vercel.ai.settings.maxRetries": { + "type": "integer", + "value": 2, + },
+ "vercel.ai.streaming": { + "type": "boolean", + "value": false, + },
+ "vercel.ai.usage.inputTokenDetails.noCacheTokens": { + "type": "integer", + "value": 10, + },
+ "vercel.ai.usage.totalTokens": { + "type": "integer", +
suites/tracing/vercelai/span-streaming-v4/test.ts > Vercel AI integration (streaming) > esm/cjs > cjs > normalizes error status in streaming mode: dev-packages/node-integration-tests/suites/tracing/vercelai/span-streaming-v4/test.ts#L305
Error: Test timed out in 15000ms. If this is a long-running test, pass a timeout value as the last argument or configure it globally with "testTimeout".
 ❯ suites/tracing/vercelai/span-streaming-v4/test.ts:305:5
 ❯ utils/runner.ts:312:7
suites/tracing/vercelai/span-streaming-v4/test.ts > Vercel AI integration (streaming) > esm/cjs > esm > normalizes error status in streaming mode: dev-packages/node-integration-tests/suites/tracing/vercelai/span-streaming-v4/test.ts#L305
Error: Test timed out in 15000ms. If this is a long-running test, pass a timeout value as the last argument or configure it globally with "testTimeout".
 ❯ suites/tracing/vercelai/span-streaming-v4/test.ts:305:5
 ❯ utils/runner.ts:302:7
suites/tracing/vercelai/span-streaming-v4/test.ts > Vercel AI integration (streaming) > esm/cjs > cjs > creates ai related spans in streaming mode with sendDefaultPii: true: dev-packages/node-integration-tests/utils/assertions.ts#L100
AssertionError: expected { version: 2, …(1) } to match object { items: ArrayContaining{…} } (1 matching property omitted from actual)

- Expected
+ Received

{
- "items": ArrayContaining [ - ObjectContaining { - "attributes": ObjectContaining { - "gen_ai.input.messages": ObjectContaining {
+ "items": [ + { + "attributes": {
+ "gen_ai.input.messages": { + "type": "string", + "value": "[{\"role\":\"user\",\"content\":[{\"type\":\"text\",\"text\":\"Where is the first span?\"}]}]", + },
+ "gen_ai.operation.name": { + "type": "string", + "value": "generate_content", + },
+ "gen_ai.output.messages": { + "type": "string", + "value": "[{\"role\":\"assistant\",\"parts\":[{\"type\":\"text\",\"content\":\"First span here!\"}],\"finish_reason\":\"stop\"}]", + },
+ "gen_ai.request.model": { + "type": "string", + "value": "mock-model-id", + },
+ "gen_ai.response.id": { + "type": "string", + "value": "aitxt-zbB9r4OwePqffCiEHOWDRar1", + },
+ "gen_ai.response.model": { + "type": "string", + "value": "mock-model-id", + },
+ "gen_ai.system": { + "type": "string", + "value": "mock-provider", + },
+ "gen_ai.usage.input_tokens": { + "type": "integer", + "value": 10, + },
+ "gen_ai.usage.output_tokens": { + "type": "integer", + "value": 20, + },
+ "gen_ai.usage.total_tokens": { + "type": "integer", + "value": 30, + },
+ "sentry.op": { + "type": "string", + "value": "gen_ai.generate_content", + },
+ "sentry.origin": { + "type": "string", + "value": "auto.vercelai.otel", + },
+ "sentry.release": { + "type": "string", + "value": "1.0", + },
+ "sentry.sdk.name": { + "type": "string", + "value": "sentry.javascript.node", + },
+ "sentry.sdk.version": { + "type": "string", + "value": "10.51.0", + },
+ "sentry.sdk_meta.gen_ai.input.messages.original_length": { + "type": "integer", + "value": 1, + },
+ "sentry.segment.id": { + "type": "string", + "value": "6770a8647bf68f27", + },
+ "sentry.segment.name": { + "type": "string", + "value": "main", + },
+ "vercel.ai.model.provider": { + "type": "string", + "value": "mock-provider", + },
+ "vercel.ai.operationId": { + "type": "string", + "value": "ai.generateText.doGenerate", + },
+ "vercel.ai.pipeline.name": { + "type": "string", + "value": "generateText.doGenerate", + },
+ "vercel.ai.prompt.format": { + "type": "string", + "value": "prompt", + },
+ "vercel.ai.response.finishReason": { + "type": "string", + "value": "stop", + },
+ "vercel.ai.response.id": { + "type": "string", + "value": "aitxt-zbB9r4OwePqffCiEHOWDRar1", + },
+ "vercel.ai.response.model": { + "type": "string", + "value": "mock-model-id", + },
+ "vercel.ai.response.timestamp": { + "type": "string", + "value": "2026-04-30T11:13:51.896Z", + },
+ "vercel.ai.settings.maxRetries": { + "type": "integer", + "value": 2, + },
+ "vercel.ai.streaming": { + "type": "boolean", + "value": false, + },
+ },
+ "end_timestamp": 1777547631.8959696, + "is_segment": false, + "name": "generate_content mock-model-id", + "parent_span_id": "712d7f501868247a", + "span_id": "cc01bbb5b3a622e2", + "sta
suites/tracing/vercelai/span-streaming-v4/test.ts > Vercel AI integration (streaming) > esm/cjs > esm > creates ai related spans in streaming mode with sendDefaultPii: true: dev-packages/node-integration-tests/utils/assertions.ts#L100
AssertionError: expected { version: 2, …(1) } to match object { items: ArrayContaining{…} } (1 matching property omitted from actual)

- Expected
+ Received

{
- "items": ArrayContaining [ - ObjectContaining { - "attributes": ObjectContaining { - "gen_ai.input.messages": ObjectContaining {
+ "items": [ + { + "attributes": {
+ "gen_ai.input.messages": { + "type": "string", + "value": "[{\"role\":\"user\",\"content\":[{\"type\":\"text\",\"text\":\"Where is the first span?\"}]}]", + },
+ "gen_ai.operation.name": { + "type": "string", + "value": "generate_content", + },
+ "gen_ai.output.messages": { + "type": "string", + "value": "[{\"role\":\"assistant\",\"parts\":[{\"type\":\"text\",\"content\":\"First span here!\"}],\"finish_reason\":\"stop\"}]", + },
+ "gen_ai.request.model": { + "type": "string", + "value": "mock-model-id", + },
+ "gen_ai.response.id": { + "type": "string", + "value": "aitxt-zprq98mhWpEBwyboZGdkX8z1", + },
+ "gen_ai.response.model": { + "type": "string", + "value": "mock-model-id", + },
+ "gen_ai.system": { + "type": "string", + "value": "mock-provider", + },
+ "gen_ai.usage.input_tokens": { + "type": "integer", + "value": 10, + },
+ "gen_ai.usage.output_tokens": { + "type": "integer", + "value": 20, + },
+ "gen_ai.usage.total_tokens": { + "type": "integer", + "value": 30, + },
+ "sentry.op": { + "type": "string", + "value": "gen_ai.generate_content", + },
+ "sentry.origin": { + "type": "string", + "value": "auto.vercelai.otel", + },
+ "sentry.release": { + "type": "string", + "value": "1.0", + },
+ "sentry.sdk.name": { + "type": "string", + "value": "sentry.javascript.node", + },
+ "sentry.sdk.version": { + "type": "string", + "value": "10.51.0", + },
+ "sentry.sdk_meta.gen_ai.input.messages.original_length": { + "type": "integer", + "value": 1, + },
+ "sentry.segment.id": { + "type": "string", + "value": "be517e1191551b5c", + },
+ "sentry.segment.name": { + "type": "string", + "value": "main", + },
+ "vercel.ai.model.provider": { + "type": "string", + "value": "mock-provider", + },
+ "vercel.ai.operationId": { + "type": "string", + "value": "ai.generateText.doGenerate", + },
+ "vercel.ai.pipeline.name": { + "type": "string", + "value": "generateText.doGenerate", + },
+ "vercel.ai.prompt.format": { + "type": "string", + "value": "prompt", + },
+ "vercel.ai.response.finishReason": { + "type": "string", + "value": "stop", + },
+ "vercel.ai.response.id": { + "type": "string", + "value": "aitxt-zprq98mhWpEBwyboZGdkX8z1", + },
+ "vercel.ai.response.model": { + "type": "string", + "value": "mock-model-id", + },
+ "vercel.ai.response.timestamp": { + "type": "string", + "value": "2026-04-30T11:13:51.075Z", + },
+ "vercel.ai.settings.maxRetries": { + "type": "integer", + "value": 2, + },
+ "vercel.ai.streaming": { + "type": "boolean", + "value": false, + },
+ },
+ "end_timestamp": 1777547631.0754375, + "is_segment": false, + "name": "generate_content mock-model-id", + "parent_span_id": "3d4c6e4a9ffb50e2", + "span_id": "38dde1d4092e7af9", + "sta
Node (24) (TS 3.8) Integration Tests
Process completed with exit code 1.
suites/tracing/vercelai/span-streaming-v6/test.ts > Vercel AI integration (streaming, V6) > esm/cjs > cjs > normalizes error status in streaming mode: dev-packages/node-integration-tests/utils/assertions.ts#L100
AssertionError: expected { version: 2, …(1) } to match object { items: ArrayContaining{…} } (1 matching property omitted from actual)

- Expected
+ Received

{
- "items": ArrayContaining [ - ObjectContaining { - "attributes": ObjectContaining {
- "gen_ai.operation.name": ObjectContaining { - "value": "invoke_agent",
+ "items": [ + { + "attributes": {
+ "gen_ai.operation.name": { + "type": "string", + "value": "generate_content", + },
+ "gen_ai.request.model": { + "type": "string", + "value": "mock-model-id", + },
+ "gen_ai.response.id": { + "type": "string", + "value": "aitxt-ZDwAhuJnW2JdwhqzFNMW7h3c", },
- "gen_ai.request.model": ObjectContaining { + "gen_ai.response.model": { + "type": "string", "value": "mock-model-id", },
- "gen_ai.usage.input_tokens": ObjectContaining { + "gen_ai.system": { + "type": "string", + "value": "mock-provider", + },
+ "gen_ai.usage.input_tokens": { + "type": "integer", "value": 15, },
- "gen_ai.usage.output_tokens": ObjectContaining { + "gen_ai.usage.output_tokens": { + "type": "integer", "value": 25, },
- "gen_ai.usage.total_tokens": ObjectContaining { + "gen_ai.usage.total_tokens": { + "type": "integer", "value": 40, },
- "sentry.op": ObjectContaining { - "value": "gen_ai.invoke_agent", + "sentry.op": { + "type": "string", + "value": "gen_ai.generate_content", },
- "sentry.origin": ObjectContaining { + "sentry.origin": { + "type": "string", "value": "auto.vercelai.otel", + },
+ "sentry.release": { + "type": "string", + "value": "1.0", + },
+ "sentry.sdk.name": { + "type": "string", + "value": "sentry.javascript.node", + },
+ "sentry.sdk.version": { + "type": "string", + "value": "10.51.0", + },
+ "sentry.segment.id": { + "type": "string", + "value": "83a92196ea86383e", + },
+ "sentry.segment.name": { + "type": "string", + "value": "main", + },
+ "vercel.ai.model.provider": { + "type": "string", + "value": "mock-provider", + },
+ "vercel.ai.operationId": { + "type": "string", + "value": "ai.generateText.doGenerate", + },
+ "vercel.ai.pipeline.name": { + "type": "string", + "value": "generateText.doGenerate", },
+ "vercel.ai.request.headers.user-agent": { + "type": "string", + "value": "ai/6.0.170", },
- "name": "invoke_agent", - "status": "error",
+ "vercel.ai.response.finishReason": { + "type": "string", + "value": "tool-calls", },
- ObjectContaining { - "attributes": ObjectContaining { - "gen_ai.operation.name": ObjectContaining { - "value": "generate_content",
+ "vercel.ai.response.id": { + "type": "string", + "value": "aitxt-ZDwAhuJnW2JdwhqzFNMW7h3c", },
- "gen_ai.request.model": ObjectContaining { + "vercel.ai.response.model": { + "type": "string", "value": "mock-model-id", },
- "gen_ai.usage.input_tokens": ObjectContaining { - "value": 15, + "vercel.ai.response.timestamp": { + "type": "string", + "value": "2026-04-30T11:16:44.780Z", },
- "gen_ai.usage.output_tokens": ObjectContaining { - "value": 25, + "vercel.ai.settings.maxRetries": { + "type": "integer", + "value": 2, },
- "gen_ai.usage.total_tokens":
suites/tracing/vercelai/span-streaming-v6/test.ts > Vercel AI integration (streaming, V6) > esm/cjs > esm > normalizes error status in streaming mode: dev-packages/node-integration-tests/utils/assertions.ts#L100
AssertionError: expected { version: 2, …(1) } to match object { items: ArrayContaining{…} } (1 matching property omitted from actual)

- Expected
+ Received

{
- "items": ArrayContaining [ - ObjectContaining { - "attributes": ObjectContaining {
- "gen_ai.operation.name": ObjectContaining { - "value": "invoke_agent",
+ "items": [ + { + "attributes": {
+ "gen_ai.operation.name": { + "type": "string", + "value": "generate_content", + },
+ "gen_ai.request.model": { + "type": "string", + "value": "mock-model-id", + },
+ "gen_ai.response.id": { + "type": "string", + "value": "aitxt-ESntwtAh7QZz9pbv7qzGYsuF", },
- "gen_ai.request.model": ObjectContaining { + "gen_ai.response.model": { + "type": "string", "value": "mock-model-id", },
- "gen_ai.usage.input_tokens": ObjectContaining { + "gen_ai.system": { + "type": "string", + "value": "mock-provider", + },
+ "gen_ai.usage.input_tokens": { + "type": "integer", "value": 15, },
- "gen_ai.usage.output_tokens": ObjectContaining { + "gen_ai.usage.output_tokens": { + "type": "integer", "value": 25, },
- "gen_ai.usage.total_tokens": ObjectContaining { + "gen_ai.usage.total_tokens": { + "type": "integer", "value": 40, },
- "sentry.op": ObjectContaining { - "value": "gen_ai.invoke_agent", + "sentry.op": { + "type": "string", + "value": "gen_ai.generate_content", },
- "sentry.origin": ObjectContaining { + "sentry.origin": { + "type": "string", "value": "auto.vercelai.otel", + },
+ "sentry.release": { + "type": "string", + "value": "1.0", + },
+ "sentry.sdk.name": { + "type": "string", + "value": "sentry.javascript.node", + },
+ "sentry.sdk.version": { + "type": "string", + "value": "10.51.0", + },
+ "sentry.segment.id": { + "type": "string", + "value": "646601a4c8456bee", + },
+ "sentry.segment.name": { + "type": "string", + "value": "main", + },
+ "vercel.ai.model.provider": { + "type": "string", + "value": "mock-provider", + },
+ "vercel.ai.operationId": { + "type": "string", + "value": "ai.generateText.doGenerate", + },
+ "vercel.ai.pipeline.name": { + "type": "string", + "value": "generateText.doGenerate", },
+ "vercel.ai.request.headers.user-agent": { + "type": "string", + "value": "ai/6.0.170", },
- "name": "invoke_agent", - "status": "error",
+ "vercel.ai.response.finishReason": { + "type": "string", + "value": "tool-calls", },
- ObjectContaining { - "attributes": ObjectContaining { - "gen_ai.operation.name": ObjectContaining { - "value": "generate_content",
+ "vercel.ai.response.id": { + "type": "string", + "value": "aitxt-ESntwtAh7QZz9pbv7qzGYsuF", },
- "gen_ai.request.model": ObjectContaining { + "vercel.ai.response.model": { + "type": "string", "value": "mock-model-id", },
- "gen_ai.usage.input_tokens": ObjectContaining { - "value": 15, + "vercel.ai.response.timestamp": { + "type": "string", + "value": "2026-04-30T11:16:44.130Z", },
- "gen_ai.usage.output_tokens": ObjectContaining { - "value": 25, + "vercel.ai.settings.maxRetries": { + "type": "integer", + "value": 2, },
- "gen_ai.usage.total_tokens":
suites/tracing/vercelai/span-streaming-v6/test.ts > Vercel AI integration (streaming, V6) > esm/cjs > cjs > creates ai related spans in streaming mode with sendDefaultPii: true: dev-packages/node-integration-tests/utils/assertions.ts#L100
AssertionError: expected { version: 2, …(1) } to match object { items: ArrayContaining{…} } (1 matching property omitted from actual)

- Expected
+ Received

{
- "items": ArrayContaining [ - ObjectContaining { - "attributes": ObjectContaining { - "gen_ai.input.messages": ObjectContaining {
+ "items": [ + { + "attributes": {
+ "gen_ai.input.messages": { + "type": "string", + "value": "[{\"role\":\"user\",\"content\":[{\"type\":\"text\",\"text\":\"Where is the first span?\"}]}]", + },
+ "gen_ai.operation.name": { + "type": "string", + "value": "generate_content", + },
+ "gen_ai.output.messages": { + "type": "string", + "value": "[{\"role\":\"assistant\",\"parts\":[{\"type\":\"text\",\"content\":\"First span here!\"}],\"finish_reason\":\"stop\"}]", + },
+ "gen_ai.request.model": { + "type": "string", + "value": "mock-model-id", + },
+ "gen_ai.response.id": { + "type": "string", + "value": "aitxt-U5CKxbs5iJOKCDsOqtI76YNw", + },
+ "gen_ai.response.model": { + "type": "string", + "value": "mock-model-id", + },
+ "gen_ai.system": { + "type": "string", + "value": "mock-provider", + },
+ "gen_ai.usage.input_tokens": { + "type": "integer", + "value": 10, + },
+ "gen_ai.usage.output_tokens": { + "type": "integer", + "value": 20, + },
+ "gen_ai.usage.total_tokens": { + "type": "integer", + "value": 30, + },
+ "sentry.op": { + "type": "string", + "value": "gen_ai.generate_content", + },
+ "sentry.origin": { + "type": "string", + "value": "auto.vercelai.otel", + },
+ "sentry.release": { + "type": "string", + "value": "1.0", + },
+ "sentry.sdk.name": { + "type": "string", + "value": "sentry.javascript.node", + },
+ "sentry.sdk.version": { + "type": "string", + "value": "10.51.0", + },
+ "sentry.sdk_meta.gen_ai.input.messages.original_length": { + "type": "integer", + "value": 1, + },
+ "sentry.segment.id": { + "type": "string", + "value": "fb5bd740f30cf2e3", + },
+ "sentry.segment.name": { + "type": "string", + "value": "main", + },
+ "vercel.ai.model.provider": { + "type": "string", + "value": "mock-provider", + },
+ "vercel.ai.operationId": { + "type": "string", + "value": "ai.generateText.doGenerate", + },
+ "vercel.ai.pipeline.name": { + "type": "string", + "value": "generateText.doGenerate", + },
+ "vercel.ai.request.headers.user-agent": { + "type": "string", + "value": "ai/6.0.170", + },
+ "vercel.ai.response.finishReason": { + "type": "string", + "value": "stop", + },
+ "vercel.ai.response.id": { + "type": "string", + "value": "aitxt-U5CKxbs5iJOKCDsOqtI76YNw", + },
+ "vercel.ai.response.model": { + "type": "string", + "value": "mock-model-id", + },
+ "vercel.ai.response.timestamp": { + "type": "string", + "value": "2026-04-30T11:16:41.805Z", + },
+ "vercel.ai.settings.maxRetries": { + "type": "integer", + "value": 2, + },
+ "vercel.ai.streaming": { + "type": "boolean", + "value": false, + },
+ "vercel.ai.usage.inputTokenDetails.noCacheTokens": { + "type": "integer", + "value": 10, + },
+ "vercel.ai.usage.totalTokens": { + "type": "integer", +
suites/tracing/vercelai/span-streaming-v6/test.ts > Vercel AI integration (streaming, V6) > esm/cjs > esm > creates ai related spans in streaming mode with sendDefaultPii: true: dev-packages/node-integration-tests/utils/assertions.ts#L100
AssertionError: expected { version: 2, …(1) } to match object { items: ArrayContaining{…} } (1 matching property omitted from actual)

- Expected
+ Received

{
- "items": ArrayContaining [ - ObjectContaining { - "attributes": ObjectContaining { - "gen_ai.input.messages": ObjectContaining {
+ "items": [ + { + "attributes": {
+ "gen_ai.input.messages": { + "type": "string", + "value": "[{\"role\":\"user\",\"content\":[{\"type\":\"text\",\"text\":\"Where is the first span?\"}]}]", + },
+ "gen_ai.operation.name": { + "type": "string", + "value": "generate_content", + },
+ "gen_ai.output.messages": { + "type": "string", + "value": "[{\"role\":\"assistant\",\"parts\":[{\"type\":\"text\",\"content\":\"First span here!\"}],\"finish_reason\":\"stop\"}]", + },
+ "gen_ai.request.model": { + "type": "string", + "value": "mock-model-id", + },
+ "gen_ai.response.id": { + "type": "string", + "value": "aitxt-qpSHoKPRw5o9IV8Hx9VS4TGF", + },
+ "gen_ai.response.model": { + "type": "string", + "value": "mock-model-id", + },
+ "gen_ai.system": { + "type": "string", + "value": "mock-provider", + },
+ "gen_ai.usage.input_tokens": { + "type": "integer", + "value": 10, + },
+ "gen_ai.usage.output_tokens": { + "type": "integer", + "value": 20, + },
+ "gen_ai.usage.total_tokens": { + "type": "integer", + "value": 30, + },
+ "sentry.op": { + "type": "string", + "value": "gen_ai.generate_content", + },
+ "sentry.origin": { + "type": "string", + "value": "auto.vercelai.otel", + },
+ "sentry.release": { + "type": "string", + "value": "1.0", + },
+ "sentry.sdk.name": { + "type": "string", + "value": "sentry.javascript.node", + },
+ "sentry.sdk.version": { + "type": "string", + "value": "10.51.0", + },
+ "sentry.sdk_meta.gen_ai.input.messages.original_length": { + "type": "integer", + "value": 1, + },
+ "sentry.segment.id": { + "type": "string", + "value": "d3eba3078222e0f9", + },
+ "sentry.segment.name": { + "type": "string", + "value": "main", + },
+ "vercel.ai.model.provider": { + "type": "string", + "value": "mock-provider", + },
+ "vercel.ai.operationId": { + "type": "string", + "value": "ai.generateText.doGenerate", + },
+ "vercel.ai.pipeline.name": { + "type": "string", + "value": "generateText.doGenerate", + },
+ "vercel.ai.request.headers.user-agent": { + "type": "string", + "value": "ai/6.0.170", + },
+ "vercel.ai.response.finishReason": { + "type": "string", + "value": "stop", + },
+ "vercel.ai.response.id": { + "type": "string", + "value": "aitxt-qpSHoKPRw5o9IV8Hx9VS4TGF", + },
+ "vercel.ai.response.model": { + "type": "string", + "value": "mock-model-id", + },
+ "vercel.ai.response.timestamp": { + "type": "string", + "value": "2026-04-30T11:16:41.105Z", + },
+ "vercel.ai.settings.maxRetries": { + "type": "integer", + "value": 2, + },
+ "vercel.ai.streaming": { + "type": "boolean", + "value": false, + },
+ "vercel.ai.usage.inputTokenDetails.noCacheTokens": { + "type": "integer", + "value": 10, + },
+ "vercel.ai.usage.totalTokens": { + "type": "integer", +
suites/tracing/vercelai/span-streaming-v4/test.ts > Vercel AI integration (streaming) > esm/cjs > cjs > normalizes error status in streaming mode: dev-packages/node-integration-tests/suites/tracing/vercelai/span-streaming-v4/test.ts#L305
Error: Test timed out in 15000ms. If this is a long-running test, pass a timeout value as the last argument or configure it globally with "testTimeout". ❯ suites/tracing/vercelai/span-streaming-v4/test.ts:305:5 ❯ utils/runner.ts:312:7
suites/tracing/vercelai/span-streaming-v4/test.ts > Vercel AI integration (streaming) > esm/cjs > esm > normalizes error status in streaming mode: dev-packages/node-integration-tests/suites/tracing/vercelai/span-streaming-v4/test.ts#L305
Error: Test timed out in 15000ms. If this is a long-running test, pass a timeout value as the last argument or configure it globally with "testTimeout". ❯ suites/tracing/vercelai/span-streaming-v4/test.ts:305:5 ❯ utils/runner.ts:302:7
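The timeout hint in the messages above maps to two Vitest knobs: a global `testTimeout` in the Vitest config, or a per-test timeout passed as the last argument to `test`. A minimal sketch of both (the 30-second value is an illustrative assumption, not this repo's actual setting):

```typescript
// vitest.config.ts — global default timeout (30_000 ms is a hypothetical example value)
import { defineConfig } from 'vitest/config';

export default defineConfig({
  test: {
    testTimeout: 30_000, // applies to every test unless overridden per test
  },
});

// Per-test override, as the error message suggests:
// test('normalizes error status in streaming mode', async () => { /* ... */ }, 30_000);
```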
suites/tracing/vercelai/span-streaming-v4/test.ts > Vercel AI integration (streaming) > esm/cjs > cjs > creates ai related spans in streaming mode with sendDefaultPii: true: dev-packages/node-integration-tests/utils/assertions.ts#L100
AssertionError: expected { version: 2, …(1) } to match object { items: ArrayContaining{…} } (1 matching property omitted from actual) - Expected + Received { - "items": ArrayContaining [ - ObjectContaining { - "attributes": ObjectContaining { - "gen_ai.input.messages": ObjectContaining { + "items": [ + { + "attributes": { + "gen_ai.input.messages": { + "type": "string", + "value": "[{\"role\":\"user\",\"content\":[{\"type\":\"text\",\"text\":\"Where is the first span?\"}]}]", + }, + "gen_ai.operation.name": { + "type": "string", + "value": "generate_content", + }, + "gen_ai.output.messages": { + "type": "string", + "value": "[{\"role\":\"assistant\",\"parts\":[{\"type\":\"text\",\"content\":\"First span here!\"}],\"finish_reason\":\"stop\"}]", + }, + "gen_ai.request.model": { + "type": "string", + "value": "mock-model-id", + }, + "gen_ai.response.id": { + "type": "string", + "value": "aitxt-CZBxNQb6g4IYI0BqKzgFnHfD", + }, + "gen_ai.response.model": { + "type": "string", + "value": "mock-model-id", + }, + "gen_ai.system": { + "type": "string", + "value": "mock-provider", + }, + "gen_ai.usage.input_tokens": { + "type": "integer", + "value": 10, + }, + "gen_ai.usage.output_tokens": { + "type": "integer", + "value": 20, + }, + "gen_ai.usage.total_tokens": { + "type": "integer", + "value": 30, + }, + "sentry.op": { + "type": "string", + "value": "gen_ai.generate_content", + }, + "sentry.origin": { + "type": "string", + "value": "auto.vercelai.otel", + }, + "sentry.release": { + "type": "string", + "value": "1.0", + }, + "sentry.sdk.name": { + "type": "string", + "value": "sentry.javascript.node", + }, + "sentry.sdk.version": { + "type": "string", + "value": "10.51.0", + }, + "sentry.sdk_meta.gen_ai.input.messages.original_length": { + "type": "integer", + "value": 1, + }, + "sentry.segment.id": { + "type": "string", + "value": "d7f4b03df71d88b3", + }, + "sentry.segment.name": { + "type": "string", + "value": "main", + }, + "vercel.ai.model.provider": { + "type": 
"string", + "value": "mock-provider", + }, + "vercel.ai.operationId": { + "type": "string", + "value": "ai.generateText.doGenerate", + }, + "vercel.ai.pipeline.name": { + "type": "string", + "value": "generateText.doGenerate", + }, + "vercel.ai.prompt.format": { + "type": "string", + "value": "prompt", + }, + "vercel.ai.response.finishReason": { + "type": "string", + "value": "stop", + }, + "vercel.ai.response.id": { + "type": "string", + "value": "aitxt-CZBxNQb6g4IYI0BqKzgFnHfD", + }, + "vercel.ai.response.model": { + "type": "string", + "value": "mock-model-id", + }, + "vercel.ai.response.timestamp": { + "type": "string", + "value": "2026-04-30T11:16:30.770Z", + }, + "vercel.ai.settings.maxRetries": { + "type": "integer", + "value": 2, + }, + "vercel.ai.streaming": { + "type": "boolean", + "value": false, + }, + }, + "end_timestamp": 1777547790.7709615, + "is_segment": false, + "name": "generate_content mock-model-id", + "parent_span_id": "5376d067925fd421", + "span_id": "02f86f2f9ee71936", + "sta
suites/tracing/vercelai/span-streaming-v4/test.ts > Vercel AI integration (streaming) > esm/cjs > esm > creates ai related spans in streaming mode with sendDefaultPii: true: dev-packages/node-integration-tests/utils/assertions.ts#L100
AssertionError: expected { version: 2, …(1) } to match object { items: ArrayContaining{…} } (1 matching property omitted from actual) - Expected + Received { - "items": ArrayContaining [ - ObjectContaining { - "attributes": ObjectContaining { - "gen_ai.input.messages": ObjectContaining { + "items": [ + { + "attributes": { + "gen_ai.input.messages": { + "type": "string", + "value": "[{\"role\":\"user\",\"content\":[{\"type\":\"text\",\"text\":\"Where is the first span?\"}]}]", + }, + "gen_ai.operation.name": { + "type": "string", + "value": "generate_content", + }, + "gen_ai.output.messages": { + "type": "string", + "value": "[{\"role\":\"assistant\",\"parts\":[{\"type\":\"text\",\"content\":\"First span here!\"}],\"finish_reason\":\"stop\"}]", + }, + "gen_ai.request.model": { + "type": "string", + "value": "mock-model-id", + }, + "gen_ai.response.id": { + "type": "string", + "value": "aitxt-pl0LNabY0t3xBncxYqySSVfZ", + }, + "gen_ai.response.model": { + "type": "string", + "value": "mock-model-id", + }, + "gen_ai.system": { + "type": "string", + "value": "mock-provider", + }, + "gen_ai.usage.input_tokens": { + "type": "integer", + "value": 10, + }, + "gen_ai.usage.output_tokens": { + "type": "integer", + "value": 20, + }, + "gen_ai.usage.total_tokens": { + "type": "integer", + "value": 30, + }, + "sentry.op": { + "type": "string", + "value": "gen_ai.generate_content", + }, + "sentry.origin": { + "type": "string", + "value": "auto.vercelai.otel", + }, + "sentry.release": { + "type": "string", + "value": "1.0", + }, + "sentry.sdk.name": { + "type": "string", + "value": "sentry.javascript.node", + }, + "sentry.sdk.version": { + "type": "string", + "value": "10.51.0", + }, + "sentry.sdk_meta.gen_ai.input.messages.original_length": { + "type": "integer", + "value": 1, + }, + "sentry.segment.id": { + "type": "string", + "value": "1490c90e1c35e44e", + }, + "sentry.segment.name": { + "type": "string", + "value": "main", + }, + "vercel.ai.model.provider": { + "type": 
"string", + "value": "mock-provider", + }, + "vercel.ai.operationId": { + "type": "string", + "value": "ai.generateText.doGenerate", + }, + "vercel.ai.pipeline.name": { + "type": "string", + "value": "generateText.doGenerate", + }, + "vercel.ai.prompt.format": { + "type": "string", + "value": "prompt", + }, + "vercel.ai.response.finishReason": { + "type": "string", + "value": "stop", + }, + "vercel.ai.response.id": { + "type": "string", + "value": "aitxt-pl0LNabY0t3xBncxYqySSVfZ", + }, + "vercel.ai.response.model": { + "type": "string", + "value": "mock-model-id", + }, + "vercel.ai.response.timestamp": { + "type": "string", + "value": "2026-04-30T11:16:29.985Z", + }, + "vercel.ai.settings.maxRetries": { + "type": "integer", + "value": 2, + }, + "vercel.ai.streaming": { + "type": "boolean", + "value": false, + }, + }, + "end_timestamp": 1777547789.985261, + "is_segment": false, + "name": "generate_content mock-model-id", + "parent_span_id": "3b66feff2316ff57", + "span_id": "ccda832463b2f9a5", + "star
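The failures above come from partial structural matches (`toMatchObject` with `ArrayContaining`/`ObjectContaining` expectations against the span envelope). As a rough illustration of those semantics, here is a self-contained sketch; the `matchesSubset` helper is hypothetical, not Vitest's or the SDK's implementation, and deliberately simplified:

```typescript
// Simplified "objectContaining"-style partial match: every expected key must be
// present in the actual value and match deeply; extra keys on the actual side
// are ignored. Primitives are compared by strict equality.
function matchesSubset(actual: unknown, expected: unknown): boolean {
  if (typeof expected !== 'object' || expected === null) {
    return actual === expected;
  }
  if (typeof actual !== 'object' || actual === null) {
    return false;
  }
  return Object.entries(expected).every(([key, value]) =>
    matchesSubset((actual as Record<string, unknown>)[key], value),
  );
}

// Shape loosely modeled on the span attributes in the diff above.
const span = {
  attributes: {
    'gen_ai.request.model': 'mock-model-id',
    'sentry.origin': 'auto.vercelai.otel',
  },
};

console.log(matchesSubset(span, { attributes: { 'gen_ai.request.model': 'mock-model-id' } })); // true
console.log(matchesSubset(span, { attributes: { 'gen_ai.request.model': 'other-model' } })); // false
```

A mismatch on any single expected attribute (as in the failing assertions) fails the whole match, which is why the diff prints the entire received envelope.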
All required jobs passed or were skipped
Process completed with exit code 1.
eslint(no-unused-vars): packages/core/src/tracing/vercel-ai/index.ts#L7
Type 'SpanOrigin' is imported but never used.
Deno Unit Tests
Node.js 20 actions are deprecated. The following actions are running on Node.js 20 and may not work as expected: denoland/setup-deno@v2.0.3. Actions will be forced to run with Node.js 24 by default starting June 2nd, 2026. Node.js 20 will be removed from the runner on September 16th, 2026. Please check if updated versions of these actions are available that support Node.js 24. To opt into Node.js 24 now, set the FORCE_JAVASCRIPT_ACTIONS_TO_NODE24=true environment variable on the runner or in your workflow file. Once Node.js 24 becomes the default, you can temporarily opt out by setting ACTIONS_ALLOW_USE_UNSECURE_NODE_VERSION=true. For more information see: https://github.blog/changelog/2025-09-19-deprecation-of-node-20-on-github-actions-runners/
E2E deno-streamed Test
Node.js 20 deprecation warning for denoland/setup-deno@v2.0.3 (full notice above).
E2E aws-serverless-layer (Node 18) Test
Node.js 20 actions are deprecated. The following actions are running on Node.js 20 and may not work as expected: aws-actions/setup-sam@v2. Actions will be forced to run with Node.js 24 by default starting June 2nd, 2026. Node.js 20 will be removed from the runner on September 16th, 2026. Please check if updated versions of these actions are available that support Node.js 24. To opt into Node.js 24 now, set the FORCE_JAVASCRIPT_ACTIONS_TO_NODE24=true environment variable on the runner or in your workflow file. Once Node.js 24 becomes the default, you can temporarily opt out by setting ACTIONS_ALLOW_USE_UNSECURE_NODE_VERSION=true. For more information see: https://github.blog/changelog/2025-09-19-deprecation-of-node-20-on-github-actions-runners/
Node (24) Unit Tests
❌ Patch coverage check failed: 50.00% < target 80%
E2E aws-serverless-layer Test
Node.js 20 deprecation warning for aws-actions/setup-sam@v2 (full notice above).
E2E aws-serverless-layer (Node 22) Test
Node.js 20 deprecation warning for aws-actions/setup-sam@v2 (full notice above).
E2E deno Test
Node.js 20 deprecation warning for denoland/setup-deno@v2.0.3 (full notice above).
E2E aws-serverless Test
Node.js 20 deprecation warning for aws-actions/setup-sam@v2 (full notice above).
Node (18) Unit Tests
❌ Patch coverage check failed: 50.00% < target 80%
Browser Unit Tests
Patch coverage defaulted to 100% because no changed files matched coverage data. Unmatched diff files:
dev-packages/node-integration-tests/suites/tracing/vercelai/scenario-error-in-tool.mjs
dev-packages/node-integration-tests/suites/tracing/vercelai/scenario.mjs
dev-packages/node-integration-tests/suites/tracing/vercelai/span-streaming-v4/instrument-with-pii.mjs
dev-packages/node-integration-tests/suites/tracing/vercelai/span-streaming-v4/instrument-with-truncation.mjs
dev-packages/node-integration-tests/suites/tracing/vercelai/span-streaming-v4/instrument.mjs
dev-packages/node-integration-tests/suites/tracing/vercelai/span-streaming-v4/scenario-error-in-tool.mjs
dev-packages/node-integration-tests/suites/tracing/vercelai/span-streaming-v4/scenario-truncation.mjs
dev-packages/node-integration-tests/suites/tracing/vercelai/span-streaming-v4/scenario.mjs
dev-packages/node-integration-tests/suites/tracing/vercelai/span-streaming-v4/test.ts
dev-packages/node-integration-tests/suites/tracing/vercelai/span-streaming-v6/instrument-with-pii.mjs
dev-packages/node-integration-tests/suites/tracing/vercelai/span-streaming-v6/instrument.mjs
dev-packages/node-integration-tests/suites/tracing/vercelai/span-streaming-v6/scenario-error-in-tool.mjs
dev-packages/node-integration-tests/suites/tracing/vercelai/span-streaming-v6/scenario.mjs
dev-packages/node-integration-tests/suites/tracing/vercelai/span-streaming-v6/test.ts
dev-packages/node-integration-tests/suites/tracing/vercelai/test.ts
packages/core/src/index.ts
packages/core/src/tracing/vercel-ai/index.ts
Sample coverage paths: ./mute.js, /Users/isaacs/dev/js/events-to-array/etoa.js, /home/runner/work/sentry-javascript/sentry-javascript/packages/browser/src/client.ts
This usually indicates a path format mismatch between your coverage tool and the repository.
Node (22) Unit Tests
❌ Patch coverage check failed: 50.00% < target 80%
Node (20) Unit Tests
❌ Patch coverage check failed: 50.00% < target 80%
Size Check
Node.js 20 actions are deprecated. The following actions are running on Node.js 20 and may not work as expected: ./dev-packages/size-limit-gh-action. Actions will be forced to run with Node.js 24 by default starting June 2nd, 2026. Node.js 20 will be removed from the runner on September 16th, 2026. Please check if updated versions of these actions are available that support Node.js 24. To opt into Node.js 24 now, set the FORCE_JAVASCRIPT_ACTIONS_TO_NODE24=true environment variable on the runner or in your workflow file. Once Node.js 24 becomes the default, you can temporarily opt out by setting ACTIONS_ALLOW_USE_UNSECURE_NODE_VERSION=true. For more information see: https://github.blog/changelog/2025-09-19-deprecation-of-node-20-on-github-actions-runners/
🎭 Playwright Run Summary
1 passed (2.0s)
1 passed (3.2s)
3 passed (2.5s)
3 passed (2.5s)
4 passed (7.2s)
2 passed (4.4s)
4 passed (2.6s)
14 passed (4.7s)
2 passed (2.2s)
4 passed (2.2s)
3 passed (2.6s)
22 passed (10.1s)
3 skipped 17 passed (2.3s)
4 passed (6.9s)
1 passed (4.9s)
43 passed (4.7s)
3 passed (4.7s)
3 passed (3.4s)
3 passed (3.3s)
9 passed (9.7s)
3 passed (6.2s)
11 passed (6.2s)
11 passed (6.5s)
1 passed (3.3s)
1 passed (3.3s)
3 passed (5.0s)
11 passed (2.7s)
3 skipped 17 passed (2.7s)
1 passed (6.1s)
9 passed (10.7s)
2 passed (4.1s)
3 passed (3.4s)
23 passed (12.5s)
2 passed (3.3s)
1 passed (3.4s)
1 passed (3.4s)
5 passed (7.9s)
7 passed (5.1s)
11 passed (5.3s)
11 passed (5.6s)
7 passed (8.8s)
4 passed (2.7s)
3 passed (15.7s)
3 skipped 21 passed (17.3s)
12 passed (6.5s)
9 passed (12.1s)
11 passed (14.5s)
5 passed (17.4s)
8 passed (3.5s)
24 passed (16.0s)
3 passed (18.6s)
7 passed (11.7s)
4 passed (4.4s)
1 passed (5.8s)
12 passed (5.4s)
3 passed (13.6s)
4 passed (3.9s)
5 passed (7.9s)
12 passed (9.1s)
11 passed (15.9s)
5 passed (7.8s)
18 passed (20.5s)
43 passed (4.6s)
5 passed (8.8s)
13 passed (25.2s)
4 passed (7.3s)
43 passed (3.4s)
12 passed (6.6s)
7 passed (8.6s)
9 passed (11.7s)
29 passed (26.6s)
5 passed (6.9s)
1 passed (4.3s)
16 passed (23.3s)
10 passed (27.6s)
10 passed (5.1s)
10 passed (5.5s)
5 passed (10.0s)
9 passed (10.3s)
4 passed (8.3s)
7 passed (6.1s)
3 passed (5.8s)
10 passed (27.5s)
8 passed (3.2s)
8 passed (3.3s)
7 passed (31.5s)
1 skipped 12 passed (4.7s)
3 passed (2.8s)
10 passed (9.9s)
12 passed (27.1s)
12 skipped 1 passed (2.9s)
15 passed (8.9s)
12 skipped 1 passed (3.3s)
5 passed (8.8s)
3 passed (17.5s)
3 passed (9.4s)
13 passed (28.5s)
11 passed (18.1s)
12 passed (27.8s)
8 passed (6.6s)
12 skipped 1 passed (4.8s)
15 passed (19.8s)
4 passed (8.5s)
7 passed (27.4s)
9 passed (13.5s)
2 skipped 20 passed (25.8s)
12 passed (28.8s)
13 passed (28.4s)
12 passed (29.0s)
8 passed (8.8s)
3 passed (16.5s)
3 passed (10.0s)
12 passed (27.2s)
10 passed (36.8s)
12 passed (27.5s)
4 passed (9.8s)
5 passed (8.0s)
9 passed (12.1s)
2 skipped 20 passed (26.1s)
40 passed (21.6s)
18 passed (19.4s)
16 passed (27.5s)
1 passed (5.1s)
9 passed (8.8s)
9 passed (11.4s)
7 passed (27.9s)
13 passed (12.1s)
8 passed (29.6s)
3 passed (5.0s)
14 passed (26.2s)
10 passed (6.4s)
10 passed (5.8s)
10 passed (36.6s)
14 skipped 12 passed (21.4s)
4 passed (17.3s)
4 passed (3.6s)
3 passed (6.9s)
14 skipped 12 passed (20.8s)
12 passed (20.7s)
8 passed (28.6s)
12 skipped 1 passed (4.4s)
40 passed (19.6s)
4 passed (10.9s)
8 passed (10.2s)
17 passed (55.9s)
2 passed (26.5s)
4 passed (9.2s)
10 passed (27.2s)
14 skipped 12 passed (22.2s)
1 flaky ([chromium] › tests/server/getServerSideProps.test.ts:4:5 › Should report an error event for errors thrown in getServerSideProps), 3 skipped, 26 passed (39.0s)
30 passed (12.2s)
4 skipped 10 passed (20.9s)
4 skipped 10 passed (7.2s)
2 flaky ([chromium] › tests/trpc-error.test.ts:4:1 › should capture error with trpc context; [chromium] › tests/trpc-mutation.test.ts:4:1 › should create transaction with trpc input for mutation), 1 passed (38.4s)
3 passed (3.2s)
4 passed (34.9s)
4 passed (7.4s)
1 skipped 14 passed (11.6s)
1 flaky ([chromium] › tests/orpc-error.test.ts:4:1 › should capture server-side orpc error), 2 passed (44.4s)
3 passed (7.5s)
4 skipped 10 passed (28.9s)
2 skipped 12 passed (6.4s)
3 skipped 27 passed (41.5s)
30 passed (14.2s)
3 passed (35.9s)
3 passed (9.4s)
4 skipped 10 passed (33.3s)
2 skipped 12 passed (6.6s)
13 passed (47.8s)
13 passed (9.7s)
13 passed (48.8s)
13 passed (9.1s)
5 passed (48.1s)
5 passed (11.9s)
13 passed (51.7s)
13 passed (9.2s)
1 skipped 29 passed (19.0s)
7 skipped 23 passed (10.3s)
8 skipped 22 passed (9.6s)
4 passed (20.8s)
2 skipped 48 passed (1.0m)
7 skipped 23 passed (9.9s)
4 passed (18.3s)
51 passed (1.0m)
4 passed (18.5s)
51 passed (1.0m)
5 skipped 25 passed (16.7s)
5 skipped 25 passed (12.0s)
15 passed (25.9s)
4 skipped 26 passed (19.2s)
2 skipped 29 passed (1.2m)
2 skipped 29 passed (36.9s)
482 skipped 191 passed (36.6s)
43 passed (2.2m)
5 skipped 36 passed (1.8m)
2 skipped 39 passed (1.0m)
5 skipped 36 passed (1.8m)
2 skipped 39 passed (1.1m)
5 skipped 36 passed (1.8m)
2 skipped 39 passed (1.0m)
348 skipped 325 passed (1.5m)
478 skipped 195 passed (38.2s)
2 skipped 29 passed (2.0m)
2 skipped 29 passed (37.2s)
483 skipped 190 passed (38.1s)
5 skipped 29 passed (2.0m)
6 skipped 28 passed (1.0m)
2 skipped 29 passed (2.0m)
2 skipped 29 passed (37.3s)
346 skipped 327 passed (1.5m)
1 skipped 168 passed (2.5m)
2 skipped 32 passed (2.5m)
2 skipped 32 passed (1.1m)
7 skipped 34 passed (2.9m)
2 skipped 39 passed (1.0m)
4 passed (25.2s)
7 skipped 34 passed (2.8m)
2 skipped 39 passed (59.8s)
7 skipped 34 passed (3.1m)
2 skipped 39 passed (1.0m)
2 skipped 32 passed (3.0m)
2 skipped 32 passed (1.1m)
3 skipped 165 passed (3.9m)
3 skipped 165 passed (2.5m)
190 skipped 483 passed (3.9m)
202 skipped 471 passed (3.7m)
59 skipped 614 passed (4.8m)
5 skipped 163 passed (3.6m)
56 skipped 617 passed (4.7m)
59 skipped 614 passed (4.8m)
195 skipped 478 passed (3.9m)
54 skipped 619 passed (4.8m)
54 skipped 619 passed (4.9m)

Artifacts

Produced during runtime
Name Size Digest
build-bundle-output Expired
20.4 MB
sha256:e45bcba834df4423a78a1a4c7612ad6c79e88c75d3ca7e22786bc2a1a0f4875c
build-layer-output Expired
1.73 MB
sha256:bbd4c25a7b7306d230d11804935d562a285c9cb3f2469133ab0bb9d9080c009a
build-output Expired
10.5 MB
sha256:38b20cdcea7973d554c34cce62633b64ee189d9806c5624c781f85e64dc35ed0
build-tarball-output Expired
5 MB
sha256:747c22d5917178e2fc8b8efb1ccefd8558fef3e7274c0fa7e63756da18455fc5
codecov-coverage-results-nh-span-streaming-vercelai-migration-job_browser_unit_tests
88.5 KB
sha256:ce13caaa87fdf9ab303d9995f7568483b6e6c8abd9fa25f9bba0d20d4e647a50
codecov-coverage-results-nh-span-streaming-vercelai-migration-job_node_unit_tests-18
256 KB
sha256:5c403825eb21022ce31d81a0d20f4ea07681b2d0649a8529e7374344b99ac503
codecov-coverage-results-nh-span-streaming-vercelai-migration-job_node_unit_tests-20
263 KB
sha256:bae051af73e9310768ca2ac32ed9d4b2e20ed1bf46bd3bbaa4a69a4dfec78a47
codecov-coverage-results-nh-span-streaming-vercelai-migration-job_node_unit_tests-22
263 KB
sha256:57ee9ddfb679e2efaa28ae8ca34efd9941d77003909f2715c49b11c55c459fc5
codecov-coverage-results-nh-span-streaming-vercelai-migration-job_node_unit_tests-24
263 KB
sha256:b0fe0395e512abc41e0514684527a3b498f8e0f0e66650cfcef3d9b0d86c5b91
codecov-test-results-nh-span-streaming-vercelai-migration-job_browser_loader_tests-browser-loader-loader_base
243 Bytes
sha256:88e621686eb238ee037054fcdb8c23d3994eb4e313297577d17bacf004442b80
codecov-test-results-nh-span-streaming-vercelai-migration-job_browser_loader_tests-browser-loader-loader_debug
243 Bytes
sha256:f20365c98cd9c9b0655f3e2405366ad96de6d8abbcfc0e12a2ef2b9d4e1dbf3d
codecov-test-results-nh-span-streaming-vercelai-migration-job_browser_loader_tests-browser-loader-loader_eager
243 Bytes
sha256:107d42917145bcb72ac021d145543b9970ce7a37613764b2e7fc738b5fded015
codecov-test-results-nh-span-streaming-vercelai-migration-job_browser_loader_tests-browser-loader-loader_replay
239 Bytes
sha256:8f4ce9230cf19af59cd8a3a027e26b31416fb6a1f5a4b934f39a58bc48ff31c0
| Artifact | Size | SHA-256 |
| --- | --- | --- |
| codecov-test-results-nh-span-streaming-vercelai-migration-job_browser_loader_tests-browser-loader-loader_replay_buffer | 239 Bytes | sha256:a1477ef12e6862ac3f1d3eee5b5a27887424284050522cad91277fad8ca2ebe3 |
| codecov-test-results-nh-span-streaming-vercelai-migration-job_browser_loader_tests-browser-loader-loader_tracing | 243 Bytes | sha256:ced7a7bafdf8305dc92b8897669eccd0451a3dac5066351cebfe00f36fcaa512 |
| codecov-test-results-nh-span-streaming-vercelai-migration-job_browser_loader_tests-browser-loader-loader_tracing_replay | 238 Bytes | sha256:59ca0d6367818c927099efb1c388c372627d04dc3ca664cc1a168aeb165022eb |
| codecov-test-results-nh-span-streaming-vercelai-migration-job_browser_playwright_tests-browser-playwright-bundle-chromium | 248 Bytes | sha256:ff35d8adda9eb23432b3284ae9f466d776bb9271cbc50b9c2ab71489f401bc13 |
| codecov-test-results-nh-span-streaming-vercelai-migration-job_browser_playwright_tests-browser-playwright-bundle_logs_metrics-chromium | 248 Bytes | sha256:abd3b48ebe9a3cf659c37ce6bc0e792f1a2f988e565262af3201e1ed1d807e02 |
| codecov-test-results-nh-span-streaming-vercelai-migration-job_browser_playwright_tests-browser-playwright-bundle_min-chromium | 245 Bytes | sha256:a1abe9135aafa8e35f1fe5f7f768c9573ecbe1663872067f26e4ef4fb3dd651d |
| codecov-test-results-nh-span-streaming-vercelai-migration-job_browser_playwright_tests-browser-playwright-bundle_replay-chromium | 248 Bytes | sha256:eed1f5ce52b5e068bfe1b4c500d7c4cb9d70f255cc51f0ccc293b617b24866b5 |
| codecov-test-results-nh-span-streaming-vercelai-migration-job_browser_playwright_tests-browser-playwright-bundle_replay_logs_metrics-chromium | 248 Bytes | sha256:bbfd8bd5fc68e4beb1ecfefca4a217aff08b6b0c39ebd738e0ca6f8021768222 |
| codecov-test-results-nh-span-streaming-vercelai-migration-job_browser_playwright_tests-browser-playwright-bundle_tracing-chromium | 247 Bytes | sha256:766ce3ed6d8497232c457c20c981faa67f48b069af00ef0ba8192723b29c7b40 |
| codecov-test-results-nh-span-streaming-vercelai-migration-job_browser_playwright_tests-browser-playwright-bundle_tracing_logs_metrics-chromium | 246 Bytes | sha256:0b2d860b179bd049af691a32cafba1062fd482579aeecbe4c23103506714994b |
| codecov-test-results-nh-span-streaming-vercelai-migration-job_browser_playwright_tests-browser-playwright-bundle_tracing_replay-chromium | 247 Bytes | sha256:967f81d8ce0ea8acc0907caa435cdb523a75f36b516779df4d3c306cd7766bee |
| codecov-test-results-nh-span-streaming-vercelai-migration-job_browser_playwright_tests-browser-playwright-bundle_tracing_replay_feedback-chromium | 247 Bytes | sha256:1ea8c362ec645bc442525652698cf8bc62347e2ef3fbebcf80054518079b6f78 |
| codecov-test-results-nh-span-streaming-vercelai-migration-job_browser_playwright_tests-browser-playwright-bundle_tracing_replay_feedback_logs_metrics-chromium | 245 Bytes | sha256:9a6c781adffd305ce25f1b9414f08a088b6297783a948aabf1034a959c55fe2a |
| codecov-test-results-nh-span-streaming-vercelai-migration-job_browser_playwright_tests-browser-playwright-bundle_tracing_replay_feedback_logs_metrics_min-chromium | 246 Bytes | sha256:921ad0742e51c287f1adba33307bb0c7e13269f507ff0b03538d47886328c241 |
| codecov-test-results-nh-span-streaming-vercelai-migration-job_browser_playwright_tests-browser-playwright-bundle_tracing_replay_feedback_logs_metrics_min-firefox | 236 Bytes | sha256:d89dc68da2a6c3164a4c3e205308405538bac73f6d62bd53dd59ecfb4ccc7c2c |
| codecov-test-results-nh-span-streaming-vercelai-migration-job_browser_playwright_tests-browser-playwright-bundle_tracing_replay_feedback_logs_metrics_min-webkit | 245 Bytes | sha256:d935d90e67b863f0b5b4c3f23ba2237d39a50354fa23c3432c63de9ce8934132 |
| codecov-test-results-nh-span-streaming-vercelai-migration-job_browser_playwright_tests-browser-playwright-bundle_tracing_replay_logs_metrics-chromium | 247 Bytes | sha256:78aee84a28f5af2a99d6a7b4c69e393bcc96cc626bf7289b94490b7f91c36462 |
| codecov-test-results-nh-span-streaming-vercelai-migration-job_browser_playwright_tests-browser-playwright-esm-chromium-1 | 244 Bytes | sha256:1e96905ca4707fe4d422ceebeddb8fd58fd5f6c20ec785214ce8c508b4d58bd0 |
| codecov-test-results-nh-span-streaming-vercelai-migration-job_browser_playwright_tests-browser-playwright-esm-chromium-2 | 246 Bytes | sha256:4f329211f205dbf13eeb2a16fbd17ee51da6d7f5bd252ad0c56c44c24348da7e |
| codecov-test-results-nh-span-streaming-vercelai-migration-job_browser_playwright_tests-browser-playwright-esm-chromium-3 | 241 Bytes | sha256:75a96d33c91f110ec75f83a595b4d9fd1414c08604b55fc0affbe85c1370f89b |
| codecov-test-results-nh-span-streaming-vercelai-migration-job_browser_playwright_tests-browser-playwright-esm-chromium-4 | 241 Bytes | sha256:dd1ae35807f9b6e5b0e005326804c5b7fab71ee3a59d8581e15301bea1b20381 |
| codecov-test-results-nh-span-streaming-vercelai-migration-job_browser_unit_tests | 240 Bytes | sha256:d0c9f8dda5683d57e7f5b7bb369f10bda608e913e9a9245fd559fdb40ff5c354 |
| codecov-test-results-nh-span-streaming-vercelai-migration-job_node_unit_tests-18 | 254 Bytes | sha256:17a76a23c9f85494bd427a7c177838f320fdbaa28ec83032a8ede2a3706e8016 |
| codecov-test-results-nh-span-streaming-vercelai-migration-job_node_unit_tests-20 | 252 Bytes | sha256:f51cfd202c38f1792a6f4a1b36b6b6f5e96fd8c9cec68ddfdc11e7ad0ef5527b |
| codecov-test-results-nh-span-streaming-vercelai-migration-job_node_unit_tests-22 | 253 Bytes | sha256:bcc08633cde90421b59d52336d2ab2a00aa99e1e09ac1b179cc34cf8534259c4 |
| codecov-test-results-nh-span-streaming-vercelai-migration-job_node_unit_tests-24 | 254 Bytes | sha256:11ca1981cc375bd7752e93fe8107fb439c46d08e44cfc917a935a54d7e8727f8 |