
1.16.x regression: crashes/panics when YAML-stringify a large object #12885

Closed
nktpro opened this issue Nov 24, 2021 · 6 comments
Labels
bug Something isn't working correctly

Comments

@nktpro

nktpro commented Nov 24, 2021

Since 1.16.0, a deno process will crash/panic when serializing a relatively large object to YAML (via std@x.x.x/encoding/yaml.ts)

Here's a reproducer:

import { stringify as stringifyYaml } from "https://deno.land/std@0.115.1/encoding/yaml.ts";

const payload = await fetch(
  "https://gist.githubusercontent.com/nktpro/a0a626cd011c3217a0d55176d94543cf/raw/c80f9762f55bf113bd3e6c2fb62383fc8479c5ef/payload.json",
  {
    method: "GET",
    headers: {
      "Content-Type": "application/json",
    },
  },
).then((r) => r.json());

console.log("Payload", payload);

// Will crash in deno 1.16.x
console.log(stringifyYaml(payload));

Running this in deno 1.16.x crashes with a `Trace/breakpoint trap (core dumped)` output. The same snippet works fine in 1.15.x and below.

@bnoordhuis bnoordhuis added the bug Something isn't working correctly label Nov 28, 2021
@bnoordhuis
Contributor

Thanks for the report, I'm able to reproduce.

I'm confident it's a V8 issue because it goes away when disabling the optimizing compiler with --v8-flags=--noopt.
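For anyone wanting to confirm this locally: V8 flags are passed through Deno's `--v8-flags` option. Assuming the reproducer above is saved as `repro.ts` (a hypothetical file name; `--allow-net` is needed for the fetch):

```shell
# Crashes on Deno 1.16.x:
deno run --allow-net repro.ts

# Completes when the optimizing compiler is disabled:
deno run --allow-net --v8-flags=--noopt repro.ts
```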

The trap happens because the JIT-generated machine code hits an int3:

Thread 1 "deno" received signal SIGTRAP, Trace/breakpoint trap.
0x00002b4e00062a0c in ?? ()
(gdb) disassemble $rip-64,$rip+16
Dump of assembler code from 0x2b4e000629cc to 0x2b4e00062a1c:
   0x00002b4e000629cc:  push   %rbp
   0x00002b4e000629cd:  add    %al,(%rax)
   0x00002b4e000629cf:  call   *%r10
   0x00002b4e000629d2:  lea    -0x1(%rax),%r8
   0x00002b4e000629d6:  jmp    0x2b4e000628bc
   0x00002b4e000629db:  mov    0x178(%r13),%r8
   0x00002b4e000629e2:  mov    %r8,0x42e8(%r13)
   0x00002b4e000629e9:  mov    0x42e8(%r13),%r9
   0x00002b4e000629f0:  mov    %r8,0x42e8(%r13)
   0x00002b4e000629f7:  mov    -0x28(%rbp),%r8
   0x00002b4e000629fb:  mov    $0x82e0371,%r11d
   0x00002b4e00062a01:  cmp    %r11d,-0x1(%r8)
   0x00002b4e00062a05:  jne    0x2b4e00062a82
   0x00002b4e00062a0b:  int3   
=> 0x00002b4e00062a0c:  movabs $0x2b4e080cbe85,%r9
   0x00002b4e00062a16:  push   %r9
   0x00002b4e00062a18:  mov    %r8,-0x28(%rbp)
End of assembler dump.

The code falls through to the int3 because:

  1. V8 reads the JS object's map word (its hidden shape) at address %r8 - 1 (-1 because it's a tagged pointer)

  2. the map word should be != 0x82e0371 (caged value, not a full pointer, because of pointer compression)

  3. but it's actually equal

I'm not sure what (3) signifies (it's hard to trace back), but it's probably some kind of type confusion inside V8.
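For readers unfamiliar with V8's object layout, the arithmetic in (1) and (2) can be sketched in TypeScript. This is a toy model with made-up constants, not V8's actual implementation: heap pointers carry a low tag bit (so real loads go through `addr - 1`, the `-0x1(%r8)` operand above), and with pointer compression the map word stored in an object header is a 32-bit offset into a cage rather than a full 64-bit pointer.

```typescript
// Toy model of V8's tagged pointers and pointer compression.
// All constants here are illustrative; real V8 differs in detail.

const kHeapObjectTag = 1; // low bit set marks a heap object pointer (vs. a Smi)

// Tagging adds 1 to the real address, so field loads subtract 1 again.
const tag = (addr: number): number => addr + kHeapObjectTag;
const untag = (tagged: number): number => tagged - kHeapObjectTag;

// With pointer compression, on-heap fields such as the map word hold a
// 32-bit offset from a per-isolate "cage" base instead of a full pointer.
const cageBase = 0x2b4e00000000; // hypothetical cage base
const compress = (full: number): number => full - cageBase;
const decompress = (offset: number): number => cageBase + offset;

// The failed check at the crash site was, in effect: load the map word at
// untag(objectPtr) and compare it against an expected compressed, tagged
// map value such as 0x82e0371.
const objectAddr = cageBase + 0x1000;
const objectPtr = tag(objectAddr);
const mapAddr = cageBase + 0x82e0370;      // hypothetical map location
const mapWord = compress(tag(mapAddr));    // what the object header stores

console.log(untag(objectPtr) === objectAddr); // true
console.log(mapWord.toString(16));            // "82e0371": a caged value
```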

I can actually get the script to complete by running `set $rip = <addr>; continue` in gdb a few times, where `<addr>` is the target address of the `jne` instruction preceding the `int3`. Not a good idea, obviously, but interesting.

@bnoordhuis
Contributor

I've filed an upstream bug report: https://bugs.chromium.org/p/v8/issues/detail?id=12444

jersou added a commit to jersou/studio-pack-generator that referenced this issue Dec 8, 2021
jjallaire added a commit to quarto-dev/quarto-cli that referenced this issue Dec 11, 2021
After updating to Deno 1.16.4 we observed a core dump with "Trace/breakpoint trap" after rendering ~10 documents in quarto-web. A similar bug was reported for parsing large amounts of YAML here (denoland/deno#12885) and the problem was diagnosed as very likely a V8 optimizing compiler bug (and reported upstream here: https://bugs.chromium.org/p/v8/issues/detail?id=12444).

As with the other reported issue, disabling the optimizing compiler with `--v8-flags=--noopt` prevented the crash. I measured a ~20% slowdown for a render of a simple document. We'll take this slowdown for now so that we can sync to the latest version of Deno, and assume the bug will be resolved upstream soon.
@jjallaire

Just wanted to note that we see this crash not just with a large YAML document but also with a process that reads a large number of smaller YAML documents. It looks like the bug has been triaged and assigned (at Priority 2) upstream, so hopefully this will be resolved reasonably soon.

watiko added a commit to watiko/zsh-history-utils-deno that referenced this issue Dec 18, 2021
@bnoordhuis
Contributor

FWIW, still happens with V8 9.8.177.2.

@lucacasonato
Member

Fixed by https://chromium-review.googlesource.com/c/v8/v8/+/3386595. Fix should be available in Deno 1.18.

@lino-levan
Contributor

@lucacasonato I believe this issue should be closed(?)
