jest 25 performance

Created on 24 Jan 2020  ·  119 Comments  ·  Source: facebook/jest

💥 Regression Report

We upgraded Jest from 24 to 25 and saw our unit tests go from taking 5 min 23 sec in Jenkins to over 11 minutes. Only a few snapshot tests broke in the upgrade and we `-u`'d them, but this is a severe regression IMO. Please help me understand how we can fix this. We clear the cache in CI to ensure we always run the latest.

A clear and concise description of what the regression is.
Run time went from 5:23 to 11:00.

Last working version

24.8.0
Worked up to version:
24.8.0
Stopped working in version:
25.1.0

To Reproduce

sorry can't share our codebase
Steps to reproduce the behavior:

Expected behavior

A clear and concise description of what you expected to happen.

Link to repl or repo (highly encouraged)

Please provide either a repl.it demo or a minimal repository on GitHub.

Issues without a reproduction link are likely to stall.

Run npx envinfo --preset jest

Paste the results here:

  System:
    OS: macOS Mojave 10.14.6
    CPU: (12) x64 Intel(R) Core(TM) i9-8950HK CPU @ 2.90GHz
  Binaries:
    Node: 10.16.0 - ~/.nvm/versions/node/v10.16.0/bin/node
    Yarn: 1.19.0 - ~/.nvm/versions/node/v10.16.0/bin/yarn
    npm: 6.13.6 - ~/.nvm/versions/node/v10.16.0/bin/npm
Regression Needs Repro

All 119 comments

Sorry about the regression but

sorry can't share our codebase

means we can do absolutely nothing. This is the first I've heard of performance regressing; everywhere else I've heard of a 10-40% _improvement_ in performance going from 24 to 25. You need to provide some sort of reproduction, otherwise we'll have to close this issue as it's not actionable as it stands.

If you want to see this addressed, you'll need to spend some time putting together a reproduction case, or hope somebody else does so.

OK, I will see if I can maybe pull our 10 slowest tests, then try to run them in 24 vs 25. In the meantime, what do you recommend with regards to clearing the cache before running tests in CI? Do it? Don't do it?

Your configuration, especially transforms and setup files, is probably relevant as well.

what do you recommend with regards to clearing the cache before running tests in CI

I personally think it's a good idea, just to be sure there's nothing stale lying around giving false negatives or positives. Does it make a huge difference to the runtime of your tests to _not_ clear the cache?
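
For anyone experimenting with this, the relevant knobs are the --no-cache / --clearCache CLI flags and the cacheDirectory config option; a minimal sketch (the path is just an example):

// jest.config.js - sketch only; cacheDirectory is a real Jest option, the path is illustrative
module.exports = {
  // keep the transform cache in a workspace-local folder so CI can wipe or reuse it explicitly
  cacheDirectory: '<rootDir>/.jest-cache',
};

CI can then either delete that folder, run jest --clearCache, or pass --no-cache for a fully cold run.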

It appears to be quite a bit slower when run after clearing the cache. Thanks for the tips, I'll look into it and see if I can attempt a repro.

FWIW, I've also noticed that v25 is either slightly slower or right on par with v24. Have not seen anywhere near 10-40% improvement.

I saw a significant speedup over jest 24 as noted here: https://github.com/facebook/jest/issues/7811#issuecomment-577057189

The above was tested on OSX.

However, the exact same setup runs much, much slower on our CI which runs CentOS.

Linux-specific issue? I/O-related issues when writing cache files? Is it possible to turn off cache generation altogether to rule this out?

I think I found the culprit in our case, it's the --collectCoverage flag. When it is removed for both Jest 24 and 25, our tests run roughly twice as fast under 25. When it is enabled, our tests under 25 are almost 4 times as slow as the same ones under 24.

This is reproducible both on OSX and CentOS, so contrary to my previous comment the issue does not appear to be Linux-specific.
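
For anyone trying to reproduce the comparison, one simple way to flip coverage per run without touching anything else is an environment-driven config; a rough sketch (the COVERAGE variable name is just an example):

// jest.config.js - illustrative only; collectCoverage is a real Jest option
module.exports = {
  // enable coverage only when explicitly requested, so timings with and without it are comparable
  collectCoverage: process.env.COVERAGE === 'true',
};

Running COVERAGE=true npx jest vs plain npx jest then gives the two timings on the same machine.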

Interesting! We've updated Istanbul to v3, maybe something in there has regressed. We've added support for v8 code coverage, so I might also have messed up the refactoring when doing so

Yes! That's consistent with what I'm seeing as well. We are running with coverage in CI where it's 2x slower, and when I run locally without coverage it's quite fast. @SimenB is there any config option to use the older Istanbul? :)

Echoing what @csvan said, it would be nice to turn off cache generation in CI if that is in fact a culprit, since we delete it prior to building anyway.

I'm unable to reproduce this - the repos I test have about the same performance with --coverage between v24 and v25. Would somebody be able to put together a repository with jest 24 and jest 25 where switching between them shows a difference?

Just ran our CI build w/ coverage disabled; I think @csvan is on to something. The tests run in 4:00 w/ coverage turned off vs 11 min w/ coverage turned on. I will try to see if I can create a repro this weekend at some point.

Our envinfo from the build agent:

00:03:31.992   System:
00:03:31.992     OS: Linux 3.10 CentOS Linux 7 (Core)
00:03:31.992     CPU: (8) x64 Intel Core Processor (Skylake, IBRS)
00:03:31.992   Binaries:
00:03:31.992     Node: 10.16.0 - ~/workspace/grocery-electrode/tools/nix_64/nodejs-10.16.0/bin/node
00:03:31.992     npm: 6.9.0 - ~/workspace/grocery-electrode/tools/nix_64/npm-6.9.0/node_modules/.bin/npm
00:03:31.993   npmPackages:
00:03:31.993     jest: 25.1.0 => 25.1.0 

We're seeing a similar issue. Upgrading to Jest 25 made our tests slower when using coverage (166s with Jest 24 vs. 381s with Jest 25), with Jest 25 displaying many of these errors while running the checks:

FATAL ERROR: Ineffective mark-compacts near heap limit Allocation failed - JavaScript heap out of memory
 1: 0x10003d041 node::Abort() [/Users/evhaus/.nvm/versions/node/v10.17.0/bin/node]
 2: 0x10003d24b node::OnFatalError(char const*, char const*) [/Users/evhaus/.nvm/versions/node/v10.17.0/bin/node]
 3: 0x1001b8e25 v8::internal::V8::FatalProcessOutOfMemory(v8::internal::Isolate*, char const*, bool) [/Users/evhaus/.nvm/versions/node/v10.17.0/bin/node]
 4: 0x100586d82 v8::internal::Heap::FatalProcessOutOfMemory(char const*) [/Users/evhaus/.nvm/versions/node/v10.17.0/bin/node]
 5: 0x100589855 v8::internal::Heap::CheckIneffectiveMarkCompact(unsigned long, double) [/Users/evhaus/.nvm/versions/node/v10.17.0/bin/node]
 6: 0x1005856ff v8::internal::Heap::PerformGarbageCollection(v8::internal::GarbageCollector, v8::GCCallbackFlags) [/Users/evhaus/.nvm/versions/node/v10.17.0/bin/node]
 7: 0x1005838d4 v8::internal::Heap::CollectGarbage(v8::internal::AllocationSpace, v8::internal::GarbageCollectionReason, v8::GCCallbackFlags) [/Users/evhaus/.nvm/versions/node/v10.17.0/bin/node]
 8: 0x10059016c v8::internal::Heap::AllocateRawWithLigthRetry(int, v8::internal::AllocationSpace, v8::internal::AllocationAlignment) [/Users/evhaus/.nvm/versions/node/v10.17.0/bin/node]
 9: 0x1005901ef v8::internal::Heap::AllocateRawWithRetryOrFail(int, v8::internal::AllocationSpace, v8::internal::AllocationAlignment) [/Users/evhaus/.nvm/versions/node/v10.17.0/bin/node]
10: 0x10055fb34 v8::internal::Factory::NewFillerObject(int, bool, v8::internal::AllocationSpace) [/Users/evhaus/.nvm/versions/node/v10.17.0/bin/node]
11: 0x1007e7e14 v8::internal::Runtime_AllocateInNewSpace(int, v8::internal::Object**, v8::internal::Isolate*) [/Users/evhaus/.nvm/versions/node/v10.17.0/bin/node]
12: 0xb9c575dbe3d 
13: 0xb9c57ab091a 
14: 0xb9c579e7d65 
15: 0xb9c579ebaf3 

<--- Last few GCs --->

[733:0x102804000]    84639 ms: Mark-sweep 1335.2 (1449.6) -> 1325.4 (1452.1) MB, 1443.2 / 0.0 ms  (average mu = 0.135, current mu = 0.076) allocation failure scavenge might not succeed
[733:0x102804000]    85872 ms: Mark-sweep 1338.3 (1452.1) -> 1327.8 (1455.1) MB, 1159.4 / 0.0 ms  (average mu = 0.101, current mu = 0.059) allocation failure scavenge might not succeed


<--- JS stacktrace --->

Downgrading to Jest 24 makes those errors go away.

I also noticed Jest 25 handles collectCoverageFrom differently, as it seems to collect coverage from files we have explicitly excluded in that configuration. Did support for the glob syntax change there?

Any JS traces at all? Would be interesting to see where it died.

I also noticed Jest 25 handles collectCoverageFrom differently, as it seems to collect coverage from files we have explicitly excluded in that configuration. Did support for the glob syntax change there?

We upgraded to Micromatch 4, it might've changed something. No changes to it on purpose, though. Could you open up a separate issue with a reproduction?

Any JS traces at all?

This was printed:

==== JS stack trace =========================================

    0: ExitFrame [pc: 0x521cca5be3d]
Security context: 0x0ebfa799e6e9 <JSObject>
    1: _clearMemoizedQueries [0xebf2a5aba99] [/Users/evhaus/Git/zenhub/client/node_modules/jest-environment-jsdom/node_modules/jsdom/lib/jsdom/living/nodes/Node-impl.js:~208] [pc=0x521cd0d9a4e](this=0x0ebf5bee2aa9 <EventTargetImpl map = 0xebf7963d039>)
    2: _clearMemoizedQueries [0xebf2a5aba99] [/Users/evhaus/Git/zenhub/client/node_modules/jest-environment-...

EDIT: Actually, I'm seeing heap errors even with coverage disabled.

We upgraded to Micromatch 4, it might've changed something. No changes to it on purpose, though. Could you open up a separate issue with a reproduction?

Will do.

Chiming in again. Coverage is definitely slower, and the coverage timings seem erratic. Here are the timings for OSX (seconds).

v24
46.69
41.77
45.06

v24 coverage
78.60
75.79
80.38

v25
45.93
52.49
53.36

v25 circus
61.27
52.08

v25 coverage
310.98
227.15
153.81

Timings for CI (travis).

v24 coverage -w 4
101.634s

v25 coverage -w 4
178.306s

@milesj what is v25 circus?

It's Jest's new test runner, which is supposed to be faster, but it never is from what I've seen. https://www.npmjs.com/package/jest-circus

@EvHaus Traces from JSDOM are interesting (might also be completely coincidental, of course). Could you try installing jest-environment-jsdom@24 and using that? We upgraded from 11 to 15, so something in there might have changed. Seems like a longshot, but who knows.

@SimenB Rolling back just jest-environment-jsdom to <24.0.0 in my package.json definitely made an impact. The heap out of memory errors are gone and Jest seems to complete its runs without any issue.

Interesting! If you have time, it'd be lovely if you could test

Or just link in jsdom and bisect. I'll do that tomorrow, but I don't really have a good reproduction yet

For the following tests I don't have coverage enabled.

Stack traces

These are some of the stack traces from the jest-environment-jsdom-fourteen run:

==== JS stack trace =========================================

    0: ExitFrame [pc: 0x20deef6dbe3d]
Security context: 0x36ee8219e6e9 <JSObject>
    1: _modified [0x36ee35982ec1] [/Users/evhaus/Git/zenhub/client/node_modules/jest-environment-jsdom-fourteen/node_modules/jsdom/lib/jsdom/living/nodes/Node-impl.js:~189] [pc=0x20deefba6433](this=0x36eef3246e99 <EventTargetImpl map = 0x36ee99264ee9>)
    2: _insert [0x36eeb41f1e41] [/Users/evhaus/Git/zenhub/client/node_modules/jest-environment-jsdom-fourte...
    0: ExitFrame [pc: 0x2aa5df5be3d]
Security context: 0x116a8d49e6e9 <JSObject>
    1: _clearMemoizedQueries [0x116a365133d1] [/Users/evhaus/Git/zenhub/client/node_modules/jest-environment-jsdom-fourteen/node_modules/jsdom/lib/jsdom/living/nodes/Node-impl.js:~208] [pc=0x2aa5dfe7dae](this=0x116a8f16cd11 <EventTargetImpl map = 0x116ae7cc9b61>)
    2: _clearMemoizedQueries [0x116a365133d1] [/Users/evhaus/Git/zenhub/client/node_modules/jest-...

FATAL ERROR: Ineffective mark-compacts near heap limit Allocation failed - JavaScript heap out of memory
 1: 0x10003d041 node::Abort() [/Users/evhaus/.nvm/versions/node/v10.17.0/bin/node]
 2: 0x10003d24b node::OnFatalError(char const*, char const*) [/Users/evhaus/.nvm/versions/node/v10.17.0/bin/node]
 3: 0x1001b8e25 v8::internal::V8::FatalProcessOutOfMemory(v8::internal::Isolate*, char const*, bool) [/Users/evhaus/.nvm/versions/node/v10.17.0/bin/node]
 4: 0x100586d82 v8::internal::Heap::FatalProcessOutOfMemory(char const*) [/Users/evhaus/.nvm/versions/node/v10.17.0/bin/node]
 5: 0x100589855 v8::internal::Heap::CheckIneffectiveMarkCompact(unsigned long, double) [/Users/evhaus/.nvm/versions/node/v10.17.0/bin/node]
 6: 0x1005856ff v8::internal::Heap::PerformGarbageCollection(v8::internal::GarbageCollector, v8::GCCallbackFlags) [/Users/evhaus/.nvm/versions/node/v10.17.0/bin/node]
 7: 0x1005838d4 v8::internal::Heap::CollectGarbage(v8::internal::AllocationSpace, v8::internal::GarbageCollectionReason, v8::GCCallbackFlags) [/Users/evhaus/.nvm/versions/node/v10.17.0/bin/node]
 8: 0x10059016c v8::internal::Heap::AllocateRawWithLigthRetry(int, v8::internal::AllocationSpace, v8::internal::AllocationAlignment) [/Users/evhaus/.nvm/versions/node/v10.17.0/bin/node]
 9: 0x1005901ef v8::internal::Heap::AllocateRawWithRetryOrFail(int, v8::internal::AllocationSpace, v8::internal::AllocationAlignment) [/Users/evhaus/.nvm/versions/node/v10.17.0/bin/node]
10: 0x10055fb34 v8::internal::Factory::NewFillerObject(int, bool, v8::internal::AllocationSpace) [/Users/evhaus/.nvm/versions/node/v10.17.0/bin/node]
11: 0x1007e7e14 v8::internal::Runtime_AllocateInNewSpace(int, v8::internal::Object**, v8::internal::Isolate*) [/Users/evhaus/.nvm/versions/node/v10.17.0/bin/node]
12: 0x20deef6dbe3d 
13: 0x20deefba6433 
    0: ExitFrame [pc: 0xb8909f5be3d]
Security context: 0x09e628d9e6e9 <JSObject>
    1: childrenIterator [0x9e612e1d581] [/Users/evhaus/Git/zenhub/client/node_modules/symbol-tree/lib/SymbolTree.js:~367] [pc=0xb890a41010e](this=0x09e612e3eb01 <SymbolTree map = 0x9e6a7f56c09>,parent=0x09e685ca27d1 <EventTargetImpl map = 0x9e6061f36f1>,options=0x09e67b6026f1 <undefined>)
    2: arguments adaptor frame: 1->2
    3: _detach [0x9e65c4ae341] [/U...

FATAL ERROR: Ineffective mark-compacts near heap limit Allocation failed - JavaScript heap out of memory
 1: 0x10003d041 node::Abort() [/Users/evhaus/.nvm/versions/node/v10.17.0/bin/node]
 2: 0x10003d24b node::OnFatalError(char const*, char const*) [/Users/evhaus/.nvm/versions/node/v10.17.0/bin/node]
 3: 0x1001b8e25 v8::internal::V8::FatalProcessOutOfMemory(v8::internal::Isolate*, char const*, bool) [/Users/evhaus/.nvm/versions/node/v10.17.0/bin/node]
 4: 0x100586d82 v8::internal::Heap::FatalProcessOutOfMemory(char const*) [/Users/evhaus/.nvm/versions/node/v10.17.0/bin/node]
 5: 0x100589855 v8::internal::Heap::CheckIneffectiveMarkCompact(unsigned long, double) [/Users/evhaus/.nvm/versions/node/v10.17.0/bin/node]
 6: 0x1005856ff v8::internal::Heap::PerformGarbageCollection(v8::internal::GarbageCollector, v8::GCCallbackFlags) [/Users/evhaus/.nvm/versions/node/v10.17.0/bin/node]
 7: 0x1005838d4 v8::internal::Heap::CollectGarbage(v8::internal::AllocationSpace, v8::internal::GarbageCollectionReason, v8::GCCallbackFlags) [/Users/evhaus/.nvm/versions/node/v10.17.0/bin/node]
 8: 0x10059016c v8::internal::Heap::AllocateRawWithLigthRetry(int, v8::internal::AllocationSpace, v8::internal::AllocationAlignment) [/Users/evhaus/.nvm/versions/node/v10.17.0/bin/node]
 9: 0x1005901ef v8::internal::Heap::AllocateRawWithRetryOrFail(int, v8::internal::AllocationSpace, v8::internal::AllocationAlignment) [/Users/evhaus/.nvm/versions/node/v10.17.0/bin/node]
10: 0x10055fb34 v8::internal::Factory::NewFillerObject(int, bool, v8::internal::AllocationSpace) [/Users/evhaus/.nvm/versions/node/v10.17.0/bin/node]
11: 0x1007e7e14 v8::internal::Runtime_AllocateInNewSpace(int, v8::internal::Object**, v8::internal::Isolate*) [/Users/evhaus/.nvm/versions/node/v10.17.0/bin/node]
12: 0x2aa5df5be3d 
    0: ExitFrame [pc: 0x180d6e95be3d]
Security context: 0x02052079e6e9 <JSObject>
    1: _modified [0x205b86c1861] [/Users/evhaus/Git/zenhub/client/node_modules/jest-environment-jsdom-fourteen/node_modules/jsdom/lib/jsdom/living/nodes/Node-impl.js:~189] [pc=0x180d6ede24fa](this=0x0205c8284411 <EventTargetImpl map = 0x205c1ea9769>)
    2: _attrModified [0x205b86ba771] [/Users/evhaus/Git/zenhub/client/node_modules/jest-environment-jsdom-fou...

FATAL ERROR: Ineffective mark-compacts near heap limit Allocation failed - JavaScript heap out of memory
 1: 0x10003d041 node::Abort() [/Users/evhaus/.nvm/versions/node/v10.17.0/bin/node]
 2: 0x10003d24b node::OnFatalError(char const*, char const*) [/Users/evhaus/.nvm/versions/node/v10.17.0/bin/node]
 3: 0x1001b8e25 v8::internal::V8::FatalProcessOutOfMemory(v8::internal::Isolate*, char const*, bool) [/Users/evhaus/.nvm/versions/node/v10.17.0/bin/node]
 4: 0x100586d82 v8::internal::Heap::FatalProcessOutOfMemory(char const*) [/Users/evhaus/.nvm/versions/node/v10.17.0/bin/node]
 5: 0x100589855 v8::internal::Heap::CheckIneffectiveMarkCompact(unsigned long, double) [/Users/evhaus/.nvm/versions/node/v10.17.0/bin/node]
 6: 0x1005856ff v8::internal::Heap::PerformGarbageCollection(v8::internal::GarbageCollector, v8::GCCallbackFlags) [/Users/evhaus/.nvm/versions/node/v10.17.0/bin/node]
 7: 0x1005838d4 v8::internal::Heap::CollectGarbage(v8::internal::AllocationSpace, v8::internal::GarbageCollectionReason, v8::GCCallbackFlags) [/Users/evhaus/.nvm/versions/node/v10.17.0/bin/node]
 8: 0x10059016c v8::internal::Heap::AllocateRawWithLigthRetry(int, v8::internal::AllocationSpace, v8::internal::AllocationAlignment) [/Users/evhaus/.nvm/versions/node/v10.17.0/bin/node]
 9: 0x1005901ef v8::internal::Heap::AllocateRawWithRetryOrFail(int, v8::internal::AllocationSpace, v8::internal::AllocationAlignment) [/Users/evhaus/.nvm/versions/node/v10.17.0/bin/node]
10: 0x10055fb34 v8::internal::Factory::NewFillerObject(int, bool, v8::internal::AllocationSpace) [/Users/evhaus/.nvm/versions/node/v10.17.0/bin/node]
11: 0x1007e7e14 v8::internal::Runtime_AllocateInNewSpace(int, v8::internal::Object**, v8::internal::Isolate*) [/Users/evhaus/.nvm/versions/node/v10.17.0/bin/node]
12: 0xb8909f5be3d 

Hope this helps

I don't know if this will help, but my organization had a massive slow down from Jest 24 to Jest 25 (18 minutes to 28 minutes) that went away after turning off coverage collection (down to 10 minutes).

@rosasynstylae out of curiosity, does your code have a lot of snapshot tests?

@benmonro It does, yes.

So does ours! @SimenB do you think lots of snapshots in a repo could cause this?

We are having the performance issues with no snapshots. We are collecting coverage though. Significant slowdown from 24 -> 25. 2 different projects. It varies but the slowdown is significant and consistent.

I can run the tests over and over with no changes and the tests are 10 times slower on average than they were with 24.

I switch back to 24 and the runs are back to the speed we were used to.

I can provide more info if needed. I wanted to make sure to mention our 2 projects with no snapshot tests.

From all the comments here, it definitely sounds like coverage is the problem, and probably a regression in istanbul?

From all the comments here, it definitely sounds like coverage is the problem, and probably a regression in istanbul?

I wouldn't be so fast to point the finger at istanbul. In my case, even with coverage disabled, I'm seeing significant performance and stability issues in Jest 25. See https://github.com/facebook/jest/issues/9457#issuecomment-579423330

It's possible there are two separate issues:

1) Issues with jest-environment-jsdom-fourteen, and
2) Issues with istanbul coverage

I downgraded micromatch to ^3.0.0 and saw a massive speedup using --coverage, more or less back to the performance we saw under Jest 24. Can anybody reproduce?

UPDATE: However, now running without --coverage is also back to Jest 24 levels of performance - twice as slow :-/

@EvHaus thanks for checking, very interesting! I'm still unable to reproduce this, unfortunately. So a reproduction would still be awesome, that way I can try to debug this.

I downgraded micromatch to ^3.0.0 and saw a massive speedup using --coverage, more or less back to the performance we saw under Jest 24. Can anybody reproduce?

UPDATE: However, now running without --coverage is also back to Jest 24 levels of performance - twice as slow :-/

What in the world... As far as I can see nothing in istanbul depends on micromatch, so why it should impact coverage is beyond me 🙁

The whole micromatch performance thing is getting a bit absurd: with coverage, v3 is faster than v4; without, v4 is faster than v3? 😅

@SimenB yep, ran it through our CI as well just to confirm. Changing nothing apart from adding

  "resolutions": {
    "micromatch": "^3.0.0"
  }

to our package.json shaved a solid 6 minutes off the run when using --coverage, bringing it roughly back to what we saw under Jest 24.

As far as I can see nothing in istanbul depends on micromatch

Found this comment in another thread which may be related to this:

https://github.com/facebook/jest/issues/9464#issuecomment-579733243

Just confirmed nothing in istanbul pulls in micromatch (they use minimatch in the babel plugin).

It might be something about exclusions not working properly, definitely. We use it to check what we should instrument: https://github.com/facebook/jest/blob/28f6da44cc58d41438bddfa9fcd741fd01b02ded/packages/jest-transform/src/shouldInstrument.ts. Could you perhaps stick some logging in there and see if we return true anywhere with micromatch@4 that we don't for micromatch@3?

Definitely feels like 2 separate issues though, one about jsdom and one about coverage
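
One way to check that outside of Jest is to run the same globs through both micromatch majors and diff the results; a rough sketch (the alias names micromatch3/micromatch4 and the sample globs/paths are made up for illustration and don't reproduce Jest's exact call):

// compare-micromatch.js - sketch; install both majors under npm aliases first:
//   npm i micromatch3@npm:micromatch@3 micromatch4@npm:micromatch@4
const mm3 = require('micromatch3');
const mm4 = require('micromatch4');

// negated collectCoverageFrom-style patterns, similar to the configs posted below
const patterns = ['src/**/*.ts', '!src/themes/**', '!src/**/index.ts'];
const files = ['src/foo.ts', 'src/themes/dark.ts', 'src/utils/index.ts'];

for (const file of files) {
  const v3 = mm3([file], patterns).length > 0;
  const v4 = mm4([file], patterns).length > 0;
  if (v3 !== v4) {
    console.log(`mismatch for ${file}: micromatch@3=${v3} micromatch@4=${v4}`);
  }
}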

I can confirm it is back to normal speed for us in CI when we resolve micromatch@3 as well.

Jest + typescript + react codebase here. Seeing this issue and using npm-force-resolutions to force micromatch ^3.0.0 seemed to fix the crazy slowdown.

Do you have custom test file patterns or coverage patterns in your config?

@EvHaus I'm super interested in if you see a difference by downgrading Micromatch as well, seeing as you saw a big difference with jsdom versions

If this is what you mean, then yes.

  collectCoverage: true,
  collectCoverageFrom: [
    'src/**/*.ts',
    'src/**/*.tsx',
    'src/**/*.js',
    'src/**/*.jsx',
    '!src/themes/**',
    '!src/**/Styled*.tsx',
    '!src/**/Styled*.ts',
    '!src/**/*Actions.ts',
    '!src/mainStore.ts',
    '!src/App.tsx',
    '!src/AppView.tsx',
    '!src/AppError.tsx',
    '!src/StyledAppComponents.tsx',
    '!src/index.tsx',
    'src/utility/redux/*.ts',
    '!src/testingUtils/*',
    '!src/**/index.ts',
    '!docs/**/**',
  ],

We also have that, and ours looks quite similar in length/values.

@Ancient123 yeah, exactly. Seems related to the Micromatch regression for negated patterns. Thanks!

Seems related to the Micromatch regression for negated patterns. Thanks!

Noted, I'll look into it ASAP.

The whole micromatch performance thing is getting a bit absurd

Sorry about the performance degradation. Generating regular expressions for globbing is a lot harder to do than it looks. Especially when it needs to handle negation and be cross-platform. I'm looking into this now.

@jonschlinkert it was not meant to be accusatory at all; the work you're putting into Micromatch and related libraries is extremely appreciated! :heart:

yes! what @SimenB said. nothing but ❤️

@EvHaus I'm super interested in if you see a difference by downgrading Micromatch as well, seeing as you saw a big difference with jsdom versions

In my package.json I set:

"resolutions": {
    "micromatch": "^3.0.0"
}

Re-ran npm install, and then manually deleted node_modules/jest/micromatch (which was at version 4). Then re-ran my tests.

Unfortunately, I'm still seeing many "JavaScript heap out of memory" errors.

Am I doing the downgrade correctly?

resolutions needs yarn, npm hasn't implemented it yet (it's on the roadmap for v7: https://blog.npmjs.org/post/186983646370/npm-cli-roadmap-summer-2019)

@EvHaus until npm v7 comes out you can use resolutions in npm w/ this package: https://www.npmjs.com/package/npm-force-resolutions
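
For reference, the typical wiring for that package is a preinstall hook plus the resolutions field, roughly like this (the version is just the example from above):

  "scripts": {
    "preinstall": "npx npm-force-resolutions"
  },
  "resolutions": {
    "micromatch": "^3.0.0"
  }

It rewrites package-lock.json during install, so the pin only takes effect after the next npm install.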

Sorry for the delay. Used npm-force-resolutions (which is doing the right thing) to lock micromatch to v3. Unfortunately, it didn't make my heap errors go away.

So for me, it's still jsdom to blame, as mentioned here: https://github.com/facebook/jest/issues/9457#issuecomment-579423330

Resolving jsdom to thirteen is what fixes it.

Does anyone who has experienced a performance degradation in 25 have issues that are not fixed by either using jsdom@13 or micromatch@3? The memory leak in JSDOM is being fixed (https://github.com/jsdom/jsdom/pull/2840 and various issues/PRs linked from it) and the regression in micromatch has been reported and is being worked on: https://github.com/micromatch/micromatch/issues/179.

Fixed version of JSDOM has been released, you can use it by installing jest-environment-jsdom-sixteen. @EvHaus could you verify it fixes your issue?

@SimenB my issue is probably not related, but I tried jest-environment-jsdom-sixteen vs using the default, and saw a 20s increase in runtime for the same test suite over repeated runs.

Over using v15 (which is what ships with Jest by default) and no other changes? Could you test with 16.1.0 as well (although that leaks memory, so it might be harder to test)? JSDOM just landed custom element support; there _might_ be some regression in there. Not sure.

Fixed version of JSDOM has been released, you can use it by installing jest-environment-jsdom-sixteen. @EvHaus could you verify it fixes your issue?

Unfortunately still getting heap errors with jest-environment-jsdom-sixteen. The last stable working version of JSDom for me is jest-environment-jsdom-thirteen.

Fixed version of JSDOM has been released, you can use it by installing jest-environment-jsdom-sixteen. @EvHaus could you verify it fixes your issue?

The environment works with our codebase, but we're still seeing an almost 100% regression in runtime. Anecdotally, jest-environment-jsdom-sixteen only seems to improve run time by about 10% when using 25.1 vs 24.9.

Hi @SimenB,

I've made a reproducible case here: https://github.com/olebedev/jest-perf-issue. Please take a look; results to compare are below. /cc @joscha

Results

Benchmarks were run on MBP 2019, 32Gb RAM, i9-8950HK CPU @ 2.90GHz.

| jest version | branch | time |
|:--------------|:------:|----------:|
| 24.9.0 | master | 348.077s |
| 25.1.0 | jest25 | 591.33s |

In our case, where jest v25.1 was ~50% slower compared to v24.9, the latest jest v25.2.0 is now a further 20% slower compared to v25.1 🙈

@olebedev Woah, that's painful 😬

I'm getting similar numbers to you. If it's based on a real project I recommend using v8 coverage. It takes the runtime from 600s to 35s on my machine in your reproduction. The reason for the huge diff is probably that we don't try to instrument non-covered files with v8 coverage (we just say every byte is uncovered, which works with v8).

https://jestjs.io/docs/en/configuration#coverageprovider-string

Not sure why it's so slow... I'll try to find some time to look into it (won't be anytime soon though). It should at least not be any slower on v25 than v24
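
For anyone wanting to try it, the switch is a single config option (documented at the link above); a minimal sketch:

// jest.config.js
module.exports = {
  coverageProvider: 'v8', // default is 'babel', which uses Istanbul instrumentation
};

As noted further down, v8 coverage is only supported on Node 10+.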

Do I understand the docs correctly that we can use v8 coverage together with the ...-sixteen environment?

Cheers,
J

Yes, either jest-environment-jsdom-sixteen, or the bundled jest-environment-node. Also note that it's only supported on node 10+, not node 8.

(I've only ever tested this with the Node env, but if it doesn't work with the jsdom env, that's a bug - please open up a separate issue 🙂)

jest-environment-jsdom-sixteen + v8 as coverage provider was worse by about 20% for us on jest 25.3.0, node 12.16. We're also trying to debug why our test performance got worse by about 80% going from jest 24 to 25.

@joual did you try the micromatch workaround (downgrade to 3.x)?

Having a similar experience here: test times (_without_ coverage) double on v25, from 35-40 seconds to 80-90 and sometimes more. Tried locking micromatch to v3, no measurable difference. FWIW, we have around 3k tests, of which 58 are snapshot tests.
Attempted to downgrade jsdom, but this seems to introduce lots of test breakage due to recent features we've been using. Will see if I can get around this somehow and report back.

@SimenB The coverage collection job on the prettier project is also very slow; can you check it? https://github.com/prettier/prettier/runs/579497097 The Node.js 12 on ubuntu-latest job collects coverage; the other jobs don't.

Having a similar experience here: test times (_without_ coverage) double on v25, from 35-40 seconds to 80-90 and sometimes more. Tried locking micromatch to v3, no measurable difference. FWIW, we have around 3k tests, of which 58 are snapshot tests.
Attempted to downgrade jsdom, but this seems to introduce lots of test breakage due to recent features we've been using. Will see if I can get around this somehow and report back.

Was experimenting with different jsdom versions today on jest@24 (which is v11 by default). Up to v14 everything seems to work fine, but as of v15 test runs take consistently 50-60% longer. Same story in v16. Will see if I can get similar perf on jest@25 by downgrading jsdom to v14.

[email protected] has some memory fixes, @EvHaus could you try it out? Jest's own --detect-leaks finds leaks in previous versions, but not in 16.2.2.

I've also landed some other improvements if you have lots of symlinks (which is super slow on windows), so if people could try out [email protected] that would be lovely 🙂

@SimenB What's the easiest way to test that? If I add [email protected] directly as a devDependency, Jest ignores that and uses whatever is bundled with jest-environment-jsdom, which is 15.2.1 today.

How can I trick npm to ensure it's using the jsdom version I want?

Install jest-environment-jsdom-sixteen and use it https://jestjs.io/docs/en/configuration#testenvironment-string
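
i.e. roughly this in the Jest config (or pass --env=jest-environment-jsdom-sixteen on the CLI); a minimal sketch:

// jest.config.js
module.exports = {
  testEnvironment: 'jest-environment-jsdom-sixteen',
};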

Alpha published, so you can try jest@next - it comes with jsdom 16 out of the box

@SimenB Sorry, not much luck with jest@26.0.0-alpha.0 and its bundled jsdom 16. Still getting many of these errors:

FATAL ERROR: Ineffective mark-compacts near heap limit Allocation failed - JavaScript heap out of memory

And eventually the runner dies.

My details:

> npx envinfo --preset jest

  System:
    OS: macOS 10.15.4
    CPU: (8) x64 Intel(R) Core(TM) i5-8279U CPU @ 2.40GHz
  Binaries:
    Node: 10.17.0 - ~/.nvm/versions/node/v10.17.0/bin/node
    Yarn: 1.22.4 - /usr/local/bin/yarn
    npm: 6.14.4 - ~/.nvm/versions/node/v10.17.0/bin/npm
  npmPackages:
    jest: ^26.0.0-alpha.0 => 26.0.0-alpha.0 

Here are some of the full stacks returned from those:

FATAL ERROR: Ineffective mark-compacts near heap limit Allocation failed - JavaScript heap out of memory
 1: 0x10003d041 node::Abort() [/Users/evhaus/.nvm/versions/node/v10.17.0/bin/node]
 2: 0x10003d24b node::OnFatalError(char const*, char const*) [/Users/evhaus/.nvm/versions/node/v10.17.0/bin/node]
 3: 0x1001b8e25 v8::internal::V8::FatalProcessOutOfMemory(v8::internal::Isolate*, char const*, bool) [/Users/evhaus/.nvm/versions/node/v10.17.0/bin/node]
 4: 0x100586d82 v8::internal::Heap::FatalProcessOutOfMemory(char const*) [/Users/evhaus/.nvm/versions/node/v10.17.0/bin/node]
 5: 0x100589855 v8::internal::Heap::CheckIneffectiveMarkCompact(unsigned long, double) [/Users/evhaus/.nvm/versions/node/v10.17.0/bin/node]
 6: 0x1005856ff v8::internal::Heap::PerformGarbageCollection(v8::internal::GarbageCollector, v8::GCCallbackFlags) [/Users/evhaus/.nvm/versions/node/v10.17.0/bin/node]
 7: 0x1005838d4 v8::internal::Heap::CollectGarbage(v8::internal::AllocationSpace, v8::internal::GarbageCollectionReason, v8::GCCallbackFlags) [/Users/evhaus/.nvm/versions/node/v10.17.0/bin/node]
 8: 0x10059016c v8::internal::Heap::AllocateRawWithLigthRetry(int, v8::internal::AllocationSpace, v8::internal::AllocationAlignment) [/Users/evhaus/.nvm/versions/node/v10.17.0/bin/node]
 9: 0x1005901ef v8::internal::Heap::AllocateRawWithRetryOrFail(int, v8::internal::AllocationSpace, v8::internal::AllocationAlignment) [/Users/evhaus/.nvm/versions/node/v10.17.0/bin/node]
10: 0x10055fb34 v8::internal::Factory::NewFillerObject(int, bool, v8::internal::AllocationSpace) [/Users/evhaus/.nvm/versions/node/v10.17.0/bin/node]
11: 0x1007e7e14 v8::internal::Runtime_AllocateInNewSpace(int, v8::internal::Object**, v8::internal::Isolate*) [/Users/evhaus/.nvm/versions/node/v10.17.0/bin/node]
12: 0x3f82f50dbe3d 
13: 0x3f82f5c630de 
14: 0x3f82f5c94431 
15: 0x3f82f5c7d3be 
16: 0x3f82f5c4e98b 
17: 0x3f82f5c3c38e 

<--- Last few GCs --->

[50818:0x102804000]   189738 ms: Mark-sweep 1288.8 (1450.6) -> 1280.2 (1454.1) MB, 890.1 / 0.1 ms  (average mu = 0.181, current mu = 0.061) allocation failure scavenge might not succeed
[50818:0x102804000]   190673 ms: Mark-sweep 1292.8 (1454.1) -> 1282.9 (1457.6) MB, 856.2 / 0.2 ms  (average mu = 0.136, current mu = 0.084) allocation failure scavenge might not succeed


<--- JS stacktrace --->

==== JS stack trace =========================================

    0: ExitFrame [pc: 0x3f82f50dbe3d]
Security context: 0x274d67c9e6e9 <JSObject>
    1: createImpl [0x274d6d9ba1b9] [/Users/evhaus/Git/zenhub/client/node_modules/jsdom/lib/jsdom/living/generated/HTMLInputElement.js:~47] [pc=0x3f82f5c630de](this=0x274d51911261 <Object map = 0x274dd51fe489>,globalObject=0x274d89d38609 <Window map = 0x274d2fe6c211>,constructorArgs=0x274d832134b1 <JSArray[0]>,privateData=0x274d832134d1 <Object map = 0x274d69...

FATAL ERROR: Ineffective mark-compacts near heap limit Allocation failed - JavaScript heap out of memory
 1: 0x10003d041 node::Abort() [/Users/evhaus/.nvm/versions/node/v10.17.0/bin/node]
 2: 0x10003d24b node::OnFatalError(char const*, char const*) [/Users/evhaus/.nvm/versions/node/v10.17.0/bin/node]
 3: 0x1001b8e25 v8::internal::V8::FatalProcessOutOfMemory(v8::internal::Isolate*, char const*, bool) [/Users/evhaus/.nvm/versions/node/v10.17.0/bin/node]
 4: 0x100586d82 v8::internal::Heap::FatalProcessOutOfMemory(char const*) [/Users/evhaus/.nvm/versions/node/v10.17.0/bin/node]
 5: 0x100589855 v8::internal::Heap::CheckIneffectiveMarkCompact(unsigned long, double) [/Users/evhaus/.nvm/versions/node/v10.17.0/bin/node]
 6: 0x1005856ff v8::internal::Heap::PerformGarbageCollection(v8::internal::GarbageCollector, v8::GCCallbackFlags) [/Users/evhaus/.nvm/versions/node/v10.17.0/bin/node]
 7: 0x1005838d4 v8::internal::Heap::CollectGarbage(v8::internal::AllocationSpace, v8::internal::GarbageCollectionReason, v8::GCCallbackFlags) [/Users/evhaus/.nvm/versions/node/v10.17.0/bin/node]
 8: 0x10059016c v8::internal::Heap::AllocateRawWithLigthRetry(int, v8::internal::AllocationSpace, v8::internal::AllocationAlignment) [/Users/evhaus/.nvm/versions/node/v10.17.0/bin/node]
 9: 0x1005901ef v8::internal::Heap::AllocateRawWithRetryOrFail(int, v8::internal::AllocationSpace, v8::internal::AllocationAlignment) [/Users/evhaus/.nvm/versions/node/v10.17.0/bin/node]
10: 0x10055fb34 v8::internal::Factory::NewFillerObject(int, bool, v8::internal::AllocationSpace) [/Users/evhaus/.nvm/versions/node/v10.17.0/bin/node]
11: 0x1007e7e14 v8::internal::Runtime_AllocateInNewSpace(int, v8::internal::Object**, v8::internal::Isolate*) [/Users/evhaus/.nvm/versions/node/v10.17.0/bin/node]
12: 0x2cea552dbe3d 
13: 0x2cea55937c04 
14: 0x2cea5592618b 

<--- Last few GCs --->

[51263:0x102804000]    34292 ms: Mark-sweep 1332.4 (1452.5) -> 1320.5 (1453.5) MB, 902.6 / 0.0 ms  (average mu = 0.149, current mu = 0.104) allocation failure scavenge might not succeed
[51263:0x102804000]    35480 ms: Mark-sweep 1332.6 (1453.5) -> 1323.6 (1457.5) MB, 1049.3 / 0.0 ms  (average mu = 0.131, current mu = 0.116) allocation failure scavenge might not succeed


<--- JS stacktrace --->

==== JS stack trace =========================================

    0: ExitFrame [pc: 0x2cea552dbe3d]
Security context: 0x1a4cb371e6e9 <JSObject>
    1: next [0x1a4ca627dcd1] [/Users/evhaus/Git/zenhub/client/node_modules/symbol-tree/lib/TreeIterator.js:~16] [pc=0x2cea55937c04](this=0x1a4c807c75b1 <TreeIterator map = 0x1a4c38b8a9c9>)
    2: shadowIncludingInclusiveDescendantsIterator(aka shadowIncludingInclusiveDescendantsIterator) [0x1a4ca627a641] [/Users/evhaus/Git/zenhub/client/node_modules/jsdom/li...

FATAL ERROR: Ineffective mark-compacts near heap limit Allocation failed - JavaScript heap out of memory
 1: 0x10003d041 node::Abort() [/Users/evhaus/.nvm/versions/node/v10.17.0/bin/node]
 2: 0x10003d24b node::OnFatalError(char const*, char const*) [/Users/evhaus/.nvm/versions/node/v10.17.0/bin/node]
 3: 0x1001b8e25 v8::internal::V8::FatalProcessOutOfMemory(v8::internal::Isolate*, char const*, bool) [/Users/evhaus/.nvm/versions/node/v10.17.0/bin/node]
 4: 0x100586d82 v8::internal::Heap::FatalProcessOutOfMemory(char const*) [/Users/evhaus/.nvm/versions/node/v10.17.0/bin/node]
 5: 0x100589855 v8::internal::Heap::CheckIneffectiveMarkCompact(unsigned long, double) [/Users/evhaus/.nvm/versions/node/v10.17.0/bin/node]
 6: 0x1005856ff v8::internal::Heap::PerformGarbageCollection(v8::internal::GarbageCollector, v8::GCCallbackFlags) [/Users/evhaus/.nvm/versions/node/v10.17.0/bin/node]
 7: 0x1005838d4 v8::internal::Heap::CollectGarbage(v8::internal::AllocationSpace, v8::internal::GarbageCollectionReason, v8::GCCallbackFlags) [/Users/evhaus/.nvm/versions/node/v10.17.0/bin/node]
 8: 0x10059016c v8::internal::Heap::AllocateRawWithLigthRetry(int, v8::internal::AllocationSpace, v8::internal::AllocationAlignment) [/Users/evhaus/.nvm/versions/node/v10.17.0/bin/node]
 9: 0x1005901ef v8::internal::Heap::AllocateRawWithRetryOrFail(int, v8::internal::AllocationSpace, v8::internal::AllocationAlignment) [/Users/evhaus/.nvm/versions/node/v10.17.0/bin/node]
10: 0x10055fb34 v8::internal::Factory::NewFillerObject(int, bool, v8::internal::AllocationSpace) [/Users/evhaus/.nvm/versions/node/v10.17.0/bin/node]
11: 0x1007e7e14 v8::internal::Runtime_AllocateInNewSpace(int, v8::internal::Object**, v8::internal::Isolate*) [/Users/evhaus/.nvm/versions/node/v10.17.0/bin/node]
12: 0x3e0d8aedbe3d 
13: 0x3e0d8b35eedc 

<--- Last few GCs --->

[51519:0x102804000]    28074 ms: Mark-sweep 1324.5 (1445.0) -> 1315.7 (1449.0) MB, 760.4 / 0.0 ms  (average mu = 0.182, current mu = 0.080) allocation failure scavenge might not succeed
[51519:0x102804000]    28906 ms: Mark-sweep 1328.5 (1449.0) -> 1317.7 (1452.0) MB, 770.4 / 0.0 ms  (average mu = 0.129, current mu = 0.074) allocation failure scavenge might not succeed


<--- JS stacktrace --->

==== JS stack trace =========================================

    0: ExitFrame [pc: 0x3e0d8aedbe3d]
Security context: 0x3611d141e6e9 <JSObject>
    1: queueMutationRecord(aka queueMutationRecord) [0x361185f32321] [/Users/evhaus/Git/zenhub/client/node_modules/jsdom/lib/jsdom/living/helpers/mutation-observers.js:~33] [pc=0x3e0d8b35eedc](this=0x361116e826f1 <undefined>,type=0x3611aa0a3681 <String[9]: childList>,target=0x36110b275a91 <EventTargetImpl map = 0x3611a254a2f1>,name=0x361116e822b1 <null>,name...

FATAL ERROR: Ineffective mark-compacts near heap limit Allocation failed - JavaScript heap out of memory
 1: 0x10003d041 node::Abort() [/Users/evhaus/.nvm/versions/node/v10.17.0/bin/node]
 2: 0x10003d24b node::OnFatalError(char const*, char const*) [/Users/evhaus/.nvm/versions/node/v10.17.0/bin/node]
 3: 0x1001b8e25 v8::internal::V8::FatalProcessOutOfMemory(v8::internal::Isolate*, char const*, bool) [/Users/evhaus/.nvm/versions/node/v10.17.0/bin/node]
 4: 0x100586d82 v8::internal::Heap::FatalProcessOutOfMemory(char const*) [/Users/evhaus/.nvm/versions/node/v10.17.0/bin/node]
 5: 0x100589855 v8::internal::Heap::CheckIneffectiveMarkCompact(unsigned long, double) [/Users/evhaus/.nvm/versions/node/v10.17.0/bin/node]
 6: 0x1005856ff v8::internal::Heap::PerformGarbageCollection(v8::internal::GarbageCollector, v8::GCCallbackFlags) [/Users/evhaus/.nvm/versions/node/v10.17.0/bin/node]
 7: 0x1005838d4 v8::internal::Heap::CollectGarbage(v8::internal::AllocationSpace, v8::internal::GarbageCollectionReason, v8::GCCallbackFlags) [/Users/evhaus/.nvm/versions/node/v10.17.0/bin/node]
 8: 0x10059016c v8::internal::Heap::AllocateRawWithLigthRetry(int, v8::internal::AllocationSpace, v8::internal::AllocationAlignment) [/Users/evhaus/.nvm/versions/node/v10.17.0/bin/node]
 9: 0x1005901ef v8::internal::Heap::AllocateRawWithRetryOrFail(int, v8::internal::AllocationSpace, v8::internal::AllocationAlignment) [/Users/evhaus/.nvm/versions/node/v10.17.0/bin/node]
10: 0x10055fb34 v8::internal::Factory::NewFillerObject(int, bool, v8::internal::AllocationSpace) [/Users/evhaus/.nvm/versions/node/v10.17.0/bin/node]
11: 0x1007e7e14 v8::internal::Runtime_AllocateInNewSpace(int, v8::internal::Object**, v8::internal::Isolate*) [/Users/evhaus/.nvm/versions/node/v10.17.0/bin/node]
12: 0x8d8aedbe3d 
<--- Last few GCs --->

[51526:0x102804000]    33125 ms: Mark-sweep 1318.6 (1425.0) -> 1317.7 (1424.0) MB, 874.8 / 0.0 ms  (average mu = 0.126, current mu = 0.038) allocation failure scavenge might not succeed
[51526:0x102804000]    33136 ms: Scavenge 1318.5 (1424.0) -> 1318.0 (1424.5) MB, 3.8 / 0.0 ms  (average mu = 0.126, current mu = 0.038) allocation failure 
[51526:0x102804000]    33148 ms: Scavenge 1318.7 (1424.5) -> 1318.2 (1425.0) MB, 4.2 / 0.0 ms  (average mu = 0.126, current mu = 0.038) allocation failure 


<--- JS stacktrace --->

==== JS stack trace =========================================

    0: ExitFrame [pc: 0x8d8aedbe3d]
    1: StubFrame [pc: 0x8d8ae8d40b]
    2: ConstructFrame [pc: 0x8d8ae8cfa3]
Security context: 0x3324ecd9e6e9 <JSObject>
    3: new NodeImpl(aka NodeImpl) [0x3324c2083e11] [/Users/evhaus/Git/zenhub/client/node_modules/jsdom/lib/jsdom/living/nodes/Node-impl.js:~125] [pc=0x8d8b357fd4](this=0x332437582801 <the_hole>,globalObject=0x3324b10f98e9 <Window map = 0x3324649cf0a1>,args=0x3324b1841471 <JSArray[0]>,...

Too bad 😞 Is that with or without coverage?

That was without coverage. Should have clarified.

Do you think the fact that I'm running a fairly old version of Node (v10) could be a factor in this?

You can try using newer versions; if nothing else it'd be an interesting data point. Another thing to try is to make a heap dump right before it dies and see what's on the heap.

It's interesting that nobody mentioned that micromatch@4 doesn't cache regexps anymore (see https://github.com/micromatch/micromatch/pull/151), and https://github.com/facebook/jest/pull/10131 introduces the missing caching on the Jest side.
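
The kind of caching being added on the Jest side is essentially memoizing compiled matchers so the glob-to-regexp work only happens once per pattern; a simplified illustration (not the actual implementation in that PR):

// sketch: cache micromatch matcher functions per glob so regexps are compiled only once
const micromatch = require('micromatch');

const matcherCache = new Map();

function isMatchCached(path, glob) {
  let matcher = matcherCache.get(glob);
  if (!matcher) {
    matcher = micromatch.matcher(glob); // compiles the glob to a regexp once
    matcherCache.set(glob, matcher);
  }
  return matcher(path);
}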

For me, downgrading to micromatch@3 and upgrading to jest-environment-jsdom-sixteen saved 50% of the time.
But with Jest 26 and the built-in jsdom I'm still getting leak errors when running Jest with --detectLeaks in my case. I tried a fresh repo and everything works well there.

Released in 26.1.0, very interested in hearing if it helps folks

@SimenB thanks a lot for the release! Unfortunately, on our side we're still seeing a huge difference:

current:

os: osx
node: 12.6.1
jest: 24.9
-----------------
174 test suites
823 tests
322 snapshots
-----------------
done in 23.569s

on every version higher than 24.9

os: osx
node: 12.6.1
jest: 26.1.0
-----------------
174 test suites
823 tests
322 snapshots
-----------------
done in 133.763s

Both with fresh caches and after a complete reinstall of all node modules; untouched configs.
Running tests in watch mode, it takes more than 3 minutes on my machine to determine which tests to run. I can't really isolate the issue for a reproduction, but if you give me some advice on what to test, I'd be very interested. Thanks for all the work you put into this!

@SimenB

For the prettier project, it's still slower than v24 when collecting coverage.

https://github.com/prettier/prettier/pull/8636

I can confirm that version 25 and 26 performance is lower than 24 on Bitbucket Pipelines. I've also noticed that the slowness increases with coverage enabled. Version 25 is even worse than 26, and the pipeline crashes due to memory consumption.

@SimenB I did a git bisect to find out where the performance regression between 24.9 and 25.1 was introduced. I used the prettier tests because they run without modification all the way from 24.9 to 26.1.

I compared the accumulated runtime of three runs of the js subset (to save some time) with the cache disabled. More specifically, the command I used was (yarn run jest --no-cache tests/js/) with node 10.19, because node 10 was the recommended version for 24.9.

Results:

24.9.0-dev 3cdbd556948b4974b2cc23178977eb159d343df8 151.84s <- Good
25.1.0-dev 5dcc48075f22d581864f381f20bc8b257d2a73cd 223.29s <- Bad
24.9.0-dev bf6109591365a2b71c7b642fa33ed06d3db6cb26 122.58s
24.9.0-dev 77c3ceedfd61ddc841e11fec7b76e540924d3e60 120.42s
24.9.0-dev 30e08e9ae7d7d78f40df757c2ec4b49357285eda 221.28s
24.9.0-dev ad5377333daf6716af3465bba39f86b7db485e2b 222.33s
24.9.0-dev 8ddadfca76eb3fb979df81033d3d0ff564c415d6 120.32s
24.9.0-dev 966f22f2a08d9ac9300b6964ab91c4e75dba4761 120.46s
24.9.0-dev b9084101189639524863e15ef7557ea6bc6704b9 119.87s
24.9.0-dev 1d8245d77d47b4198d51e55da87893d7dfe1a759 129.93s

ad5377333daf6716af3465bba39f86b7db485e2b is the first bad commit
commit ad5377333daf6716af3465bba39f86b7db485e2b
Author: Simen Bekkhus <[email protected]>
Date:   Mon Dec 2 23:20:16 2019 +0100

    feat: add support for `compileFunction` allowing us to avoid the module wrapper (#9252)

Since there is a fallback if compileFunction is not defined, I removed the compileFunction branch from ad5377333daf6716af3465bba39f86b7db485e2b, which restored the performance.

Looking at 26.1 the code has moved around a bit but compileFunction and the fallback are still there. So:

26.1.0-dev 817d8b6aca845dd4fcfd7f8316293e69f3a116c5 242.99s <- with compileFunction
26.1.0-dev 817d8b6aca845dd4fcfd7f8316293e69f3a116c5 151.61s <- without compileFunction

i.e. removing the compileFunction branch (patch) brings 26.1 back to the runtime of 24.9. I’m sure that is not the solution, but at least we have something to work with.

As another data point, our jest suite currently takes around 2767 seconds with [email protected], and in our update MR it takes around 3497 seconds, an increase of around 27%.

Thanks for all the great work, Jest team; I hope that the detective skills of @wurstbonbon can help you fix that regression!

@wurstbonbon thanks for taking the time to dig! Very interesting that compileFunction is slower... That should mean that you can just use a custom test environment instead of applying a patch.

const NodeEnv = require('jest-environment-node');

class MyNodeEnv extends NodeEnv {}

delete MyNodeEnv.prototype.compileFunction;

module.exports = MyNodeEnv;

(and the same for jsdom env). Can you confirm?


It being such a bottleneck sounds weird though - Node itself switched to using it 18 months ago: https://github.com/nodejs/node/pull/21573. So it's probably something weird we're doing on our side. The linked https://github.com/nodejs/node/issues/26229 is very interesting though - maybe we need to do some more caching on our side?
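
For anyone curious, the two code paths can be compared in isolation with something like the following rough micro-benchmark sketch; it compiles the same source repeatedly, which is roughly what happens when many test files require the same module, and it does not reproduce Jest's exact module wrapper:

// bench-vm.js - rough sketch comparing vm.Script + runInContext vs vm.compileFunction
const vm = require('vm');

const source = 'module.exports = 40 + 2;';
const params = ['module', 'exports', 'require'];
const wrapped = `(function(${params.join(', ')}) {\n${source}\n})`;
const context = vm.createContext({});

console.time('vm.Script');
for (let i = 0; i < 5000; i++) {
  new vm.Script(wrapped, {filename: 'bench.js'}).runInContext(context);
}
console.timeEnd('vm.Script');

console.time('vm.compileFunction');
for (let i = 0; i < 5000; i++) {
  vm.compileFunction(source, params, {filename: 'bench.js', parsingContext: context});
}
console.timeEnd('vm.compileFunction');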

@SimenB I just tried something similar to that custom env, and it looks like it was a little better (but still slower than jest 24).

I had to do MyNodeEnv.prototype.getVmContext = null;, though, because I'm testing with jest 26, and it looks like it checks for if (typeof this._environment.getVmContext === 'function') { now. Not sure if this could cause other issues, though.
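
For reference, that jest 26 variant would look roughly like this (a sketch with the same caveats as the earlier example; the filename is arbitrary):

// my-node-env.js - sketch of the workaround described above for jest 26
const NodeEnv = require('jest-environment-node');

class MyNodeEnv extends NodeEnv {}

// jest 26 checks `typeof this._environment.getVmContext === 'function'`,
// so shadowing it forces the older Script-based code path
MyNodeEnv.prototype.getVmContext = null;

module.exports = MyNodeEnv;

(and point testEnvironment in the Jest config at that file)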

These are the results I'm seeing after a few runs:

Jest 26 w/testEnvironment: "node" => ~280s
Jest 26 w/custom test environment => ~210s
Jest 24 => ~160s

Let me know if i can help with any other information or something else!

As expected the custom env results in the same speedup for prettier.

I've also tried it on our codebase, where the difference is ~270s vs ~200s, so only about a 25% reduction and not 40%. Unfortunately I can't run our tests with jest 24 because we rely on the new modern timers mocking.

I missed a delete in my example above, sorry about that.


I wonder if it's enough to just cache the compiled functions manually - could you try to apply this patch? (both transpiled JS and the TS source included here)

diff --git i/packages/jest-runtime/build/index.js w/packages/jest-runtime/build/index.js
index 1d094a6dc0..f6d059caa3 100644
--- i/packages/jest-runtime/build/index.js
+++ w/packages/jest-runtime/build/index.js
@@ -267,6 +267,7 @@ const getModuleNameMapper = config => {
 const unmockRegExpCache = new WeakMap();
 const EVAL_RESULT_VARIABLE = 'Object.<anonymous>';
 const runtimeSupportsVmModules = typeof _vm().SyntheticModule === 'function';
+const compiledFunctionCache = new Map();
 /* eslint-disable-next-line no-redeclare */

 class Runtime {
@@ -1169,23 +1170,30 @@ class Runtime {
       value: this._createRequireImplementation(localModule, options)
     });
     const transformedCode = this.transformFile(filename, options);
-    let compiledFunction = null; // Use this if available instead of deprecated `JestEnvironment.runScript`
+    let compiledFunction = undefined; // Use this if available instead of deprecated `JestEnvironment.runScript`

     if (typeof this._environment.getVmContext === 'function') {
       const vmContext = this._environment.getVmContext();

       if (vmContext) {
-        try {
-          compiledFunction = (0, _vm().compileFunction)(
-            transformedCode,
-            this.constructInjectedModuleParameters(),
-            {
-              filename,
-              parsingContext: vmContext
-            }
-          );
-        } catch (e) {
-          throw (0, _transform().handlePotentialSyntaxError)(e);
+        const params = this.constructInjectedModuleParameters();
+        const cacheKey = transformedCode + params;
+        compiledFunction = compiledFunctionCache.get(cacheKey);
+
+        if (!compiledFunction) {
+          try {
+            compiledFunction = (0, _vm().compileFunction)(
+              transformedCode,
+              params,
+              {
+                filename,
+                parsingContext: vmContext
+              }
+            );
+            compiledFunctionCache.set(cacheKey, compiledFunction);
+          } catch (e) {
+            throw (0, _transform().handlePotentialSyntaxError)(e);
+          }
         }
       }
     } else {
@@ -1194,13 +1202,13 @@ class Runtime {
       const runScript = this._environment.runScript(script);

       if (runScript === null) {
-        compiledFunction = null;
+        compiledFunction = undefined;
       } else {
         compiledFunction = runScript[EVAL_RESULT_VARIABLE];
       }
     }

-    if (compiledFunction === null) {
+    if (!compiledFunction) {
       this._logFormattedReferenceError(
         'You are trying to `import` a file after the Jest environment has been torn down.'
       );
diff --git i/packages/jest-runtime/src/index.ts w/packages/jest-runtime/src/index.ts
index 522adabd1e..8958a4cef8 100644
--- i/packages/jest-runtime/src/index.ts
+++ w/packages/jest-runtime/src/index.ts
@@ -137,6 +137,8 @@ type RunScriptEvalResult = {[EVAL_RESULT_VARIABLE]: ModuleWrapper};

 const runtimeSupportsVmModules = typeof SyntheticModule === 'function';

+const compiledFunctionCache = new Map<string, ModuleWrapper>();
+
 /* eslint-disable-next-line no-redeclare */
 class Runtime {
   private _cacheFS: StringMap;
@@ -1000,24 +1002,29 @@ class Runtime {

     const transformedCode = this.transformFile(filename, options);

-    let compiledFunction: ModuleWrapper | null = null;
+    let compiledFunction: ModuleWrapper | undefined = undefined;

     // Use this if available instead of deprecated `JestEnvironment.runScript`
     if (typeof this._environment.getVmContext === 'function') {
       const vmContext = this._environment.getVmContext();

       if (vmContext) {
-        try {
-          compiledFunction = compileFunction(
-            transformedCode,
-            this.constructInjectedModuleParameters(),
-            {
+        const params = this.constructInjectedModuleParameters();
+
+        const cacheKey = transformedCode + params;
+
+        compiledFunction = compiledFunctionCache.get(cacheKey);
+
+        if (!compiledFunction) {
+          try {
+            compiledFunction = compileFunction(transformedCode, params, {
               filename,
               parsingContext: vmContext,
-            },
-          ) as ModuleWrapper;
-        } catch (e) {
-          throw handlePotentialSyntaxError(e);
+            }) as ModuleWrapper;
+            compiledFunctionCache.set(cacheKey, compiledFunction);
+          } catch (e) {
+            throw handlePotentialSyntaxError(e);
+          }
         }
       }
     } else {
@@ -1028,13 +1035,13 @@ class Runtime {
       );

       if (runScript === null) {
-        compiledFunction = null;
+        compiledFunction = undefined;
       } else {
         compiledFunction = runScript[EVAL_RESULT_VARIABLE];
       }
     }

-    if (compiledFunction === null) {
+    if (!compiledFunction) {
       this._logFormattedReferenceError(
         'You are trying to `import` a file after the Jest environment has been torn down.',
       );

EDIT: nope, this breaks horribly 🙈 I've asked in the Node issue if it's possible to populate the compilation cache 🤞

I thought this might do the trick.

const params = this.constructInjectedModuleParameters();
const cacheKey = transformedCode + params;
const cachedData = compileFunctionCache.get(cacheKey);

try {
  compiledFunction = (0, _vm().compileFunction)(
    transformedCode,
    params,
    {
      filename,
      parsingContext: vmContext,
      cachedData,
      produceCachedData: !cachedData,
    },
  );

  if (compiledFunction.cachedDataProduced) {
    compileFunctionCache.set(cacheKey, compiledFunction.cachedData);
  } 
} catch (e) {
  throw (0, _transform().handlePotentialSyntaxError)(e);
}

It improves performance a little but Script is still a lot faster.

Tried the recommendation from @SimenB: https://gitlab.com/gitlab-org/gitlab/-/merge_requests/33252/diffs?commit_id=6d633c88caf70f712fa0ccaac42d952976161ec6

While it did improve performance a bit, it is still considerably slower than on jest 24.x:

  • Jest 24.x: 2580 seconds total runtime of our jest tests
  • Jest 26.x: 3166 seconds total runtime of our jest tests

@leipert Have you by any chance tried downgrading the jsdom environment to 14?

yarn add jest-environment-jsdom-fourteen --dev + "testEnvironment": "jest-environment-jsdom-fourteen" in your jest config. This still seems to be responsible for the bulk of the duration increase for us (adds 40-50%), but it's starting to look like there are multiple regressions in play.

@pleunv With jest 24.x we are already on jsdom 16 via jest-environment-jsdom-sixteen; we had to upgrade because of some issues with testing web components. So the only change we make is jest 24.x + jest-environment-jsdom-sixteen -> jest 26.x + jest-environment-jsdom, meaning the jsdom version doesn't even change.

Opened up https://github.com/nodejs/node/issues/35375 upstream about the issue found by @wurstbonbon

@SimenB are you aware of any viable alternatives to micromatch? That repo has been silent for over half a year now, and major issues affecting Jest like https://github.com/micromatch/micromatch/issues/179 are still open.

Not really, it's what most libraries use. Could look at e.g. minimatch, but I doubt it'd be viable

@SimenB What makes micromatch better than all the alternatives?

Based on the feedback in the issue I've opened, I'm thinking we should revert to using Script for now as it seems it's gonna need a bit of work in Node to fix it there.

@leipert @wurstbonbon or anyone else, can you try this patch in your node_modules/jest-runtime/build/index.js?

diff --git i/packages/jest-runtime/build/index.js w/packages/jest-runtime/build/index.js
index 851d8e12cd..7235082546 100644
--- i/packages/jest-runtime/build/index.js
+++ w/packages/jest-runtime/build/index.js
@@ -1170,35 +1170,24 @@ class Runtime {
       value: this._createRequireImplementation(localModule, options)
     });
     const transformedCode = this.transformFile(filename, options);
-    let compiledFunction = null; // Use this if available instead of deprecated `JestEnvironment.runScript`
+    let compiledFunction = null;
+    const script = this.createScriptFromCode(transformedCode, filename);
+    let runScript = null; // Use this if available instead of deprecated `JestEnvironment.runScript`

     if (typeof this._environment.getVmContext === 'function') {
       const vmContext = this._environment.getVmContext();

       if (vmContext) {
-        try {
-          compiledFunction = (0, _vm().compileFunction)(
-            transformedCode,
-            this.constructInjectedModuleParameters(),
-            {
-              filename,
-              parsingContext: vmContext
-            }
-          );
-        } catch (e) {
-          throw (0, _transform().handlePotentialSyntaxError)(e);
-        }
+        runScript = script.runInContext(vmContext, {
+          filename
+        });
       }
     } else {
-      const script = this.createScriptFromCode(transformedCode, filename);
-
-      const runScript = this._environment.runScript(script);
+      runScript = this._environment.runScript(script);
+    }

-      if (runScript === null) {
-        compiledFunction = null;
-      } else {
-        compiledFunction = runScript[EVAL_RESULT_VARIABLE];
-      }
+    if (runScript !== null) {
+      compiledFunction = runScript[EVAL_RESULT_VARIABLE];
     }

     if (compiledFunction === null) {

I'll need to tweak how v8 code coverage works, but I'll try to open a PR tomorrow or next week.
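For readers less familiar with the two Node vm APIs being swapped here, a standalone sketch of the difference (hypothetical example code, not Jest's actual module wrapper):

const vm = require('vm');

const code = 'module.exports = 1 + 1;';
const params = ['module', 'exports', 'require'];
const context = vm.createContext({});

// Approach introduced in Jest 25: compile the wrapper function directly.
// Per the upstream Node issue linked above, this path does not currently
// benefit from V8's compilation cache, which is the regression being discussed.
const fromCompileFunction = vm.compileFunction(code, params, {
  filename: '/fake/module.js',
  parsingContext: context,
});

// Approach being reverted to: wrap the code manually and evaluate it with
// vm.Script + runInContext, which the numbers in this thread show to be much faster.
const wrapped = `(function(${params.join(', ')}) {\n${code}\n})`;
const script = new vm.Script(wrapped, {filename: '/fake/module.js'});
const fromScript = script.runInContext(context);

// Both produce an equivalent module wrapper function.
const moduleObj = {exports: {}};
fromScript(moduleObj, moduleObj.exports, require);
console.log(moduleObj.exports); // 2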

I tested out the patch to use Script on our test suites and here are the results I got.
Times are in min:sec.

Name | Suite 1 | Suite 2 | Suite 3 | Suite 4
-- | -- | -- | -- | --
jest 24 | 3:25 | 3:30 | 3:29 | 0:53
jest 26 patched | 3:32 | 4:36 | 3:48 | 0:53
jest 26 unpatched | 5:10 | 6:12 | 5:11 | 1:07
26 patched vs 24 | 4% | 31% | 9% | 1%
26 unpatched vs 24 | 52% | 76% | 49% | 27%
26 patched vs unpatched | 46% | 35% | 36% | 25%

Iteration | Suite 1 | Suite 2 | Suite 3 | Suite 4
-- | -- | -- | -- | --
jest 24 - 1 | 2:58 | 3:37 | 3:33 | 0:47
jest 24 - 2 | 3:18 | 3:34 | 3:32 | 0:51
jest 24 - 3 | 3:27 | 3:08 | 3:48 | 0:59
jest 24 - 4 | 3:37 | 3:44 | 3:38 | 0:53
jest 24 - 5 | 3:45 | 3:31 | 2:56 | 0:55
jest 26 patched - 1 | 3:42 | 4:31 | 4:08 | 0:57
jest 26 patched - 2 | 3:11 | 4:18 | 3:28 | 0:57
jest 26 patched - 3 | 3:55 | 5:12 | 3:19 | 0:55
jest 26 patched - 4 | 3:22 | 4:25 | 4:20 | 0:46
jest 26 unpatched - 1 | 4:30 | 6:12 | 4:28 | 1:08
jest 26 unpatched - 2 | 5:16 | 6:17 | 5:18 | 1:05
jest 26 unpatched - 3 | 5:46 | 6:07 | 5:49 | 1:09

All tests ran on the same commit and in a similar test environment (Azure DevOps Hosted Ubuntu 18).
I only measured the time taken to run jest on my test suites.
Most of my suites are similar in nature (all backend unit tests).

From what I can tell, the patch to use Script does make a huge difference to the perf.
I can't tell if the slowdown on Suite 2 is an outlier or a real regression (I only did 4 runs).
It does look like there's still a perf regression, but not as bad as before.

Still interesting that v26 doesn't improve on v24, though...

Thanks @Cellule! That's good enough for me - I'll put together a PR when I have some time

Awesome stuff! That leaves only the micromatch issue then; hopefully that repo will come under active maintenance again.

BTW, I assume there is also a perf regression in jsdom itself. I ran tests like this on a big web project, with none of the patches mentioned above applied, and it looked like this:

Jest 24 (testEnvironment: "jsdom") (no rewires, latest CRA)
144.014s

Jest 24 (testEnvironment: "jest-environment-jsdom-sixteen") (rewired latest CRA to change testEnvironment)
409.473s (also a few failed tests)

Jest 26 (testEnvironment: "jsdom") (no rewires, latest CRA), so the old jsdom? Whatever the default for Jest 26 is, I assume. (I used react-app-rewired to rewire the jest config and a pnpmfile.js to override which version of Jest was installed with `react-scripts`, since it still ships Jest 24 (similar to resolutions in yarn).)
310.275s

Jest 26 (testEnvironment: "jest-environment-jsdom-sixteen") (rewired latest CRA to change testEnvironment + pnpmfile.js)
over 1200s (some tests failed and the test run just got stuck forever)

Admittedly this is a super vague and unstable performance report, I must say, but I think every input helps :)

https://github.com/facebook/jest/releases/tag/v26.5.0 has the vm.Script change discussed here

(Edit: updated after additional runs)

Preliminary results on the same test suite:

Jest 26.5
Cold: 59.992
Hot: 43.976

Jest 26.4:
Cold: 90.213
Hot: 47.408

A very significant speedup on cold runs <3

And here are the results with my test suite:

Jest 26.5
Cold: 149s

Jest 26.4
Cold: 226s

Great news 🙂 I think we're back to just the micromatch regression, then

If you are using npm-force-resolutions to forcibly install micromatch 3, it may not work in [email protected]

// package.json
{
  ...
  "scripts": {
    "preinstall": "npx npm-force-resolutions",
    ...
  },
  "resolutions": {
    "micromatch": "^3.0.0"
  }
}

Error when running the tests:

TypeError: _micromatch(...).default.scan is not a function
    at globs.map.glob (/home/travis/build/removed/node_modules/jest-util/build/globsToMatcher.js:65:47)
    at Array.map (<anonymous>)
    at globsToMatcher (/home/travis/build/removed/node_modules/jest-util/build/globsToMatcher.js:61:26)
    at new SearchSource (/home/travis/build/removed/node_modules/@jest/core/build/SearchSource.js:197:49)
    at contexts.map.context (/home/travis/build/removed/node_modules/@jest/core/build/runJest.js:265:16)
    at Array.map (<anonymous>)
    at runJest (/home/travis/build/removed/node_modules/@jest/core/build/runJest.js:264:34)
    at startRun (/home/travis/build/removed/node_modules/@jest/core/build/cli/index.js:479:35)
    at runWithoutWatch (/home/travis/build/removed/node_modules/@jest/core/build/cli/index.js:494:10)
    at _run10000 (/home/travis/build/removed/node_modules/@jest/core/build/cli/index.js:416:13)
npm ERR! Test failed.  See above for more details.
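(For context on that error: jest-util's globsToMatcher calls micromatch.scan(), which exists in micromatch 4 but not in micromatch 3, so pinning micromatch back to ^3.0.0 breaks newer Jest versions. A quick standalone check, hypothetical and not part of Jest:)

const micromatch = require('micromatch');

// 'function' on micromatch 4 (re-exported from picomatch), 'undefined' on
// micromatch 3, which is exactly the TypeError in the stack trace above.
console.log(typeof micromatch.scan);

// Basic glob matching works on both major versions.
console.log(micromatch.isMatch('src/foo.test.js', '**/*.test.js')); // true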

@SimenB Thank you very much for the update. It saves us 20% of running time on Travis after updating to [email protected]

Our results:

Jest v26.5

  • 323.98s
  • 321.24s

Jest v24.9

  • 262.17s
  • 275.96s

Thanks @SimenB! This is amazing. Our results for our ~22000 tests in ~2000 suites:

  • Jest 24.x: 2864 s
  • Jest 26.5.2: 2967 s

which is about 3% slower and thus within the margin of error, compared to the ~27% slowdown we saw before. Thank you, now we just need to merge: https://gitlab.com/gitlab-org/gitlab/-/merge_requests/33252#note_425616404

Just wanted to note that all of the "heap out of memory" issues that I was having before are gone after upgrading to Jest 26.5.2 and Node 14 (was on Node 10 before). Not sure how much of the problem was caused by Jest vs. by Node, but if others are seeing similar problems try upgrading to both.

UPDATE: Never mind. I've started getting OOM errors again. I guess it's right on the borderline of what my laptop can handle and the first few runs were good, but now it's dying again. Will still have to stick to 24.x.x. :(

If somebody is interested, I have created a DOM implementation that has very good performance compared to JSDOM. It has support for Jest.

| Operation | JSDOM | Happy DOM |
| ------------------------------------ | ------- | --------- |
| Import / Require | 333 ms | 45 ms |
| Parse HTML | 256 ms | 26 ms |
| Serialize HTML | 65 ms | 8 ms |
| Render custom element | 214 ms | 19 ms |
| querySelectorAll('tagname') | 4.9 ms | 0.7 ms |
| querySelectorAll('.class') | 6.4 ms | 3.7 ms |
| querySelectorAll('[attribute]') | 4.0 ms | 1.7 ms |
| querySelectorAll('[class~="name"]') | 5.5 ms | 2.9 ms |
| querySelectorAll(':nth-child(2n+1)') | 10.4 ms | 3.8 ms |

Link to project:
https://github.com/capricorn86/happy-dom/
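A hedged sketch of how wiring it into Jest would presumably look (assuming the project's separately published @happy-dom/jest-environment package; check the linked repo for the exact package name and setup):

// jest.config.js (hypothetical example)
module.exports = {
  // use Happy DOM instead of jsdom as the test environment
  testEnvironment: '@happy-dom/jest-environment',
};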

@capricorn86 Looks nice. Is it spec compliant?

> @capricorn86 Looks nice. Is it spec compliant?

Thank you @milesj!

The functionality that has been implemented follows the specs, but there is no detailed overview of which specs are covered yet. I am thinking about adding that. However, all functionality is covered by unit tests.

The initial goal of the DOM was to be able to render web components server side with good performance, as I needed it for some other projects.

FWIW, I just tried bumping our project from react-scripts@3 with Jest 24.9 to react-scripts@4 with Jest 26.6.

Our server API test suite had previously been executing in about 180-190 seconds. After switching to Jest 26.6, it was consistently taking about 220 seconds. I even tried forcing resolution of minimatch to 4.0.2. Switching the test runner to jest-circus seemed to knock off a couple seconds, but overall, 26.6 seems noticeably slower.

react-scripts@4 uses jest-circus by default, fwiw. It's also micromatch we use, not minimatch. Seems we've broken rolling back micromatch via #10131, however, so it's not as easy to test whether that is the cause of the regression anymore.

@SimenB : we've got a weird migration setup atm - it's a legacy MEAN / AngularJS app that I converted to build using CRA. The test config is all our own, vs the built-in CRA Jest config - we're just taking advantage of the fact that CRA comes with Jest as a dependency.

I don't have my work box in front of me atm, so now I can't remember if I actually meant micromatch or if I really focused on the wrong package name there :) I'll have to look at it again next week.

I just noticed that v26 runs _a lot_ slower in iTerm than in macOS's default terminal. On a set of 6500 tests I consistently get the following results:

  • v24, Terminal: ~90s
  • v24, iTerm2: ~90s
  • v26, Terminal: ~110s
  • v26, iTerm2: ~150s

This blew my mind a little bit after months of trying various things to sort out the slowdown. Any chance anyone else with perf issues on a mac can try this out? Mind you, this is with jsdom@14 on v26.

@pleunv I believe this may be related: https://github.com/facebook/jest/pull/9294. I noticed that hyperlinks slow down iTerm2 on my machine, making it choppy, but I haven't investigated the overall speed of execution, nor found another person who had issues with it.

Eureka. Searching for "iTerm" brought me to this PR. I had noticed these underlines before and didn't realize they were hyperlinks. After seeing that PR I disabled hyperlinks in iTerm which brought my runtime down to 130s. After applying the code from the PR and removing the hyperlinks I'm back down to 120s. Sanity slightly restored.

Any chance that PR can get put back in?

edit: @thymikee beat me to it 😄
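(For context, the hyperlinks in question are OSC 8 escape sequences wrapped around parts of Jest's output, which terminals like iTerm2 render as clickable links; a minimal standalone illustration, not Jest's actual code:)

// Wraps text in an OSC 8 hyperlink escape sequence. Terminals with hyperlink
// support render it as a clickable (often underlined) link; others show plain text.
const hyperlink = (text, url) => `\u001B]8;;${url}\u0007${text}\u001B]8;;\u0007`;
console.log(hyperlink('src/foo.test.js', 'file:///repo/src/foo.test.js'));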

@pleunv I'll try to find some time this week to bring it back. Although the real deal would be to fix this on the iTerm side, as other terminals, e.g. on Linux, have no issues with hyperlinks. Would you mind filing an issue with the iTerm project?

I just made this change and it gave me 1s for a single test file. And you can still click on the url, it's just no longer underlined.

This might be huge for larger runs. ❤️

//edit
Funny thing: before the change it made no difference whether it was iTerm or Terminal. After the change iTerm is faster for me.

> @pleunv I'll try to find some time this week to bring it back. Although the real deal would be to fix this on the iTerm side, as other terminals, e.g. on Linux, have no issues with hyperlinks. Would you mind filing an issue with the iTerm project?

I've created an issue here (they're on GitLab). If anyone has additional details or a repro project, feel free to add.

I was experimenting some more in the meantime and found that, when running it on only a smaller subset of tests (a couple dozen test files), hyperlinks generally don't make that much of a difference. When running it on our full set (700 files), though, the impact is very much measurable.

I also have the impression that on a long run jest's console output starts to get really glitchy/flashy. The progress lines at the bottom, for example, are more hidden than visible.
