Jest: Jest 24 is slower than Jest 23.6.0 on the same test suite

Created on 6 Feb 2019  ·  138 Comments  ·  Source: facebook/jest

🐛 Bug Report

Filing bug per #7732 https://github.com/facebook/jest/issues/7732#issuecomment-460676324
Jest v24 takes 38 seconds vs. 28 seconds for v23 to run 50 suites (296 tests).

To Reproduce

Steps to reproduce the behavior:
Jest v23 setup:
.babelrc

{
  "presets": ["es2015", "react", "stage-3", "env"]
}

package.json:

...
    "babel-core": "6.26.0",
    "babel-jest": "22.4.1",
    "babel-loader": "7.1.2",
    "babel-preset-env": "1.6.1",
    "babel-preset-es2015": "6.24.1",
    "babel-preset-react": "6.24.1",
    "babel-preset-stage-3": "6.24.1",
    "jest": "23.6.0",
...

Jest v24 setup:
.babelrc

{
  "presets": ["@babel/preset-env", "@babel/preset-react"],
  "plugins": [
    "@babel/plugin-syntax-dynamic-import",
    "@babel/plugin-syntax-import-meta",
    ["@babel/plugin-proposal-class-properties", { "loose": false },
    "@babel/plugin-proposal-json-strings"
  ]
}

package.json:

...
    "@babel/core": "7.2.2",
    "@babel/plugin-proposal-class-properties": "7.3.0",
    "@babel/plugin-proposal-json-strings": "7.2.0",
    "@babel/plugin-syntax-dynamic-import": "7.2.0",
    "@babel/plugin-syntax-import-meta": "7.2.0",
    "@babel/preset-env": "7.3.1",
    "@babel/preset-react": "7.0.0",
    "jest": "24.1.0",
...

Jest config is not changed between v23 and v24:

module.exports = {
  'verbose': false,
  'coverageThreshold': {
    'global': {
      'branches': 40,
      'functions': 45,
      'lines': 50,
      'statements': 50
    }
  },
  'projects': [
    '<rootDir>/src/test/js/reactapp'
  ],
  'moduleDirectories': [
    'src',
    'node_modules'
  ],
  'testURL': 'http://mockserver:9999',
  'collectCoverage': true,
  'coverageDirectory': '<rootDir>/build/coverage/',
  'coverageReporters': [
    'json',
    'html'
  ]
};

Expected behavior

Jest v24 running performance matches v23.

Link to repl or repo (highly encouraged)

Not available

Run npx envinfo --preset jest

Paste the results here:

System:
    OS: Windows 7
    CPU: (4) x64 Intel(R) Xeon(R) CPU E5-2695 v4 @ 2.10GHz
  Binaries:
    Node: 8.12.0 - C:\Program Files\nodejs\node.EXE
    npm: 6.4.1 - C:\Program Files\nodejs\npm.CMD
Labels: Regression · Help Wanted

Most helpful comment

All I want for Christmas is y...Jest 25 <3

All 138 comments

Would be great if you could attach a repro that one can checkout and analyze.

We're also seeing quite significant performance regression for Jest 24.

  System:
    OS: Linux 4.15 Ubuntu 18.04.1 LTS (Bionic Beaver)
    CPU: (12) x64 Intel(R) Core(TM) i7-8750H CPU @ 2.20GHz
  Binaries:
    Node: 10.15.0 - ~/.nvm/versions/node/v10.15.0/bin/node
    Yarn: 1.13.0 - ~/.nvm/versions/node/v10.15.0/bin/yarn
    npm: 6.4.1 - ~/.nvm/versions/node/v10.15.0/bin/npm
  npmPackages:
    jest: 24.1.0 => 24.1.0 

This is with 1400 tests on a large code base. Running a custom typescript transformer.

--all
Jest 23.6.0: 13s
Jest 24.1: 17s

--all --no-cache
Jest 23.6.0: 60s
Jest 24.1: 150s

--all --runInBand
Jest 23.6.0: 36s
Jest 24.1: 40s

--all --no-cache --runInBand
Jest 23.6.0: 60s
Jest 24.1: 65s

I'd love to provide a repro for a faster resolution, but like most people here I can't share private projects. If someone could suggest an open-source project that uses Jest for tests, we could tweak it and see if the slowdown happens there.

Here's an open-source project I built that uses jest for testing. It runs both traditional unit tests and integration tests with live API calls. I definitely noticed significant slowdowns when trying to upgrade to jest@24.

Unit tests and integration tests typically run in ~10s each, so should be easy to quickly do some verification.

https://github.com/mkreiser/ESPN-Fantasy-Football-API

To be more useful, I ran some stats. Unfortunately, ESPN's API appears to be down, so I could not run the integration tests (which could be more interesting).

env (via npx envinfo --preset jest)

System:
    OS: macOS 10.14
    CPU: (8) x64 Intel(R) Core(TM) i7-7700HQ CPU @ 2.80GHz
  Binaries:
    Node: 8.15.0 - ~/.nvm/versions/node/v8.15.0/bin/node
    npm: 6.7.0 - ~/.nvm/versions/node/v8.15.0/bin/npm

Branches

master = jest@23, jest-24 = jest@24

Unit test performance (451 tests in 16 suites) (npm run test:unit)

jest@23:

PASSED OPTIONS | TIME
-- | --
(no options) | 3.5s
--maxWorkers=2 | 3.4s
--no-cache | 8.6s
--no-cache --maxWorkers=2 | 6.8s
--runInBand | 3.8s
--runInBand --no-cache | 8.3s

jest@24:

PASSED OPTIONS | TIME
-- | --
(no options) | 4.4s
--maxWorkers=2 | 4.7s
--no-cache | 13.4s
--no-cache --maxWorkers=2 | 17.6s
--runInBand | 5.1s
--runInBand --no-cache | 9.3s

Checked on this repo tests https://github.com/LambdaSchool/React-Testing, results:
v23:
```
node_modules/.bin/jest --verbose
PASS src/__tests__/App.spec.js
  ✓ renders without crashing (13ms)
PASS src/__tests__/Panel.spec.js
  ✓ renders without crashing (13ms)
PASS src/__tests__/Display.spec.js
  ✓ renders without crashing (14ms)
PASS src/__tests__/Button.spec.js
```

@thymikee Can we remove the Windows tag here? This is definitely present on macOS and Ubuntu as well.

Jest v24:

Test Suites: 96 passed, 96 total
Tests:       276 passed, 276 total
Snapshots:   129 passed, 129 total
Time:        113.429s

Jest v23:

Test Suites: 96 passed, 96 total
Tests:       276 passed, 276 total
Snapshots:   129 passed, 129 total
Time:        17.288s

Woah, 17s vs 113s, that's huge! Any possibility of sharing that project?


I think we need to start seriously considering adding performance monitoring to Jest. @aaronabramov talked about this in #4471. Node comes with essentially the same performance API as the browser (available since Node 8): https://nodejs.org/api/perf_hooks.html — maybe we can use that to measure? Having some way of emitting how long different things take should probably be the first step (so #4471, but that only covers what happens inside of tests).

Sure, it is this repo.

Currently I do not want to upgrade to v24 due to this difference 😞

Thanks!

On my machine, that takes 35-40 sec on a cold cache, and 7-8 sec on a warm cache. With jest 24, I get 40-45 sec on cold cache and 17-18 sec on warm cache. Definitely shows the problem (although not as severe as on your machine). I'll try to find some time digging into this

Currently I do not want to upgrade to v24 due to this difference

Understandable!

Maybe I ran v23 with cache and v24 without 🤔 .

But we also have another repo where the tests got noticeably slower after upgrading. I don't have any official timings though.

I suspect Babel 7 significantly contributes to the issue; check out commit 1818d84 and the one before it — I get a 0.5 second increase for 4 simple tests.

Tested with npx jest; npx jest, taking the latter time (to get the hot cache).

version| tests | time
---|---|---
23.6.0 | 5749 | 59.13s
24.1.0 | 5749 | 144.5s
24.8.0 | 5812 | 125.88s

The percentage difference between hot and cold cache is also much greater

version| hot | cold | difference
---|---|---|---
23.6.0 | 58s | 65s | +12%
24.1.0 | 140s | 190s | +35%
24.8.0 | 126s | 145s | +16%

Even worse is the difference between jest and jest --no-cache

version| jest (hot cache) | jest --no-cache | difference
---|---|---|---
23.6.0 | 59s | 113s | +92%
24.1.0 | 144s | >400s | + >200%
24.8.0 | 126s | 1318s | + >1000%

I aborted the --no-cache run on 24.1 after 400 seconds, with 2669/5749 tests and 113/621 suites done. So the total execution time is probably in the 800-900 second range.

Edit: Jest 24.8.0 was a big improvement. Slightly faster run time despite more tests.

Is anyone working on this, now that there is a clear repro?
We're also seeing a massive performance regression when updating to Jest 24. Due to https://github.com/facebook/jest/issues/7709 we'd have to upgrade to Jest 24. Right now we're using a resolution to 2.4.1 for write-file-atomic, which is surely not ideal...

There is #8032 which has identified one cause of regression

We've recently switched from Babel 6/Jest 23 to Babel 7/Jest 24 and have noticed similarly stark speed regressions (from ~150s to ~200s with a clean cache, and ~100s to ~160s cached). It doesn't appear to be Babel 7 related, since running Babel compilation by itself is actually faster now.

Hi there, I would like to share my experience with Jest.
I like Jest, but the problem I'm having is that it takes a lot of time to run 6 tests: 13.4 seconds.

 PASS  test/domain/model/Actor.test.ts (12.193s)
  Actor creation
    √ should create an actor (5ms)
    √ should create an actor will publish ActorCreated event (3ms)
  Actor roles
    √ should create a role and to be an instance of it (15ms)
    √ should publish CreatorRoleAssignedToActor event when add a creator role to an actor (2ms)
    √ should publish UserRoleAssignedToActor event when add a user role to an actor (5ms)
    √ should create a user actor and publish UserRoleAssignedToActor domain event (1666ms)

Test Suites: 1 passed, 1 total
Tests:       6 passed, 6 total
Snapshots:   0 total
Time:        13.4s
Ran all test suites.

Watch Usage: Press w to show more.
npx envinfo --preset jest
npx: installed 1 in 2.847s

  System:
    OS: Windows 10
    CPU: (8) x64 Intel(R) Core(TM) i7-4700HQ CPU @ 2.40GHz
  Binaries:
    Node: 11.7.0 - C:\Program Files\nodejs\node.EXE
    npm: 6.5.0 - C:\Program Files\nodejs\npm.CMD

Also in the package.json

"scripts": {
  "test": "jest",
  "test:watch": "jest --watch",
},
"devDependencies": {
   "@types/jest": "^24.0.12",
   "jest": "^24.5.0",
   "jest-coverage-badges": "^1.1.2",
   "jest-junit": "^6.4.0",
   "ts-jest": "^24.0.2",
...etc
}

jest.config.js

module.exports = {
    preset: "ts-jest",
    testEnvironment: "node",
    coverageDirectory: "doc/code-coverage",
    collectCoverageFrom: [
        "src/**/*.{js,ts}",
        "!**/node_modules/**",
        "!**/vendor/**",
        "!src/index.ts"
    ],
    coverageReporters: ["json", "html", "json-summary", "jest-junit", "lcov"]
};

I'm using it with typescript:

in my sh test folder, I have a file with 6 tests:

Then I tried to use mocha

npm i -D chai @types/chai mocha @types/mocha sinon @types/sinon sinon-chai @types/sinon-chai source-map-support nyc

and I put mocha.opts inside 'test' folder

--require ts-node/register
--watch-extensions ts
test/**/*.ts

and in the package.json:

    "scripts": {
        "test:watch": "mocha --opts test/mocha.opts --watch"
    },
...etc

I ran the same tests (I only modified the assertions to use sinon-chai), and it ran amazingly fast (about 1-2 seconds):

Actor creation
    √ should create an actor
    √ should create an actor will publish ActorCreated event

  Actor roles
    √ should create a role and to be an instance of it
    √ should publish CreatorRoleAssignedToActor event when add a creator role to an actor
    √ should publish UserRoleAssignedToActor event when add a user role to an actor
    √ should create a user actor and publish UserRoleAssignedToActor domain event (325ms)


  6 passing (336ms)

Something is not good, and I don't know what is the problem.

Anything we can do regarding this? Any approach advice?
It's alarmingly slow btw.

We ended up migrating to Mocha (+Chai and Sinon), which brought a significant improvement in testing time.

I'm considering going back to jest 23 until it's fixed, it's about 3-5 times slower than before for me.

The fastest jest release has been 22.4.4. All newer versions have been much slower.

@deleteme how does v23 compare to v22? Is the speed difference significant? I want to propose downgrading to 22, but my colleague said that version is too old (compared to 24).

Hi! I don't want to add to the noise of this Issue, so I'll keep it simple and add one more open-source reproduction:

A project with a large code size (in particular, a single 2-megabyte-ish generated lexer), where the single change of upgrading Jest v23.0.x to v24.0.x caused test times to jump from ~7 seconds (~5s per test file) to ~147 seconds (~70s for each individual test file). Yes, a regression of 2,000%. 🤯

Project is ELLIOTTCABLE/excmd.js; commit ELLIOTTCABLE/excmd.js@f0982114 used Jest 23, and consistently reports <10s runtimes on jest --clearCache && time jest; the next commit, ELLIOTTCABLE/excmd.js@d4d3a08d, updates Jest to v24, and shows consistent cache-cleared timing of ≥100s.

Again, so it doesn't slip by — I strongly suspect the performance regression has something to do with filesystem copying or module loading; because the thing that makes my project an outlier is definitely the loading of a single, hairy, two megabyte .js file. (An unrelated bug.)

I also am not convinced it's related to Babel; Babel happily deoptimizes the styling of that file and continues on its merry way, showing a wall-clock build-time of 4.48s.


(Because I'm sure this issue is a stressful one (lots of complaining users, difficult to reproduce, and performance-related and thus difficult to debug …), I'd like to add: we love Jest, and your hard maintenance work, even the triage where you say “no”, is greatly appreciated. ❤️)

@ronayumik Ever since Jest 22.4.4, as new versions of Jest are released, and when I'm interested in upgrading, I compare how long it takes to run our tests (412 test suites, 2015 tests).

The following gist includes several benchmarks using Jest 22, 23, and 24.

https://gist.github.com/deleteme/25fcca7cd16aceb30cc5f50e022aa706

So far, every new version has been significantly slower.

@deleteme have you tried with micromatch pinned to v4?

@thymikee No. Do you want me to checkout the master branch of Jest, edit the micromatch dependency, and run it against our tests?

Nah, you can just use Yarn resolutions
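For anyone unfamiliar with that approach: a Yarn resolution pins a transitive dependency from the root package.json — roughly like this (the version shown is the micromatch v4 pin suggested above):

```json
{
  "resolutions": {
    "micromatch": "^4.0.2"
  }
}
```

After adding the field, a fresh `yarn install` rewrites the lockfile so every dependent gets the pinned version.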

@thymikee I've updated my benchmark gist to include a comparison of jest @ 24.8.0 with and without the micromatch 4.0.2 yarn resolution.

https://gist.github.com/deleteme/25fcca7cd16aceb30cc5f50e022aa706#benchmark-jest-2244-2480-with-micromatch-yarn-resolution-2480-without-micromatch-yarn-resolution

That's still pretty bad :/ Still feel like something must be wrong with jest-runtime, but idk what

This is not a jest issue. It's about babel 7.
We've updated Babel 6 to 7 in our project and got about a 30% bundling performance degradation for the same codebase.

@smelukov That's an interesting possibility, I hadn't thought about that before. Given that these performance issues rarely have a single cause or solution Babel could indeed be another factor. Have you managed to make this reproducible on a smaller scale and reported an issue with Babel?

@smelukov Our Babel compile times are faster with 7, but Jest is slower.

Edit: Also if I'm not mistaken, a Babel slowdown would only affect uncached builds, but we're seeing it for cached builds as well.

@jeysal I'm doing some things to find a "bottleneck"
I'll report when done ;)

@agrasley we are using a cached build

We moved off Babel (now fully TS with ts-jest) and are seeing this regression as well from 23 to 24 (+40% time for cached tests, +100% for non-cached tests). Memory usage also increases. Pinning micromatch to v4 had very little benefit. Using the most recent jsdom (15) also did not help.

We can't get past our CI build because of heap tko issues on the latest version of jest(24).

Anyone have any short term solutions? I'm willing to help PR something if someone can guide me on where to start :)

Looking at the code for the --coverage flag, that seems to be the root cause of our heap issues.

There is a ton of code pushed re: coverage over the past few months, so I'm not sure what the source may be. I'll do a bit more digging.

This is not a jest issue. It's about babel 7.

It's absolutely a Jest issue. We used Babel 7 under both Jest 23 and 24 and saw a severe drop on 24 regardless.

I updated from 24.7.1 to 24.8.0 and it's working fast again, even better than 23.6.0 was.
For ~400 tests it used to take 50-90 seconds, now it takes 28-32 seconds.
Can anyone else try and confirm?

I updated from 24.7.1 to 24.8.0 and it's working fast again, even better than 23.6.0 was.
For ~400 tests it used to take 50-90 seconds, now it takes 28-32 seconds.
Can anyone else try and confirm?

It doesn't help. 1630 tests - 1152.588s. Private project.
--clearCache and type checking enabled ("isolatedModules": false)
micromatch 4.0.2 yarn resolution didn't help us either.

Will try the latest version(24.8.0) shortly and report back.

EDIT: Just as slow on 24.8.0. Anyone have a workaround? We have ~5500 tests and literally can't run them on the latest version(dies in CI due to timeouts or takes all eternity(20+ mins) vs the 3mins it used to take).

I'm going to look at the source code and do a few diffs to see what I can find. I may be able to at least get started with some help.

If anyone already started pls ping or reply in this thread, much love.

I've hopefully spotted some places that slow Jest 24 down compared to Jest 22; here's the link if someone is willing to try it out: https://github.com/vitalibozhko/jest/tree/24.8.0-perf-spots (a couple of tweaks and some minor features disabled just to test performance, not a real fix).
On this branch I get nearly the same time as on Jest 22 on our private project setup.

To try this out just follow https://github.com/facebook/jest/blob/master/CONTRIBUTING.md#how-to-try-a-development-build-of-jest-in-another-project
I'd also suggest either running Jest via package.json scripts or via ./node_modules/.bin/jest to make sure the linked version is used (I personally use the latter).
One way to verify the link is properly set is by running ./node_modules/.bin/jest --version in terminal - it should give 24.8.0-dev

That's great @vitalibozhko! Looking at the diff, I see 3 main changes

  1. Upgrade to micromatch@4
  2. Move stack collection (maybe even remove? I just skimmed through)
  3. Do not use pretty-format to format thrown non-errors

Does that seem right? If yes, which parts had the most impact? Some comments on the changes:

  1. We'll be upgrading micromatch in jest 25.
  2. I was afraid that being eager to collect the stacks would have a negative impact on perf: https://github.com/facebook/jest/pull/6965#issuecomment-421612367. Not sure what to do here - stacks are important for understanding what went wrong if tests fail. Maybe we could add an option for it, so people can opt-in to better traces? Which they'd activate when tests fail to (potentially) get a better trace
  3. I'm a bit surprised by the pretty-format change - I wouldn't expect that to be heavy as most of the time Errors are thrown and that code path is not activated.

Opting in sounds like a good idea - that way I can run jest fast on ci and more verbose locally when I really want to debug something.

@vitalibozhko assuming I'm linking your branch correctly, it seems to be more or less the same as 24.8.0 for me. I feel like it was faster the first time I ran it, so maybe something is off there, but now I'm consistently seeing roughly the same times as 24.8.0. I'm pretty sure it's using the build from your branch. We just migrated to Jest, so most of our tests are still using chai expect and sinon, if that matters.

260 suites / 1583 tests / mbp quad core / 2nd run times shown

22.4.4: 47s
23.6.0: 54s
24.8.0: 72s
24.8.0-perf-spots: 72s

Edit: Node version seems to have a large effect on this as well, going to node 10 from node 8 takes 10-15s off of the above times.

(I tried to test off of the latest master commit too but was getting syntaxish errors, and was still getting same errors when i went back to re-link vitalibozhko's branch, and I don't feel like messing with it anymore so..)

@SimenB

I see 3 main changes

Upgrade to micromatch@4
Move stack collection (maybe even remove? I just skimmed through)
Do not use pretty-format to format thrown non-errors
Does that seem right?

That looks correct

which parts had the most impact?

I'd say the most noticeable impact comes from requiring "heavy" modules like pretty-format, even though they are only needed under certain circumstances (e.g. a spec uses snapshots, a failed spec needs to format the error, etc.). The changes I made were in the most obvious places from the profile flame chart.
In terms of CPU load there is one obvious thing: getAllCoverageInfo using deepCyclicCopyObject (7% of total time), replaced with JSON.parse(JSON.stringify()). It fits just fine for coverage cloning — in my project it reduces total cloning time from 2.3s to 0.4s.
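As a small illustration of why the JSON round-trip works there (a sketch with made-up coverage data — valid only because Istanbul-style coverage maps are plain, acyclic JSON data; functions or cyclic references would break it):

```javascript
// Hypothetical stand-in for a coverage object; real Istanbul coverage
// maps are much larger but are likewise plain JSON-serializable data.
const coverage = { 'src/a.js': { s: { '0': 1, '1': 0 }, b: {}, f: {} } };

// Deep clone via JSON round-trip instead of a generic cyclic deep copy.
const cloned = JSON.parse(JSON.stringify(coverage));

// Mutating the clone does not touch the original.
cloned['src/a.js'].s['0'] = 99;
console.log(coverage['src/a.js'].s['0']); // prints 1 — original untouched
```

The speedup comes from V8's highly optimized JSON serializer versus a generic recursive copy that must also track cycles.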

I was afraid that being eager to collect the stacks would have a negative impact on perf: #6965 (comment). Not sure what to do here - stacks are important for understanding what went wrong if tests fail. Maybe we could add an option for it, so people can opt-in to better traces? Which they'd activate when tests fail to (potentially) get a better trace

Putting the stack collection back adds 1-2s (15.5s > 16.5-17.x s) to the total time on a hot cache with in-band execution.
Test Suites: 66 passed, 66 total
Tests: 16 skipped, 441 passed, 457 total
Snapshots: 9 passed, 9 total
Time: 17.398s

It might be worth measuring the retained memory without stack evaluation in place; my guess is that it would only become an issue in watch mode.

I'm a bit surprised by the pretty-format change - I wouldn't expect that to be heavy as most of the time Errors are thrown and that code path is not activated.

As mentioned above, it's not the execution time that makes the difference, but the time to require the module.

BTW, I'm checking performance on Node v11.10.0.

Let me share my profiling here.
Specs:

  • Windows 10 Pro
  • Node 10.15.3
  • Intel Core i7-4702MQ 2.2GHz
  • 8GB RAM
  • x64
  • SSD

Steps:

  1. npx create-react-app my-app --typescript
  2. change App.test.tsx
  3. run npm test

CPU profile attached: CPU-20190801T141211.zip

Is it expected that 15 seconds are spent just requiring modules for a single trivial React component and test?

Can someone with more experience on CPU profiling take a look?

I ran a test on 24.9 vs 24.8 on one of our suites. No improvements unfortunately.

24.8, cold cache:

Test Suites: 3 skipped, 112 passed, 112 of 115 total
Tests:       5 skipped, 494 passed, 499 total
Snapshots:   5 passed, 5 total
Time:        29.323s
Ran all test suites.
✨  Done in 30.65s.

24.8, warm cache:

Test Suites: 3 skipped, 112 passed, 112 of 115 total
Tests:       5 skipped, 494 passed, 499 total
Snapshots:   5 passed, 5 total
Time:        19.109s, estimated 26s
Ran all test suites.
✨  Done in 19.91s.

24.9, cold cache:

Test Suites: 3 skipped, 112 passed, 112 of 115 total
Tests:       5 skipped, 494 passed, 499 total
Snapshots:   5 passed, 5 total
Time:        29.684s
Ran all test suites.
✨  Done in 30.57s.

24.9, warm cache:

Test Suites: 3 skipped, 112 passed, 112 of 115 total
Tests:       5 skipped, 494 passed, 499 total
Snapshots:   5 passed, 5 total
Time:        20.909s, estimated 27s
Ran all test suites.
✨  Done in 21.65s.

We use Jest to test our Angular 8 application with babel-jest and ts-jest. We used to run our 200 tests in less than 5 minutes on Jest 23, and now with 24.9 it is over 20 minutes, which is completely unacceptable. I'm using Node 10. The time is greater both for a cold start and with cache.

Is there any workaround for now? Is anyone actively working on solving this issue, @SimenB @thymikee? It is becoming more and more obvious that this performance issue is a showstopper for many users.

That's a horrendous regression 🙁 Is your project open source?

Have you tried the branch from https://github.com/facebook/jest/issues/7811#issuecomment-516718977?

Another thing that will probably help is to update our code to target node 8 - that will stop transpiling async-await. You can do that by changing this line: https://github.com/facebook/jest/blob/e76c7daa33a7665396679527bba7d0fdfeb2591d/babel.config.js#L30
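For reference, the kind of edit being suggested is bumping the preset-env target in that babel.config.js so async/await stays native on Node 8+ — roughly like this (a sketch of the relevant fragment, not the exact file contents):

```
module.exports = {
  presets: [
    // Targeting node 8 means @babel/preset-env no longer transpiles
    // async-await down to generators/regenerator.
    ['@babel/preset-env', { targets: { node: 8 } }],
  ],
};
```

Besides being faster to execute, untranspiled async functions produce much more readable stack traces and CPU profiles.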

@SimenB no my project is not open source.

I don't want to build Jest on my own. Because this is what you suggest, right?

I added "name" to my Jest config as suggested in other issue report, but that just helps for having the cache. For cold start execution times are unacceptable.

Is there any chance fix can be merged and new version released?
Thanks!

I fully support @tscislo regarding the performance issue between Jest 23 and Jest 24.
@SimenB, sorry for my directness, but every time somebody writes a comment here about the performance regression, you make a surprised face, then ask whether the project is open source, and then silence.
In case you missed my previous message: the micromatch 4.0.2 yarn resolution doesn't help.

@tscislo could you please clarify, what "name" you are talking about? I guess even cached speed up might help to me and someone else.

@DavyJohnes early after 24 was released there was a cache-busting bug which required users to set name manually in their config to make sure the cache worked as intended. To my knowledge that has been patched for some time now, see https://github.com/facebook/jest/pull/7746

@dmitriivikorchuk

after that question about open source/not and silence.

Well, having access to a non-trivial project where the issue is readily reproducible sure does help in finding out why it happens. But what do I know.

As for the "silence", pretty sure the team are not hired to work on Jest and are just as busy with their day-to-day jobs as you and me. It's an open source project, nothing stops you from trying to figure the problem out for yourself.

Not trying to be unappreciative, (thanks all contribs :P) but just saying, if anyone _is_ working on this, the performance regressions should probably be a priority vs. adding any new features. I imagine lots of others will also be staying on 22 or 23 for the time being.

Are the majority (any?) of code bases out there not running into this issue? Most people just don't care?

What is the condition under which Jest uses the cache? Today I pulled new code from the repo, and even though I have the relevant cache directory, Jest doesn't seem to use it. The test launch takes as long as a clean launch without a cache.

@tscislo cache is enabled by default and should work unless you override it.

https://jestjs.io/docs/en/cli#cache

@csvan thanks, but this is not how it works for me

@tscislo How does your jest config look?

On topic: Of the things mentioned in this issue that help, we've landed 2: we've dropped Node 6 and upgraded micromatch in the next branch. These changes reduced test run time by about 10% for Jest's own tests (more on Windows), and reduced memory usage by a fair bit. We'll try to get an alpha release out soon so people can test it, but you can also check out that branch locally, then build and link Jest in your project to see how it goes. Instructions are in CONTRIBUTING.md in the root of this repo. It should also be easier to profile Jest now, as we no longer transpile async-await, which makes the function calls much easier to follow.

I'd like to look into why pretty-format (and other modules within the sandbox) is so heavy to require — its dependency tree isn't _that_ big. No promises on _when_, though!


Off topic:

I don't want to build Jest on my own. Because this is what you suggest, right?

Yes, that's what I was suggesting. Why don't you want to help us figure out if those changes help?

I added "name" to my Jest config as suggested in other issue report, but that just helps for having the cache. For cold start execution times are unacceptable.

As @csvan points out, the name thing has been fixed since 24.1.0 (the PR was linked to). If it makes a difference adding name or not for releases after that, please open up a new issue as that's an optimization you shouldn't have to make.

What is the condition when Jest uses cache?

Jest always uses a cache. Passing in --no-cache just deletes the previous cache - Jest will still cache things within that run. We cache transforms and test results (so we know how long they took, and can prioritize when to run which tests).


Like @csvan says - nobody is actually paid to work on Jest so any time spent is either spare time or time spent at day jobs instead of doing what we're paid to do. Sometimes there's an overlap between what my day job needs and things Jest can do better (we use Jest at work), but that's not common. So when I _do_ spend time on it, I prefer to work on things that I find enjoyable, which digging for performance regressions is not.

Not trying to be unappreciative, (thanks all contribs :P) but just saying, if anyone _is_ working on this, the performance regressions should probably be a priority vs. adding any new features.

I can only speak for myself of course, but if the choice is between digging into performance regressions and doing nothing, I'll choose to do nothing. Life is too short to spend spare time on things you don't enjoy.

However, this project is open source, so anyone is more than welcome to dig into it and try to help the entire community by contributing fixes like @vitalibozhko and others have done. If it's an issue for you at work, maybe you can spend some time while at work trying to fix it?

I'd like to look into why pretty-format is so heavy to require

That might be because it is made up of a lot of small single files. Bundling up pretty-format might help mitigate that.

This is the file-graph for the package:

  • index.js
    • ./collections.js
    • ansi-styles
    • ./plugins/AsymmetricMatcher.js
      • ./collections.js
    • ./plugins/ConvertAnsi.js
      • ansi-regex
      • ansi-styles
    • ./plugins/DOMCollection.js
      • ./collections.js
    • ./plugins/DOMElement.js
      • ./lib/markup.js
        • ./escapeHTML.js
    • ./plugins/Immutable.js
      • ./collections.js
    • ./plugins/ReactElement.js
      • ./lib/markup.js
        • ./escapeHTML.js
      • react-is
    • ./plugins/ReactTestComponent.js
      • ./lib/markup.js
        • ./escapeHTML.js

According to the old Google results I could find on the matter, Node.js doesn't really like that. However, I'm not sure if that's still true for modern Node.js versions. But making 3 instead of 22 require() calls should help a little bit anyway.

We can definitely try out bundling our modules and see if it makes a difference! At least the ones that are required inside the sandbox. We currently bundle for browser, but that includes dependencies - here we'd keep them external I think.

Bringing this up again: I suspect that Node's regular require has lots of optimization that we miss out on because require really comes from jest-runtime. This could perhaps be verified by looking at how long module trees take to load in regular Node vs the Jest runtime.

Yeah, that's a great point. We should definitely try to figure out if our require algorithm is missing any optimizations we could apply. If it's constructing the module object itself, the resolver, or something else.

The require function is created here, if anyone wanna dig: https://github.com/facebook/jest/blob/d523fa84f2a65adf4d18019173dd44600c57bcad/packages/jest-runtime/src/index.ts#L884-L922

I can only speak for myself of course, but if the choice is between digging into performance regressions and doing nothing, I'll choose to do nothing. Life is too short to spend spare time on things you don't enjoy.

I feel you. To be honest, Windows users are free to use WSL, and the big performance issues will likely go away.

Thought I'd pop in with a report on my jest performance adventures the past month.

837 suites, 16259 tests, Node v10 TypeScript back-end, universal app with Vue + Nuxt

...and of course, a private repo :)

Initially, these tests took... 4m30s w/cold cache or 3m20s w/hot cache on both bare Windows and WSL. Both cases were about 30s faster on MacOS or Ubuntu.

I know this is just anecdotal, but here's stuff I tried...

  • Whitelisting working directories in Windows' security software had no effect.
  • Switching from WSL back to "raw" Windows had no effect.
  • Using runInBand severely worsened the performance (this workstation has a Xeon with 20 virtual cores)
  • I narrowed it down to maxWorkers=4 as the least-bad option.
  • Changing jest environment away from jsdom wasn't initially an option, since the code under test is a universal app, but see below.

Ultimately, here are the things that helped my team:

  • Switching to Ubuntu (uninstalled Windows) reduced test times by ~15%
  • Rolling back to jest v22 then reduced our test times by another ~40% or so
  • Splitting the test suites into client- and server-side (in our universal app), and running them individually gave us some flexibility, as follows...
  • When working locally in a given area, we could run only the one suite, cutting time by ~50%
  • Further, I was able to switch the "server-side" test suite from jest-environment-jsdom-global to jest-environment-node. This shaved off another ~25% or so. (I also tried the built-in node env, but it gave us weird event-loop stuff I didn't have time to explore, and there was no discernible performance benefit over jest-environment-node)
  • I use a little shell script to run the two test suites in parallel for a complete run, and it only takes about 25% longer than running each alone, which is a significant net savings over the whole run in one go.

Hope this helps someone. Bottom line though: Rolling back to jest v22 was the biggest single gain. FWIW my colleagues were all experiencing the issues, and we run every OS with a variety of hardware configurations.

Thanks @lulubo! Why do you suggest rolling back to 22 and not 23?

> Thanks @lulubo! Why do you suggest rolling back to 22 and not 23?

Whoops, sorry I forgot to mention that part. Our performance tests with v23 were only marginally better than v24. I'm sorry I can't recall the exact numbers or percentage, but v22 was where we saw the significant gains.

I dug into the repo provided by @ELLIOTTCABLE in https://github.com/facebook/jest/issues/7811#issuecomment-501876636, and the commit that made the test run go from 9s to 150s on my machine was #7170. However, if I remove collectCoverage from the config, 24.8.0 also completes in 9 seconds. So the change in that case is that we actually collect coverage from the files specified, which is expensive (_especially_ for large files). I've got a branch switching to use V8 coverage (#8596) which should speed it up some, but it will always be expensive.

Quick test:
[screenshot: test run timing]

With the bugfix that slowed it down, otherwise all code in jest is identical:
[screenshot: test run timing]


Thanks for the detailed information @lulubo! I think after we get 24 back to where 23 was, we can look at getting back to where 22 was as well. Even if 24->23 did not make a big difference for you and your team, from this issue it seems like it's an issue for lots of people, so I think we should focus on that first.

Wow! Thanks for taking time to look into somebody else’s code! 😳

Well, sounds like the fix for me is to separate coverage into a different,
less-critical-path task. Back to 9s test-runs it is! 😍


⁓ ELLIOTTCABLE — fly safe.
http://ell.io/tt

> I think after we get 24 back to where 23 was, we can look at getting back to where 22 was as well.

@SimenB This is great news. I have another data point to add to my v22, 23, & 24 benchmarks posted here https://github.com/facebook/jest/issues/7811#issuecomment-502837365. The jest test suite in the benchmarks is not using the collectCoverage option.

That test suite wouldn't happen to be open source would it? Seems like a great case study.

If it's not open source, I'd love it if you could test with the next branch in this repo (although it shouldn't be too different from 23.x). If you've got the time, it'd be great if you could bisect the jest repo against your project - that's what I did for the above post.

@SimenB I posted similar performance problems with 24.8 many moons ago up in this thread. I will retest with my company's private repo with your instructions tomorrow and post the results.

@SimenB thanks for looking carefully into this. So you think that coverage collecting is mostly behind poor performance? Do I understand correctly? Problem is that I do not collect coverage and I still see this performance drop.

> @SimenB I posted similar performance problems with 24.8 many moons ago up in this thread. I will retest with my company's private repo with your instructions tomorrow and post the results.

Yeah, this thread is pretty big and unwieldy... Thanks for checking!

> So you think that coverage collecting is mostly behind poor performance? Do I understand correctly?

No, just in that specific instance - I don't think coverage is the issue in the general case. However, the other reproductions I've looked at today have all had small regressions (on the order of 7s vs 8s), so I haven't looked too closely at them yet. It's easier to find a culprit if the difference is huge.

@SimenB The test suite belongs to a private git repo.

I had difficulty using yarn link for jest-cli because of a babel 6 vs 7 incompatibility issue. So instead, I attempted to do a git bisect of the jest repo by doing yarn add /path/to/jest/packages/jest, between the v22.4.4 and v23.6.0 git tags.

I suspect that because I wasn't using yarn link, the technique of using yarn add /path/to/jest/packages/jest was not testing the actual changes of each commit, but whatever was the released version at that commit.

Anyhow, the result of this bisect (flawed as it was), was that 614f739378d4f1757d8dc99219309553db5b9403 was good, and 7922488d676aa7bc5a6334699df220c7b001e299 was bad.

614f739378d4f1757d8dc99219309553db5b9403 had about the same median test duration time (9.36s). 7922488d676aa7bc5a6334699df220c7b001e299 had a significantly slower median test duration time (15.47s).

I will try again to get yarn link working for jest-cli.

We've released jest@next from the (now merged) next branch, if anyone wanna test it out


@deleteme what I did was delete jest from the repo I was checking, and just do ../jest/jest - no linking needed. I also ran yarn && yarn add -D -W @babel/core babel-core@bridge --ignore-scripts in order to avoid the babel-core version mismatch issue.

Tested 24.8 vs @next on our test suite. Seems to be a slight improvement, especially on cold cache:

Jest 24.8, cold cache:

Test Suites: 3 skipped, 112 passed, 112 of 115 total
Tests: 5 skipped, 494 passed, 499 total
Snapshots: 5 passed, 5 total
Time: 26.764s
Ran all test suites.
✨ Done in 27.88s.

Jest 24.8, warm cache:

Test Suites: 3 skipped, 112 passed, 112 of 115 total
Tests: 5 skipped, 494 passed, 499 total
Snapshots: 5 passed, 5 total
Time: 20.792s, estimated 24s
Ran all test suites.
✨ Done in 21.35s.

Jest 25.0 (@next), cold cache:

Test Suites: 3 skipped, 112 passed, 112 of 115 total
Tests: 5 skipped, 494 passed, 499 total
Snapshots: 5 passed, 5 total
Time: 24.008s
Ran all test suites.
✨ Done in 24.78s.

Jest 25.0 (@next) warm cache:

Test Suites: 3 skipped, 112 passed, 112 of 115 total
Tests: 5 skipped, 494 passed, 499 total
Snapshots: 5 passed, 5 total
Time: 20.267s, estimated 22s
Ran all test suites.
✨ Done in 20.86s.

That's great, thanks! Matches our results

@SimenB Node-gyp environment issues have delayed me from running jest@next, but I wanted to post some other results.

Windows 10, Node v10.14.2, project with TypeScript

Test Suites: 1492 total
Tests: 9668 total

I pieced together some other config changes based on discussions in this thread. Primarily:

  • Turned off collecting code coverage
  • Reordered moduleFileExtensions to have ts and tsx at the front of the list

Jest 24.8 baseline, Cold Cache: 629 seconds
Jest 24.8 baseline, Warm Cache: 296 seconds

Jest 24.8 w/config changes, Cold Cache: 445 seconds
Jest 24.8 w/config changes, Warm Cache: 209 seconds

Compared to jest 23.6, I came to the following conclusions:

  • The warm cache time in 24.8 is pretty close to the warm cache time in 23.6. Unfortunately it's still over 3 minutes, but with a project of our size I think that's par for the course.
  • The cold cache time is still about 60 seconds slower than the cold cache time in 23.6. But based on findings from others, I'm sure once we run jest@next we should see this cold cache time decrease.

Once I fix my environment issues I'll update with that result too.

Sorry about the node-gyp issue - I just merged a PR which should fix it (#8869). Hopefully we can have a new release with it.

Also awesome that you were able to improve the time by a full third! We might want to add some docs describing the things one can do to improve perf.

E.g. Jest itself only collects coverage on Node 10 on CI (Linux); locally it's not activated.

The following measurements are from a warm cache. Sample size for each is 10 runs.

jest version | mean s | std s | min s | max s
-- | -- | -- | -- | --
25.0.0 | 17.057 | 0.301 | 16.464 | 17.661
24.9.0 | 19.36 | 0.426 | 18.442 | 19.87
23.6.0 | 11.922 | 0.287 | 11.461 | 12.404
22.4.4 | 9.063 | 0.523 | 7.966 | 9.546

Thanks! Still significantly worse than 23... Any help bisecting would be greatly appreciated. Did you notice if the increased time is for tests themselves, or boot up?

Ok seems to be... https://github.com/facebook/jest/commit/8a7b4a9df0759142d148cf2f69ec3f23457c542e?

The difference between 22 and 23 wasn't a deal breaker for us, so bisecting between 24 and 23.6:

commit | cold | warm
-- | -- | --
25.0.0 | 87 | 53
24.9.0 | 81 | 52
24.7.1 | 88 | 55
24.0.0 | 86 | 84
fa0e81443 | 84 | 54
47cde7f24 | 84 | 52
bea5037a2 | 84 | 51
7d5de6844 | 83 | 53
ed6716787 | 83 | 53
8a7b4a9df | 84 | 52
7a64497a2 | 67 | 37
3abfa18d7 | 69 | 37
23.6.0 | 59 | 39

Can you provide instructions for how you did the bisecting? That would be really helpful for next time.

That's interesting, thanks @nol13! Do you use projects? That commit would in theory only affect multiple projects. But if it also affects warm cache, that's pretty weird. Maybe something about that refactor messed up a cache lookup? The source code changed in that commit now lives here, if you wanna poke around in it: https://github.com/facebook/jest/blob/522ed17dcebf988450b93ddb8bf748249c75de82/packages/jest-transform/src/ScriptTransformer.ts

Also, what does 25.0.0 look like for you?

@mucsi96
Sure, I think the main issue I was running into was solved by running 'yarn && yarn add -D -W @babel/core babel-core@bridge --ignore-scripts' as suggested above, after checking out and installing each commit.

Also, and this may be due to something I messed up in my local environment, for some inexplicable reason I had to change the name of the folder my cloned jest repo was in or the transforms would break. ../jest/jest = fail, ../jesty/jest = pass

Other than that mostly following the git-bisect instructions from the git docs. https://git-scm.com/docs/git-bisect

git clone https://github.com/facebook/jest.git
cd jest
yarn install;
git checkout v24.0.0;
git bisect start
git bisect bad
git checkout v23.6.0
git bisect good
yarn install
yarn && yarn add -D -W @babel/core babel-core@bridge --ignore-scripts
cd ../myProject
../jest/jest

then back in the jest folder, git bisect [bad/good] based on outcome

@SimenB No not using projects. I added 25.0.0 to the table above. Wasn't super scientific about shutting down all other programs but they should be mostly accurate. If it helps, configs below.

jest.config.js

module.exports = {
  clearMocks: true,
  setupFiles: ["dotenv/config"],
  setupTestFrameworkScriptFile: "./jest/setup.js",
  "snapshotSerializers": ["enzyme-to-json/serializer"],
  testRegex: "./src/test/.*Test[s]{0,1}.js(?!.snap)",
  testURL: "https://example.com",
  "moduleNameMapper": {
    "\\.(jpg|jpeg|png|gif|eot|otf|webp|ttf|woff|woff2|mp4|webm|wav|mp3|m4a|aac|oga)$": "<rootDir>/__mocks__/fileMock.js",
  },
  "transform": {
    "^.+\\.jsx?$": "babel-jest"
  }
};

@SimenB I'm happy to help. I want to try again with the yarn add -D -W @babel/core babel-core@bridge --ignore-scripts trick.

> Did you notice if the increased time is for tests themselves, or boot up?

Sorry, I couldn't tell.

> Can you provide instructions for how you did the bisecting? That would be really helpful for next time.

@mucsi96 The contributing doc was helpful. I'm using hyperfine and a fish shell function to start the benchmarks, and a spreadsheet to record measurements.

@SimenB:

> Maybe something about that refactor messed up a cache lookup?

Probably. I doctored some console.log()s into the ScriptTransformer of my local jest installation:

if (!projectCache) {
  console.log("No cache found.");
  projectCache = {
    configString: (0, _fastJsonStableStringify().default)(this._config),
    ignorePatternsRegExp: calcIgnorePatternRegExp(this._config),
    transformRegExp: calcTransformRegExp(this._config),
    transformedFiles: new Map()
  };
  projectCaches.set(config, projectCache);
} else {
  console.log("Cache found");
}

"Cache found" isn't printed even once during the whole test execution. This is only one project with a couple thousand tests.

Interesting! What happens if you do projectCaches.set(stableStringify(config), projectCache); instead? @thymikee said that they might not be referentially identical, meaning we'll always get a miss (and leak memory). stringify is probably not a long term solution, but testing it out would be interesting

@SimenB we'd need to use Map instead of WeakMap then, because strings are not valid keys for weak maps

Ergh yes, good point.

diff --git i/packages/jest-transform/src/ScriptTransformer.ts w/packages/jest-transform/src/ScriptTransformer.ts
index 6ee27c6dd..d91d661c7 100644
--- i/packages/jest-transform/src/ScriptTransformer.ts
+++ w/packages/jest-transform/src/ScriptTransformer.ts
@@ -43,10 +43,7 @@ const {version: VERSION} = require('../package.json');
 // This data structure is used to avoid recalculating some data every time that
 // we need to transform a file. Since ScriptTransformer is instantiated for each
 // file we need to keep this object in the local scope of this module.
-const projectCaches: WeakMap<
-  Config.ProjectConfig,
-  ProjectCache
-> = new WeakMap();
+const projectCaches = new Map<string, ProjectCache>();

 // To reset the cache for specific changesets (rather than package version).
 const CACHE_VERSION = '1';
@@ -74,17 +71,18 @@ export default class ScriptTransformer {
     this._transformCache = new Map();
     this._transformConfigCache = new Map();

-    let projectCache = projectCaches.get(config);
+    const stringifiedConfig = stableStringify(this._config);
+    let projectCache = projectCaches.get(stringifiedConfig);

     if (!projectCache) {
       projectCache = {
-        configString: stableStringify(this._config),
+        configString: stringifiedConfig,
         ignorePatternsRegExp: calcIgnorePatternRegExp(this._config),
         transformRegExp: calcTransformRegExp(this._config),
         transformedFiles: new Map(),
       };

-      projectCaches.set(config, projectCache);
+      projectCaches.set(stringifiedConfig, projectCache);
     }

     this._cache = projectCache;

Keeping the logging you already do is probably a nice idea as well in case this makes no difference

That makes the cache hit - but it doesn't seem to improve performance even a bit.

I suspected that the stringify would be the culprit, but even using "A" as a key doesn't make it run faster.

Then I used a plain object directly, à la if (!projectCache) { projectCache = ... }. Even that had no perf boon.

I guess you could rip out that entire caching thing with no ill-effect. Given that it a) hasn't worked this entire time anyway and b) doesn't seem to help performance even if it does work.

But for the regression, at least the cache misses are a dead end.

@StringEpsilon does getting rid of projectCaches help on its own?

Not sure what you mean. I didn't try ripping out the entire caching mechanism / reverting the commit (because I'm monkeypatching my node_modules). But when I tried using the plain object, I replaced const projectCaches = new Map() with let projectCache = null. With no effect.

None of the tinkering moved the baseline of my test execution down. But my setup isn't exactly stable, so there might be a small benefit to some of the things I tried, but the results were eaten by the noise. (My execution varied between 100 and 105 seconds for all iterations I tried.)

I _think_ this is a valid revert. All tests pass except for the ones added in the linked commit. But if it doesn't make any difference, it's not worth trying for you.

Do you get any difference before and after that commit, btw? And of course, while it might not help in your case, it might very well work in other cases

diff --git i/packages/jest-transform/src/ScriptTransformer.ts w/packages/jest-transform/src/ScriptTransformer.ts
index 6ee27c6dd..d00996c62 100644
--- i/packages/jest-transform/src/ScriptTransformer.ts
+++ w/packages/jest-transform/src/ScriptTransformer.ts
@@ -30,23 +30,13 @@ import {
 import shouldInstrument from './shouldInstrument';
 import enhanceUnexpectedTokenMessage from './enhanceUnexpectedTokenMessage';

-type ProjectCache = {
-  configString: string;
-  ignorePatternsRegExp?: RegExp;
-  transformRegExp?: Array<[RegExp, string, Record<string, unknown>]>;
-  transformedFiles: Map<string, TransformResult>;
-};
-
 // Use `require` to avoid TS rootDir
 const {version: VERSION} = require('../package.json');

-// This data structure is used to avoid recalculating some data every time that
-// we need to transform a file. Since ScriptTransformer is instantiated for each
-// file we need to keep this object in the local scope of this module.
-const projectCaches: WeakMap<
-  Config.ProjectConfig,
-  ProjectCache
-> = new WeakMap();
+const cache: Map<string, TransformResult> = new Map();
+const configToJsonMap = new Map();
+// Cache regular expressions to test whether the file needs to be preprocessed
+const ignoreCache: WeakMap<Config.ProjectConfig, RegExp | null> = new WeakMap();

 // To reset the cache for specific changesets (rather than package version).
 const CACHE_VERSION = '1';
@@ -64,7 +54,6 @@ async function waitForPromiseWithCleanup(

 export default class ScriptTransformer {
   static EVAL_RESULT_VARIABLE: 'Object.<anonymous>';
-  private _cache: ProjectCache;
   private _config: Config.ProjectConfig;
   private _transformCache: Map<Config.Path, Transformer>;
   private _transformConfigCache: Map<Config.Path, unknown>;
@@ -73,21 +62,6 @@ export default class ScriptTransformer {
     this._config = config;
     this._transformCache = new Map();
     this._transformConfigCache = new Map();
-
-    let projectCache = projectCaches.get(config);
-
-    if (!projectCache) {
-      projectCache = {
-        configString: stableStringify(this._config),
-        ignorePatternsRegExp: calcIgnorePatternRegExp(this._config),
-        transformRegExp: calcTransformRegExp(this._config),
-        transformedFiles: new Map(),
-      };
-
-      projectCaches.set(config, projectCache);
-    }
-
-    this._cache = projectCache;
   }

   private _getCacheKey(
@@ -95,7 +69,12 @@ export default class ScriptTransformer {
     filename: Config.Path,
     instrument: boolean,
   ): string {
-    const configString = this._cache.configString;
+    if (!configToJsonMap.has(this._config)) {
+      // We only need this set of config options that can likely influence
+      // cached output instead of all config options.
+      configToJsonMap.set(this._config, stableStringify(this._config));
+    }
+    const configString = configToJsonMap.get(this._config) || '';
     const transformer = this._getTransformer(filename);

     if (transformer && typeof transformer.getCacheKey === 'function') {
@@ -146,13 +125,9 @@ export default class ScriptTransformer {
   }

   private _getTransformPath(filename: Config.Path) {
-    const transformRegExp = this._cache.transformRegExp;
-    if (!transformRegExp) {
-      return undefined;
-    }
-
+    const transformRegExp = this._config.transform;
     for (let i = 0; i < transformRegExp.length; i++) {
-      if (transformRegExp[i][0].test(filename)) {
+      if (new RegExp(transformRegExp[i][0]).test(filename)) {
         const transformPath = transformRegExp[i][1];
         this._transformConfigCache.set(transformPath, transformRegExp[i][2]);

@@ -402,7 +377,7 @@ export default class ScriptTransformer {
     if (!options.isCoreModule) {
       instrument = shouldInstrument(filename, options, this._config);
       scriptCacheKey = getScriptCacheKey(filename, instrument);
-      const result = this._cache.transformedFiles.get(scriptCacheKey);
+      const result = cache.get(scriptCacheKey);
       if (result) {
         return result;
       }
@@ -416,7 +391,7 @@ export default class ScriptTransformer {
     );

     if (scriptCacheKey) {
-      this._cache.transformedFiles.set(scriptCacheKey, result);
+      cache.set(scriptCacheKey, result);
     }

     return result;
@@ -514,9 +489,22 @@ export default class ScriptTransformer {
   }

   shouldTransform(filename: Config.Path): boolean {
-    const ignoreRegexp = this._cache.ignorePatternsRegExp;
-    const isIgnored = ignoreRegexp ? ignoreRegexp.test(filename) : false;
+    if (!ignoreCache.has(this._config)) {
+      if (
+        !this._config.transformIgnorePatterns ||
+        this._config.transformIgnorePatterns.length === 0
+      ) {
+        ignoreCache.set(this._config, null);
+      } else {
+        ignoreCache.set(
+          this._config,
+          new RegExp(this._config.transformIgnorePatterns.join('|')),
+        );
+      }
+    }

+    const ignoreRegexp = ignoreCache.get(this._config);
+    const isIgnored = ignoreRegexp ? ignoreRegexp.test(filename) : false;
     return (
       !!this._config.transform && !!this._config.transform.length && !isIgnored
     );
@@ -643,34 +631,6 @@ const getScriptCacheKey = (filename: Config.Path, instrument: boolean) => {
   return filename + '_' + mtime.getTime() + (instrument ? '_instrumented' : '');
 };

-const calcIgnorePatternRegExp = (config: Config.ProjectConfig) => {
-  if (
-    !config.transformIgnorePatterns ||
-    config.transformIgnorePatterns.length === 0
-  ) {
-    return undefined;
-  }
-
-  return new RegExp(config.transformIgnorePatterns.join('|'));
-};
-
-const calcTransformRegExp = (config: Config.ProjectConfig) => {
-  if (!config.transform.length) {
-    return undefined;
-  }
-
-  const transformRegexp: Array<[RegExp, string, Record<string, unknown>]> = [];
-  for (let i = 0; i < config.transform.length; i++) {
-    transformRegexp.push([
-      new RegExp(config.transform[i][0]),
-      config.transform[i][1],
-      config.transform[i][2],
-    ]);
-  }
-
-  return transformRegexp;
-};
-
 const wrap = (content: string, ...extras: Array<string>) => {
   const globals = new Set([
     'module',

@SimenB your revert applied to 24.9.0 gets us back down to ~35s! So roughly what we're seeing in 23.6.0.

Nice! Do you use multiple projects?

EDIT: Wait, I already asked about that... Very interesting. What about the first patch I posted (which replaces the weakmap with a map and stringifies the config as the key)?

Yup, I'm seeing a similar speedup with that one too!

Something different with our project/config vs. StringEpsilon's to make it act like 'projects' is being used?

I don't know... I'm unable to produce a cache miss vs a cache hit that differs with and without that patch, with or without projects and with or without runInBand. I get the same amount of hits and misses regardless. To verify it's about the cache hit/miss at all, could you add some logging?

diff --git i/packages/jest-transform/src/ScriptTransformer.ts w/packages/jest-transform/src/ScriptTransformer.ts
index 6ee27c6dd..432ec39c9 100644
--- i/packages/jest-transform/src/ScriptTransformer.ts
+++ w/packages/jest-transform/src/ScriptTransformer.ts
@@ -76,7 +76,10 @@ export default class ScriptTransformer {

     let projectCache = projectCaches.get(config);

-    if (!projectCache) {
+    if (projectCache) {
+      console.log('hit!', config.name);
+    } else {
+      console.log('miss!', config.name);
       projectCache = {
         configString: stableStringify(this._config),
         ignorePatternsRegExp: calcIgnorePatternRegExp(this._config),

Both with and without the WeakMap -> Map change. You probably don't need the config.name part as that should be stable

All hits and fast with patch, all miss and slow without.

I might need to revisit the tests and properly build jest, instead of monkeypatching @jest/transform in my node_modules folder.

But @nol13: What operating system were you using to bisect and to test the patches? I know that Jest performs much worse on Windows already, so the cache misses might not make a difference on that particular platform. But it could also just be my sloppy methods.

Mac. Yes, Windows is definitely slower; I haven't tested the difference there, but can try next week. Keeping times reasonable on non-bleeding-edge Windows machines really was the main concern.

My project was using the old jest-dom/extend-expect, so I replaced it with @testing-library/jest-dom/extend-expect and upgraded jest to 24.9.0, and now tests and coverage are lightning fast.

On Windows with Node 10 and warm cache:

85s without patch
56s with patch

This is amazing! @SimenB what are the prospects of this making it into a minor/patch of 24?

Pretty low, but we'll try to get a stable 25 out. It's currently a bit delayed due to issues rolling it out at FB (something with how their fake promises are broken). And making this part of 25 is pretty doable.

However, as mentioned, I was unable to get a difference in cache hits and misses when testing it out. I'd like to write a test for this when landing it. @nol13 can you share your jest config? What would be perfect would be a small repository we could pull down, and see it logging cache misses, but I very much understand if that's a bit much to ask 🙂

https://github.com/nol13/jest-test-repo

^Here's a minimal example. Without the patch, this hits with runInBand but misses without it.

Thank you! I'll dig into this later today. To verify my assumptions going in - that's fast on 23, upgrading to 24 or 25 is slow, but applying the patch makes those comparable to 23 again?


What I'm seeing (over 10 runs with a warmup run for the cache)

| jest version | time |
| ------------ | ---: |
| 23.6 | 6.2s |
| 24.9 | 8.5s |
| 24 w/patch | 6.7s |
| 25 | 8.0s |
| 25 w/patch | 6.5s |

I can also reproduce the hit vs miss in the caching. Thanks for the great repro @nol13! I'll land the patch later today

Ya, it's a super stripped down repo, but the patch applied to 24.9.0 still takes it from 4.8s to 4.2s consistently and makes the cache hit.

PR: #8890

This case with cache hitting and missing bothers us a lot. Is there any ETA for your PR @SimenB?

Hopefully today. We just need to make sure it works for FB (the best test that exists for verifying nothing is broken 🙂).

You can use patch-package in the meantime. A diff:

diff --git a/node_modules/@jest/transform/build/ScriptTransformer.js b/node_modules/@jest/transform/build/ScriptTransformer.js
index 8b02912..2191456 100644
--- a/node_modules/@jest/transform/build/ScriptTransformer.js
+++ b/node_modules/@jest/transform/build/ScriptTransformer.js
@@ -199,7 +199,7 @@ const {version: VERSION} = require('../package.json'); // This data structure is
 // we need to transform a file. Since ScriptTransformer is instantiated for each
 // file we need to keep this object in the local scope of this module.

-const projectCaches = new WeakMap(); // To reset the cache for specific changesets (rather than package version).
+const projectCaches = new Map(); // To reset the cache for specific changesets (rather than package version).

 const CACHE_VERSION = '1';

@@ -224,16 +224,17 @@ class ScriptTransformer {
     this._config = config;
     this._transformCache = new Map();
     this._transformConfigCache = new Map();
-    let projectCache = projectCaches.get(config);
+    const configString = (0, _fastJsonStableStringify().default)(this._config);
+    let projectCache = projectCaches.get(configString);

     if (!projectCache) {
       projectCache = {
-        configString: (0, _fastJsonStableStringify().default)(this._config),
+        configString,
         ignorePatternsRegExp: calcIgnorePatternRegExp(this._config),
         transformRegExp: calcTransformRegExp(this._config),
         transformedFiles: new Map()
       };
-      projectCaches.set(config, projectCache);
+      projectCaches.set(configString, projectCache);
     }

     this._cache = projectCache;

@SimenB this patch is to which version of jest? 25.0.0, or 24.9.0?

This is for 24.9:
diff --git a/node_modules/@jest/transform/build/ScriptTransformer.js b/node_modules/@jest/transform/build/ScriptTransformer.js
index 0dbc1d7..b595ec6 100644
--- a/node_modules/@jest/transform/build/ScriptTransformer.js
+++ b/node_modules/@jest/transform/build/ScriptTransformer.js
@@ -207,7 +207,7 @@ const _require = require('../package.json'),
 // we need to transform a file. Since ScriptTransformer is instantiated for each
 // file we need to keep this object in the local scope of this module.

-const projectCaches = new WeakMap(); // To reset the cache for specific changesets (rather than package version).
+const projectCaches = new Map(); // To reset the cache for specific changesets (rather than package version).

 const CACHE_VERSION = '1';

@@ -239,16 +239,17 @@ class ScriptTransformer {
     this._config = config;
     this._transformCache = new Map();
     this._transformConfigCache = new Map();
-    let projectCache = projectCaches.get(config);
+    const configString = (0, _fastJsonStableStringify().default)(this._config);
+    let projectCache = projectCaches.get(configString);

     if (!projectCache) {
       projectCache = {
-        configString: (0, _fastJsonStableStringify().default)(this._config),
+        configString,
         ignorePatternsRegExp: calcIgnorePatternRegExp(this._config),
         transformRegExp: calcTransformRegExp(this._config),
         transformedFiles: new Map()
       };
-      projectCaches.set(config, projectCache);
+      projectCaches.set(configString, projectCache);
     }

     this._cache = projectCache;

A new release of 25 will come next week. You can build from master now and test it out if you want. After the release, if we still have regressions vs Jest 24, please open up a new issue. This issue has more than 100 comments, so it's getting a bit unwieldy... Things we've not yet tried that have been mentioned in this issue are to bundle up our modules to hit the FS less and to look into what we do with stack traces.

Thank you to everyone who helped track this down!

@SimenB You said "A new release of 25 will come next week." Any news about the release?

thanks

Last I heard it was being tested at FB on Friday.

Would it be possible to release a patch to version 24 while we wait for the version 25 release?

Hi @SimenB !

Thanks for the patch for 24.9!

I'm trying to use patch-package to patch our project in the meantime and have updated node_modules/@jest/transform/build/ScriptTransformer.js with the diff.

However, when I run yarn patch-package jest, it says there's no diff (possibly because the path is technically node_modules/@jest rather than node_modules/jest). But if I run yarn patch-package @jest, it says @jest is not in package.json.

Any pointer for this situation? Thanks a lot!

Happy Friday everyone! Just wondering if anyone has tips on how to patch jest 24.9 with the fix using patch-package 🙏 Thank you!

any progress on releasing this?

Quick update on https://github.com/facebook/jest/issues/7811#issuecomment-540128239 and https://github.com/facebook/jest/issues/7811#issuecomment-541189080, @ds300 said we just had to do: yarn patch-package @jest/transform 🎉 !

Can confirm SimenB's patch works for 24.9!!


Any update on this by any chance?

All I want for Christmas is y...Jest 25 <3

If Jest 25 is delayed for some reason, would it be possible to release a patch for 24.9 with the fix? @SimenB

For the record, jest@next downloaded on Nov 22nd was extremely slow, but I redownloaded now and it looks much much faster (sorry for the imprecision)

@dpinol curious which version you ended up installing that worked? The version on the npm registry tagged as 25.0.0 appears to point to https://github.com/facebook/jest/commit/ff9269be05fd8316e95232198fce3463bf2f270e which is before the perf fix landed. Indeed looking at the code on unpkg for 25.0.0 (which is what you get when installing jest@next), the fix isn't applied: https://unpkg.com/browse/@jest/transform@25.0.0/build/ScriptTransformer.js

@SimenB is there any version of jest (either v25+ or a patched v24) that's available on the registry currently which includes this fix? While using something like patch-package is possible, I'd prefer not to fork or build a custom version given that the fix does already exist in master. If not, perhaps the simplest option from your end would be to update the jest@next tag to point to a new v25.0.1 or similar, and then developers willing to opt-in to an unstable build could at least do that?

For people who want to use patch-package today, the following steps should work 🤞

Steps:

  1. Update package.json:
 "scripts": {
+  "postinstall": "patch-package"
 }

  2. Get patch-package into your package.json [doc]:
    yarn add patch-package postinstall-postinstall or npm i patch-package

  3. Go to the root of your project, and open file node_modules/@jest/transform/build/ScriptTransformer.js in your favorite editor

  4. Update the code according to the diff:
    jest 24.9: https://github.com/facebook/jest/issues/7811#issuecomment-527347645
    jest 25: https://github.com/facebook/jest/issues/7811#issuecomment-527107215

  5. Run yarn patch-package @jest/transform or npx patch-package @jest/transform.
    If it works, you should see Created file patches/@jest+transform+24.9.0.patch

  6. Now you can test the patch with rm -rf node_modules && yarn install; patch-package should apply the diff for you automatically

  7. Make sure to commit patches/@jest+transform+24.9.0.patch, so everyone gets the patch, including the CI/CD pipeline!

When is this going to be in a release?

Ran 24.9 vs 25.1 on a local test suite. Amazing work @SimenB and friends!

Jest 25.1

Cold:

Test Suites: 3 skipped, 218 passed, 218 of 221 total
Tests: 12 skipped, 962 passed, 974 total
Snapshots: 17 passed, 17 total
Time: 46.692s
Ran all test suites.
✨ Done in 47.55s.

Warm:

Test Suites: 3 skipped, 218 passed, 218 of 221 total
Tests: 12 skipped, 962 passed, 974 total
Snapshots: 17 passed, 17 total
Time: 32.527s, estimated 44s
Ran all test suites.
✨ Done in 33.65s.

Jest 24.9

Cold:

Test Suites: 3 skipped, 218 passed, 218 of 221 total
Tests: 12 skipped, 962 passed, 974 total
Snapshots: 17 passed, 17 total
Time: 68.89s
Ran all test suites.
✨ Done in 70.14s.

Warm:

Test Suites: 3 skipped, 218 passed, 218 of 221 total
Tests: 12 skipped, 962 passed, 974 total
Snapshots: 17 passed, 17 total
Time: 57.806s, estimated 65s
Ran all test suites.
✨ Done in 58.92s.

I was just about to post that a stable Jest 25 was just released, and you've already tested it 😀

Fantastic numbers, thanks for sharing @csvan!

I'm not sure if this should be filed as a separate issue at this point, but sadly, 25.1.0 doesn't seem to have resolved the slowdown for us. We skipped version 24 because of the slowdowns we were seeing from 23.4.2. For the whole application (130 test files, 1104 tests) I see tests go from taking 17-18 seconds in v23 to taking 23-24 seconds in v25. Running the slowest test file in our application on its own went from taking 6 seconds in v23 to 8 seconds in v25.

Interesting, would you be able to post a reproduction in a new issue? This issue in particular was about a specific bug introduced in Jest 24 which has been fixed in #8890

There's a new issue for Jest 25 performance issues: https://github.com/facebook/jest/issues/9457
