Gatsby: Worse performance results with Lighthouse v6 (?)

Created on 22 May 2020  ·  115 Comments  ·  Source: gatsbyjs/gatsby

Just wondering whether there is some information that could be of use here: I've found a significant worsening of Lighthouse results on my sites when comparing Lighthouse v5.6 vs the new v6.0 (https://lighthouse-metrics.com/)

In a complex site of mine, the performance score goes from ~90 to ~50
In a simple starter of mine, it drops from ~98 to ~80

This doesn't happen in starters such as https://gatsby-starter-default-demo.netlify.app/ or https://gatsby-v2-perf.netlify.app/

But it does happen to www.gatsbyjs.org (from ~98 to ~73) or to https://theme-ui.com (from ~90 to ~75)

Since I spent some time achieving 98-100 scores in my code (which made me very happy), I feel I don't have a lot of room for improvement left (though I probably do), so I thought I'd ask here whether there's something going on

Thanks

Labels: performance, question or discussion

Most helpful comment

I've been working on a gatsby-image successor. It's not 100% there yet; I still need to write a composable version so you can create your own gatsby-image flavor, but it will fix most of the bad Lighthouse scores.

You can already use it but it's not yet battle-tested.
https://github.com/wardpeet/gatsby-image-nextgen/tree/main/gatsby-image

You can install it by npm install --save @wardpeet/gatsby-image-nextgen. There is a compat layer for current users of gatsby-image.

Things that aren't supported yet:

  • object-fit needs to be handled by CSS outside of the component
  • art-direction

Current gatsby-image demo:
site: https://wardpeet-using-gatsby-image.netlify.app/
pagespeed-insights: https://developers.google.com/speed/pagespeed/insights/?url=https%3A%2F%2Fwardpeet-using-gatsby-image.netlify.app%2F
webpagetest: https://webpagetest.org/result/200928_4M_0879160e38bb6c5489f85534de2dd197/

New gatsby-image-nextgen demo:
site: https://gatsby-image-nextgen.netlify.app/
pagespeed-insights: https://developers.google.com/speed/pagespeed/insights/?url=https%3A%2F%2Fgatsby-image-nextgen.netlify.app%2F
webpagetest: https://webpagetest.org/result/200928_C0_63317160bdfc71ece1a2057df8b08133/

All 115 comments

It looks like Lighthouse 6 introduces some new metrics and removes some others from v5 so a change in score is certainly likely. This article explains what has changed:

https://web.dev/lighthouse-whats-new-6.0/

There's also a link at the end to a score calculator which is really useful for breaking down the score and understanding what factors are contributing the most.

https://googlechrome.github.io/lighthouse/scorecalc

I get the impression there's more focus on main-thread interactivity in v6, so if your site includes large JS bundles that's probably a significant factor.

Yes @shanekenney, I'm aware, but I don't really know how to reduce it apart from removing parts of the site to see which ones are causing this

Do you also see the impact on the gatsbyjs and theme-ui sites? I'm curious and would love to know what optimizations they may be considering for their sites, or whether they have spotted a specific cause

This issue is great so we can discuss overall Lighthouse / PageSpeed insights scores and the possible regressions we're seeing with v6.

@kuworking one thing worth noting is that lighthouse-metrics.com seems to use _"Emulated Nexus 5X"_ for 5.6 and _"Emulated Moto G4"_ for 6.0 which could also add some noise to the comparison.

This benchmark over 922 sites claims in v5 the median Performance score for a Gatsby site is 75. I'll try to do a quick view using hosted solutions to prevent my local network from being yet another variability factor.

Currently (with Lighthouse v5.6 / PageSpeed Insights)

PSI runs on a Moto G4 on "Fast 3G". Source

Certain flagship sites built with Gatsby are not performing great on PageSpeed Insights (which I assume still uses Lighthouse v5.6 – subject to standard run-to-run variability; running 3x or 5x and averaging would yield more reliable metrics).

  • gatsbyjs.org (Mobile 72/100, TTI 9s) Source
  • reactjs.org (Mobile 62/100, TTI 9.5s) Source
  • gatsbyjs.com (Mobile 77/100, TTI 6.6s) Source

However some other sites (and most starters) are performing very well on PageSpeed Insights:

  • store.gatsbyjs.org (Mobile 99/100, TTI 2.5s) Source
  • thirdandgrove.com (Mobile 91/100, TTI 4.0s) Source

The average TTI is notably high. While v6 lowers the overall weight of TTI from 33% to 15% and drops First CPU Idle, it adds TBT with a weight of 25%, which could explain a general regression in scores simply due to overall JS parsing and execution.
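
For intuition, the weighting shift can be sketched numerically. The v6 composite is a weighted average of the individual 0-1 metric scores; the weights below are Lighthouse v6's published category weights, and the example metric values are made up purely for illustration:

```javascript
// Lighthouse v6 published category weights. (v5 had no TBT/LCP/CLS and
// weighted TTI at 33%.)
const V6_WEIGHTS = { FCP: 0.15, SI: 0.15, LCP: 0.25, TTI: 0.15, TBT: 0.25, CLS: 0.05 };

// Composite performance score: weighted average of 0-1 metric scores,
// rounded to the familiar 0-100 scale.
function compositeScore(metricScores, weights = V6_WEIGHTS) {
  let total = 0;
  for (const [metric, weight] of Object.entries(weights)) {
    total += weight * metricScores[metric];
  }
  return Math.round(total * 100);
}

// A hypothetical site that is perfect everywhere except TBT: that one
// poor metric alone caps the composite at 75.
const scores = { FCP: 1, SI: 1, LCP: 1, TTI: 1, TBT: 0, CLS: 1 };
```

Under v5's weighting (no TBT at all), the same JS-heavy site would have lost far fewer points, which matches the regressions reported in this thread.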

Lighthouse v6 (with WebPageTest.org)

  • This ran WPT on _Moto G (gen 4), Moto G4 - Chrome_ with a connection of _3G Fast (1.6mbps/768kbps 150ms RTT)_. This seems to be as close a device/network match as we can get before PSI updates its underlying Lighthouse version.
  • Make sure to check _Capture Lighthouse Report_ under _Chromium_. I've disabled repeat view to keep the scope on a first time visitor, first load of the page.

Here are the results, you can see the Lighthouse report by clicking on its number. I'm extracting the values from that report.

  • gatsbyjs.org (72 -> 67/100, TTI 7.5s, TBT 2150ms) Source
  • reactjs.org (62 -> 66/100, TTI 7.8s, TBT 3520ms) Source
  • gatsbyjs.com (77 -> 66/100, TTI 8.4s, TBT 2440ms) Source

However, notice the regression on the following two sites:

  • store.gatsbyjs.org (99 -> 54/100, TTI 6.8s, TBT 1300ms) Source
  • thirdandgrove.com (91 -> 63/100, TTI 14s!, TBT 1330ms) Source

Some of the open questions I have:

  1. Is the overall TTI (and TBT) explained by JS parsing + executing, or are there other factors harming interactivity?
  2. If so, could we be more aggressive when building the chunks (either in Gatsby by default, as with recent changes like enabling granular chunks, or under an experimental flag) so that we _only_ send what the first load needs (i.e. prevent app-[hash].js from carrying excess)? It could also simply be a matter of documenting ways to extend the webpack config with more guidance.
  3. Could patterns like module/nomodule help decrease chunk sizes? Recommending/documenting usage of @loadable/component? Partial rehydration?
  4. This may be a second step towards pushing high scores, but since FMP is no longer a factor, is the LQIP in gatsby-image helping or harming when it comes to LCP? The LCP of store.gatsbyjs.org on the run above was 4.7s!

(I'm using the links above just as examples – if anyone would like a certain link removed I can gladly edit the message)

My site (https://matthewmiller.dev/) appears to have gotten better (~92 to ~95), but some of the new tests reveal a few things that could probably be improved.

The unnecessary JavaScript test for example,
(First column is size, second column is amount that's unnecessary)
image
I assume this is due to items required for other pages being included here, so using something like loadable-components could help a bit.

For my part, I'm having big difficulties understanding Largest Contentful Paint: I'm getting very large values without knowing why, and I see a discrepancy between the value in the report (for example 7.4 s) and the LCP label that appears in the Performance > View Trace tab (~800 ms)

I can see that something similar seems to happen in the starter https://parmsang.github.io/gatsby-starter-ecommerce/

As an update – seems that PageSpeed Insights has soft launched the update to run Lighthouse v6 (it may not be in all regions yet).

gatsbyjs org lighthouse

Link to test https://gatsbyjs.org/. Currently getting results varying from low 60s to mid 80s, mainly depending on the values of TBT and TTI.

@kuworking there might be an issue with Lighthouse v6 recognizing gatsby-image.

According to web.dev:

For image elements that have been resized from their intrinsic size, the size that gets reported is either the visible size or the intrinsic size, whichever is smaller.
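
The quoted rule can be expressed as a tiny helper (a sketch only; the object shapes are illustrative, not an actual Lighthouse API):

```javascript
// Per the rule quoted above: for an image resized from its intrinsic size,
// the size LCP reports is the smaller of the visible and intrinsic sizes.
// (Illustrative sketch, not a real Lighthouse API.)
function lcpReportedArea(visible, intrinsic) {
  const area = ({ width, height }) => width * height;
  return Math.min(area(visible), area(intrinsic));
}

// So a 3000px-wide source image displayed at 400x300 should be scored at
// the visible size, not the intrinsic one.
```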

In my case, I think Lighthouse isn't respecting the view size.
Screen Shot 2020-05-29 at 6 30 22 PM

And here's the image in question
Screen Shot 2020-05-29 at 6 28 55 PM

It might be accounting for the intrinsic size which is 3000 pixels hence the 13s LCP for me.

@daydream05 I had similar theories and findings as well so I tested my pages without images and still had a crazy long LCP (10-12sec). I have a lot going on in my project so it could be other variables as well, but I'm curious if you've tested a page with text content and no images yet.

Dropped from 100 to ~79: https://dougsilkstone.com/ recently. Jumps up to 90 when Google Tag Manager (and Google Analytics) are removed.

Will report back on more findings as I test things.

Edit: Hit 100 when removing typekit loaded font from gatsby-plugin-web-font-loader (also using preload-fonts cache).

GTM is overall affecting my project a chunk but it isn't that drastic of a change when I remove it (5-10 points tops on sub 50s scores after LH6). I still need to do more testing but just wanted to throw that out there.

@Jimmydalecleveland interesting! I also have another site where the screen is just text, and it's blaming the hero text as the main cause for LCP. And LCP only accounts for whatever is in view, which doesn't make sense. How can text be that big of a problem?

@dougwithseismic I also use Typekit and it's def one of the major culprits for lower Lighthouse scores. I wish there was a way to fix Typekit since they don't support font-display

I think Lighthouse v6 is really tough on JS frameworks because of how they changed the weighting of the scores (more focus on blocking time). And Gatsby sites have historically had low script-evaluation/main-thread scores due to rehydration and other things.

@dougwithseismic how did you link typekit font without using the stylesheet?

I am having a similar experience, with lighthouse 5.7.1 my performance score was about 91, however lighthouse 6 has dramatically dropped my performance score to about 60.

Dropped from 100 to ~79: https://dougsilkstone.com/ recently. Jumps up to 90 when Google Tag Manager (and Google Analytics) are removed.

Will report back on more findings as I test things.

Edit: Hit 100 when removing typekit loaded font from gatsby-plugin-web-font-loader (also using preload-fonts cache).

I don't even have these plugins installed, but my mobile score dropped from 90+ to 60 ~ 70+

Same here. Massive drop from 90ish to 60ish on multiple sites.

+1 drop of about 30+ points

Is anyone addressing this? Seems like there is no point using Gatsby over Next if it doesn't deliver better scores out-the-box.

Is anyone addressing this? Seems like there is no point using Gatsby over Next if it doesn't deliver better scores out-the-box.

Do you have any numbers from Next?

I am wondering whether these scores are the new normal for fast sites (that are not static, JS-free, and likely also image-free)

Do you have any numbers from Next?

https://nextjs.org/ has a score of 85, with both Largest Contentful Paint (3.8) and First Contentful Paint (2.8) being the main offenders. It also has a bunch of "Unused JS". That's down from ~95 in Lighthouse 5.7.1. It's "only" a drop of around 10 points, whereas gatsby sites seem to lose twice as many points.

I'm quite new to this world, but I'm following this issue after my gatsby site lost around 25 points when tested with lighthouse 6.0.0 from npm. Interestingly, if I'm using the pagespeed insights rather than npm lighthouse, my site goes back to around ~99. Whereas gatsbyjs.org gets ~70 on pagespeed insights, but ~84 with npm lighthouse. Something is probably being tweaked somewhere, I guess? All of them are getting 'Unused JS' warnings tho

Is anyone addressing this? Seems like there is no point using Gatsby over Next if it doesn't deliver better scores out-the-box.

Do you have any numbers from Next?
I am wondering whether these scores are the new normal for fast sites (that are not static, JS-free and likely also image-free)

A Next.js website -> https://masteringnextjs.com/ 77 mobile score. A lot of "Unused JS".

I see better scores with lighthouse-metrics https://lighthouse-metrics.com/one-time-tests/5edfbbb1cf858500080125f7

But I also don't see images there, and in my experience images seem to have a high (and legitimate IMO) impact

Yet gatsbyjs.org doesn't have images either, and its score is (relatively) bad https://lighthouse-metrics.com/one-time-tests/5edfbc58cf858500080125ff (as compared with @cbdp's example)

Let's see what gatsby devs think about this

With a few tweaks, the site is back to top scores.

It seems to me like a case of Google moving the goalposts to be more strict about performance, notably FCP.

Our sites aren't slow by any means; they're just being judged by different criteria. I'll help out on this one ✌️


Just wanted to drop this useful calculator for comparing results from v6 with v5: https://googlechrome.github.io/lighthouse/scorecalc/

Lighthouse scores generally vary a lot, even when using it through PageSpeed Insights. For example, for https://www.gatsbyjs.org/ I received everything from 64 to 88 mobile performance across 5 runs. Hence, for tracking down this issue, the calculator might be useful to see the consequences of different weights on the same run (note: as the metrics differ slightly, some values like FMP must be assumed from former measurements).

PS: Here I have a comparison of two runs from PageSpeed Insights for gatsbyjs.org:
Run 1 - v6: 67 - v5: 85
Run 2 - v6: 78 - v5: 87
The biggest impact is caused by the new metric "Total Blocking Time", which scores below 70 in both runs and also has a weight of 25%.

Yep, to add to @Pyrax: LCP (Largest Contentful Paint) and TBT each weigh 25% in Lighthouse v6, so we focused our efforts on addressing those. We found:

LCP

  • Removing any animations that might trigger on load (e.g. cookie 💩 banner).
  • Switching to gatsby-image's tracedSVG seemed to help a little, since we have large hero images on most pages. (Not sure why, really, without further investigation. UX improves a little too.)

TBT

  • By a long shot, unused JS from Gatsby's bundling seems to be the biggest culprit (backed up by web.dev's docs). Page-level tree-shaking could surely be improved here?

This recent Lighthouse update seems to have just screwed everyone's perf scores, including their own:

Screen Shot 2020-06-10 at 7 03 53 AM

The only Gatsby site of mine that hasn't really been obliterated is one that's basically a single page and like 99% HTML. But even that one dropped about 5-10 points.

I'm seeing the inverse of most people though, that is, Lighthouse in the Chrome browser is still showing good scores for my site, but when run on PageSpeed Insights it drops the perf score 20-30 points... maybe my Chrome Lighthouse version is behind? Chrome is the latest; not sure how to check the built-in Lighthouse version...


Lighthouse version is shown at the bottom of the audit.
Screenshot 2020-06-10 at 13 13 57

@dylanblokhuis ah, yep there it is. I'm on 5.7.1, is v6 not yet shipped in Chrome?

It is not. Not yet anyway. If you want the latest, you can install it from npm and then run lighthouse https://yoursite.com --view and you'll get your score in the same format as you're used to with Chrome audit :)

For anyone else who's taken a big hit in scores, #24866 might also be relevant. There has been a seemingly pretty significant change to how Gatsby is handling chunking. Whilst the change definitely appears to make a lot of sense, for us at least it has resulted in code that was distributed across chunks being concentrated in the commons and app chunks, meaning a significantly bigger JS load/parse.

The most concerning thing here is that these metrics are going to start impacting Page Rank relatively soon.

I've stripped out all third-party requests (Tag Manager/Typekit/Pixel/Analytics/ReCaptcha) and that's only giving a relatively small score boost, so something else is at play.

Also, for anyone looking to run Lighthouse 6 locally, it is available now in Chrome Canary and slated to ship to Chrome in July some time.

First: I got in touch with a Google engineer that's working on web.dev and asked about this. Not sure if that will lead to any greater explanation, but they seem to be intent on helping. I'll follow-up when I've managed to chat with them.


My performance scores went from 94-99 to 40-55. 😢

Largest Contentful Paint for my website mostly applies on pages with large images. For some reason, it's saying the images are taking like 14 seconds to load.

If you open any of the minimal Gatsby starter sites, any pages with images seem to be in the 70s max.

Here are the first two starters I saw with many images:

ghost.gatsby.org:

Screen Shot 2020-06-11 at 10 40 47 AM

gcn.netlify.app:

Screen Shot 2020-06-11 at 10 40 37 AM

However, the Gatsby starter blog has 98 performance (granted, it's a super minimal page with just some text):

Screen Shot 2020-06-11 at 10 55 05 AM

gatsbyjs.com:

image

Compare old scores to new scores in Chrome

You can still compare the old vs. new Lighthouse method scores without using the CLI. I find it useful to see what has changed.

View old Lighthouse tests

To view old Lighthouse scores, run the Lighthouse chrome extension from your chrome developer tools, instead of the browser toolbar.

Screen Shot 2020-06-11 at 11 03 41 AM

View new Lighthouse tests

Click the icon from your chrome extensions bar.

Screen Shot 2020-06-11 at 11 04 37 AM

My page changes

These are the two scores I have for the exact same page:

Old lighthouse (via Chrome dev tools)

Screen Shot 2020-06-11 at 10 56 22 AM

New lighthouse (via Chrome extension on the address bar)

Screen Shot 2020-06-11 at 10 58 02 AM

🤷‍♂️

@nandorojo my impression with images is that emulation is done with a really slow connection, and there images do take a long time to render

Since removing images is not always possible, perhaps these 70s scores are the normal ones for this type of page

And the option of delaying their loading, so that the user can start interacting sooner, doesn't seem to do the trick (in my case)

Hey, sorry for the late answer. I've worked on Lighthouse, so I'll try to explain as well as I can.

Chrome devs have published "Web Vitals", essential metrics for a healthy site. It contains many metrics, but the core ones are Largest Contentful Paint (LCP), First Input Delay (FID), and Cumulative Layout Shift (CLS). For tools like Lighthouse, FID is swapped with Total Blocking Time (TBT).

Lighthouse v6 also takes these metrics into account and has shifted its weighting accordingly. This doesn't mean Gatsby is slow; it might just be that different optimizations are necessary.

This is how things changed:
lighthouse metric change

If you want to know more about LCP you should check out https://www.youtube.com/watch?v=diAc65p15ag.

So let's talk about Gatsby. Gatsby itself is still pretty fast and we're improving it even more. We're creating new APIs so page builders like MDX, Contentful's rich text, ... can optimize the bundle as well. A lot can be gained by optimizing your LCP. Make sure that fonts & images aren't loaded lazily and are loaded as soon as possible. These assets should be loaded from the same origin as your site; they should not be loaded from a CDN.
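
On the fonts part: a common way to make a self-hosted font load as early as possible is a preload hint in the document head. A minimal sketch, assuming a hypothetical font path; in a Gatsby site the resulting <link> element could be emitted from gatsby-ssr.js via the onRenderBody API's setHeadComponents:

```javascript
// Props for a <link rel="preload"> for a self-hosted font.
// The font path is illustrative. In gatsby-ssr.js you would pass a React
// <link {...fontPreloadProps('/fonts/my-font.woff2')} /> element to
// setHeadComponents inside onRenderBody.
function fontPreloadProps(href) {
  return {
    rel: 'preload',
    href,
    as: 'font',
    type: 'font/woff2',
    // Font preloads must be CORS requests, even when same-origin.
    crossOrigin: 'anonymous',
  };
}

const props = fontPreloadProps('/fonts/my-font.woff2');
```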

Sadly, TBT is a hard problem to solve and is something React doesn't optimize for. If you want to drop TBT, you should check out Preact. Preact has the same API as React but a smaller JavaScript footprint. They do things differently, but React components are compatible. You can install it by running gatsby recipes preact.

Something I noticed when profiling gatsbyjs.com & gatsbyjs.org is that we should load Google Analytics etc. a bit later than we do now, to make sure it doesn't become part of TBT.

If we look at .com: by postponing Analytics and GTM and making sure fonts load faster, we already see an improvement of +17. If we add Preact into the mix we see another +6.
.com metrics

We can do the same for .org; we start at a score of 63. With some optimization of LCP and TBT we can get to 75.
.org metrics

I'm not sure what we should do with this issue. I feel we can close it, as there is not much else we can do here. What do you all think?

@wardpeet Ty for the extra insight.

We have been digging into this matter a lot on a big Gatsby project we have that uses Contentful and will be used across multiple sites for us (Gatsby themes are awesome). I'll share a few findings in case they are helpful to anyone else looking at this.

  1. We have a situation that might not be super common, but I have seen it enough to believe it isn't that unique either: we had to use useStaticQuery to grab images coming from Contentful and .find one by its identifier. We always knew this was wrong but weren't noticeably punished for it until the site grew to 300+ images and LH6 came along and smacked us.

The reason is that the images are part of Rich Text embeds, and we cannot query for them at the page-query level (it's essentially a JSON field that Contentful provides packages to parse). Using the webpack bundle analyzer, we noticed a massive JSON file (about 720 KB) and tracked it down to the data from that query, which webpack grouped into the template we use for most pages. This meant that every user visiting our site was downloading it as part of the chunk for the whole page template, regardless of whether the page used any images.

Big whoopsie on our part, but if anyone else is doing large static queries (which you of course cannot pass parameters to in order to shrink the size), make sure you watch out for those situations and keep an eye on your bundle chunks.

  2. We had some success just today by using the loading prop on Gatsby Image for images that are above the fold (hero images for us). We've been trying to improve Largest Contentful Paint and this has yielded good results in some initial tests. There is an important part I almost missed: if you set loading="eager" for your topmost image, you might want to set fadeIn={false} as well for that image, because the transition between the base64 placeholder and the fully loaded image takes time, which delays LCP.

Here is the props documentation I'm referring to and the note about fadeIn is at the bottom: https://www.gatsbyjs.org/packages/gatsby-image/#gatsby-image-props

I'd like to share screenshots but I don't know if I'm allowed to, sorry. If you use Chrome devtools and look at the Performance panel, you are given nice little tags under the "Timings" row for FP, FCP, FMP and LCP. When we switched to eagerly loading the first image, we not only saw a ~8-10 point performance score increase, but you can see the LCP tag fires immediately after FMP instead of a second or so later in my case.

Hope that helps anyone troubleshooting this, and thanks to everyone who has responded so far.

Something I noticed when profiling gatsbyjs.com & gatsbyjs.org is that we should load google analytics, etc a bit later than we do now to make sure it doesn't become part of TBT.

@wardpeet how are you postponing analytics and GTM?

@wardpeet thanks for your reply. It is useful. Perhaps the best output from this issue would be some documentation outlining how to optimise for each of the metrics in the new Lighthouse. I am confident that our site feels fast to users and that Gatsby itself is doing a great job of optimising the site for real users. However if Google's web vitals are going to start informing page rank, getting a good lighthouse score is going to become mission-critical for most sites.

@Jimmydalecleveland we had a similar problem where we needed to load all the items of a resource so we could use data from within markdown to configure a filter (i.e. we couldn't filter using GraphQL). We optimised by using different fragments (a much smaller subset of fields) when loading a full resource vs when loading all resources for filtering. This greatly reduced our JSON and therefore our bundle size.

@treyles you need to be careful delaying the load of Analytics, as it can mean your stats are incomplete. For example, it can mean your reported bounce rate is not accurate. There are some scripts that marketing would not allow us to delay (Pixel, Analytics, Hotjar, and therefore Tag Manager), but others, e.g. Intercom, are fine and a worthwhile optimisation. In terms of how to delay these scripts: the scripts supplied by third parties usually load async, but this alone is not enough. What you will probably need to do is replace these scripts with your own: listen for window.load, then trigger the download. You need to be careful though, as some scripts rely on window.load to initialise, and if you've used it to load the script, it will not fire again, so you need to initialise them manually. For example, with Intercom we:

  • removed the default <script>...</script> supplied by Intercom.
  • added a listener for window.load
  • added a brief delay within this listener
  • triggered a modified version of Intercom's default script that loaded their lib async.
  • polled to see when the script was loaded (Intercom doesn't provide a reliable event)
  • manually initialised their script (which their default script did on page load).
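
The steps above can be sketched roughly as follows. This is a hedged sketch: the script URL, delay, and readiness check are illustrative placeholders, not Intercom's actual API:

```javascript
// Decide whether the load event has already fired; if so, inject now,
// otherwise wait for window.load.
function shouldInjectNow(readyState) {
  return readyState === 'complete';
}

// Defer a third-party script until after window.load plus a short delay,
// then poll for its global and initialise it manually. All names and
// defaults here are illustrative assumptions.
function deferScript(src, { delayMs = 1500, isReady, init } = {}) {
  const inject = () => setTimeout(() => {
    const s = document.createElement('script');
    s.src = src;
    s.async = true;
    document.head.appendChild(s);
    // The vendor fires no reliable "loaded" event, so poll for its global,
    // then initialise it ourselves (its own script would have done this
    // on window.load, which has already fired).
    const poll = setInterval(() => {
      if (isReady()) {
        clearInterval(poll);
        init();
      }
    }, 200);
  }, delayMs);

  if (shouldInjectNow(document.readyState)) inject();
  else window.addEventListener('load', inject);
}
```

It might be called as deferScript('https://widget.example.com/loader.js', { isReady: () => !!window.ExampleWidget, init: () => window.ExampleWidget('boot') }), with all names hypothetical.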

@wardpeet thanks for the very useful insight!

Regarding this solution:

Make sure when using fonts & images, they aren't loaded lazily and are loaded as soon as possible. These assets should be loaded from the same origin as your site, they should not be loaded from a CDN.

Wouldn't this go against how gatsby-image works? Also, most CMSs handle image transformation on the server and host the images on their own CDN (which is a good thing, imo). But if we host them on our own site, wouldn't that be counterproductive as well?

Adding to what @Undistraction said: Gatsby is fast, but if it's not fast in Google's eyes then that becomes problematic, especially since they're including this in the page-ranking update next year.

@Jimmydalecleveland I found a way to work with gatsby-image inside Contentful's rich text without that query hack! Here's the gist. The code was copy-pasted from gatsby-source-contentful. Basically, you can generate the Contentful fluid or fixed props outside of GraphQL, which is perfect for Contentful's rich text.

I also created a pull request so we can access the APIs directly from gatsby-source-contentful.

Something just doesn't add up for me. I built a very simple website with about one image per page. I'm using SVGs for images, without gatsby-image. I also tried removing Google Analytics, and that didn't make much difference; my score was about 50-60 for performance.

Something that is really puzzling me is that only the home page (index.js) gets the very low score, while other pages like the services page or the contact page get ~80. I built this site fairly consistently, so there is not a tremendous difference between pages, and yet for some reason the home page scores ~50 while the services page scores ~80.

Like I mentioned earlier, with Lighthouse v5 the score was ~90; it just makes no sense that a simple site like this would now score as low as ~50.

Btw, have any of you tried setting the above-the-fold image as eager? This disables lazy loading and might increase the score. The blur or SVG loading effects might be confusing Lighthouse (which, if that's the case, is a flaw in their algorithm).


@KyleAMathews We have, and it produced a significant increase in performance score and first paints. It's what I outlined as point 2 in my lengthy comment above. Cancelling the fadeIn is what finally made Lighthouse happy.

Edit: I, likely ignorantly, feel like the focus on LCP is not the correct approach to take universally where images are concerned. Obviously anecdotal, but I feel that a website feels much faster when all the content is loaded and the images fade in afterwards, unless the image is crucial to the content.

One common example would be a Medium article. Sure, you could say that is a design flaw, but most Medium articles (and many other blogs) start with a big ol' image at the top that is just for mood creation or scenery and I don't care if it lazy loads in.

Btw, have any of you tried setting the above-the-fold image as eager? This disables lazy loading and might increase the score. The blur or svg loading effects might be confusing Lighthouse (which if that's the case is a flaw in their algorithm).

I’ll try this now.

I think I made some good progress here. I got my scores up from 57 to 84 with very basic changes. My LCP went from 12s to 2s.

That said, it is inconsistent. Since making the changes I'll describe below, my score varies from 69 - 84. There's clearly some random variance to the performance scores.

TLDR

First, like @KyleAMathews and @Jimmydalecleveland suggested, I tried setting loading="eager" and fadeIn={false} on my gatsby image components that were above the fold.

Next, I got rid of base64 from my queries.

These made a huge difference.

The good

  • Adding _noBase64 to my image fragments brought my score up 20 points!

    • It seems like base64 images are adding a lot of weight on mobile. I'm querying images from Contentful using localFile -> childImageSharp -> fluid -> GatsbyImageSharpFluid_withWebp_noBase64.
  • loading="eager" and fadeIn={false} brought my Largest Contentful Paint time down by about 50%!

    • My actual performance score only went up 6-7 points for some reason, but LCP is definitely making progress...

My query looks like this:


localFile {
  childImageSharp {
    fluid(maxWidth: 800, quality: 100) {
      ...GatsbyImageSharpFluid_withWebp_noBase64
    }
  }
}

And my gatsby-image looks like this:

<Image
  fluid={localFile.childImageSharp.fluid}
  fadeIn={false}
  loading="eager"
/>

The less good

My UX on my website now looks much worse. The base64 + fade in provided a great UX. Now, it's a bit choppy. I guess that's a trade-off we have to consider now?

Before & after eager & fadeIn={false}

Here are some side-by-side comparisons of the exact same pages. The only difference is that on the right, the images have loading="eager" and fadeIn={false}.

1. Home page

Screen Shot 2020-06-13 at 3 38 43 PM

LCP down 49%. Performance score up 6 points.


2. Product Page

Screen Shot 2020-06-13 at 3 40 01 PM

LCP down 46%. Performance score up 7 points.

What's weird about this example above: the screenshots on the left have the default gatsby-image behavior (they do fade in, and they don't have eager on). And yet, even though the performance score is lower, the small screenshots at the bottom make it look like it's loading faster than the image on the right.

Maybe it's within the margin of error for how they judge performance, or maybe it's a bug on their end related to the fade in effect, as @KyleAMathews mentioned.


After setting _noBase64 in image fragments

Here are the same screens as the example above. They all have the loading="eager" and fadeIn={false} props on Gatsby Image. Also, the image fragments in the GraphQL queries are GatsbyImageSharpFluid_withWebp_noBase64

It's a bit inexplicable, but I'm running a lighthouse test on the exact same page over and over, and got 84, 75, and 69.

Kinda weird, but in any case, it brought my score up.

Screen Shot 2020-06-13 at 3 52 03 PM

I think the Lighthouse algorithm was feeling unusually generous here lol ^


Screen Shot 2020-06-13 at 4 09 09 PM
Screen Shot 2020-06-13 at 4 07 10 PM

After further investigation, I had discovered that lighthouse was complaining about a specific element that was impacting the LCP score.

All I did was simply move this element which is just a paragraph and my score jumped above 80. Go figure. Not exactly sure why moving a paragraph increased my score from ~50 to ~80.

t2-media-lighthouse-score

@nandorojo Thanks for the thorough write-up. We haven't tried removing base64 completely, but would be a bummer if we had to. We also only put eager loading on the first image of the page, so if you aren't already doing that it's worth a try if you can control that.


@t2ca This is what I got (albeit mine was a header tag). But where did you move it to?


@michaeljwright The first thing I did was delete the paragraph and check the Lighthouse score. After I removed the paragraph my score increased about 20 points. I repeated the test many times just to make sure. I also put the paragraph back and did further tests, and my score was lower once again.

Finally, I decided just to move the paragraph. I'm using framer-motion inside a div, and I just moved the paragraph outside of the div. This gives me the same result as when I deleted the paragraph.

@t2ca I think LCP penalizes any animations in our hero pages which is a bummer.

Here's my LCP scores where paragraph tag is the LCP

With animation:
Screen Shot 2020-06-16 at 1 08 09 PM

Without animation:
Screen Shot 2020-06-16 at 1 08 24 PM


@daydream05 Thank you for confirming!

@daydream05

Wouldn't this go against how gatsby-image works? Also, most CMSs handle the image transformation on the server and host the images on their own CDN (which is a good thing, imo). But if we host them on our own site, wouldn't this be counterproductive as well?

No, because gatsby-image works with local images too; there's no need to host them on a different CDN. It all comes down to optimizing your first render (what's in the viewport). Hosting images on a different domain/CDN means you have to open a new connection (DNS resolve, TLS handshake, ...), which can take up to 300ms on a slow device, and then you still have to download the image.
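If you do end up serving images from a third-party CDN anyway, a standard resource hint lets the browser pay that connection cost early (a sketch; the CDN hostname is a placeholder):

```jsx
{/* e.g. via react-helmet; the hostname below is a placeholder */}
<Helmet>
  <link rel="preconnect" href="https://images.example-cdn.com" />
  <link rel="dns-prefetch" href="https://images.example-cdn.com" />
</Helmet>
```

`preconnect` performs the DNS lookup and TLS handshake ahead of time; `dns-prefetch` is the cheaper fallback for browsers that don't support it.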

Adding to what @Undistraction said, Gatsby is fast, but if it's not fast in Google's eyes then it becomes problematic, especially since they're including this in the page-ranking update next year.

We'll be optimizing Gatsby even more to make sure our users can get 100's for free.

@t2ca I think LCP penalizes any animations in our hero pages which is a bummer.

That's expected because your screen never stops painting. Normally LCP should ignore CSS animations, but it depends on how you do the animations.

@t2ca

If you can show us the site, we can help to figure out how to improve it, but it's probably setting the image to eager.

@nandorojo

Awesome writeup! Any chance you can give us links to those lighthouse reports?

That's expected because your screen never stops painting...

@wardpeet would you mind expanding on this please?

@DannyHinshaw I received this explanation from lighthouse
"What I think is going on is that LCP does care about images being fully loaded and the time that's reported is when the image is completely loaded and not when it is first visible. This time can be different due to progressive images and iterative paints."

And then this link, perhaps of help
https://web.dev/lcp/#when-is-largest-contentful-paint-reported

In the meantime what you can also try is changing your Largest Contentful Paint (LCP) from an image to text (if you have the luxury), preloading/prefetching fonts and lazy loading the images. In my case that meant reducing the size of the hero image on mobile which boosted our score back into the upper 90's while the issue is being discussed.

image

image

import semiBoldFont from 'static/fonts/SemiBold-Web.woff2';
...
<Helmet>
   <link rel="preload" href={semiBoldFont} as="font" type="font/woff2" crossOrigin="anonymous" />
</Helmet>

(Note: the `as` attribute only applies to rel="preload", and font preloads need crossOrigin set, or the browser will fetch the font twice.)

https://lighthouse-dot-webdotdevsite.appspot.com//lh/html?url=https%3A%2F%2Fwhatsmypayment.com%2F
https://developer.mozilla.org/en-US/docs/Web/HTML/Preloading_content

That's expected because your screen never stops painting...

@wardpeet would you mind expanding on this please?

Sure. I don't know which site this was; I tried to find URLs in this thread, but that was hard. LCP doesn't take CSS animations into account (the transition and animation properties in CSS). However, if you have content that uses setTimeout/setInterval to switch React components, it will take that into account. The latter approach will also give you really bad CLS scores.

So if you want to animate your hero text/image, please use CSS animations.
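A minimal sketch of the difference (class names are hypothetical): a CSS-driven fade, which LCP ignores, versus a setTimeout-driven component swap, which it doesn't:

```jsx
{/* CSS-driven: LCP ignores this */}
<h1 className="hero-title">Hello</h1>
<style>{`
  .hero-title { animation: fade-in 0.5s ease-in; }
  @keyframes fade-in { from { opacity: 0; } to { opacity: 1; } }
`}</style>

{/* JS-driven: counts toward LCP and hurts CLS; avoid for hero content
   const [shown, setShown] = useState(false);
   useEffect(() => { setTimeout(() => setShown(true), 500); }, []);
   return shown && <h1>Hello</h1>; */}
```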

Hi there,

I tried to figure out why my project is scoring so low on Google Page Speed Insights, Google Lighthouse audit and more.

Short of starting from scratch I'm not sure what the problem is. I used this starter/theme to get started: https://github.com/alexislepresle/gatsby-shopify-theme

I'm mostly in the process of changing CSS stuff, like moving from Bulma to Chakra UI.

This is my repo: https://github.com/Chizzah/genesis-style
I tried removing all the account stuff and the gatsby-plugin-appollo-shopify stuff but that does not change things.

Here is the live link: https://genesis-style.netlify.app

Nothing I seem to do changes things. I would prefer not having to start from scratch. If anyone can give me a hint or something I'll appreciate it.

Guess I got too used to Gatsby giving free 90-100s

Thanks for this thread as a discussion on achieving good lighthouse scores is needed. Superb scores have become more difficult with v6, mostly due to the new LCP metric. My site (https://www.jamify.org) dropped from ~100 to ~70 with Lighthouse v6.

In order to bring it back to 100 on desktop, I had to

  • remove a background image that was not needed (as it was wrongly chosen as the LCP)
  • optimize size of images
  • set gatsby-image to loading="eager" and fadeIn=false
    (that's really a bummer as I like the blur-up effect)

image

On mobile, I'm still stuck on 80, again due to an LCP of 5 seconds. This could be improved by properly sizing images specifically for mobile:

image

Overall, these optimizations are pretty feasible; however, I'm quite unhappy that I now have to choose between lazy-loading images with blur-up and a good Lighthouse score :roll_eyes:
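One way to do that mobile-specific sizing without giving up gatsby-image: it accepts an array of fluid objects with `media` keys (art direction). A small sketch, assuming hypothetical mobile/desktop fluid results from your query and placeholder breakpoints:

```javascript
// Sketch: gatsby-image's art-direction support takes an array of fluid
// objects, each tagged with a `media` query, so a smaller hero can be
// served on mobile. The fluid objects would come from your GraphQL query.
function artDirectedSources(mobileFluid, desktopFluid) {
  return [
    { ...mobileFluid, media: "(max-width: 768px)" },
    { ...desktopFluid, media: "(min-width: 769px)" },
  ];
}

// Usage: <Img fluid={artDirectedSources(mobile, desktop)} loading="eager" fadeIn={false} />
```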

Has anyone done any tests yet on lighthouse v6.1? I have noticed an improvement in my performance score.

Asked Addy from Google about the blur-up LCP issue & it's something they're working to fix https://twitter.com/addyosmani/status/1277293541878673411

Lesson here is don't treat the new perf scores as absolute just yet — they're refining edge cases still.

I believe the issue gets worse with Lighthouse 6.1. There are some good suggestions here surrounding LCP but we are not looking so much at TBT which I think is the biggest reason for bad scores on mobile and the most difficult to solve.

You can test Lighthouse 6.1 in Chrome Canary. I've compared my site between 6.0 and 6.1, as well as several others mentioned here, and the TBT is drastically increased. For example, in my 6.1 tests:

Anything over 600ms for TBT is red, and its weight according to the calculator is 25%, so it's a major factor. TBT accumulates from main-thread tasks that run longer than 50ms between FCP and Time to Interactive; each such task contributes its time beyond 50ms.

Screenshot 2020-07-15 at 17 29 49

The above screenshot is a profile from my site. If you use Lighthouse in Chrome, you can click the View Trace button to load the results into the Performance tab and see the same. Any task after FCP with a red flag in the corner counts towards TBT. I have yet to dig into what the tasks are, so maybe somebody with more knowledge of Gatsby can assist in this area, and perhaps @wardpeet can give his insight into TBT if possible. There are some big tasks that take between 500ms - 1s in my tests, so I think they should be analysed.
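For intuition, the TBT arithmetic (tasks over 50ms between FCP and TTI) can be sketched in a few lines; the task list below is made-up example data:

```javascript
// Sketch of how TBT accumulates: every main-thread task longer than 50ms
// contributes its excess over 50ms to the total.
function totalBlockingTime(tasks) {
  return tasks.reduce((tbt, t) => tbt + Math.max(0, t.duration - 50), 0);
}

const tasks = [{ duration: 30 }, { duration: 120 }, { duration: 600 }];
console.log(totalBlockingTime(tasks)); // → 620 (0 + 70 + 550), i.e. red (>600ms)
```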

@IanLunn that's an interesting trace, were you able to get a sense of what those tasks were underneath?

There's likely a correlation between how much JS each Gatsby Site has and how expensive it becomes on the main thread of the browser. However I think the open room for discussion could be, is there a way that Gatsby could help "mitigate" the impact by how it loads queries and scripts altogether?

There are three areas that I'm trying to understand better at the moment:

  • Gatsby adds by default <link rel=preload/> for every script needed (as per the PRPL pattern), regardless of how many there are. I've noticed some differences in measured FCP (which surprised me) but mostly in the gap between FCP/LCP when removing these (which is probably not a bright idea without other changes). This issue on lighthouse discusses the latter.
  • The queries end up creating JSONs (page-data.json and the hashed ones for static queries) which are evaluated on main thread. See https://github.com/gatsbyjs/gatsby/issues/18787 but it doesn't seem we've found (or if there is) an alternative that does not block the main thread. The bigger the data, the more of a performance hit (so performance budgets for query sizes would certainly be very welcome) – but the data isn't really needed until the rehydration process, or is it?
  • The actual chunks are added as <script src=foo.js async /> just before the closing </body> tag. This means that as soon as the browser finishes parsing the HTML (which should be pretty soon in the trace), it'll start parsing and executing those scripts (as they were already pre-loaded). Long tasks will inevitably arise as the main thread is asked to parse and execute all that JavaScript. Is there a better way to handle this (at least _when_ those scripts start being parsed) to avoid blocking the main thread? Is there any way to do this (either the parsing or the execution) in small incremental tasks that neither delay input feedback (and thus harm TTI) nor block the main thread for chunks of time (and thus harm TBT)?
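On that last question, one cooperative-scheduling pattern (a hypothetical sketch, not something Gatsby does today) is to slice long work into sub-budget chunks and yield the main thread between slices, so no single task crosses the 50ms long-task threshold:

```javascript
// Hypothetical sketch: process items in slices that each stay under a time
// budget, yielding control between slices.
function* processInSlices(items, work, budgetMs = 40, now = Date.now) {
  let i = 0;
  while (i < items.length) {
    const start = now();
    while (i < items.length && now() - start < budgetMs) {
      work(items[i++]);
    }
    yield i; // let the caller schedule the next slice
  }
}

// In a browser you would drive it with setTimeout/requestIdleCallback, e.g.:
// const it = processInSlices(entries, parseEntry);
// const step = () => { if (!it.next().done) setTimeout(step, 0); };
// step();
```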

Whilst at the moment it's true Gatsby sites are being a bit penalized due to LCP not yet recognizing the LQIP pattern from gatsby-image, when it comes to anything related to TBT/TTI (and possibly a major penalization on the cost of JavaScript compared to Lighthouse v5) I don't see anything on the Lighthouse team's roadmap that would improve things from the current scores.

@juanferreras The largest task appears to be domready/ready.js (third-party). I get the feeling your statement about Lighthouse penalizing JavaScript is correct, and although small optimizations may be possible in Gatsby, it's not something that is entirely solvable.

If this is how it is going to be in Lighthouse, I am tempted to at least ask them to lessen the weight of TBT or give the option of setting the desired testing environment. Providing a score based on a budget phone isn't always appropriate for the site being tested. We should be able to show bosses/clients that yes, a budget phone gets a score of 75 but a higher-end phone that 95% of our users have gets a score of 90 for example.

  • The queries end up creating JSONs (page-data.json and the hashed ones for static queries) which are evaluated on main thread. See #18787 but it doesn't seem we've found (or if there is) an alternative that does not block the main thread. The bigger the data, the more of a performance hit (so performance budgets for query sizes would certainly be very welcome) – but the data isn't really needed until the rehydration process, or is it?

@juanferreras, regarding this issue of parsing JSON data on the main thread, what comes to mind is a web worker. Gatsby could register a web worker if available on the client and offload this sort of work to it; the rehydration process is not really needed immediately, so this is doable, I believe.

Web workers are supported in major browsers, including IE10.

Screenshot from 2020-07-16 15-30-55

… we are not looking so much at TBT which I think is the biggest reason for bad scores on mobile and the most difficult to solve.

I want to add something that I think is relevant to Total Blocking Time. After reviewing my bundles with webpack-bundle-analyzer, I noticed that data from static queries is included in the JavaScript bundles. I'm sure there's a good reason for that, but it works against a low TBT.

TBT is a difficult problem to solve, especially because React isn't built for it. Moving to Preact is a good first step. We'll be improving TBT more and more in the coming months; we want Gatsby to have a really small footprint.

In Gatsby versions > 2.24.0 we shipped improved polyfill support, so we only load polyfills on legacy browsers like IE11. We also removed static queries from the webpack bundle a few days ago (@MarkosKon).

@Teclone sadly, web workers aren't great for JSON parsing. You still pay the price for serializing and deserializing the data between threads. With SharedArrayBuffer it would be a different story; sadly, it is disabled by default because of Meltdown/Spectre.

I was just nicely getting 100/100 on everything on the built in Lighthouse (6.0) in Chrome and then used the web.dev version (6.1) and it came back with performance in the 70s due to LCP (about 5-6 seconds in 6.1, about 0.5 seconds in 6.0). It's a decorative header image (using gatsby-background-image) that it's complaining about.

Looking at my own website, the webpack runtime has the highest JavaScript execution time, something the page does not even need until the user starts interacting with it.

Gatsby seems to just load all these resources (chunks) at once. The JS file itself is tiny; it is a loader, and you can see that it takes just 2ms to parse. But the file itself loads chunks and template files.

Screenshot from 2020-07-30 17-16-22

In fact, when I inspect the chunk files, I find that all of them are dynamic imports, which we'd hope get loaded only when they are needed, but nope, they all get loaded by default. This behaviour is not what I expect.

Did a fair bit of optimisation of our header image, such as using an image directly rather than gatsby-image and reducing the resolution and compression, and ours is a fair bit better. Only, I've just discovered the hard way that Safari doesn't support WebP (grr). And it continues to be the case that the web version of Lighthouse is a lot less forgiving than the one built into Chrome, at least for our "hidden" development site. Time will tell whether aggregated user data helps once it's live; I'm not convinced there are that many people using Moto G5s in the real world!

@derykmarl It should be supported soon: https://www.macrumors.com/2020/06/22/webp-safari-14/
I don't get why Apple took so much time to support a widespread image format...

I read that PageSpeed Insights simulates throttling when calculating the score; it seems they don't actually throttle the network, so you can get your report faster. I personally use https://lighthouse-metrics.com/ but they don't support 6.1 yet.

The issue with Lighthouse 6.x is that it relies on perception-based timing; you can run the same test 10 times and you won't get the same results, depending on network conditions.

it came back with performance in the 70s due to LCP

I had an LCP element which was the background image for my website. I was able to drastically cut down my LCP by splitting the image into 6 sub-images. I made a Python script to do this easily, as long as you know the height that you want each of your segments to be:

from PIL import Image
import ntpath
import os

def crop(pathToFile, height, width=None):
    """Split an image into tiles of the given height (and optional width)."""
    im = Image.open(pathToFile)
    imgwidth, imgheight = im.size
    name, ext = ntpath.basename(pathToFile).rsplit('.', 1)

    if width is None:
        width = imgwidth

    k = 1
    for i in range(0, imgheight, height):
        for j in range(0, imgwidth, width):
            # clamp the box so edge tiles aren't padded with black
            box = (j, i, min(j + width, imgwidth), min(i + height, imgheight))
            tile = im.crop(box)
            tile.save(os.path.join("./" + name + "-" + str(k) + "." + ext),
                      compress_level=9, optimize=True)
            k += 1

pathToFile = '/path/to/your/img.jpg'
crop(pathToFile, 933)

I also found this image compression website to work really well for cutting down the size of your image without losing too much quality. I could usually go down to the 20%-30% quality mark and drastically cut down my file size.

I asked some questions about this online and some people recommend that I only split my image into 2, for above the fold and below the fold, but I found splitting into 6 to work much better.

@wardpeet on the TBT/TTI note, we may be able to see how other react-based sites are now mitigating the overall impact on the main-thread of the browser.

reactjs.org itself (which is also built with Gatsby, as far as I know) seems to have a considerably smaller TTI (~7-8s vs ~12s) than the new gatsbyjs.com (congratulations on the launch, by the way!). NextJS has also maintained very good TTI/TBT numbers despite being React-based itself (it may as well be due to the relative size of scripts: gatsbyjs.com ships about 628.3kb of script according to PSI, reactjs.org 466.1kb, and nextjs.org only 216.8kb).

gatsby_next_react
(this is obviously a single run and shouldn't be used as an actual comparison, but the trend is pretty consistent across multiple runs).

Is the majority of the score difference due to the overall Cost of Javascript™? If the Gatsby team optimizes the new website at some point that might be a great opportunity to share some insights on the process, provided there isn't much magic left to add into how the gatsby framework already handles javascript internally.

@juanferreras @wardpeet, there is also something I found out on my own project. If you are using styled-components, for some reason styles are recomputed/regenerated during hydration and reinjected into the browser. This takes a lot of main-thread time.

This is due to hydration issues in gatsby. https://github.com/styled-components/styled-components/issues/2171

Gatsby is also working on running SSR during development (https://github.com/gatsbyjs/gatsby/issues/25729), which will help fish out these sorts of performance troubles, too.

@teclone

https://github.com/styled-components/styled-components/issues/2171 doesn't seem to offer a solution. How did you fix it in your project?

@dfguo, for now there is no fix for that, because no one knows exactly why styles get regenerated during rehydration; Gatsby in production does not offer development help with rehydration errors.

That is, there is no console log from React pointing out differences during rehydration, because that is disabled in the production build of Gatsby.

The purpose of this work in progress, #25729, is to run true SSR in development, so we (including the Gatsby team) will be able to see why.

BTW, you can build a Gatsby site with gatsby build --no-uglify to build your site with the development version of React to see logs. https://www.gatsbyjs.com/docs/gatsby-cli/#build

Dev SSR will be super helpful in the future for stuff like this!

So, I've decided to migrate my site from @emotion and theme-ui to linaria while implementing the dark-light mode with custom css variables

The objective was to reduce the blocking time / main-thread work / anything related to JS, since styles are now no longer evaluated at runtime but compiled at build time (linaria actually seems much more aligned with Gatsby's static approach than @emotion in this regard)

The process is not totally smooth, most of the things I did with @emotion just work with linaria, but some others do not and you have to rewrite them and/or to reimplement them through custom css variables

DX with gatsby is __bad__: hot reloading doesn't work (you have to stop and start again on any change, since the browser seems to lose the connection), but overall the process has been nice since it forces you to be more conscious about what you really need from @emotion's runtime abilities

__That said, lighthouse metrics are very similar__

I can compare the two netlify deploys and actually the @emotion site has high 70s and the linaria site has low-medium 70s

Needless to say, I'm not very excited

Analyzing the bundle:

  • the site document has increased from 14 Kb to 28 kb
  • the framework script has remained identical at 38.7 kb
  • The app script has decreased from 58.2 kb to 46.1 kb
  • And a fourth script (component--content... then, 20bae078.. now) has gone from 44.2 kb to 46.8 kb

So I assume that the styles in js have moved to styles in css (and ~12 kb are significant IMO), but this hasn't had any real impact in lighthouse metrics (and this is what matters, isn't it?)

So, I'm not at all saying that moving to linaria makes no sense, and I wouldn't be surprised if someone does the same and has better outcomes (in theory this should be the case (?)), but in my hands the process hasn't been worth it

Still, exploring the app script I've opened a new issue trying to figure out how to reduce the js bundle https://github.com/gatsbyjs/gatsby/issues/26655

DX experience with gatsby is bad, hot reloading doesn't work (you have to stop and start again at any change since the browser seems to lose connection), but overall the process has been nice since it forces you to be more conscient about what do you really need from @emotion runtime abilities

@kuworking I encountered a similar issue, but noticed that it only happened on gatsby versions newer than 2.24.9; Not sure if cause is the same, but I thought it might help someone to know that people are talking about it in issue #26192.


I have been on "gatsby": "2.24.14" for several weeks, I'd say, and so far I have only experienced this with linaria
But knowing this, I won't update gatsby until this is figured out, thanks 👍

@kuworking What I meant to suggest is that maybe if you downgraded to 2.24.9 then the development-server-stopping issue wouldn't happen even with linaria; but that's just an idea. I'd be curious to know if that's the case.

DX experience with gatsby is bad, hot reloading doesn't work (you have to stop and start again at any change since the browser seems to lose connection), but overall the process has been nice since it forces you to be more conscient about what do you really need from @emotion runtime abilities

Have you tried fast refresh?

I recently migrated a production Gatsby site (~120 pages) to Preact in the hopes of improving TBT & LCP. Our site is SVG-heavy, using React SVG components generated with svgr and styled with material-ui styles. The average performance results were within ±5 of the initial score (~45 mobile performance, down from ~85 prior to v6), and although the migration was relatively painless using the theme, it did require a migration to fast-refresh, which was not.

Honestly, I'm a little disappointed that there aren't any other optimisations I can find to try, or more detailed metrics to go off of besides the generic "Remove unused javascript" Lighthouse warning.

Speed is one of the main reasons we picked gatsby and even though the pages are still fast to load, it's hard to justify from an SEO standpoint when your insight scores take such a big hit...

whispers: I switched to NextJS and I'm getting better scores 🤭


What about Svelte?


It would be good to know whether Gatsby devs are giving this some specific sense of importance / priority in the roadmap (other than the expected one), since I assume that there are no immediate solutions but perhaps some kind of future directions and implementations focused on this or that

Since gatsby does some compilation with gatsby-node*, I wonder if they are exploring ways to expand that part and offload more from the client

*In order to decrease the pageContext that I was using (data about all the published posts), I am currently storing (through gatsby-node) most of that data in JSON files and fetching them from the site when needed, which reduces the bundle size but also feels more logical

Don't get too hung up on the Lighthouse scores, especially when they're meant as a benchmark versus other sites, and not a goal where we should strive to achieve a perfect score.

It wasn't until recently that Gatsby was nailing pure 100s. Sure, there are some issues to address, but the SEO game right now is speed plus content plus links, and we have it covered.

My two cents.


Sorry for the late response; there is a lot that goes into performance, and load metrics are only a small piece of the puzzle. We're motivated this quarter and the next to make Gatsby smaller and reduce TBT. The biggest problems right now are React bundle size, MDX, large pages (content/styling), tracking scripts, and fonts/images as main content on the first load.

I'm currently looking into gatsby-image & analytics scripts to see how we can improve load time and postpone analytics.

Sadly, I cannot give any estimations. We're also looking at our .com site and our customers to see what the common problems are so we can bake these optimizations into gatsby or in our plugins.

Edit:

I'm happy to look at any of your source code to get more insight into what you all are doing and see how we can improve.

I made a reddit post where I tried to aggregate all the improvement tips I could find. If you can come up with more tips, please list them

Removing just gatsby-image (home hero image and any background images) improves my score anywhere from 10-20 points.

In some recent testing I found that using tracedSVG actually dramatically improved the Largest Contentful Paint score. I imagine this will be fixed in Lighthouse, but as of now it happens because the SVG is considered to have the same dimensions as the full-resolution image, so LCP never swaps from the SVG to the full image as its target.

When using base64, the small resolution makes it not a candidate for LCP so Lighthouse uses the full resolution image whenever that loads in.

So if you don't mind the look of traced SVGs (I prefer them personally), you might want to give that a try.
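For reference, the traced-SVG variant is selected with the `_tracedSVG` fragment suffix, mirroring the query style shown earlier in the thread:

```graphql
localFile {
  childImageSharp {
    fluid(maxWidth: 800, quality: 100) {
      ...GatsbyImageSharpFluid_withWebp_tracedSVG
    }
  }
}
```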

Why does v5 give better results than v6? I'm using NextJS, and the result always varies from 60 to 85.

+1

I've been working on a gatsby-image successor. It's not 100% there yet, still need to write a composable version so you can create your own gatsby-image flavor but it will fix most of the bad lighthouse scores.

You can already use it but it's not yet battle-tested.
https://github.com/wardpeet/gatsby-image-nextgen/tree/main/gatsby-image

You can install it by npm install --save @wardpeet/gatsby-image-nextgen. There is a compat layer for current users of gatsby-image.

Things that aren't supported yet:

  • object-fit needs to be done by css outside of the component
  • art-direction

Current gatsby-image demo:
site: https://wardpeet-using-gatsby-image.netlify.app/
pagespeed-insights: https://developers.google.com/speed/pagespeed/insights/?url=https%3A%2F%2Fwardpeet-using-gatsby-image.netlify.app%2F
webpagetest: https://webpagetest.org/result/200928_4M_0879160e38bb6c5489f85534de2dd197/

New gatsby-image-nextgen demo:
site: https://gatsby-image-nextgen.netlify.app/
pagespeed-insights: https://developers.google.com/speed/pagespeed/insights/?url=https%3A%2F%2Fgatsby-image-nextgen.netlify.app%2F
webpagetest: https://webpagetest.org/result/200928_C0_63317160bdfc71ece1a2057df8b08133/

@wardpeet Your pagespeed-insights link for the current demo goes to nextgen so they show the same scores.
Awesome work, btw. Really exciting to see progress.

Thank you, fixed!

This update has pointed out something to me that I didn't connect before: I'm not using gatsby-image but gatsby-background-image, which apparently doesn't use gatsby-image under the hood. I may have to rewrite that component with this new @wardpeet/gatsby-image-nextgen, if that's possible.

This article lists some additional tips https://www.freecodecamp.org/news/gatsby-perfect-lighthouse-score/ although I think many of them have already been mentioned in this thread...

@DannyHinshaw when the plugin is feature complete. I'll have a look at that plugin as well. I have to look at remark images too

I've published a new version of @wardpeet/gatsby-image-nextgen - 0.0.2.

  1. Minifies CSS & JS in the HTML.
  2. Only loads the necessary bits; when native image loading is enabled we only load about 2 KB (non-gzipped).
  3. Makes sure the placeholder is only called on first load; cached images load immediately.
  4. Fixes the blur-up animation by decoding async.

I'm wondering how many of you need a composable Image component where you can build your own wrapper, and how many of you actually use art direction and want it inside gatsby-image by default. My first idea was to disable that functionality in the default component but enable it with the composable gatsby-image, so you'd make your own image component to support it.

Latest demo: https://gatsby-image-nextgen.netlify.app/

@wardpeet This is great! I heavily rely on gatsby-image's build-in art-direction. But I guess an example / smooth upgrade path would be OK too!

I always received 99 on mobile; now a 76. Everything is great except my LCP: it's 7.0s and it says it's my hero image. Makes no sense. When I pull up my site on any mobile phone it's blazing fast. People marvel, ya know? It also tells me to serve my images as WebP or other formats, but I already use childImageSharp_withWebp, so I don't get it. I'm starting to think gatsby-image and gatsby-background-image aren't working with this new version of Lighthouse and PageSpeed Insights. My mind is boggled. I went and killed animations, resized all my images by half, and it didn't budge up a single point. I'm reading through this and don't see anything to help me... Oh, I just looked up... I think @wardpeet may be onto something 👍🏻

@davidpaulsson mind sharing an example of how you do this? Art direction is still possible with the new gatsby-image; you just have to do some manual steps.

Sure! I use it like this currently

const artDirection = [
  medium.childImageSharp.fluid,
  {
    ...large.childImageSharp.fluid,
    media: `(min-width: 1390px)`,
  },
];

return <Img fluid={artDirection} />

@wardpeet Hi Ward. Could blurha.sh be useful for gatsby image nextgen?

@wardpeet I used your gatsby-image-nextgen plugin and it did in-fact improve my LCP time (decreased it from ~5s to ~1.5s). Thank you for this!

However, we are also using art-direction, similar to how @davidpaulsson is using it, and I can't seem to get it to work like it does with gatsby-image.

Could you elaborate on the manual steps needed to make this possible with the nextgen plugin? Thanks again!

@Jimmydalecleveland Thanks Jimmy! Replacing GatsbyImageSharpFluid_withWebp with GatsbyImageSharpFluid_withWebp_tracedSVG dramatically improved the performance score of my new Gatsby website. I was getting no more than 54% and now with tracedSVG I'm getting over 80%. That's a huge improvement 💯

@abdullahe We've checked it out before and it has a dependency on canvas and node-canvas isn't super reliable. Or at least it wasn't in the past. I'll let you know if we consider it again :)

@guydumais make sure to set loading="eager"; it should change your score as well.
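For above-the-fold hero images, that tip looks roughly like this (illustrative JSX, written as a string so the sketch stays self-contained; `loading` is a real gatsby-image prop, the data path is an assumption):

```javascript
// Illustrative JSX for a hero image: loading="eager" opts the LCP candidate
// out of lazy loading so it starts downloading immediately.
const heroJsx = `
  <Img
    fluid={data.hero.childImageSharp.fluid}
    loading="eager"
    alt="Hero"
  />
`;
```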

@BenjaminSnoha & @davidpaulsson I'll share an example in a bit. The biggest issue with art direction is when the aspect ratio changes between images, like horizontal to vertical.

@wardpeet how would one use @wardpeet/gatsby-image-nextgen with gatsby-remark-images? Is it a case of simply pointing to it as a plugin in gatsby-config.js, or is it not possible until it gets merged into the gatsby monorepo?

While this might not have anything to do with Lighthouse, I am wondering why Gatsby page-data JSON files do not support content hashing, just like JS files.

I know that the content hashing for JS files is performed by webpack, but Gatsby could also do the same for page-data JSON files. This could save a lot of CDN network requests.

@teclone page-data.json files shouldn't be downloaded over and over if your caching is set up correctly. They'll load once and then the browser revalidates them. The problem with content hashing for page data (vs JS/CSS files) is just that there are so many of them. With content hashing, before you can load a file, you need to load a manifest that translates from x to x.LONG_HASH, as the hash isn't predictable. With JS/CSS, loading a manifest is reasonable, as there are only so many JS files even on very large sites. But with page data there's one file per page, so the manifest can grow quite large. We used to do this, and we found that on a 10k-page site the manifest was already ~500k compressed. https://github.com/gatsbyjs/gatsby/pull/13004

If you do see page-data.json files downloaded over and over — check you haven't disabled caching in your devtools & check your caching headers with https://www.npmjs.com/package/check-gatsby-caching
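The recommendation above boils down to two Cache-Control values. A minimal sketch as a lookup helper (the header strings follow Gatsby's caching docs; the function and the hash-detection regex are illustrative, not any Gatsby API):

```javascript
// Content-hashed assets can be cached forever; HTML and page-data.json must
// be revalidated on each request so clients pick up new deploys.
function cacheControlFor(path) {
  // Webpack emits JS/CSS bundles with a content hash in the filename.
  if (path.startsWith("/static/") || /-[0-9a-f]+\.(js|css)$/.test(path)) {
    return "public, max-age=31536000, immutable";
  }
  return "public, max-age=0, must-revalidate";
}
```

Files like sw.js that are not content-hashed fall into the must-revalidate branch, which is what you want.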

@KyleAMathews , thanks for clarifying that. That is a very sensible approach

@wardpeet is it true that image-nextgen does not support `fadeIn="false"` or `fadeIn={false}`?

It works a lot better though, went from ~80 to ~95

@MelleNi it does not, I don't think it's necessary but we're happy to consider it.

you can have a look at https://github.com/gatsbyjs/gatsby/discussions/27950 to see what we're shipping.

@wardpeet how would one use @wardpeet/gatsby-image-nextgen with gatsby-remark-images? Is it a case of simply pointing to it as a plugin in gatsby-config.js, or is it not possible until it gets merged into the gatsby monorepo?

We're going to move remark to this plugin as well :)

Great to hear about remark, as I saw a lot of improvement in speed.

However, I noticed I could not get 99-100 without disabling Gatsby's JavaScript (and re-integrating particular functionality manually). I can get the old gatsby-image to work without JavaScript, using fadeIn={false}, but not image-nextgen (maybe I'm missing something and it is absolutely possible?). With JavaScript disabled I never drop below 99, even without nextgen.

I understand that disabling javascript kind of defeats the idea of Gatsby, but oh well.

Interestingly, I saw an improvement in the mobile performance score (~70 to ~90) when I stopped using self-hosted fonts (fontsource) and switched to system fonts.

@wardpeet Any chance you can share an example of how to build a composable image component with art direction? I'm in the same boat as @BenjaminSnoha & @davidpaulsson, and I don't mind creating the composable component in my own project.

The biggest issue I see is dealing with media queries and SSR. Libraries such as fresnel work, but suffer in performance because they render all breakpoint variants, then clean up the DOM once the window object becomes available.
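One JS-free alternative is to let the browser evaluate the media queries via a `<picture>` element, which behaves identically in server-rendered and hydrated markup. Illustrative sketch (file paths and the breakpoint are assumptions):

```javascript
// The browser, not JavaScript, picks the matching <source>, so there is
// nothing to clean up after hydration.
const pictureHtml = `
  <picture>
    <source media="(min-width: 1390px)" srcset="/images/hero-large.webp" type="image/webp" />
    <source srcset="/images/hero-medium.webp" type="image/webp" />
    <img src="/images/hero-medium.jpg" alt="Hero" loading="lazy" />
  </picture>
`;
```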

On my website it seems that all pages created with createPage ship their source content (the markdown, and the React components embedded in the markdown) inside the heavy JavaScript that PageSpeed flags under "Remove unused JavaScript".

I've just launched Plaiceholder which can be used to create pure CSS blurred placeholders. Perhaps this would be of interest? More than happy to chat with any of the core team about options forwards

I made a Next.js version of the Jamify Blog Starter that scores nicely with the latest Lighthouse 6.4.0:

Lighthouse Score

You can inspect the demo site at next.jamify.org.

I am posting this here, NOT to suggest that you switch to Next.js. Rather, to learn how the same can be achieved with Gatsby. I think the key success factors are:

  • highly optimized images (Next achieves this with a lambda optimizer, but this can be done with gatsby-plugin-sharp too).
  • a simple placeholder svg (nice effects like blur will slow down the page).
  • use of intersection observer to only show images when in view (see next/image).
  • ensure lazy loading of images.
  • small bundle size.

If you want to discuss this further, you can reach me on twitter.
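The intersection-observer point above can be sketched as follows (an assumed helper, not next/image's actual code; it falls back to eager loading where the API is unavailable):

```javascript
// Swap a lightweight placeholder for the real image only once the element
// scrolls near the viewport.
function createLazyImage(img, src) {
  // Fall back to eager loading where IntersectionObserver is unavailable.
  if (typeof IntersectionObserver === "undefined") {
    img.src = src;
    return null;
  }
  const observer = new IntersectionObserver((entries, obs) => {
    for (const entry of entries) {
      if (entry.isIntersecting) {
        img.src = src;        // swap placeholder for the real image
        obs.unobserve(img);   // stop watching once loaded
      }
    }
  }, { rootMargin: "200px" }); // start loading slightly before it's visible
  observer.observe(img);
  return observer;
}
```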

@styxlab I get slightly lower results in web.dev/measure

image

but better results in later runs; definitely very good values in any case, and markedly different from the Gatsby version https://demo.jamify.org/

image


I will also mention that on one site I've swapped Gatsby for 11ty, and the performance has improved, but not dramatically

(gatsby)

image

(different design, essentially the same content, 11ty)

image


Or in a similar page, this time with an image

(gatsby)

image

(different design, essentially the same content, 11ty)

image

I will say that the 11ty developer experience is very nice (you can even, experimentally, use JSX and styled-components), but you lose the client-side JS story (you can insert it yourself and fight with webpack; that's the moment you miss Gatsby)

While I was using 11ty, I was also thinking how nice it would be if Gatsby offered some sort of 11ty-style render strategy, so one could deploy mixed React and React-less static pages in one framework...

Any updates on this? I don't have any images and I get 76 on performance because of Total Blocking Time.
