Three.js: Adopt some ES6 features

Created on 16 Apr 2015  ·  74 Comments  ·  Source: mrdoob/three.js

ES6 is coming, with browsers and tools now rapidly gaining support. I think THREE.js could benefit enormously from some of the new features brought by ES6.

For fun, and to encourage debate surrounding a possible migration of the project, I've opened this issue and put together some code examples.

Features demonstrated in the below examples:

  • Default parameters
  • Block-scoped let keyword
  • Iterators + For..Of
  • Classes
  • Arrow Functions
  • Generators
  • Modules

Class Example

import Object3D from '../core/Object3D';
import Geometry from '../core/Geometry';
import MeshBasicMaterial from '../materials/MeshBasicMaterial';

class Mesh extends Object3D {
    constructor(
        geometry = new Geometry(),
        material = new MeshBasicMaterial({color: Math.random() * 0xffffff})
    ) {
        super();

        this.geometry = geometry;
        this.material = material;

        this.updateMorphTargets();
    }
}

export default Mesh;

Traversal Generator Example

class Object3D {
    constructor() {
        ...
    }

    traverse(callback) {
        callback(this);

        for (let object of this.children) {
            object.traverse(callback);
        }
    }

    *traversalGenerator() {
        // yield isn't allowed inside an arrow-function callback,
        // so recurse with yield* instead
        yield this;

        for (let object of this.children) {
            yield* object.traversalGenerator();
        }
    }

    ...
}

Generator in use:

for (let object of scene.traversalGenerator()) {
    if (object.name === 'Blah') {
        break;
    }
}

Note: I haven't tested any of this

Suggestion

Most helpful comment

https://github.com/mrdoob/three.js/commit/1017a5432eede4487436d6d34807fda24b506088

Okay, I think we can start with let and const in src/.

@DefinitelyMaybe Is this something you would like to help with?

All 74 comments

Probably a bit early? Support isn't consistently great cross browser: http://kangax.github.io/compat-table/es6/

  • let not exposed in FF and not in 100% of situations in Chrome and IE.
  • generators not available in IE.
  • class is only in Chrome, IE12 (Spartan) technical preview for Win10 and FF39, which is two above the current version.
  • import isn't available anywhere yet.
  • super is only available in IE12, not IE11, Chrome or FF.
  • arrow functions not in IE11 or Chrome.

Also both generators and for-of are optimization killers for Chrome

Imo it's also too early just for fancier-looking syntax. It's debatable whether anything on the list would give noticeable performance gains or code readability benefits. Maybe someone who knows the codebase and ES6 features can comment on that.

I'm a little late to the ES6 party. This looks like a different language to me. I regularly contribute to Three.js but if it looked like the snippets above, I don't know if I would continue. I just don't have the patience to learn a different-looking and different-functioning version of JavaScript, that basically does the same things as 'new function()' and 'object.prototype', but possibly slower.

Is this what the web community has been demanding of the ES6 language committee? I've been coding for 20 years and the word 'class' does not appear anywhere in my projects (nor will it ever if I have my way). Frankly, JavaScript is starting to look like JAVA...(script). :-\

I'm with @benaadams and @jonnenauha ; too early, and it might slow the code down if Chrome and Firefox don't heavily optimize this new version of the language like they are already optimizing JavaScript under the hood (V8, etc...).

:+1:

@erichlof Well, I'm actually waiting for ES6 class and module support the most. I'm currently using AMD for my apps. The main JS project I work on is ~10k lines with hundreds of "classes" and AMD is a life saver, both during development and in producing builds. Imo big or even smaller projects need to have some kind of module system and a way for things to declare what they depend on. It just gets too hairy to manage once you have a complex project structure. It should not be left for the programmer to figure out in what order to put 25x <script> tags. This is just silly once you have the number of files three.js does. Most projects resolve this by building one big js file (like three.js) and then there is a random extras/helpers folder where you can include other stuff. This is somewhat limiting and I have to include a ~500kb (minified) build to get ~everything under the sun.

At one point I think three added AMD support, but that's just a small code block that detects if AMD is used and calls a function instead of effectively declaring window.THREE. I don't know how hard it is currently to make custom builds that would drop functionality that I don't need, add things I want outside of the core, and optimize it to a single working file that loads things in order.

You might not have "class" in your projects but if you use object.prototype (like three.js does heavily) it's effectively the same thing, would you agree? We like putting objects together and giving them a purpose. Other code then wants to use said objects and a clean declarative way to import them.

All I'm saying is that ES6 classes and modules would be a nice addition to three.js too, to give it some more structure and modularity for builds. It's wayyyy too early because afaik no browsers support e.g. modules yet. Others might say we should just start using them, use babel etc. and output ES5-compliant builds; I'm not sure if that's a good idea, but who knows. This makes developing harder, you have to auto-start the build when you save files, and debugging might get more complex too. For these things alone I tend to think these transpilers are not worth it atm.

P.S. This was not a rant against three.js, just saying there might be benefits in ES6 for projects like three.js once browser support is there :) and mostly it's more syntactic sugar than a whole new language. It will just have more stuff.

Another feature that might be beneficial for three.js is WeakMap and possibly WeakSet. I'm not sure about the implications, but it seems like this could be used to track things without keeping the GC from doing its job if the "owner" no longer holds the ref. There is a lot of state like this in three.js and using it internally might make a difference. It seems to me that this is a bit similar to shared/weak pointers in C++ for declaring ownership.
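
A minimal sketch of the idea (the map and the per-object state here are hypothetical, not actual three.js internals):

const rendererState = new WeakMap();

function getState( object ) {

    let state = rendererState.get( object );

    if ( state === undefined ) {

        state = { buffersUploaded: false }; // hypothetical per-object renderer state
        rendererState.set( object, state );

    }

    return state;

}

// once the application drops its last reference to the object,
// the WeakMap entry becomes collectable too - no manual cleanup needed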

A good candidate might be things like https://github.com/mrdoob/three.js/blob/master/src/lights/SpotLight.js#L12 where this could be a weak ref and would get automatically cleared if the target was removed from the scene and collected. But I guess you couldn't depend on the collection anyway, and would still need to null that out once the target is removed from the scene.

Who knows, I'll wait for experts to write some blog posts how all these features might benefit apps in general :)

Hi @jonnenauha ,

Yes I agree that new features like 'import' and 'export' will be useful, especially where modularity is concerned. I believe that's the direction that @coballast and @kumavis are trying to go with making THREE.js more modular and manageable for custom builds and code-base maintainability. I'm all for modules and re-use of objects and functions, especially when we have a large library format such as THREE.js.

However I don't think that we need the additional syntactic sugar of classes, super, let, for..of, arrow functions, etc., when we basically get the same functionality in JavaScript now that has been in use for decades and has a lot of traction already. I may be antiquated but I'm with Douglas Crockford when he says JavaScript is already an object-oriented language (meaning functions = objects = first-class citizens), but not a 'class'ical one - so we shouldn't be concerned with shoehorning classes into JavaScript. It was designed with a different approach - an approach I was taken aback by at first (coming from C/C++ programming in the 1990's), but one I agree with more and more every time I sit down to code or try to solve code architecture problems.

Instead of syntactic changes to THREE.js I would rather see migration towards features like the new SIMD programming interface. I think ALL of THREE's Math code (especially Vector3 and Matrix4) could greatly benefit from this. Here's a video link (check out 22:51 in the time code) :
https://youtu.be/CbMXkbqQBcQ?t=1371
When this feature lands in major browsers, every THREE.js user would see a noticeable speedup in their framerate as a result.

Sorry if my earlier post sounded like a rant, I just like THREE.js how it looks now - I can easily select any random part of it and know what's going on, where it came from, and what it uses. :)

@erichlof @jonnenauha I feel compelled to point out that es6 classes are just sugar and they are internally using the prototype mechanism to implement everything at runtime.

I am fairly optimistic that es6 features won't impact computational performance. es6 modules might affect load time when those are finally implemented by engines, because you're fetching a bunch of little files rather than one big file. HTTP/2 might make that a non-issue though. Either way, if we use es6 modules we can use browserify to build a bundle like we always do.

I personally am for using es6 syntax in the codebase as much as possible, because it would make the code more concise and reduce errors. I think many people tend to abuse the prototype chain and use it incorrectly. I don't have a good example though.

I also think sitting and waiting for es6 to be implemented in engines is the wrong idea. Transpilers are available right now that produce good results that run in browsers right now. I think the benefits of doing this are enormous, so we should be impatient and start working on this immediately.

I don't think manually moving code over is a good idea. As Doug McIlroy, the inventor of unix pipes and generally a demigod said:
"Use tools in preference to unskilled help to lighten a programming task, even if you have to detour to build the tools and expect to throw some of them out after you've finished using them."

I would encourage anyone interested in es6 to jump in and help contribute to this repo: https://github.com/5to6/5to6

I disagree on converting the whole library to a different subset of javascript. As always we should discuss what is possible and what the pro and cons are. Because performance is going to differ between versions.

For instance, weakmaps are something that would be a huge gain in the renderers to handle the states of objects. Its biggest disadvantages were a weak polyfill, very little browser support and unknown performance characteristics. (It has been a while since I investigated this, so chances are that this has changed.)

And we shouldn't only look at es6. For instance, asm.js would be great as the technology that runs a software renderer. For more info, http://stackoverflow.com/questions/18427810/three-and-asmjs/18439786#18439786.

And we shouldn't forget that most of the contributors are mainly contributing because javascript is familiar and es6 isn't yet for most.

@gero3 Those are good points that I didn't think of. If we are talking micro optimization then I 100% agree with you. I wouldn't want to take advantage of those features without browser support.

Largely what I was talking about above was using syntactic sugar that has identical semantics to existing es5 features, and so could be built with transpilers without affecting performance too much.

UPDATE:
Maybe nobody cares, but I changed my mind. I believe we should not use classes. I still think es6 modules are a good idea.

I'm totally in favour of classes. I get all the wonderful things the object prototype in JavaScript can do. However, in THREE.js (and many other modern OS projects) we're using prototypal inheritance to simulate classes, so why not have the nice syntactics to go along with it, and get rid of the hacky stuff like:

THREE.Object3D.call( this );

and

THREE.Scene.prototype = Object.create( THREE.Object3D.prototype );
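
With ES6 class syntax that wiring could look something like this (a minimal sketch, with Scene's actual properties omitted):

import Object3D from '../core/Object3D';

class Scene extends Object3D { // replaces Scene.prototype = Object.create( Object3D.prototype )

    constructor() {

        super(); // replaces THREE.Object3D.call( this )

        this.type = 'Scene';

    }

}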

As soon as classes are supported in stable Chrome and Firefox I wouldn't mind considering updating the code 😊

@mrdoob Classes are supported in stable Chrome 42 today and Firefox 39 shipping in June FYI 😊

Safari on iOS seems to be the one dragging nowadays... 😣

Classes are amazing and so are other ES7 features. We are using them in one of our projects, but we had to introduce a cross compiler (babel.js) because we need to run on all browsers -- Chrome, Firefox, IE and Safari.

There is a browserify transform to run babel.js (babelify), so it would work nicely with my efforts.

Just out of curiosity, has work started on some of these? :)

No, since we still don't know the speed implications of these.

My understanding is that transpiled ES6 is mostly not performant and therefore not for production until implemented in the browser. With that said, newer projects or modules should adopt these conveniences today, but performance must be factored into this decision.

I agree with @erichlof imo the most useful implementation would be SIMD (https://github.com/tc39/ecmascript_simd), even with using the polyfill. Seems like users would benefit most from that.

Where would SIMD fit in ThreeJS? I think ThreeJS already offloads all significant calculation to the GPU. While I can sort of understand using it on the Vector classes, there is very little math that is actually done in ThreeJS, rather mostly those vectors are just transported to the GPU.

I speak from experience as I implemented SSE2/SSE4 extensions before and I found that the benefits were fleeting in the majority of cases -- the only real cases where it was a benefit were when I had large arrays on which I wanted to do the same operation.

Hi @bhouston ,
Does the same hold true for Matrix operations? When I was thinking SIMD, I was thinking that 4 X 4 matrix operations would benefit the most. And as we all know, matrices are used every animation frame in Three.js on the JavaScript/CPU side, no matter the complexity of the app.

If the Babylon.js guys don't mind, here's a hint of how to get all of this started:
https://github.com/BabylonJS/Babylon.js/blob/master/src/Math/babylon.math.js#L2030-L2093

I think that a single matrix multiply, if you have to convert the format, is likely a loser. If you have your matrices always in a SIMD format, then it may be a benefit.

But even so, matrix multiply often isn't optimal because you have to multiply columns by rows, which requires a reordering operation.

I've found that most efforts at SIMD optimization (excluding large arrays with homogeneous operations) are not worth the effort, but I am okay with others doing benchmarks to find out.

The resulting code is hard to debug and maintain as well.

One strategy that can work is to have all vectors and matrices use a SIMD compatible memory layout as their native representation -- which is what Quake 3 did I believe in its math library. But then you have to use 4 floats for vec2 and vec3 types for efficiency, but then that becomes problematic when you want to upload to the GPU because now you have extra floats in the way.

I see what you mean, thank you for your insight and expertise. Being impressed by the SIMD.js presentation a while back, and sticking it into three.js and maintaining it are two different things I suppose. It would be interesting like you said to have some performance comparisons done. Maybe physics libraries like Cannon.js and Oimo.js , which are used in conjunction with three.js would get more benefits from SIMD?

@bhouston Ahh ok that does make some sense, some benchmarks would be quite interesting though.

@erichlof if you are interested, I have started a branch, https://github.com/Globegitter/three.js/tree/include-SIMD, where I started to replace Vector3 with SIMD. It is still a heavy WIP and my time is limited, so we'll see how far I get. Also, using the latest Firefox Nightly you can run this code without a polyfill (where the native implementation is apparently ~20 times faster compared to the polyfill: https://blog.mozilla.org/javascript/2015/03/10/state-of-simd-js-performance-in-firefox/).

@erichlof @jonnenauha I feel compelled to point out that es6 classes are just sugar and they are internally using the prototype mechanism to implement everything at runtime.
I am fairly optimistic that es6 features won't impact computational performance.

You might be a little quick to conclude that: there are indeed different code paths for ES6 in every JS engine in the (fairly complex) implementation of JS objects.

es6 modules might affect load time when those are finally implemented by engines, because you're fetching a bunch of little files rather than one big file. HTTP/2 might make that a non-issue though.

Clients may still want to use whole-app JS-level compression to cut network bandwidth and to protect their intellectual property.

Making the Closure Compiler understand Three.js would make it possible to compile to ES6 and to see when the time is right to switch over, plus many additional benefits. I think this is where efforts of this kind should be going for now...

But even so, [SIMD] Matrix multiply often isn't optimal because you have to multiply columns by rows, which require a reordering operation.

SIMD instruction sets often have a "multiply by scalar and add" instruction, to implement matrix multiplication like this:

for i : 0..3:
    dst.col[i] = lhs.col[0] * rhs.col[i][0] // multiply vector by scalar
    for j : 1..3:
        dst.col[i] += lhs.col[j] * rhs.col[i][j] // multiply vector by scalar & add

Matrix multiplication is actually just applying a transform to the column vectors of the right hand side operand. Now, transforms can either be looked-at as a bunch of dot products (the confusing way which we usually use with pen and paper), or as a linear combination of the axis vectors of the destination space = column vectors of the left hand side operand.

@Globegitter Wow that's an awesome start! I am going to get Firefox Nightly so I can experiment with the new branch as well!

@Globegitter I love when someone just goes ahead and does something when they believe in it. Code settles divergent viewpoints faster than discussion does.

where the native implementation is apparently ~20 times faster compared to the polyfill

Would be interested to see also how much slower the polyfill is compared to non-SIMD

I am going to get Firefox Nightly so I can experiment with the new branch as well!

Should also be able to try it in Edge by switching experimental and asm.js on in about:flags

@Globegitter I think your editor is changing the whitespace leading to really nasty diffs: https://github.com/Globegitter/three.js/commit/d835ca3a22eed4ed4603534773ae55c29d5a8026

I notice you are making the SIMD type as a side car:

https://github.com/Globegitter/three.js/commit/8ed9c1d351a3b0587a1f05051922d271d79f642d

May I suggest that you just change Vector3 x, y, z into getters/setters and only store a float32x4? I think such an approach may be a lot easier to implement with fewer changes.

I'm not sure of the best way to define properties but mrdoob does something like that here:

https://github.com/mrdoob/three.js/blob/5c7e0df9b100ba40cdcaaf530196290e16c34858/examples/js/wip/proxies/ProxyVector4.js#L18
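
Something like this, perhaps (a rough sketch assuming the SIMD.js polyfill API; the _simd backing property is just an illustrative name):

function Vector3( x, y, z ) {

    // a single Float32x4 as the backing store (4th lane unused)
    this._simd = SIMD.Float32x4( x || 0, y || 0, z || 0, 0 );

}

Object.defineProperty( Vector3.prototype, 'x', {
    get: function () { return SIMD.Float32x4.extractLane( this._simd, 0 ); },
    set: function ( value ) { this._simd = SIMD.Float32x4.replaceLane( this._simd, 0, value ); }
} );

// same pattern for 'y' (lane 1) and 'z' (lane 2)

Vector3.prototype.add = function ( v ) {

    this._simd = SIMD.Float32x4.add( this._simd, v._simd );
    return this;

};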

Having the SIMD as the main storage type behind a math type is likely the most efficient, no extra conversions required. Here is the Bullet Physics SSE math library if you need a guide to any standard vector/matrix operations:

https://code.google.com/p/bullet/source/browse/trunk/src/vectormath/sse/

@bhouston, thanks for these notes. I was kind of just getting straight in there to see how far I'd get in a few hours without having worked much with three and SIMD before (we use it in most of our projects). So this feedback from someone who knows three is really appreciated. And yeah gotta turn that off in my editor.

@tschw thanks for the note on the matrix/vector math! You are right that is better.

@Globegitter Here is a better example of setter/getter on an object's prototype: https://github.com/mrdoob/three.js/blob/master/src/core/Object3D.js#L83

Some cents on working on ES2015 (mainly in node)

  • there are places where JavaScript engines (e.g. V8) need to play catch-up optimizing ES6 features
  • from my experience code like let x = 1, y = 2 would deoptimize v8 although I would expect v8 to be supporting it eventually
  • code transpiled to ES5 can run slower than native ES6 code (which is why I prefer to use only the ES6 features supported in Node >= 4, which is almost everything except the import/export system)
  • Maps and Sets are performance wins
  • Classes are nice
  • Using Babel can be a pain to your workflow, probably not worth the effort if using ES6 purely for syntax sugar

Half a year ago I converted some Three.js functions (4x4 matrix multiplication, Vector4, etc.) to SIMD and you can try them out. At some point it worked as a bookmarklet but now it might require a refresh to work with the latest Three version

  • Firefox Nightly supports native SIMD out of the box
  • Chrome with flag --js-flags="--harmony-simd" supports JS polyfill of SIMD so it's going to be slower than non-simd version. At least you can check if it works and experiment!

350% performance gain is a LIE :)

I also ported part of Vector3 but it's commented.

https://github.com/DVLP/three.turbo.js

Edit: I have a more up to date version somewhere on my hard drive so I'll update this project soon

Awesome, watching and starring, a +1 to that.
What would be great is a system that checks performance and enables or disables such features depending on said performance.

What would be great is a system that checks performance and enables or disables such features depending on said performance.

I guess one would have two Vector3 definitions within the Vector3 module and you would conditionally return one or the other depending on whether SIMD is native? I guess that would work, but it would increase download size. Maybe you could just have a few functions that switch depending on whether SIMD is available -- probably SIMD is most important only for a small subset of math functions. If this was small in terms of total code, it may be worthwhile to put in now. But it needs to be optional so that it doesn't become slow when SIMD isn't available.
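
For example, the module could pick an implementation when it is evaluated, roughly like this (a rough sketch; Vector3SIMD and Vector3Plain are hypothetical names, and note that a plain typeof check can't distinguish a native implementation from the polyfill):

// sketch: feature-detect SIMD once, then export whichever implementation applies
var hasSIMD = typeof SIMD !== 'undefined' && typeof SIMD.Float32x4 === 'function';

var Vector3 = hasSIMD ? Vector3SIMD : Vector3Plain; // both implementations hypothetical

export default Vector3;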

We could start by doing a SIMD version of Matrix4's multiplyMatrices() as it's currently the most called method in complex scenes.

https://github.com/mrdoob/three.js/blob/dev/src/math/Matrix4.js#L383-L419


@mrdoob look in here it's done.
https://github.com/DVLP/three.turbo.js/blob/master/src/three.turbo.js

Try as a bookmarklet:
javascript:(function(){var script=document.createElement('script');script.src='//rawgit.com/DVLP/three.turbo.js/master/src/three.turbo.js';document.head.appendChild(script);})()

The piece of code responsible

THREE.Matrix4.prototype.multiplyMatrices = function(a, b) {
    var ae = a.elements,
        be = b.elements,
        tb = this.elements,
        // columns of a, loaded once
        arr1 = SIMD.Float32x4.load(ae, 0),
        arr3 = SIMD.Float32x4.load(ae, 4),
        arr5 = SIMD.Float32x4.load(ae, 8),
        arr7 = SIMD.Float32x4.load(ae, 12),
        arr2, arr4, arr6, arr8, res;

    // calculated version: result column i = a * (column i of b)
    for (var i = 0; i < 4; i++) {
        arr2 = SIMD.Float32x4.splat(be[i * 4]);
        arr4 = SIMD.Float32x4.splat(be[i * 4 + 1]);
        arr6 = SIMD.Float32x4.splat(be[i * 4 + 2]);
        arr8 = SIMD.Float32x4.splat(be[i * 4 + 3]);
        res = SIMD.Float32x4.add(SIMD.Float32x4.add(SIMD.Float32x4.mul(arr1, arr2), SIMD.Float32x4.mul(arr3, arr4)), SIMD.Float32x4.add(SIMD.Float32x4.mul(arr5, arr6), SIMD.Float32x4.mul(arr7, arr8)));
        SIMD.Float32x4.store(tb, i * 4, res);
    }

    return this; // keep the chaining behaviour of the stock multiplyMatrices()
}

When I was testing this half a year ago I noticed that the version with the unrolled for loop was marginally faster. That's why in my little library the loop is commented out and a hardcoded version is present.

There are more functions in "inverse" branch
https://github.com/DVLP/three.turbo.js/blob/inverse/src/three.turbo.js

What's the performance difference?

https://jsfiddle.net/tk6zx5dm/6/

It depends. In Nightly when the number of calculations is small (<1000) then the result is 3 times slower. When the number of calculations is higher than 10000 then the speed is ~40% faster

In Chrome with --js-flags="--harmony-simd" flag enabled there's no real SIMD, but a JavaScript polyfill so obviously the speed is many times slower for now.

A good place to test would be the Crosswalk project (based on Chromium). They have real SIMD for Android, so it could be an interesting experiment to build a project with the code from this JSFiddle to see what the result would be.

You might want to include the SIMD polyfill anyway on your example pages. Just a bit more convenient for people to try them out. Log a line in the console or put some text on screen when no native implementation is available (the polyfill probably leaves some hints that it's enabled).

I can't imagine why Chrome would just load the JS SIMD polyfill with --js-flags="--harmony-simd", that just makes no sense when it can already be done in user land?! What is the benefit of this? Maybe they will start putting stuff in gradually. Where did you read that this is what's actually happening with the flag, any good links? Looks to be "in development" here https://www.chromestatus.com/features/5757910432874496

Edit: To the actual topic: I imagine it's very hard to put this stuff in three.js if the perf is all over the place from a few objects to a lot of objects. If all the implementations can give a perf improvement with 1 to N objects, it should be feature detected and used where it makes sense. If the native impls are inconsistent, I don't see the whole thing going anywhere, so why bother working on it; spend the dev hours on something more productive.

three can jump on this when it matures, or is it the case that three.js support could drive browser implementations to do better? I would think Unreal Engine and other simulation/math use cases would be the driving force for browser vendors, not particularly how many libs want to use the feature. But who knows.

can't imagine why Chrome would just load the js simd polyfill with --js-flags="--harmony-simd"

I think it is a rather common practice that browsers do it this way, so that users can start to test things out. For example, if I understand it correctly, this is how WASM will be introduced in browsers initially.

All arguments that support the inevitability that a 'centurion' is required: real-time monitoring of performance that can enable or disable features and determine optimal performance. So even shadow map density should change based on camera distance, etc. Anyway, I know the consensus has decided this would be a third-party tool and not needed. Still, I just want to keep that point in the conscious if not subconscious mind.
Maybe you're just checking for a polyfill and disabling SIMD, but again FPS tells all. Priorities of what to enable or disable are likely best as a user/app-specific preference. Thanks for listening! I'm looking forward to better, faster GL in the near future. Let's be ready to outshine the rest.

It seems that the process of loading data into SIMD might be creating overhead which defeats the benefit of using SIMD in Nightly. I'm very curious about performance in Chrome but the latest Chromium with real SIMD is here http://peterjensen.github.io/idf2014-simd/
It's old.

Update:
now SIMD works also in Edge. You can enable it by going to about:flags and checking "Enable experimental JavaScript features" (restart Edge)
Not sure if this is real SIMD or a polyfill though. This test renders SIMD ~40% slower on my machine :/ https://jsfiddle.net/tk6zx5dm/6/

A variation of the test with the SIMD objects cached in the Matrix4 object; not sure if this will ever be useful:
https://jsfiddle.net/tk6zx5dm/15/
What's interesting is that code with SIMD objects cached is _sometimes_ faster even in Chrome which is using polyfill SIMD(...? weird)

THREE.Matrix4.prototype.multiplyMatrices = function(a, b) {
  var i = 4;
  while(i--) {
    SIMD.Float32x4.store(this.elements, i * 4, 
      SIMD.Float32x4.add(
        SIMD.Float32x4.add(
          SIMD.Float32x4.mul(
            a.cacheSIMDRow1,
            b.cacheSIMDSplat[i * 4]
          ),
          SIMD.Float32x4.mul(
            a.cacheSIMDRow2,
            b.cacheSIMDSplat[i * 4 + 1]
          )
        ),
        SIMD.Float32x4.add(
          SIMD.Float32x4.mul(
            a.cacheSIMDRow3,
            b.cacheSIMDSplat[i * 4 + 2]
          ), 
          SIMD.Float32x4.mul(
            a.cacheSIMDRow4,
            b.cacheSIMDSplat[i * 4 + 3]
          )
        )
      )
    );
  }

  return this; // keep the chaining behaviour of the stock implementation
}

Caching probably has to be executed on every updateMatrix(), which may render the entire solution slower in the end

THREE.Matrix4.prototype.updateSIMDCache = function() {
  this.cacheSIMDRow1 = SIMD.Float32x4.load(this.elements, 0);
  this.cacheSIMDRow2 = SIMD.Float32x4.load(this.elements, 4);
  this.cacheSIMDRow3 = SIMD.Float32x4.load(this.elements, 8);
  this.cacheSIMDRow4 = SIMD.Float32x4.load(this.elements, 12);

  if(!this.cacheSIMDSplat) {
    this.cacheSIMDSplat = [];
  }
  for(var i = 0; i < 16; i++) {
    // assign by index so repeated calls overwrite the cache instead of growing the array
    this.cacheSIMDSplat[i] = SIMD.Float32x4.splat(this.elements[i]);
  }
};

I'm curious has ES6 style been revisited? The entire code is unusable live right now without running a concatenator so why not also run babel and start using ES6 stuff where appropriate?

The entire code is unusable live right now without running a concatenator so why not also run babel and start using ES6 stuff where appropriate?

Mainly due to lack of tests on the performance impact of babel produced code. Also, I heard V8 produces faster code when using var instead of let/const.

FYI: I recently started trying bublé which "limits itself to ES features that can be compiled to compact, performant ES5".

The bottom of this link helps explain var vs let in loops
http://stackoverflow.com/questions/21467642/is-there-a-performance-difference-between-let-and-var
Personally I'm waiting for the non-transpiler solutions to hit the browser, like 'await', the sooner the better. For now maybe it should be written that way and run through (being horribly disfigured by) a transpiler, but if it works in Chrome?! So testing for other browsers? I say 'Let them eat cake... or use Chrome.' (my favorite expression).
I think 'let' should be used in situations where it's been proven via tests to be the better choice.
I suppose it's that 'var' is hoisted to avoid block scoping, but you may want that scoping, and then 'let' is best. I think/hope 'let' should keep your memory footprint smaller in certain situations too.
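
Independent of performance, the semantic difference between var and let in loops is the per-iteration binding:

var callbacks = [];
for (var i = 0; i < 3; i++) {
    callbacks.push(function () { return i; });
}
console.log(callbacks.map(function (cb) { return cb(); })); // [3, 3, 3] - one shared binding

var callbacks2 = [];
for (let j = 0; j < 3; j++) {
    callbacks2.push(function () { return j; });
}
console.log(callbacks2.map(function (cb) { return cb(); })); // [0, 1, 2] - new binding per iteration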

Transpiled code will never be as fast as code optimised by hand. It's a shame that JSPerf is down. I used to visit it more often than Google.
Edit: JSPerf.com is not down! I just assumed it was dead forever after it wasn't working for a year.
let 3x slower than var in my Chrome Canary: https://jsperf.com/let-vs-var-performance/14
In Firefox and Edge there's no difference in speed, but Chrome is the most important.
Can someone test in Safari?

Classes seem great and modern! Does anyone know if this is already on the schedule?

@Rubinhuang9239 I have not seen anyone giving it a go.

let 3x slower than var in my Chrome Canary: https://jsperf.com/let-vs-var-performance/14

For what it's worth, let and const are now marginally faster than var for me on Chrome 66, Firefox 59 and Edge 42, using that test.

Switching over to classes should be very straightforward, I would think - the main bulk of the work was the adoption of rollup, which was done quite some time ago. I wouldn't be surprised if a hero could implement classes in ThreeJS in a couple of hours.

Well, there are a lot of classes, so maybe it will take like 8 hours or so if you did it manually.

Maybe start small and convert the math classes first and do a PR. If that works, then move on to the others.

@looeee it's been over a year, so it's no surprise ES6 managed to catch up on performance

2019 is the year to finally start to adopt some ES6 code in the examples/docs, right @mrdoob? 😄

ES6 has been the standard for quite some time now, new developers learn ES6. Also we can stop supporting IE11 in the examples as you guys discussed in #16220, which developers look at three.js examples using IE11? 😅

I think the most needed features in order to simplify the code for newcomers are classes and template strings, while not forgetting the now-default const/let.

I can contribute if we decide to start.

Step by step. Let's start by updating the examples to use three.module.js for now.

Step by step. Let's start by updating the examples to use three.module.js for now.

This step has been completed for a while now. Maybe it's time to officially move on to the next step?

Two candidates:

  1. const/let
  2. classes

Since classes have already been used in the box geometries for a while now, I vote for doing that first. Or we could do both at the same time.

Related: #11552, #18863

If I understand right, the issue is that we can't convert anything to ES6 classes if it's extended by examples, until the examples are converted too. And that might mean waiting until examples/js is gone? Unless we can confirm that the modularize script will support classes, to convert the examples/js files at the same time as their parent classes.

As I understand it, examples/js is kind of all the excess features that are cool nonetheless, but that, if all combined into the main src/, would bloat it out of proportion. Inherent in that is the question 'Do we provide examples/scripts of commonly created x, y, z?'. I hazard a guess that three.js's answer is mostly yes, but as educational material within the website. Removing examples/js seems out of the question at that point. Though perhaps I've misunderstood how examples/js is referenced/used 🤔 let me know.

Leaving examples/js as only website/educational material misses part of the point of what that folder could be, i.e. community scripts/content/projects/other things that people would like to share, but I'm not aware of anywhere that's doubling down on that.

I digress.

until the examples are converted too.

I like the sound of this as our next best interim step.

@DefinitelyMaybe we are not removing the functionality in examples/js, there are two directories worth noting:

  • examples/js/*
  • examples/jsm/*

The first contains legacy files, the latter contains ES modules generated from those files, with the same functionality. We will eventually remove the first. The script that does the conversion does not support ES classes currently. So until that is removed, files in examples/js cannot be converted to ES classes. Some of them extend files in src/, and you can't extend ES classes without using ES classes, so this is a blocking dependency.
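
Roughly, the blocking pattern looks like this (a simplified sketch, not the actual loader code):

class Loader {} // once this is an ES2015 class in src/ ...

function ColladaLoader( manager ) {

    // ... this throws: TypeError: Class constructor Loader cannot be invoked without 'new'
    Loader.call( this, manager );

}

ColladaLoader.prototype = Object.create( Loader.prototype );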

ah, was confused by the previous comment

And that might mean waiting until examples/js is gone?

makes sense now.

modularize.js reminds me of my initial project that brought me here. Converter. I saw comments here about moving to ES6 classes so I thought I'd just jump in here instead.

So if examples/js extends src in some way both need to be converted to ES6 classes at the same time
or...
work on modularize till it generates classes/es6?

we can't convert anything to ES6 classes if it's extended by examples

There's still a lot of stuff in the core that's not extended in the examples, why don't we start with that?

That sounds fine to me.

@Mugen87 was there anything else blocking the ES class change, or just that?

There's still a lot of stuff in the core that's not extended in the examples, why don't we start with that?

List of scripts not extended by examples.

edit: list has been updated!

The blockers are sections like these:

https://github.com/mrdoob/three.js/blob/6865b8e6367d0ce07acbacfae6663c4cce3ac21e/examples/js/loaders/ColladaLoader.js#L6-L12

https://github.com/mrdoob/three.js/blob/6865b8e6367d0ce07acbacfae6663c4cce3ac21e/examples/js/cameras/CinematicCamera.js#L38-L39

https://github.com/mrdoob/three.js/blob/6865b8e6367d0ce07acbacfae6663c4cce3ac21e/examples/js/controls/OrbitControls.js#L1149-L1150

Use of THREE.<class>.call and Object.create( THREE.<class>.prototype ) would be the most likely patterns. Which would mean that Loader, EventDispatcher, and PerspectiveCamera (among probably many others) cannot yet be converted to classes.

https://github.com/mrdoob/three.js/commit/1017a5432eede4487436d6d34807fda24b506088

Okay, I think we can start with let and const in src/.

@DefinitelyMaybe Is this something you would like to help with?

🎉 💯 hellz yea!

I just wanted to raise a concern that if you are hot code loading, my fear is const will cause problems. If it is true in all JavaScript that objects totally overwrite const, then no problem. Alas, if they don't, const should never be used. I just merge object structures with code functions assigned to keys of those object trees (tree-like structures), so I avoid needing to use let or const for the most part.
Anyway, it is something to think about, and I basically believe const should never be used and is really unnecessary, mainly because of my concerns with hot code loading.
Still, I think they are not as constant as you think, so maybe someone who understands that better can explain why it's a meaningless concern with import reloading or whatever. Thanks for your consideration and input.
Let there be 'let'! Finally.

Crockford said var was a big mistake but could not be changed, so 'let' was created. var is considered flawed, and 'let' should really have been a fix for var, but that would have broken a few poorly coded edge cases left lurking in the wild. Strict mode and hoisting are issues around this topic.

@MasterJames there are absolutely no issues with hot reloading when using const.

I work in a React environment and hot reloading is the norm there, as well as const and let, which are the standard nowadays.

I agree, this won't interfere with hot reloading. Perhaps you've mistaken const for making objects immutable with Object.freeze? We aren't planning to do that.
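
For reference, const only prevents re-binding the name; it doesn't make the value immutable:

const settings = { antialias: true };

settings.antialias = false; // fine: the object itself can still be mutated

// settings = {}; // TypeError: Assignment to constant variable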
