The prototype is your friend (if you care about perf)
Update: Because this post has gotten a bunch of views already, and I definitely don't want to spread any misinformation here, I've updated my conclusion (jump to the bottom if you've already read the protip).
The this keyword in JavaScript can be really confusing. That I won't dispute. And so a lot of developers I really respect actually advocate for avoiding it altogether, which is definitely possible. You can sort of ignore the existence of JavaScript's prototype system and follow your own object-building approach:
function createValueObject(value) {
  return {
    get: function() { return value; }
  };
}
Or you can embrace the prototype system:
function ValueObject(value) {
  this.value = value;
}
ValueObject.prototype.get = function() {
  return this.value;
};
Obviously this is a contrived example; it's just meant to concisely illustrate the two approaches I'm talking about.
The readability of these two examples is a debatable issue. There are certainly many valid reasons for preferring the former, including its avoidance of this. However, if performance is a serious concern, you should consider going with the second approach. Using a prototype to define the methods of an object is faster pretty much across the board, though how much faster depends on the browser.
What is the big difference here?
A Reddit user pointed out that my closing paragraph (below) implies the big difference between these two approaches has to do with the efficiency of method invocation. Re-reading the paragraph, I have to agree that it does seem like that's what I'm saying. But that is wrong. If you compare just the method invocations in both examples (calling both the factory method and the constructor beforehand), you'll see that they show basically the same performance.
The real difference between the factory approach and the prototype approach is that using a prototype speeds up object creation a lot. Which, when you think about it, is really not so surprising: it's simply the difference between defining methods one time and re-defining them over and over.
The connection I was trying to make with the link to Eric Lippert's post is really based on the second installment in the series, in which he implements virtual methods by creating a delegate field for every method of a class. In the final installment he makes this much more efficient by using a vtable instead.
To be fair, the connection, even now that I've clarified it, isn't perfect. In particular, what makes a vtable so preferable to delegate fields is that it is much more memory efficient, and memory efficiency is obviously a different animal from execution speed (which is what jsPerf directly measures). But the comparison was apples to oranges to begin with, since C#, as a statically compiled language, does not give you the ability to dynamically define methods in quite the same way JavaScript does (lambdas seem like that, but in reality they get compiled to generated classes that lift local variables into instance fields... but that's a whole other discussion!).
My original conclusion
The why is a question for another post. I can't speak with authority on that, but I have a strong suspicion it relates to JS engines' use of hidden classes internally and the efficiency of vtables. (To get an idea of what I'm talking about, I recommend reading Eric Lippert's series on implementing the virtual method pattern in C#, which goes over the efficiency considerations in designing a method lookup system. Clearly C# is not JavaScript, but I think similar principles may be at play here.)
Written by Dan Tao
17 Responses
Nice write up!
One trick that I like to do with prototype is something like this.
https://gist.github.com/6799520
Object.defineProperty(ValueObject.prototype, "value", {
  get: function () { return this.superValue; },
  set: function (v) { this.superValue = v; }
});
EDIT: No underscores....stupid markdown
@tencircles I think that will give you a stack overflow. (You'd need to go with something like this._value instead.)
Edit: Ah, I see what happened. You did go with this._value, and then Coderwall interpreted your comment using Markdown and italicized half your code instead. Ha!
@dtao lol yeah I noticed that! Damn you markdown! Also, if we're going for performance, getters/setters aren't really the way to go yet. But, just throwing it out there as yet another alternative.
I would think this is especially relevant in the context of performance on Node. Could it be reasoned that the performance difference would be somewhere around that of Chrome 32?
There are a few conclusions here about the difference in performance:

1. The second form (using the prototype field on constructors) is much more memory efficient and faster at instantiation time because all values are already defined ahead of time. A new object instantiation does not incur the instantiation of new properties besides those created inside the constructor itself. Whereas in the former case you need to instantiate all of the new function objects, along with storing their Environment Record and optimising access to closed-over variables along the way, for every new object.

2. Still expanding on the first point, this actually happens because JS uses delegative inheritance, rather than concatenative. Therefore even for the least optimised object created through the second form, initialising it would be just a matter of making a pointer to a hashtable that contains the methods. This gives you blazing fast instantiations, but slow as fuck property accesses.

3. To optimise property access, objects that are created through the same path (same [[Prototype]] field, same properties set in the same order, etc.) get the same hidden class, which optimises static property accesses down to plain memory offsets. So you also get really fast property access with delegative inheritance, instead of having to manually find the value in each hashtable of each prototype in the chain.

4. new Foo() is more of an idiom than foo() for creating new instances of objects, therefore JITs optimise heavily for it. I am not familiar with the internals of any optimising JIT, but I wouldn't be surprised if they compiled stuff optimistically beforehand to optimise the object creation time. One of the reasons I believe this is so is that dynamic inheritance with prototypes, by way of Object.create(foo), is not as heavily optimised, nor is simulating new. This is likely to be because new is an operator, therefore optimistic optimisations are easier to implement: new can't mean delete; Object.create can!
Some of these are just hypotheses since, as I said, I'm not familiar with the internal implementations of any of the JITs. I wrote a performance test comparing the different ways of instantiating an object, which is a bit more thorough: http://jsperf.com/constructing-objects
I love the prototype. I encourage using the prototype model even in a singleton pattern.
(function() {
  function CoolClass() {}
  CoolClass.prototype.coolestOfThemAll = function() {};
  return new CoolClass();
})();
@planet That's neat and all, but... why exactly do you encourage that?
For something that is only to be instantiated once. Instead of the classic singleton style:
Singleton = (function() {
  return {
    publicProperty: function() {}
  };
})();
@planet I fail to see why you'd abuse prototypes like that. The whole thing about prototypical OO is that everything is an object, and objects inherit directly from other objects. Thus, { a: b } is always a singleton, no need to go to lengths to construct something that looks like a class, then instantiate that class — in fact, JS only has instances, no classes :P
You should see my reddit comment http://www.reddit.com/r/javascript/comments/1nnkfe/the_prototype_is_your_friend_if_you_care_about/ccm0rw6
In the fixed jsperf, prototype method invocation is "only" 1.5x faster. Even if we ignore for a second that this difference is huge for such a low level operation, we cannot look at the 1.5x at face value because the difference comes from the ability to do inlining.
@petkaantonov I don't mean to be a Scrooge; but it seems to me you're arguing with a straw man. I stated both in this post and in the Reddit thread that the real difference between the two approaches is in object creation; i.e., the real "meat" of the performance discrepancy here is in the lines createValueObject() vs. new ValueObject(). The new jsPerf you posted compares only method invocation, which—despite there still being a perceived performance difference (though as you point out this is a flawed conclusion)—is much less significant here.
@planet I'm inclined to agree with @sorella here: since one of the hugest benefits to using the prototype system is in making object creation efficient, applying it to defining singletons seems like an overly complicated approach with no real payoff.
That said, if you just like the way prototypes work aesthetically, and you prefer to use this style, I can't say you're "wrong"; it just seems unnecessary to me.
@dtao I am actually trying to defend your point. Many people shrug off object creation because they are "not creating that many objects anyway". Anyway, in object creation you have O(N) vs O(1) so there is no fair benchmark. If you benchmark a class of say, 25 methods, you will easily get like 10000x difference. And again many people only benchmark with one method (so they compare O(N) to O(1) with N=1, lol).
My point is that it should already be obvious to everyone that the object creation will be extremely slow when methods are re-recreated instead of placed on prototype.
This opens up comparison of method calls rather than object creation, where I am again trying to explain that 1.5x difference is not what it looks like and there is much bigger difference in practice even between method calls of "prototype class" vs "closure class"
@petkaantonov Haha, in that case, sorry for getting defensive. The thing is I feel that the method invocation case is much more complicated, and seemingly more controversial; so I've tried to steer clear of putting too much interpretation into that side of the equation. But you bring up an interesting point. It's probably worth further investigation. But yeah, in this case I was just focusing on object creation since I actually think the "you aren't creating that many objects" argument is very circumstantial at best and generally inaccurate at worst.
Has anyone tested the memory issues of factory in recent Chromes? I saw a page recently that implied that functions defined and assigned to this in the constructor would be shared across instances if they weren't modified later on. I could be mistaken though..
Just added a series of tests to jsPerf:
http://jsperf.com/factory-module-and-prototype-performance-part-1, 2, 3
They measure construction, execution, and combined performance.