There's a famous Stack Overflow question everyone links when I suggest using Function.prototype instead of creating and sharing no-op (no-operation) functions all over your codebase.
The question's title: What is the JavaScript convention for no operation?
I prefer code that reads easier and is more maintainable over time. In addition, I prefer native solutions where possible because using libraries and importing functions is cumbersome. Developer experience is key. This is why I always advocate Function.prototype.
It's even more important in React code because using Function.prototype as an event handler prop means you get a stable, cacheable value for memoization. If you use () => {}, every render creates a completely new function. This causes many components to re-render unnecessarily. That's why you'd have to wrap () => {}, or really any anonymous function, in useCallback.
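Here's a minimal sketch of the difference (ChildButton and Parent are hypothetical components, just for illustration):

import React, { memo, useCallback, useState } from 'react';

// A memoized child: it only re-renders when a prop changes identity.
const ChildButton = memo(({ onClick }) => (
  <button onClick={onClick}>Child</button>
));

const Parent = () => {
  const [count, setCount] = useState(0);

  // The useCallback workaround: one stable identity across renders.
  const stableNoop = useCallback(() => {}, []);

  return (
    <>
      {/* New function identity every render, so ChildButton re-renders every time: */}
      <ChildButton onClick={() => {}} />

      {/* Function.prototype is the same reference on every render, for free: */}
      <ChildButton onClick={Function.prototype} />

      {/* The memoized inline no-op also keeps its identity: */}
      <ChildButton onClick={stableNoop} />

      <button onClick={() => setCount(count + 1)}>Re-render ({count})</button>
    </>
  );
};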
In addition, if you care about test coverage, each const noop = () => {} you create is another uncovered test case.
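A contrived sketch of what that means (the noop and onDismiss names are just for illustration):

// Every ad-hoc no-op is a function body your coverage tool expects a test to execute:
const noop = () => {};
const onDismiss = () => {};

// Function.prototype is provided by the engine, so there's nothing new to cover:
document.addEventListener('click', Function.prototype);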
The answers to that Stack Overflow question tell a story of how Function.prototype is super duper slow. One even shows actual benchmarking code revealing a 70% slowdown using Function.prototype in most browsers. 😲
Here's a slightly modified version of that benchmark:
const performanceTest = (func, iterations) => {
  const before = performance.now();
  for (let i = 0; i < iterations; i++) {
    func();
  }
  const after = performance.now();
  const elapsed = after - before;
  return `${elapsed.toFixed(6)}ms`;
};

const iterations = 10000000; // 10 million
console.info(`${iterations.toLocaleString()} iterations.`);

const timings = {
  '() => {}': (
    performanceTest(
      () => {},
      iterations,
    )
  ),
  'Function.prototype': (
    performanceTest(
      Function.prototype,
      iterations,
    )
  ),
};

console.info(
  JSON.stringify(
    timings,
    null,
    4,
  )
);
Run it yourself and you'll probably see similar stats:
10 million iterations.
{
    "() => {}": "5.585000ms",
    "Function.prototype": "59.195000ms"
}
On a slower machine, the numbers deviated even more.
Conclusion: Function.prototype is super slow, right? Never use it ever again even though it's the only native no-op solution?
Wrong. These numbers are lies.
You'll reproduce the same results every time, but they don't reflect how V8 and other JavaScript engines actually work.
Now for the fun part. What happens if you swap the order in which the no-op functions are executed?
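Concretely, that just means reversing the two entries in the timings object (reusing performanceTest and iterations from the benchmark above):

const timings = {
  'Function.prototype': (
    performanceTest(
      Function.prototype,
      iterations,
    )
  ),
  '() => {}': (
    performanceTest(
      () => {},
      iterations,
    )
  ),
};

Now the output tells a different story: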
10 million iterations.
{
    "Function.prototype": "54.925000ms",
    "() => {}": "48.330000ms"
}
Whoa. Wait a minute. These numbers are way, way closer. They're so close that the difference over 10 million iterations is negligible.
This is why I always run my benchmark timings independently of one another. If not, V8's gonna do some sort of optimization run (or not) and mess up your numbers. You'll also need to run a completely new instance of Node.js or the browser console (at about:blank) after each test for accuracy.
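A minimal sketch of how I do that (assuming Node.js 16+, where performance is a global; the bench.js name and the CLI argument are mine):

// bench.js: time one case per process so the engine can't warm up on the other.
//   node bench.js arrow
//   node bench.js prototype
const iterations = 10000000; // 10 million
const func = process.argv[2] === 'prototype'
  ? Function.prototype
  : () => {};

const before = performance.now();
for (let i = 0; i < iterations; i++) {
  func();
}
console.info(`${(performance.now() - before).toFixed(6)}ms`);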
Running them separately, I get:
"() => {}": "4.820000ms"
and
"Function.prototype": "40.035000ms"
Still seems pretty conclusive, right? Function.prototype is super slow.
The most interesting data comes from making this benchmark more realistic. Your application will most likely have no more than 1000 no-op function calls, so why are we testing 10 million iterations?
We're also testing those iterations back-to-back. I guarantee you're not calling 1000 or more no-op functions one after another.
I ran these tests separately 10 times each and averaged the results (excluding statistical outliers):
1000 iterations.
{
    "() => {}": "0.040000ms",
    "Function.prototype": "0.035000ms"
}
All of a sudden, Function.prototype is faster. That makes sense to me considering () => {} is essentially function() {}.bind(this) and Function.prototype is essentially new Function().
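To make that analogy concrete (a rough illustration, not an exact equivalence):

// An arrow function closes over the enclosing `this`, much like .bind(this):
const arrow = () => this;
const bound = function () { return this; }.bind(this);

// new Function() allocates a brand-new (empty) function object each time:
console.info(new Function() === new Function()); // false

// Function.prototype is a single, engine-provided callable that already exists:
console.info(Function.prototype === Function.prototype); // true
console.info(Function.prototype()); // undefined, like any other no-op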
Drop down to 100 iterations, and the question becomes "why aren't you using Function.prototype?":
100 iterations.
{
    "() => {}": "0.020000ms",
    "Function.prototype": "0.010000ms"
}
Clearly, Function.prototype is actually twice as fast!
While this validates my use of Function.prototype over the last couple of years, none of these benchmarks actually matter. You shouldn't be making code-styling decisions based on performance metrics anyway unless there's a very specific reason.
What matters most are big problems like "why isn't my input responsive when the page loads?" and "why does it take 10 seconds before I can interact with the site?" or even "why is literally every component re-rendering multiple times when a single state change occurs in Redux?"
If you're micro-optimizing, you're probably doing it wrong. I've personally never been in a situation where a micro-optimization did anything substantial unless I was working with a ridiculously large dataset.
I have two other articles on using transducers to speed up JavaScript arrays and on speeding up JavaScript array processing. In both, I talk about how much speed you gain by moving to transducers, but also how you won't see any benefit until you pass a certain threshold.
In one article, that threshold was 250K items. In the browser plugin I was working on at the time, transducers were effective for as little as 20K items because the processing complexity was higher. I've never dealt with JavaScript code requiring these sorts of performance optimizations before or since.
I can't imagine how many people made poor code-styling decisions because of the answers on that Stack Overflow question. The performance metrics are incorrect for almost everyone's use case, and they're biased by the order of execution. The only thing that benchmark tested is the ability of a JavaScript engine to optimize performance over 10 million iterations.
In everyday applications, you'll never see a difference in performance even close to these numbers. In fact, if you really think Function.prototype is the bottleneck of your application's performance, you're looking in the wrong place. There are actual areas of your codebase that need to be looked at, and your choice of no-op function isn't one of them.
If you've got an interest in more topics related to JavaScript performance, you should check out my other articles: