Is Elegant JavaScript Always Performant? Concise vs. Fast Code Explained

Discover how elegant JavaScript code sometimes stacks up against performance-focused solutions in real-world scenarios.

Yuri Bett
JavaScript in Plain English
9 min read · 4 days ago


Elegant, short JavaScript code is not always the most performant code.
Photo by Dylan Ferreira on Unsplash

Who isn’t proud of writing short and elegant code that solves a big problem?

I remember hearing colleagues, almost in a sense of competition, arguing things like, ‘I can solve this in less than 10 lines of code.’

Shorter code tends to be more readable than longer code, and readability should be our first priority.

But we need to keep something in mind:

Not every short and elegant piece of code is the best solution.

There will be times when reducing lines for the sake of brevity or elegance will make our code less performant, or it could even create a catastrophe.

Enough talking, let’s start checking some examples:

Comparing `for` and `forEach`

I will start with a very simple and, admittedly, silly example. It just illustrates that even basic iteration constructs in JavaScript can perform differently.

In a real-world scenario, these differences tend not to make a significant impact on your project, but let’s take a look ‘just for science’.

Let’s compare JavaScript’s for loop with forEach.

First, I will generate an array with 1 million random numbers:

const randomArray = Array.from({ length: 1000000 }, () => Math.floor(Math.random() * 1000));

Now I will create two simple functions, one using `for` and one using `forEach`.

function sumNumbersForEach(numbers) {
  let totalSum = 0;
  numbers.forEach(num => {
    totalSum += num;
  });
  return totalSum;
}

function sumNumbersFor(numbers) {
  let totalSum = 0;
  for (let i = 0; i < numbers.length; i++) {
    totalSum += numbers[i];
  }
  return totalSum;
}

Any guesses on performance?

We will run the functions in this article a little differently. Let’s create a ‘function runner’ that measures the execution time of our code.

function measurePerformance(func, n) {
  const start = performance.now();
  const result = func(n);
  const end = performance.now();
  console.log(`Result: ${result}`);
  console.log(`Time taken for ${func.name}: ${end - start} milliseconds`);
}

Now, let’s run each function using our measurePerformance utility.

measurePerformance(sumNumbersForEach, randomArray);
measurePerformance(sumNumbersFor, randomArray);

Here on my computer, I’m getting the following output:

Result: 499050411
Time taken for sumNumbersForEach: 7.001167297363281 milliseconds
Result: 499050411
Time taken for sumNumbersFor: 1.7294998168945312 milliseconds

Did you see the difference? Although the times are all measured in milliseconds, and we’re dealing with a million numbers, forEach here took four times longer.

Why did that happen?

While the forEach method is less verbose and looks cleaner, it's generally slower than the traditional for loop due to the extra function call overhead for each iteration. The difference is usually negligible for small arrays, but for larger datasets, the traditional for loop can be significantly faster.
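If you want a middle ground, `for...of` is worth benchmarking on your own data (it is not part of the timings above, so treat this as a sketch): it avoids `forEach`'s per-element callback while staying nearly as readable.

```javascript
// A middle ground worth benchmarking yourself: for...of avoids the
// per-element callback of forEach while staying almost as readable.
function sumNumbersForOf(numbers) {
  let totalSum = 0;
  for (const num of numbers) {
    totalSum += num;
  }
  return totalSum;
}

console.log(sumNumbersForOf([1, 2, 3])); // 6
```

Engines optimize these constructs differently across versions, so measure before assuming which one wins.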

Comparing Array `includes` and Set `has`

Here’s an interesting one. Among many developers, Set isn’t very popular. It’s not the type of construct you see often, and there are some important things to consider about it.

But let’s compare the includes method from regular arrays with the has method from Set.

I will use the same random array from above, but I’ll add an extra position at the end, just for illustration. I know it’s odd to mix numbers and ‘banana’, but bear with me; again, it’s just for science.

const randomArray = Array.from({ length: 1000000 }, () => Math.floor(Math.random() * 1000));
randomArray.push('banana');

Now, let’s create two functions to find ‘banana’: one using the array’s includes method and the other using Set's has method.

function findBanana() {
  if (randomArray.includes('banana')) {
    return true;
  }
  return false;
}

const itemsSet = new Set(randomArray);

function findBananaSet() {
  if (itemsSet.has('banana')) {
    return true;
  }
  return false;
}

Important: Please note that I am creating a Set from the array outside the function. This approach is more sensible when dealing with a static array. If you have to create the Set every time you call the function, the situation changes. Creating a Set object has its own cost, but if it’s created just once and used multiple times, it can significantly improve performance.

Now, let’s run each function again using our measurePerformance utility.

Result: true
Time taken for findBanana: 0.5135412216186523 milliseconds
Result: true
Time taken for findBananaSet: 0.010957717895507812 milliseconds

Did you see the difference? `Set`'s `has` was nearly 50 times faster. Of course, we are comparing native JavaScript methods and objects, and we're still talking about fractions of milliseconds.

While using .includes() on an array is concise and straightforward, its performance can become an issue for large datasets. In the worst-case scenario, it might need to traverse the entire array. On the other hand, using a Set for membership checks leverages its internal structure, which allows for constant-time (O(1)) lookups. So, while initializing a Set might seem more verbose, repeated membership checks are much faster, especially as the dataset grows.
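One way to package this trade-off is a small factory (my own sketch, not code from the benchmark above) that pays the O(n) `Set` construction cost once and hands back an O(1) membership checker:

```javascript
// Sketch: pay the O(n) Set construction once, then every lookup is O(1).
function makeMembershipChecker(items) {
  const set = new Set(items);      // one-time build cost
  return value => set.has(value);  // constant-time check per call
}

const hasFruit = makeMembershipChecker(['apple', 'banana', 'cherry']);
console.log(hasFruit('banana')); // true
console.log(hasFruit('mango'));  // false
```

The closure keeps the `Set` private, so callers can only ever do cheap lookups and cannot accidentally rebuild it on every call.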

Side note: In this article, I will talk a bit about algorithm time complexity, also known as Big O notation. You will see terms like O(1), O(n), and so on. I have a small article about it if you want to understand what this is about.

Comparing Algorithms: Time Complexity and Memory

Now, things start to get a little more interesting. This is where Big O notation begins to play a significant role. Being aware of the time complexity of algorithms can really help make our code more performant.

And, as the examples above demonstrate, sometimes we need to write more code to achieve better performance.

To begin our examples, let’s generate a large dataset of customers. I’ll do this programmatically, so there’s no need to paste a gigantic object here. Let’s create 10,000 customers.

function generateRandomCustomer() {
  const firstNames = ['John', 'Jane', 'Mary', 'James', 'Patricia', 'Robert', 'Jennifer', 'Michael'];
  const lastNames = ['Smith', 'Johnson', 'Williams', 'Brown', 'Jones', 'Garcia', 'Miller', 'Davis'];

  return {
    firstName: firstNames[Math.floor(Math.random() * firstNames.length)],
    lastName: lastNames[Math.floor(Math.random() * lastNames.length)],
    customerId: Math.floor(Math.random() * 100000)
  };
}

function generateCustomerArray(size) {
  return Array.from({ length: size }, generateRandomCustomer);
}

const customers = generateCustomerArray(10000);
console.log(customers); // if you want to see them :)

Our goal here is to search for users by their last name. So, given a last name, we need to find and return all the users that match the argument.

We could elegantly build something like this:

function findCustomersByLastName(customers, lastName) {
  return customers.filter(customer => customer.lastName === lastName);
}

This function is elegant and utilizes the filter method to create a new array containing all elements that pass the test implemented by the provided function. However, its time complexity is O(n), where n is the number of customers. If this function is used frequently with a large dataset, it could become a performance bottleneck.

Similarly to the Set example before, if we need to search this dataset multiple times, a better option would be to create an index, like this:

// Preprocessing function to create an index
function indexCustomersByLastName(customers) {
  const index = {};
  for (const customer of customers) {
    const key = customer.lastName;
    if (!index[key]) {
      index[key] = [];
    }
    index[key].push(customer);
  }
  return index;
}

// Function to find customers using the index
function findCustomersByLastNameIndexed(index, lastName) {
  return index[lastName] || [];
}

// Assume customers is a large array of customer objects
const customersIndex = indexCustomersByLastName(customers);

// Now each lookup is O(1) on average (not counting the one-time cost of building the index)
const smiths = findCustomersByLastNameIndexed(customersIndex, 'Smith');

In this more verbose approach:

  • We first create an index (which is a one-time operation with O(n) time complexity).
  • The findCustomersByLastNameIndexed function can now perform each lookup in O(1) average time, assuming a good hash function with few collisions. The actual complexity can vary depending on several factors, but for large datasets with many lookups, this will generally be much faster than the filter method.
  • This approach uses additional memory to store the index.

What about the metrics? Let's take a look:

Time taken for findCustomersByLastName: 10.358540534973145 milliseconds
Time taken for findCustomersByLastNameIndexed: 0.007416725158691406 milliseconds

Real-life Considerations:

In a real-world application, especially in backend systems, you might use a database that handles indexing for you. Databases are optimized to create and maintain indexes efficiently, allowing you to retrieve records quickly. However, the concept illustrated here is the same: using additional memory (for indexes) to reduce the time complexity of read operations.

This example demonstrates the classic space-time trade-off in computer science: you can often make a program faster by using more memory (space). It also emphasizes the importance of considering both the frequency of operations and the size of the dataset when choosing an approach to implement.
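As a variant of the index above (my own sketch, not part of the original benchmark), the same idea can be built on a `Map` instead of a plain object, which sidesteps prototype-key pitfalls: with `{}`, a customer whose last name happens to be "constructor" would collide with an inherited property and crash the indexing loop.

```javascript
// Variant: a Map-based index avoids clashes with inherited Object.prototype
// keys (a lastName of "constructor" would trip up a plain-object index).
function indexCustomersByLastNameMap(customers) {
  const index = new Map();
  for (const customer of customers) {
    const list = index.get(customer.lastName);
    if (list) {
      list.push(customer);
    } else {
      index.set(customer.lastName, [customer]);
    }
  }
  return index;
}

const sample = [
  { firstName: 'John', lastName: 'Smith' },
  { firstName: 'Jane', lastName: 'Smith' },
  { firstName: 'Mary', lastName: 'Jones' },
];
console.log(indexCustomersByLastNameMap(sample).get('Smith').length); // 2
```

`Map` also iterates in insertion order and handles non-string keys, which makes it the safer default whenever the keys come from user data.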

Lastly, The Classic Fibonacci

Let’s take a look at a classic problem: finding the nth Fibonacci number. The Fibonacci sequence starts with 0 and 1, and each subsequent number is the sum of the two preceding ones: 0, 1, 1, 2, 3, 5, 8, 13, and so on.

1. Recursive Approach:

Less Verbose (Simple Recursion):

function fibonacci(n) {
  if (n <= 1) return n;
  return fibonacci(n - 1) + fibonacci(n - 2);
}

This recursive approach is very concise and straightforward. However, it’s horribly inefficient for large values of n due to excessive repeated calculations. Its time complexity is O(2^n), which means the function performs an exponential number of operations as n grows. For instance, fibonacci(40) would take a noticeable amount of time.
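To make the blow-up concrete, here is an instrumented sketch of my own (not from the article's benchmarks) that simply counts how many calls the naive recursion makes:

```javascript
// Instrumented sketch: counting the calls makes the O(2^n) blow-up visible.
let callCount = 0;

function fibCounting(n) {
  callCount++;
  if (n <= 1) return n;
  return fibCounting(n - 1) + fibCounting(n - 2);
}

console.log(fibCounting(20)); // 6765
console.log(callCount);       // 21891 calls just to compute the 20th number
```

Each increment of n roughly multiplies the call count by the golden ratio (about 1.618), which is why n = 40 already feels slow and n = 50 is hopeless.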

2. Dynamic Programming Approach (Memoization):

More Verbose (Using Memoization):

function fibonacci(n, memo = {}) {
  if (n in memo) return memo[n];
  if (n <= 1) return n;

  memo[n] = fibonacci(n - 1, memo) + fibonacci(n - 2, memo);
  return memo[n];
}

With this approach, we introduce a memo object to store and reuse previously calculated Fibonacci numbers, which drastically reduces the number of operations. This technique is known as memoization. Now, the function can compute fibonacci(1000) or even larger values almost instantly. Its time complexity is O(n) because each Fibonacci number from 2 through n is only computed once.

3. Iterative Approach:

More Verbose (Using Iteration):

function fibonacci(n) {
  if (n <= 1) return n;

  let twoBefore = 0;
  let oneBefore = 1;
  let current;

  for (let i = 2; i <= n; i++) {
    current = twoBefore + oneBefore;
    twoBefore = oneBefore;
    oneBefore = current;
  }

  return current;
}

This iterative approach also has a time complexity of O(n), and it avoids the overhead of recursion and the extra space required for the memoization table. It’s more verbose than the simple recursive solution but is much more efficient for larger values of n.

In this example, while the simple recursive approach is elegant and concise, the other methods — though more verbose — offer drastically improved performance. The trade-off between readability and performance becomes evident, especially when dealing with larger inputs.

What about the metrics?

First of all, avoid using the Recursive Approach with a large number; your JavaScript engine will not be happy. You can try measuring it by passing 35 as the argument, but don’t go beyond that — it’s already quite slow.

Just by using an input of 35 for all 3 functions, I get this result:

Time taken for fibonacciRecursive with input 35: 89.2701244354248 milliseconds
Time taken for fibonacciMemoization with input 35: 0.028415679931640625 milliseconds
Time taken for fibonacciIterative with input 35: 0.01462554931640625 milliseconds

Now, I will pass 1,000 as the argument, but only for the last two functions; the first one won’t be able to handle it.

Result: 4.346655768693743e+208
Time taken for fibonacciMemoization with input 1000: 0.17408275604248047 milliseconds
Result: 4.346655768693743e+208
Time taken for fibonacciIterative with input 1000: 0.13091564178466797 milliseconds

Note that even increasing the input from 35 to 1,000, the time practically remained the same. On the other hand, the first option would have struggled significantly.
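One caveat worth adding here (my note, beyond the original examples): from n = 79 onward the result exceeds Number.MAX_SAFE_INTEGER, so the 4.346655768693743e+208 above is only an approximation of the true 1000th Fibonacci number. When exactness matters, the iterative version ports directly to BigInt:

```javascript
// Exact Fibonacci with BigInt: past n = 78 a regular Number can no longer
// represent the result exactly, but BigInt has arbitrary precision.
function fibonacciBigInt(n) {
  if (n <= 1) return BigInt(n);

  let twoBefore = 0n;
  let oneBefore = 1n;
  let current = 0n;

  for (let i = 2; i <= n; i++) {
    current = twoBefore + oneBefore;
    twoBefore = oneBefore;
    oneBefore = current;
  }

  return current;
}

console.log(fibonacciBigInt(100).toString()); // 354224848179261915075
```

BigInt arithmetic is slower than Number arithmetic, so this trade is only worth making when you actually need the exact digits.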

Where else can we go from here?

The whole inspiration for this post came after reading about a problem from Franziska Hinkelmann, regarding a linear-time sorting algorithm. It's worth checking out.

My Final Advice to You

In programming, especially with JavaScript, you’ll learn that concise code isn’t always the fastest. Here’s some straightforward advice:

  • Write code that’s easy to understand first. Maintainability matters.
  • Measure performance and optimize only where it makes a real difference. Avoid early optimization for complex problems that don’t need it.
  • Get comfortable with Big O notation — it’ll guide your decisions about time and space efficiency.
  • Balance is key. Too much optimization can make code hard to read, while overly concise code can suffer in performance.


Good code is not just about how it runs today — it’s also about how well it adapts to the challenges of tomorrow.

Keep your code clean, but be ready to roll up your sleeves for some fine-tuning when your application demands it.

Let’s get connected! You can find me on:
- Medium: https://medium.com/@yuribett
- LinkedIn: https://www.linkedin.com/in/yuribett/
- X (formerly Twitter): https://twitter.com/yuribett
- Buy me a cup of tea at https://ko-fi.com/yuribett
