
Rationale

Vitaly Tomilov edited this page Dec 11, 2021 · 55 revisions

Key reasons why you might want to use this library...

Native JavaScript Protocol

This library operates on native JavaScript types (synchronous and asynchronous iterables), and outputs the same. This means no integration commitment: you can use the library in any context, without creating any compatibility concerns.
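Because the output is itself a standard iterable, it plugs straight into `for...of`, spread syntax, `Array.from`, and every other native consumer. A minimal sketch of the principle, using a plain generator instead of the library itself:

```javascript
// A hand-rolled operator, to illustrate the principle: it accepts any
// native iterable and returns another native iterable (a generator).
function* evens(source) {
    for (const value of source) {
        if (value % 2 === 0) {
            yield value;
        }
    }
}

// The output is a regular iterable, so every native consumer works:
console.log([...evens([1, 2, 3, 4, 5, 6])]); // → [2, 4, 6]
```

Operators in this library follow the same contract: native iterable in, native iterable out, so there is no wrapper type to unwrap at the end.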

Clear separation of synchronous and asynchronous processing

If you look at the Benchmarks, synchronous iteration can outperform asynchronous by a factor of 200. This is a strong indication of just how bad an idea it is to treat synchronous and asynchronous sequences as one. And yet that is exactly what you get with frameworks like rxjs and some others, where performance is sacrificed for the convenience of unified processing.

Also, in real-world applications, the amount of asynchronous processing is almost always significantly lower than synchronous. Ignoring this means throwing performance and scalability under the bus. To design a good product, you need a clear picture of your data flow, so that you can improve performance and scalability efficiently, and that requires separating the synchronous and asynchronous layers of your data processing.
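The overhead is easy to reproduce with nothing but native loops: iterate the same array once synchronously, and once through an async wrapper, then compare the timings. A rough sketch (the wrapper here is a hand-rolled stand-in, similar in spirit to toAsync(); actual numbers will vary by environment):

```javascript
// Wrap a sync iterable into an async one, forcing every step
// through the micro-task queue:
async function* asAsync(source) {
    for (const value of source) {
        yield value;
    }
}

async function main() {
    const data = Array.from({length: 1_000_000}, (_, i) => i);

    let syncSum = 0;
    const t1 = Date.now();
    for (const a of data) {
        syncSum += a;
    }
    const syncTime = Date.now() - t1;

    let asyncSum = 0;
    const t2 = Date.now();
    for await (const a of asAsync(data)) {
        asyncSum += a;
    }
    const asyncTime = Date.now() - t2;

    // Both loops compute the same result, but each async step pays
    // a promise-resolution cost, so asyncTime is typically orders
    // of magnitude larger than syncTime.
    console.log({syncTime, asyncTime, equal: syncSum === asyncSum});
}

main();
```

Every value pushed through an asynchronous pipeline pays that per-item promise cost, which is why filtering data synchronously first, as shown next, matters so much.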

To illustrate this, let's start with a bad code example:

import {pipe, toAsync, filter, distinct, map, wait} from 'iter-ops';

const data = [12, 32, 357, ...]; // million items or so

const i = pipe(
    toAsync(data), // make asynchronous
    filter(a => a % 3 === 0), // take only numbers divisible by 3
    distinct(), // remove duplicates
    map(a => service.process(a)), // use async service, which returns Promise
    wait() // resolve each promise
);

for await(const a of i) {
    console.log(a); // show resolved data
}

And here's what a good code should look like:

import {pipe, toAsync, filter, distinct, map, wait} from 'iter-ops';

const data = [12, 32, 357, ...]; // million items or so

// synchronous pipeline:
const i = pipe(
    data,
    filter(a => a % 3 === 0),
    distinct()
);

// asynchronous pipeline:
const k = pipe(
    toAsync(i), // enable async processing
    map(a => service.process(a)),
    wait()
);

for await(const a of k) {
    console.log(a); // show resolved data
}

By separating the synchronous pipeline from the asynchronous one, in the above scenario, where much of the initial data is filtered out before asynchronous processing begins, we can easily achieve a performance increase of 100 times or more.
