How does event-driven programming even work?

I’ve always wondered how event-driven programming works – it is very different from the programming paradigms I was taught in school. I was confused by the asynchronous nature of callbacks and promises. It was also interesting to me how something like setTimeout or setInterval is implemented! It seemed non-trivial to implement this in a language like C/C++ without constantly checking a timer in several places in your code.

In Node.js, there is a runtime with a JIT compiler that executes the JavaScript a programmer has written. The runtime doesn’t execute operations in the traditional line-after-line blocking manner of synchronous C/C++. Instead, it has an event loop, and operations are added to and executed on the event loop throughout the lifetime of a program. If an event involves I/O and would need to block, then instead of the CPU halting, context switching, and waiting for the I/O to complete, the Node.js runtime continues to process the next event on the loop. Here is an example:

const fs = require('fs');

function hello_world(x) {
	console.log(`Hello World ${x}!`);
	fs.writeFile(`${x}.txt`, "hi", err => {
		if (err) {
			console.error(err);
		} else {
			console.log(`Finished writing to file ${x}`);
		}
	});
}

hello_world(1);
hello_world(2);

A synchronous version of this written in C/C++ would have a guaranteed output order of:

Hello World 1!
Finished writing to file 1
Hello World 2!
Finished writing to file 2

But in Node.js, the output would likely be something closer to:

Hello World 1!
Hello World 2!
Finished writing to file 1
Finished writing to file 2

It almost looks like the Node.js runtime was smart enough to do other work on the CPU while an I/O operation was happening! Under the hood, Node.js adds hello_world(1) to the task queue. While executing hello_world(1), it notices that some I/O needs to be done, so it *does some magic to be discussed later* and executes the next item on the task queue, which is hello_world(2). Eventually, the Node.js runtime will get an event added to its task queue notifying it that writing to the file 1.txt has completed, and it will finish up the method call hello_world(1).

The most interesting part here is the mechanism by which Node.js skips blocking on I/O and executes a different event instead of completing hello_world(1) first. And then, *somehow*, the runtime gets a notification that the file has been written and executes the callback in fs.writeFile. To do all this and more, Node.js uses an asynchronous I/O library called libuv.

Node.js uses libuv as a wrapper to do I/O that would otherwise block the CPU for several cycles. When fs.writeFile is called, a request is sent to libuv telling it to write some content to a file. Eventually, once the content is written, libuv will send a notification back to Node.js telling it the write operation has been completed and it should run the callback for fs.writeFile. Here is an example of how libuv works when handling file I/O:

#include <uv.h>
#include <fcntl.h>     // O_WRONLY, O_CREAT
#include <sys/stat.h>  // S_IRUSR, S_IWUSR
#include <cstdlib>     // malloc, free
#include <iostream>

uv_loop_t* loop;

void close_callback(uv_fs_t* close_request) {
	std::cout << "Finished closing file" << std::endl;
	int result = close_request->result;

	// Free the memory
	uv_fs_req_cleanup(close_request);
	free(close_request);

	if (result < 0) {
		std::cout << "There was an error closing the file" << std::endl;
		return;
	}
	std::cout << "Successfully wrote to the file" << std::endl;
}

void write_callback(uv_fs_t* write_request) {
	std::cout << "Wrote to file" << std::endl;
	int result = write_request->result;
	int fd = *(int*) write_request->data;

	// Free the memory
	free(write_request->data);
	uv_fs_req_cleanup(write_request);
	free(write_request);

	if (result < 0) {
		std::cout << "There was an error writing to the file" << std::endl;
		return;
	}

	// Make sure to allocate on the heap since the stack will disappear with
	// an event loop model
	uv_fs_t* close_request = (uv_fs_t*) malloc(sizeof(uv_fs_t));
	uv_fs_close(loop, close_request, fd, close_callback);
}

void open_callback(uv_fs_t* open_request) {
	std::cout << "Opened file" << std::endl;
	int result = open_request->result;

	// Free the memory
	uv_fs_req_cleanup(open_request);
	free(open_request);

	if (result < 0) {
		std::cout << "There was an error opening the file" << std::endl;
		return;
	}

	// Make sure to allocate on the heap since the stack will disappear with
	// an event loop model; stash the file descriptor so write_callback can
	// close it later
	uv_fs_t* write_request = (uv_fs_t*) malloc(sizeof(uv_fs_t));
	write_request->data = malloc(sizeof(int));
	*((int*) write_request->data) = result;

	// static so the buffer outlives this callback -- libuv copies the
	// uv_buf_t descriptors, not the bytes, and the write completes later
	static char str[] = "Hello World!\n";
	uv_buf_t buf = uv_buf_init(str, sizeof(str) - 1); // skip the trailing '\0'

	uv_fs_write(loop, write_request, result, &buf, 1, -1, write_callback);
}

int main() {
	loop = uv_default_loop();

	uv_fs_t* open_request = (uv_fs_t*) malloc(sizeof(uv_fs_t));
	uv_fs_open(loop, open_request, "hello_world.txt", O_WRONLY | O_CREAT, S_IRUSR | S_IWUSR, open_callback);

	uv_fs_t* open_request2 = (uv_fs_t*) malloc(sizeof(uv_fs_t));
	uv_fs_open(loop, open_request2, "hello_world2.txt", O_WRONLY | O_CREAT, S_IRUSR | S_IWUSR, open_callback);

	// Run the event loop until no pending events remain
	return uv_run(loop, UV_RUN_DEFAULT);
}

In this example, we have added two events to our event loop, and uv_run begins processing them. In a traditional synchronous C/C++ program, we’d expect these to execute sequentially, with each slow I/O operation adding to the total runtime. With libuv as an async I/O library and an event loop, however, blocking on I/O becomes less of an issue because we can execute other pending events while one event is waiting on I/O. To demonstrate, a possible output of running the above program is:

Opened file
Opened file
Wrote to file
Wrote to file
Finished closing file
Successfully wrote to the file
Finished closing file
Successfully wrote to the file

As you can see, the program doesn’t open, write, and then close each file sequentially. Instead, it opens both files, then writes to them and closes them in batches. This is because while the program is waiting for one file’s I/O, it executes the operations for another event. For example, while it is waiting for file #1 to open, it issues the syscall to open file #2.

But...how does it work under the hood?

An initial guess as to how this is implemented in libuv is to spawn a separate thread for every I/O operation and block on it. Once the I/O operation has completed, the thread exits and notifies the main libuv thread, which in turn notifies Node.js that the I/O operation has completed. However, this is likely very slow: spawning a new thread for every I/O request is a lot of additional CPU and memory overhead! Can we do better?

Another idea I have is to constantly run the poll syscall on all the file descriptors of interest, waiting for an event of interest to occur. In this design, we would only need one libuv thread, and that thread would have a loop constantly polling all the file descriptors of interest to check whether any are ready. This method scales linearly, O(n), with the number of file descriptors. Unfortunately, this method also isn’t fast enough. You can imagine a Node.js webserver running and having to loop through 5000 file descriptors on every iteration to check for a read or write event.

After a bit more digging and understanding how high-performance web servers like NGINX handle this problem (the C10K problem), I came across epoll. The benefit of epoll vs. poll is that epoll only returns the file descriptors that have some data update, so there’s no need to scan all of the watched file descriptors. This is much better than poll and is indeed how libuv implements its async network I/O on Linux. (Regular file I/O is a special case: files are always “ready,” so epoll doesn’t help there, and libuv instead runs filesystem operations like our fs.writeFile on a small worker thread pool.)

On Linux, epoll works by having the kernel update a per-process epoll data structure for every event on a monitored file descriptor. When a user-space program requests the file descriptors that have updates, the kernel already has this list of updated file descriptors and simply has to transfer it to user space. This contrasts with poll, where the kernel must iterate through all the watched file descriptors on every call.

What about setTimeout and setInterval, how are those implemented?

Now that we have a rough understanding of how I/O is implemented in single-threaded Node.js, how do features like setTimeout and setInterval work? These don’t need the I/O machinery above, and it is pretty easy to guess how they might work. Because we now know that Node.js is event-driven and constantly pulls events off a task queue, it is easy to fathom that on every event loop iteration the runtime checks each pending timer or interval to see if it has expired. If it has, then it runs the callback for the timer or interval. If not, it skips to the next phase in the event loop. It is important to note that not all expired timers and intervals will be processed in one loop iteration – the runtime often has a maximum number of events that it will process in each phase.

If you have any questions, feel free to reach out to me on Twitter (DMs open) or via email