The Event Loop is the Boss
It is common knowledge that blocking the event loop is a bad thing. Ted Dziuba did not bring anything new to the table with either his badly named post or his follow-up post, but he got to troll, which he clearly enjoys.
What may not be common knowledge is how to make the best use of an event based server (whether it is based on node.js, Tornado or something else) by moving CPU intensive and other blocking work out of the event loop.
There are a number of approaches to this, but the essential pattern is:
The event loop is the boss: use it for delegating work and talking to clients.
So having written your server in node.js, Tornado or another event based system, the important part is to delegate the handling of blocking tasks instead of performing them inside the loop.
You can use any kind of messaging or queueing system for moving work out of the event loop and distributing it to one or more worker processes. In a recent project involving CPU intensive calculations, I used the "intelligent transport layer" zeromq to distribute blocking tasks to a number of workers. Another option would be to use the PUB/SUB messaging features of the in-memory data structure server Redis, or any of the other messaging and queueing systems out there.
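As a minimal sketch of this delegation pattern, here is a PUSH/PULL setup using the pyzmq binding. The inproc transport, the thread-based worker and the squaring "task" are illustrative stand-ins; in a real deployment the workers would be separate processes connecting over tcp:// or ipc://, and the PUSH side would live in your event loop process.

```python
# Sketch: an "event loop" process PUSHes blocking tasks to a worker,
# which PULLs them and does the CPU-bound part. Illustrative only.
import threading
import zmq

ctx = zmq.Context.instance()

def worker(results):
    # The worker pulls tasks off the queue and does the heavy lifting.
    pull = ctx.socket(zmq.PULL)
    pull.connect("inproc://tasks")
    for _ in range(2):
        n = pull.recv_json()
        results.append(n * n)  # stand-in for a CPU intensive computation
    pull.close()

# The event loop side only hands work off; it never blocks on the task.
push = ctx.socket(zmq.PUSH)
push.bind("inproc://tasks")

results = []
t = threading.Thread(target=worker, args=(results,))
t.start()
for n in (3, 4):
    push.send_json(n)
t.join()
push.close()
print(sorted(results))  # → [9, 16]
```

With multiple workers connected to the same PUSH socket, zeromq fair-queues the tasks among them, so scaling out is mostly a matter of starting more worker processes.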
With zeromq you get a socket-like communication channel to other processes, which is completely transparent as to whether the other processes are running on the same server or on different machines - which is great, particularly if at some point you need to scale your server from one machine to several. Also, zeromq has a number of built-in standard messaging patterns, including PUB/SUB, REQ/REP, PUSH/PULL, DEALER/ROUTER, etc., making it very flexible and easy to adapt to most usage scenarios.
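The location transparency can be sketched like this with a REQ/REP pair: the code is identical whether the endpoint address is inproc://, ipc:// or tcp://host:port, so moving a worker to another machine is just a change of address string. The thread and the ping/pong payloads here are illustrative.

```python
# Sketch: REQ/REP over zeromq. Swap the address for "tcp://*:5555" /
# "tcp://server:5555" and the same code spans two machines.
import threading
import zmq

ctx = zmq.Context.instance()

rep = ctx.socket(zmq.REP)
rep.bind("inproc://service")  # only this string changes when you scale out

def client(out):
    req = ctx.socket(zmq.REQ)
    req.connect("inproc://service")
    req.send_string("ping")
    out.append(req.recv_string())
    req.close()

out = []
t = threading.Thread(target=client, args=(out,))
t.start()
print(rep.recv_string())  # prints "ping"
rep.send_string("pong")
t.join()
rep.close()
print(out)  # → ['pong']
```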
A really nice side effect of moving work out of the event loop is the freedom to use the best tool for a given task. In my work, I have found that I can have a high performance asynchronous web server for stuff like streaming big incoming data files to disk, without having to hold the full request body in memory, and use a separate process (typically on another server) to perform a CPU intensive task such as transformation or other computation on the uploaded data.