Here's a description of node you may not have heard before. The goal of node.js, or really of any long-polling/websocket/whatever network server, is to map M connections onto N physical cores on your box. Understanding the details of this will make it clear why using async everywhere is insane.
Ever wonder why JS is async? It's async because it was built to be a UI scripting language. UIs happen to be very similar to long-poll servers in that they both have a large number of event sources (UI elements in the case of a browser, sockets in the case of a server), that should be multiplexed onto a non-blocking thread. This is M:1, where M is the number of connections (or UI elements), and 1 is a JS platform's single thread.
Writing single-threaded async code that works well is hard. First, async code is flat-out more verbose and harder to read than equivalent blocking code. Second, you have a single thread that you cannot block. Try processing 3000 DB rows in node and watch your QoS suffer. You can always work around it, but the workarounds suck. Using nextTick adds unnecessary complexity, and a web worker is fine, but if you're really spinning off threads, why aren't you using another language that at least gives you the convenience of non-async code? Or, failing that, why aren't you using a language like Java that comes bundled with excellent threading support via its executors framework and atomic classes?
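To make the blocking problem concrete, here's a minimal sketch of the usual workaround: chop a big synchronous job into chunks and yield back to the event loop between them with setImmediate. The `rows` array, the chunk size, and the per-row "work" are all stand-ins for illustration, not real DB code.

```javascript
// 3000 "rows" standing in for a hypothetical DB result set.
const rows = Array.from({ length: 3000 }, (_, i) => i);

function processChunked(items, chunkSize, onDone) {
  let index = 0;
  let total = 0;
  function step() {
    const end = Math.min(index + chunkSize, items.length);
    for (; index < end; index++) {
      total += items[index]; // stand-in for real per-row work
    }
    if (index < items.length) {
      setImmediate(step); // yield so pending IO events can run
    } else {
      onDone(total);
    }
  }
  step();
}

processChunked(rows, 100, (total) => {
  console.log(total); // → 4498500 (sum of 0..2999)
});
```

This keeps the event loop responsive, but notice the cost: a plain for-loop has turned into a hand-rolled scheduler with a callback for completion, which is exactly the added complexity complained about above.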
Thread scheduling is awesome and works well, contrary to what many would have you believe. It works especially well for web apps, where threads generally don't contend over resources on the app server itself: requests are largely independent of one another, so the problem of shared memory is usually minimal in a properly architected web app.
The thing is, an async reactor like node is a fantastic pattern when dealing with IO and a terrible pattern for dealing with app logic. It's fantastic for IO because a single thread can indeed be faster due to a lack of concurrency interactions (locks, thread scheduling, etc.) when doing many fast, non-blocking operations.
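As a tiny illustration of that strength, here's a sketch of one thread keeping a hundred IO operations in flight at once; setTimeout stands in for a real non-blocking network call, so total wall time is roughly one delay, not a hundred.

```javascript
// fakeFetch simulates a non-blocking network call; no real IO here.
function fakeFetch(id, delayMs) {
  return new Promise((resolve) => {
    setTimeout(() => resolve(`response-${id}`), delayMs);
  });
}

async function main() {
  const start = Date.now();
  // Kick off 100 "requests" before awaiting any of them, so all of
  // them are in flight concurrently on the single event-loop thread.
  const results = await Promise.all(
    Array.from({ length: 100 }, (_, i) => fakeFetch(i, 50))
  );
  const elapsed = Date.now() - start;
  console.log(`${results.length} responses in ~${elapsed}ms`);
  return results;
}

main();
```

No locks, no scheduler overhead: the reactor just multiplexes completions. The catch, as argued below, is that nothing CPU-heavy can happen on that thread while it's doing this.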
The reality is that app logic does block: CPU work is, in a sense, blocking, and your connections are still contending over a limited number of cores. In that case what you want is M:N, where M is the number of connections and N is the number of active threads handling app logic. I say let async do what it does well (parallel IO), and let threads do what they do well (scheduling CPU work).
Some people may wonder how this works in practice, thinking that 1000 websocket connections need 1000 threads. They don't: what you should have is one async reactor handling the IO, handing off discrete messages to N threads. It's a good pattern, and it works well. The per-connection state can be encapsulated in either a closure or an object; that's up to you (and your programming language).
In the case of highly parallel IO, yes, a thousand times yes: async is great. But the great lie about node is that you need to carry the async style over from the IO layer into the app-logic layer. In node it's all just mashed together.
The trick about async is that your server can handle 10,000 idle connections, but only a handful can be active at any given time, since you only have a few cores. Thread scheduling handles that active handful just fine.
None of these ideas is novel; in fact, they are decades old. I can only hope that people are willing to open their minds to the idea that the future of concurrent web programming has many possibilities.