Why Concurrency Is Hard

Concurrency is the ability of a program to manage multiple tasks "in progress" at the same time, not necessarily in parallel. Parallelism is the simultaneous execution of tasks on multiple physical cores. Confusing the two is the source of many bugs and bad architectural decisions.

There are five main concurrency models in 2026, each with different trade-offs. There is no absolute "best" model: the choice depends on the type of workload (I/O-bound vs CPU-bound), the language, the ecosystem, and latency requirements.

Model 1: OS Threads (Java, C++)

The most traditional model: each concurrency unit is a thread of the operating system. The kernel handles scheduling, context switches and communication between threads via mutex-protected shared memory.

// Java: traditional thread vs virtual thread (Java 21)
// Traditional OS thread: expensive (~1MB stack, kernel scheduling)
Thread platformThread = new Thread(() -> {
    processRequest(); // blocks the OS thread during I/O
});
platformThread.start();

// Virtual thread (Java 21, Project Loom): lightweight (~2KB initial stack)
Thread virtualThread = Thread.ofVirtual().start(() -> {
    processRequest(); // blocks only the virtual thread, not its carrier
});

// A million virtual threads are practical
try (var executor = Executors.newVirtualThreadPerTaskExecutor()) {
    for (int i = 0; i < 1_000_000; i++) {
        executor.submit(() -> handleRequest());
    }
} // close() waits for submitted tasks to complete

Pros: Simple mental model, leverages multiple cores automatically, mature libraries

Cons: Expensive OS threads (1MB+ stack, context-switch overhead), race conditions on shared memory, scaling limited to thousands of threads

When: CPU-bound work, Java/C++ with thread pools, mixed workloads

Model 2: Single-Threaded Event Loop (Node.js, JavaScript)

JavaScript is single-threaded: there is only one thread of execution and an event loop that manages callbacks. Asynchronous I/O (network, filesystem) is delegated to the operating system via libuv, and completed operations are placed in the callback queue.

// Node.js: the event loop in action
// Everything runs on a single thread, so no data races on shared state

const http = require('http');

http.createServer((req, res) => {
    // This callback does not block the thread during I/O
    fetchUserData(req.userId)
        .then(user =>
            fetchOrders(user.id)                     // more non-blocking I/O
                .then(orders => ({ user, orders }))  // keep `user` in scope
        )
        .then(payload => res.end(JSON.stringify(payload)))
        .catch(err => {
            res.statusCode = 500;
            res.end(JSON.stringify({ error: err.message }));
        });
}).listen(3000);

// Async/await (syntactic sugar over Promises):
async function handleRequest(req, res) {
    const user = await fetchUserData(req.userId);   // suspends, does not block the thread
    const orders = await fetchOrders(user.id);      // also non-blocking
    res.end(JSON.stringify({ user, orders }));
}

Pros: No data races (single thread), very high concurrency for I/O-bound work, huge npm ecosystem

Cons: CPU-bound work blocks everything, callback hell (mitigated by async/await), Worker Threads needed for true parallelism

When: API servers with many concurrent I/O requests, real-time apps, BFF layers

Model 3: Goroutines and Channels (Go)

Go implements Communicating Sequential Processes (CSP): ultra-lightweight goroutines (2KB initial stack that grows dynamically) communicate via typed channels. The Go mantra: "Don't communicate by sharing memory; share memory by communicating."

// Go: goroutines and channels
package main

import (
    "fmt"
    "sync"
)

// Fan-out/fan-in pattern with goroutines
func processItems(items []Item) []Result {
    results := make(chan Result, len(items))
    var wg sync.WaitGroup

    for _, item := range items {
        wg.Add(1)
        go func(i Item) {            // start a goroutine (~2KB initial stack)
            defer wg.Done()
            result := processItem(i) // runs concurrently
            results <- result
        }(item)
    }

    // Close the channel once every goroutine has finished
    go func() {
        wg.Wait()
        close(results)
    }()

    // Collect the results
    var collected []Result
    for r := range results {
        collected = append(collected, r)
    }
    return collected
}

// Channels for safe communication between goroutines
func producer(ch chan<- int) {
    for i := 0; i < 10; i++ {
        ch <- i // send on the channel (blocks if the buffer is full)
    }
    close(ch)
}

func consumer(ch <-chan int) {
    for v := range ch { // receive until the channel is closed
        fmt.Println(v)
    }
}

Pros: Ultra-lightweight goroutines (millions are viable), channels prevent data races, the runtime handles scheduling

Cons: The CSP model takes learning, goroutine leaks if a channel is never closed or drained, no generics before Go 1.18

When: Backend services with high I/O concurrency, data pipelines, CLI tools

Model 4: Async/Await (Python, Rust)

Async/await is cooperative concurrency: tasks explicitly yield control at I/O wait points (await). Unlike JavaScript's event loop (built into the runtime), Python and Rust require an explicit runtime (asyncio, Tokio).

# Python: asyncio with TaskGroup (Python 3.11+)
import asyncio
import aiohttp

async def fetch_url(session: aiohttp.ClientSession, url: str) -> str:
    async with session.get(url) as response:
        return await response.text()

async def fetch_all_parallel(urls: list[str]) -> list[str]:
    async with aiohttp.ClientSession() as session:
        # TaskGroup guarantees that all tasks either complete or are cancelled
        async with asyncio.TaskGroup() as tg:
            tasks = [tg.create_task(fetch_url(session, url)) for url in urls]

    return [task.result() for task in tasks]

// Rust: Tokio async/await (zero-cost futures)
use tokio::time::{sleep, Duration};

async fn fetch_data(id: u64) -> String {
    sleep(Duration::from_millis(100)).await;  // simulates I/O
    format!("data_{}", id)
}

#[tokio::main]
async fn main() {
    // Concurrent join with no extra allocations (zero-cost)
    let (r1, r2, r3) = tokio::join!(
        fetch_data(1),
        fetch_data(2),
        fetch_data(3),
    );
    println!("{}, {}, {}", r1, r2, r3);
}

Pros: Explicit control over wait points, no preemption between await points, zero-cost abstractions in Rust

Cons: Async is contagious ("function coloring": every function that calls async code must itself be async), more complex debugging

When: Python for I/O-bound work (web scraping, API calls), Rust for high-performance systems

Model 5: Actor Model (Erlang/Elixir, Akka)

The actor model is the most isolated: each actor has its own private state and communicates only via messages. There is no shared memory and there are no mutexes; every actor is an independent lightweight process.

# Elixir: GenServer (actor model)
defmodule Counter do
  use GenServer

  # Public interface
  def start_link(initial \\ 0) do
    GenServer.start_link(__MODULE__, initial, name: __MODULE__)
  end

  def increment() do
    GenServer.call(__MODULE__, :increment)
  end

  def get_count() do
    GenServer.call(__MODULE__, :get)
  end

  # Callbacks (implementation)
  def init(initial), do: {:ok, initial}

  def handle_call(:increment, _from, count) do
    {:reply, count + 1, count + 1}   # reply, reply value, new state
  end

  def handle_call(:get, _from, count) do
    {:reply, count, count}
  end
end

# The BEAM VM can run millions of lightweight processes
# with automatic supervision (OTP supervisor trees)

Pros: Total isolation (an actor crash does not propagate), fault tolerance by design, natively distributed

Cons: Message-serialization overhead, debugging systems with many actors is complex

When: Fault-tolerant distributed systems, telecom, real-time gaming, IoT with millions of connections

Concurrency: A Guide to Model Selection

  • Many concurrent I/O requests (web API servers): Go goroutines or the Node.js event loop; both scale to tens of thousands of concurrent connections
  • CPU-bound work (ML inference, encoding): Java or Go with worker pools over OS threads, to use all cores
  • Ultra-low latency (trading, gaming): Rust with Tokio, or Go; minimal runtime overhead
  • Distributed fault tolerance (telecom, IoT): the Elixir/Erlang actor model, with native supervision trees
  • Data science, scripting: Python asyncio for I/O, multiprocessing for CPU-bound work
  • Java enterprise teams: Java 21 virtual threads; same mental model as classic threads, scales like goroutines

Conclusions

There is no universally superior concurrency model. The right choice depends on the workload (I/O vs CPU), the team (skills and preferences), the ecosystem (available libraries), and the non-functional requirements (latency, throughput, fault tolerance).

The next articles in this series dig into each model in detail, starting with the JavaScript event loop, the most misunderstood component of Node.js.

Next in the series: JavaScript Event Loop →