Java has been around for nearly three decades, and we’ve had threads since day one. We got java.util.concurrent in 2004, lambdas in 2014, CompletableFuture improvements, reactive streams, virtual threads in 2023… and yet, writing correct concurrent code in Java still feels like navigating a minefield.
Why is this still so hard?
The Problem: Too Many Half-Solutions
Let’s look at a simple scenario: fetch data from three APIs concurrently, process the results, and handle errors gracefully.
The Thread Approach (circa 1995)
public List<String> fetchData() {
    List<String> results = Collections.synchronizedList(new ArrayList<>());
    CountDownLatch latch = new CountDownLatch(3);
    Thread t1 = new Thread(() -> {
        try {
            results.add(callApi("api1"));
        } catch (Exception e) {
            // How do we handle this? Log? Ignore? Set some flag?
        } finally {
            latch.countDown();
        }
    });
    Thread t2 = new Thread(() -> { /* ... copy paste ... */ });
    Thread t3 = new Thread(() -> { /* ... copy paste ... */ });
    t1.start(); t2.start(); t3.start();
    try {
        latch.await(10, TimeUnit.SECONDS); // What if it times out?
    } catch (InterruptedException e) {
        // Now what? Cancel the threads? How?
    }
    return results; // Hope for the best!
}
Problems: Manual lifecycle management, no error propagation, cancellation is nearly impossible, resource leaks.
The ExecutorService Approach (circa 2004)
public List<String> fetchData() {
    ExecutorService executor = Executors.newFixedThreadPool(3);
    try {
        List<Future<String>> futures = List.of(
            executor.submit(() -> callApi("api1")),
            executor.submit(() -> callApi("api2")),
            executor.submit(() -> callApi("api3"))
        );
        List<String> results = new ArrayList<>();
        for (Future<String> future : futures) {
            try {
                results.add(future.get(10, TimeUnit.SECONDS));
            } catch (TimeoutException e) {
                // Cancel remaining? How do we know which ones?
                future.cancel(true);
            } catch (InterruptedException | ExecutionException e) {
                // More ceremony just to surface a failure
                throw new RuntimeException(e);
            }
        }
        return results;
    } finally {
        executor.shutdown();
        // But what if tasks are still running?
        try {
            executor.awaitTermination(5, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        executor.shutdownNow(); // Fingers crossed
    }
}
Better, but still: complex lifecycle management, unclear error handling, and manual timeout handling.
The CompletableFuture Approach (circa 2014)
public CompletableFuture<List<String>> fetchData() {
    CompletableFuture<String> f1 = CompletableFuture.supplyAsync(() -> callApi("api1"));
    CompletableFuture<String> f2 = CompletableFuture.supplyAsync(() -> callApi("api2"));
    CompletableFuture<String> f3 = CompletableFuture.supplyAsync(() -> callApi("api3"));
    return CompletableFuture.allOf(f1, f2, f3)
        .thenApply(v -> List.of(f1.join(), f2.join(), f3.join()))
        .orTimeout(10, TimeUnit.SECONDS)
        .exceptionally(throwable -> {
            // One failed, but which ones are still running?
            // How do we cancel them?
            return List.of();
        });
}
Cleaner, but: no structured concurrency, cancellation is still unclear, error handling is awkward.
The Virtual Threads Approach (circa 2023)
public List<String> fetchData() {
    try (var executor = Executors.newVirtualThreadPerTaskExecutor()) {
        var futures = List.of(
            executor.submit(() -> callApi("api1")),
            executor.submit(() -> callApi("api2")),
            executor.submit(() -> callApi("api3"))
        );
        return futures.stream()
            .map(future -> {
                try {
                    return future.get(10, TimeUnit.SECONDS);
                } catch (Exception e) {
                    throw new RuntimeException(e);
                }
            })
            .toList();
    }
}
Better performance, but we still have the same fundamental problems: no automatic cancellation, unclear error boundaries, manual timeout handling.
What’s Wrong With This Picture?
After over 20 years, we’re still dealing with the same fundamental issues:
1. No Structured Concurrency
When you launch concurrent operations, there’s no clear parent-child relationship. If the parent operation is cancelled or fails, the children keep running. This leads to resource leaks and zombie tasks.
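To make the leak concrete, here is a small self-contained sketch (the pool size and timings are illustrative): the "parent" operation fails, yet its child is neither cancelled nor done, and cleanup is entirely manual.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ZombieTaskDemo {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(1);
        // The "parent" operation launches a child and then fails.
        Future<?> child = pool.submit(() -> {
            try { Thread.sleep(5_000); } catch (InterruptedException e) { }
        });
        try {
            throw new IllegalStateException("parent failed");
        } catch (IllegalStateException e) {
            // The parent is done, but nothing cancels the child:
            System.out.println("child cancelled: " + child.isCancelled());
            System.out.println("child done: " + child.isDone());
        }
        child.cancel(true);   // manual cleanup is entirely on us
        pool.shutdownNow();
    }
}
```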
2. Cancellation is an Afterthought
Cancellation in Java is based on thread interruption, which is:
- Unreliable (not all blocking operations respect it)
- Unclear (checked in some places, ignored in others)
- Dangerous (can leave resources in inconsistent states)
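The cooperative nature of interruption fits in a few lines: this CPU-bound loop only stops because it explicitly polls the flag. Remove the check and interrupt() would be silently ignored.

```java
public class InterruptDemo {
    public static void main(String[] args) throws InterruptedException {
        // A cooperative task: it stops only because it polls the interrupt flag.
        Thread worker = new Thread(() -> {
            while (!Thread.currentThread().isInterrupted()) {
                // CPU-bound work; with no blocking call, the interrupt
                // is never noticed unless we check explicitly
            }
            System.out.println("worker observed interrupt and exited");
        });
        worker.start();
        Thread.sleep(100);
        worker.interrupt();   // request cancellation
        worker.join(1_000);   // wait for the cooperative exit
        System.out.println("worker alive: " + worker.isAlive());
    }
}
```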
3. Error Handling is Ad-Hoc
When one of your concurrent operations fails, what happens to the others? There’s no consistent pattern for error propagation or cleanup.
4. Resource Management is Manual
You’re always thinking about: Did I shut down the executor? Are there still running tasks? What if I forget to call close()?
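Since Java 19, ExecutorService implements AutoCloseable, which trims some of this ceremony, though close() still blocks until submitted tasks finish. A minimal sketch:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class AutoCloseDemo {
    public static void main(String[] args) throws Exception {
        // Requires Java 19+: ExecutorService is AutoCloseable.
        try (ExecutorService pool = Executors.newFixedThreadPool(2)) {
            Future<Integer> f = pool.submit(() -> 21 + 21);
            System.out.println("result: " + f.get());
        } // close() calls shutdown() and waits for running tasks to finish
        System.out.println("pool closed");
    }
}
```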
5. Context is Lost
There’s no standard way to pass context (like cancellation tokens, timeouts, or tracing information) through your async operations.
6. People Cut Corners
Because all of this is so complex, and because few people fully understand the subtleties of concurrency, they cut corners. The result is that threading code written without a heavier-weight framework tends to leak resources.
Other Languages Got This Right
Meanwhile, other languages learned from these mistakes:
Go has structured concurrency built-in with goroutines and context cancellation:
ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
defer cancel()
// All goroutines inherit the context and are cancelled together
Kotlin coroutines provide structured concurrency by default:
runBlocking {
    val results = listOf("api1", "api2", "api3").map { api ->
        async { callApi(api) }
    }.awaitAll()
    // If this scope is cancelled, all children are automatically cancelled
}
C# has had similar patterns with Tasks and CancellationTokens for years.
The Mental Model Problem
The deeper issue is that Java’s concurrency primitives force you to think about mechanisms instead of intent:
- Instead of “run these 3 things concurrently and collect results,” you think about threads, executors, futures, and cleanup
- Instead of “cancel this operation and all its children,” you think about interruption flags and resource management
- Instead of “handle errors from any child operation,” you think about exception wrapping and CompletableFuture combinators
What We Actually Want
Here’s what the same operation should look like:
try (var scope = new CoroutineScope()) {
    var results = List.of("api1", "api2", "api3").stream()
        .map(api -> scope.async(suspend -> callApi(suspend, api)))
        .map(handle -> handle.join())
        .toList();
    return results;
} // Everything is automatically cancelled and cleaned up
This is:
- Structured: Clear parent-child relationships with automatic cleanup
- Cancellable: The suspend context can be cancelled cooperatively
- Resource-safe: The try-with-resources ensures everything is cleaned up
- Error-transparent: Exceptions propagate naturally without special handling
Enter JCoroutines
Concurrency should feel like an elevator (lift): press a few clear buttons and trust the machinery. JCoroutines provides the control panel, not a whole new building.
I built JCoroutines because I was tired of fighting Java’s concurrency primitives. It’s a simple concurrency framework designed around three core principles:
1. Structured Concurrency by Default
Every async operation has a clear parent. When the parent is done, children are automatically cancelled.
2. Explicit Context Passing
Instead of hidden thread-local state, everything gets an explicit SuspendContext that carries cancellation, timeouts, and scheduling.
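The shape of that pattern, independent of JCoroutines’ exact API, is a small object threaded explicitly through every call. The Ctx record below is illustrative only, not the library’s real SuspendContext:

```java
import java.time.Instant;
import java.util.concurrent.CancellationException;
import java.util.concurrent.atomic.AtomicBoolean;

public class ContextDemo {
    // Illustrative context object: carries a deadline and a cancellation flag.
    record Ctx(Instant deadline, AtomicBoolean cancelled) {
        boolean isActive() {
            return !cancelled.get() && Instant.now().isBefore(deadline);
        }
    }

    // Every step receives the context and checks it cooperatively.
    static String step(Ctx ctx, String name) {
        if (!ctx.isActive()) throw new CancellationException(name + " cancelled");
        return name + " ok";
    }

    public static void main(String[] args) {
        var ctx = new Ctx(Instant.now().plusSeconds(10), new AtomicBoolean(false));
        System.out.println(step(ctx, "fetch"));   // context still active
        ctx.cancelled().set(true);                // cooperative cancellation
        try {
            step(ctx, "process");
        } catch (CancellationException e) {
            System.out.println(e.getMessage());
        }
    }
}
```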
3. No Magic
Unlike Kotlin coroutines with their compiler transformations, everything in JCoroutines is explicit. You can see exactly what’s happening.
The Path Forward
Java is slowly moving in the right direction. JEP 428 (Structured Concurrency) and JEP 446 (Scoped Values) show that the platform team recognizes these problems. But they’re taking a very conservative approach, and it’ll be years before we have a complete solution.
In the meantime, we don’t have to suffer. Libraries like JCoroutines can provide structured concurrency patterns today, using the Java we have right now.
The question isn’t whether Java will eventually get this right. The question is: how much time do you want to spend fighting your tools instead of solving your actual problems?
JCoroutines brings structured concurrency to any JVM project today: you get some of Kotlin’s coroutine features without using Kotlin, for when you need plain Java. No compiler plugins, no bytecode manipulation, just clean APIs built on virtual threads and explicit context passing. Try it out and see what concurrent Java code could feel like.
