Scenario-Based Java Multithreading Interview Questions with a Focus on CompletableFuture

Arvind Kumar

Multithreading plays a crucial role in high-performance applications, ensuring efficient CPU utilization, reduced response time, and non-blocking operations. CompletableFuture is a powerful tool in Java for handling asynchronous computations effectively. The following scenario-based questions will test practical understanding and problem-solving skills for real-world applications.

Scenario 1: E-Commerce Order Processing with Parallel Tasks

In an e-commerce system, when a user places an order, multiple tasks must run concurrently:

  1. Validate the order.
  2. Process the payment.
  3. Update the inventory.
  4. Send a confirmation email.

These tasks should execute asynchronously, and the system should proceed only when all are completed. How would you implement this using CompletableFuture?

Solution:

All four tasks should run in parallel, and the order flow should continue only once every task has finished. CompletableFuture.allOf() returns a future that completes only when all the given futures complete, so joining on it ensures nothing proceeds early.

import java.util.concurrent.*;

public class ECommerceOrderProcessing {
    public static void main(String[] args) {
        CompletableFuture<Void> orderValidation = CompletableFuture.runAsync(() -> {
            simulateDelay(1000);
            System.out.println("✅ Order validated");
        });
        CompletableFuture<Void> paymentProcessing = CompletableFuture.runAsync(() -> {
            simulateDelay(1500);
            System.out.println("💰 Payment processed");
        });
        CompletableFuture<Void> inventoryUpdate = CompletableFuture.runAsync(() -> {
            simulateDelay(1200);
            System.out.println("📦 Inventory updated");
        });
        CompletableFuture<Void> confirmationEmail = CompletableFuture.runAsync(() -> {
            simulateDelay(500);
            System.out.println("📧 Confirmation email sent");
        });

        // Wait for all tasks to complete before finishing the order
        CompletableFuture<Void> allTasks = CompletableFuture.allOf(
                orderValidation, paymentProcessing, inventoryUpdate, confirmationEmail);

        allTasks.join();
        System.out.println("🚀 Order successfully processed!");
    }

    private static void simulateDelay(int millis) {
        try { Thread.sleep(millis); } catch (InterruptedException e) { e.printStackTrace(); }
    }
}

Key Takeaways:

All tasks run in parallel → Reduced response time.
Non-blocking approach → Uses thread pools efficiently.
Ensures order consistency → Proceeds only after all tasks complete.
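
A common follow-up is where these tasks actually run. By default, runAsync() uses the common ForkJoinPool; below is a minimal sketch of passing a dedicated executor instead, so order-processing work does not compete with other async work. The pool name and size of 4 are illustrative assumptions, not part of the original example.

import java.util.concurrent.*;

public class OrderProcessingWithExecutor {
    public static void main(String[] args) {
        // Illustrative dedicated pool; size 4 is an assumption for the sketch
        ExecutorService orderPool = Executors.newFixedThreadPool(4);

        CompletableFuture<Void> validation = CompletableFuture.runAsync(
                () -> System.out.println("✅ Order validated"), orderPool);
        CompletableFuture<Void> payment = CompletableFuture.runAsync(
                () -> System.out.println("💰 Payment processed"), orderPool);

        // Same allOf() pattern as above, just on a controlled thread pool
        CompletableFuture.allOf(validation, payment).join();
        orderPool.shutdown();
    }
}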

Scenario 2: Stock Market Live Data Aggregation

“A financial application retrieves stock prices from three different APIs and needs to compute the average stock price. If any API fails, the system should return a default value for that source. How would you implement this using CompletableFuture?”

Solution:

  • Use CompletableFuture.supplyAsync() for non-blocking calls.
  • Handle failures using exceptionally().
  • Use thenCombine() to aggregate results.

import java.util.concurrent.*;
import java.util.Random;

public class StockMarketAggregator {
    public static void main(String[] args) {
        CompletableFuture<Double> api1 = fetchStockPrice("API-1");
        CompletableFuture<Double> api2 = fetchStockPrice("API-2");
        CompletableFuture<Double> api3 = fetchStockPrice("API-3");

        // Compute the average stock price once all APIs respond
        CompletableFuture<Double> averagePrice = api1
                .thenCombine(api2, Double::sum)
                .thenCombine(api3, (sum, price) -> (sum + price) / 3);

        System.out.println("📈 Final Stock Price: $" + averagePrice.join());
    }

    private static CompletableFuture<Double> fetchStockPrice(String api) {
        return CompletableFuture.supplyAsync(() -> {
            simulateDelay(new Random().nextInt(2000) + 500);
            if (new Random().nextBoolean()) { // Simulating failure
                throw new RuntimeException(api + " failed!");
            }
            double price = 100 + new Random().nextDouble() * 10;
            System.out.println(api + " returned $" + price);
            return price;
        }).exceptionally(ex -> {
            System.err.println(ex.getMessage());
            return 100.0; // Default fallback price
        });
    }

    private static void simulateDelay(int millis) {
        try { Thread.sleep(millis); } catch (InterruptedException e) { e.printStackTrace(); }
    }
}

Key Takeaways:

Resilient to API failures → Uses fallback values.
Asynchronous calls improve performance → Aggregates data in parallel.
Ensures fault tolerance → exceptionally() prevents crashes (an alternative using handle() is sketched below).
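
A frequent variation of this question is to recover and inspect the error in one step. handle() receives both the result and the exception, so the fallback logic sits in a single callback. A minimal sketch, keeping the 100.0 fallback from the example above; the class name and failure simulation are illustrative assumptions:

import java.util.concurrent.CompletableFuture;

public class StockPriceWithHandle {
    static CompletableFuture<Double> fetchWithFallback(String api) {
        return CompletableFuture.supplyAsync(() -> {
            if (Math.random() < 0.5) {                 // simulated failure
                throw new RuntimeException(api + " failed!");
            }
            return 100 + Math.random() * 10;
        }).handle((price, ex) -> {
            if (ex != null) {
                System.err.println(api + " fell back: " + ex.getMessage());
                return 100.0;                          // default fallback price
            }
            return price;
        });
    }

    public static void main(String[] args) {
        System.out.println("Price: " + fetchWithFallback("API-1").join());
    }
}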

Scenario 3: Real-time Fraud Detection in Payment Transactions

“A banking system must analyze multiple parameters asynchronously (transaction amount, location, past history) and flag suspicious transactions if a fraud score exceeds a threshold. How would you implement this?”

Solution:

  • Use multiple CompletableFutures to compute fraud risk factors asynchronously.
  • Combine results using thenCombine().
  • Use threshold logic to detect fraud.

import java.util.concurrent.*;
import java.util.Random;

public class FraudDetectionSystem {
    public static void main(String[] args) {
        CompletableFuture<Double> amountRisk = analyzeAmountRisk(5000);
        CompletableFuture<Double> locationRisk = analyzeLocationRisk("Nigeria");
        CompletableFuture<Double> historyRisk = analyzeTransactionHistory("User123");

        // Sum the individual risk scores as they become available
        CompletableFuture<Double> fraudScore = amountRisk
                .thenCombine(locationRisk, Double::sum)
                .thenCombine(historyRisk, Double::sum);

        fraudScore.thenAccept(score -> {
            if (score > 7.5) {
                System.out.println("🚨 Fraud Detected! Score: " + score);
            } else {
                System.out.println("✅ Transaction Approved. Score: " + score);
            }
        }).join();
    }

    private static CompletableFuture<Double> analyzeAmountRisk(double amount) {
        return CompletableFuture.supplyAsync(() -> amount > 3000 ? 4.0 : 1.0);
    }

    private static CompletableFuture<Double> analyzeLocationRisk(String location) {
        return CompletableFuture.supplyAsync(() -> location.equals("Nigeria") ? 3.0 : 0.5);
    }

    private static CompletableFuture<Double> analyzeTransactionHistory(String userId) {
        return CompletableFuture.supplyAsync(() -> new Random().nextDouble() * 3);
    }
}

Key Takeaways:

Asynchronous risk analysis reduces latency.
Parallel execution improves scalability.
Threshold-based detection ensures robust fraud prevention.
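
In a real fraud check, a slow risk service should not delay the decision indefinitely. On Java 9+, completeOnTimeout() can substitute a neutral score when a check does not answer in time. A minimal sketch under that assumption; the 200 ms budget and the 1.0 neutral score are illustrative choices, not values from the original example:

import java.util.concurrent.*;

public class RiskCheckWithTimeout {
    static CompletableFuture<Double> slowHistoryCheck() {
        return CompletableFuture.supplyAsync(() -> {
            try { Thread.sleep(500); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
            return 2.5;
        });
    }

    public static void main(String[] args) {
        double score = slowHistoryCheck()
                .completeOnTimeout(1.0, 200, TimeUnit.MILLISECONDS) // neutral score if the check is too slow
                .join();
        System.out.println("History risk used in decision: " + score);
    }
}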

Scenario 4: Parallel Data Processing in a Big Data Pipeline

A data pipeline receives raw sensor data from IoT devices and must process it in parallel through multiple stages:

  1. Cleanse the data (remove duplicates, invalid values).
  2. Enrich the data (add metadata, location info).
  3. Aggregate the data (compute averages, detect anomalies).

How would you implement this efficiently using CompletableFuture?

Solution:

  • Each stage runs asynchronously using supplyAsync().
  • Dependent stages are chained with thenCompose(), so each transformation starts only after the previous one completes.

import java.util.*;
import java.util.concurrent.*;

public class IoTDataPipeline {
    public static void main(String[] args) {
        List<String> rawSensorData = Arrays.asList("Sensor1:45", "Sensor2:50", "Sensor1:45", "Sensor3:60", "InvalidData");

        // Each stage runs asynchronously; thenCompose() chains the dependent stages
        CompletableFuture<List<String>> cleanedData =
                CompletableFuture.supplyAsync(() -> cleanseData(rawSensorData));
        CompletableFuture<List<String>> enrichedData =
                cleanedData.thenCompose(data -> CompletableFuture.supplyAsync(() -> enrichData(data)));
        CompletableFuture<Map<String, Double>> aggregatedData =
                enrichedData.thenCompose(data -> CompletableFuture.supplyAsync(() -> aggregateData(data)));

        System.out.println("🚀 Final Processed Data: " + aggregatedData.join());
    }

    private static List<String> cleanseData(List<String> raw) {
        System.out.println("🔍 Cleaning data...");
        return raw.stream().filter(d -> d.contains(":")).distinct().toList();
    }

    private static List<String> enrichData(List<String> data) {
        System.out.println("📌 Enriching data...");
        return data.stream().map(d -> d + " (Location: NY)").toList();
    }

    private static Map<String, Double> aggregateData(List<String> data) {
        System.out.println("📊 Aggregating data...");
        Map<String, List<Integer>> grouped = new HashMap<>();
        for (String record : data) {
            String[] parts = record.split(":");
            grouped.computeIfAbsent(parts[0], k -> new ArrayList<>())
                   .add(Integer.parseInt(parts[1].split(" ")[0]));
        }
        Map<String, Double> averages = new HashMap<>();
        grouped.forEach((sensor, readings) ->
                averages.put(sensor, readings.stream().mapToInt(i -> i).average().orElse(0)));
        return averages;
    }
}

Key Takeaways:

Parallel processing improves throughput for high-volume IoT data.
Non-blocking execution reduces wait time using thenCompose().
Scales well for streaming data architectures like Kafka + Spark.
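
Since the pipeline relies on thenCompose(), a typical follow-up is how it differs from thenApply(). thenApply() maps a result with a plain function, while thenCompose() flattens a function that itself returns a CompletableFuture. A minimal sketch, where enrichAsync is a hypothetical helper introduced only for this comparison:

import java.util.concurrent.CompletableFuture;

public class ComposeVsApply {
    // Hypothetical async stage that returns its own future
    static CompletableFuture<String> enrichAsync(String s) {
        return CompletableFuture.supplyAsync(() -> s + " (Location: NY)");
    }

    public static void main(String[] args) {
        CompletableFuture<String> cleaned = CompletableFuture.supplyAsync(() -> "Sensor1:45");

        // thenApply: the function returns a value -> CompletableFuture<String>
        CompletableFuture<String> applied = cleaned.thenApply(s -> s + " (Location: NY)");

        // thenCompose: the function returns a future -> still CompletableFuture<String>, not nested
        CompletableFuture<String> composed = cleaned.thenCompose(ComposeVsApply::enrichAsync);

        System.out.println(applied.join());
        System.out.println(composed.join());
    }
}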

Scenario 5: Asynchronous Caching with Auto-Refresh

A high-traffic application uses an in-memory cache for frequently accessed data (e.g., product details). The cache should:

  1. Fetch fresh data from the database only when stale.
  2. Auto-refresh data every 10 minutes asynchronously without blocking requests.

How would you implement this using CompletableFuture?

Solution:

  • The first request fetches from DB and caches it.
  • Future requests use the cached version.
  • A background task auto-refreshes every 10 minutes.

import java.util.concurrent.*;

public class AsyncCache {
    private static final ConcurrentHashMap<String, String> cache = new ConcurrentHashMap<>();
    private static final ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(1);

    public static CompletableFuture<String> getData(String key) {
        // First caller loads from the DB; later callers get the cached value
        return CompletableFuture.supplyAsync(() -> cache.computeIfAbsent(key, AsyncCache::fetchFromDatabase));
    }

    private static String fetchFromDatabase(String key) {
        simulateDelay(2000);
        return "Data for " + key + " (Fetched from DB)";
    }

    public static void scheduleAutoRefresh() {
        scheduler.scheduleAtFixedRate(() -> {
            System.out.println("🔄 Auto-refreshing cache...");
            cache.replaceAll((k, v) -> fetchFromDatabase(k));
        }, 10, 10, TimeUnit.MINUTES);
    }

    public static void main(String[] args) {
        scheduleAutoRefresh();
        // join() keeps the demo deterministic: wait for each async lookup before moving on
        getData("product_123").thenAccept(System.out::println).join();
        getData("product_123").thenAccept(System.out::println).join(); // Served from cache
    }

    private static void simulateDelay(int millis) {
        try { Thread.sleep(millis); } catch (InterruptedException e) { e.printStackTrace(); }
    }
}

Key Takeaways:

Reduces load on DB by caching frequently used data.
Non-blocking auto-refresh mechanism keeps cache up-to-date.
Ensures high availability in distributed caching systems.
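
One refinement worth mentioning in an interview: caching the CompletableFuture itself, rather than the value, lets concurrent requests for the same missing key share a single in-flight database call instead of each triggering their own. A minimal sketch under that assumption (class and method names are illustrative):

import java.util.concurrent.*;

public class FutureCache {
    private static final ConcurrentHashMap<String, CompletableFuture<String>> cache = new ConcurrentHashMap<>();

    public static CompletableFuture<String> getData(String key) {
        // computeIfAbsent stores the future, so concurrent callers share one DB load
        return cache.computeIfAbsent(key, k ->
                CompletableFuture.supplyAsync(() -> fetchFromDatabase(k)));
    }

    private static String fetchFromDatabase(String key) {
        System.out.println("DB hit for " + key);
        return "Data for " + key;
    }

    public static void main(String[] args) {
        getData("product_123").thenAccept(System.out::println).join();
        getData("product_123").thenAccept(System.out::println).join(); // no second DB hit
    }
}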

Scenario 6: Distributed Computing with Worker Nodes

“A large dataset must be processed across multiple worker nodes in a distributed environment. Each node processes a subset of data, and the final results are aggregated centrally. How would you implement this?”

Solution:

  • Each worker processes a partition of data asynchronously.
  • The main thread collects and aggregates results using CompletableFuture.allOf().

import java.util.*;
import java.util.concurrent.*;

public class DistributedComputing {
    private static final ExecutorService workerPool = Executors.newFixedThreadPool(4);

    public static void main(String[] args) {
        List<Integer> dataset = Arrays.asList(10, 20, 30, 40, 50, 60);

        // Submit each chunk to the worker pool asynchronously
        List<CompletableFuture<Integer>> futures = new ArrayList<>();
        for (Integer chunk : dataset) {
            futures.add(CompletableFuture.supplyAsync(() -> processChunk(chunk), workerPool));
        }

        // Wait for every worker, then collect the individual results
        CompletableFuture<Void> allTasks = CompletableFuture.allOf(futures.toArray(new CompletableFuture[0]));
        CompletableFuture<List<Integer>> aggregatedResults = allTasks.thenApply(v ->
                futures.stream().map(CompletableFuture::join).toList());

        System.out.println("✅ Final Aggregated Result: " + aggregatedResults.join());
        workerPool.shutdown();
    }

    private static Integer processChunk(Integer chunk) {
        simulateDelay(1000);
        return chunk * 2; // Example transformation
    }

    private static void simulateDelay(int millis) {
        try { Thread.sleep(millis); } catch (InterruptedException e) { e.printStackTrace(); }
    }
}

Key Takeaways:

Improves scalability by distributing workload across nodes.
Reduces processing time by leveraging parallel execution.
Ideal for batch processing jobs in Hadoop, Spark, or Kubernetes-based ML pipelines.
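
In a distributed setting, a single failed worker should not break the whole aggregation: allOf() completes exceptionally if any input future does. A minimal sketch that gives each worker future its own fallback with exceptionally() before aggregating; the simulated failure on one chunk and the -1 sentinel are illustrative assumptions:

import java.util.*;
import java.util.concurrent.*;

public class ResilientAggregation {
    public static void main(String[] args) {
        List<Integer> chunks = Arrays.asList(10, 20, 30);

        List<CompletableFuture<Integer>> futures = new ArrayList<>();
        for (Integer chunk : chunks) {
            futures.add(CompletableFuture
                    .supplyAsync(() -> {
                        if (chunk == 20) throw new RuntimeException("worker failed on " + chunk);
                        return chunk * 2;
                    })
                    .exceptionally(ex -> -1)); // sentinel so allOf() still completes normally
        }

        List<Integer> results = CompletableFuture.allOf(futures.toArray(new CompletableFuture[0]))
                .thenApply(v -> futures.stream().map(CompletableFuture::join).toList())
                .join();
        System.out.println("Results (failed chunk marked -1): " + results);
    }
}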


Follow me for more such articles
