Hi there! I'm Shrijith Venkatrama, founder of Hexmos. Right now, I'm building a tool that makes generating API docs from your code ridiculously easy.
Concurrency is a big deal in modern programming. When you’re building apps that need to handle multiple tasks at once—like web servers, data pipelines, or real-time systems—you want a language that makes concurrency easy, safe, and fast. Go and Rust are two heavyweights in this space, each with its own approach. Go keeps things simple with goroutines and channels, while Rust leans on its ownership model for safety and control. Let’s dive into how they stack up, with code examples and details to help you decide which fits your project.
This post breaks down the concurrency models of Go and Rust, compares their strengths and weaknesses, and shows you how they work in practice. Expect a 12-15 minute read packed with examples and tables to make things clear.
Why Concurrency Matters
Concurrency lets programs do multiple things at once, like handling thousands of HTTP requests or processing streams of data. Go and Rust tackle this differently:
- Go focuses on simplicity. Its concurrency model is built around goroutines (lightweight threads) and channels (for communication), making it easy to write concurrent code without much boilerplate.
- Rust prioritizes safety and performance. Its ownership system ensures thread-safe concurrency without garbage collection, giving you fine-grained control.
Both are great for building high-performance systems, but their philosophies—Go’s ease versus Rust’s precision—shape how you write concurrent code. Let’s explore the key concepts.
Goroutines vs. Threads: The Lightweight Showdown
Go’s Goroutines
Go’s concurrency starts with goroutines, which are lightweight threads managed by the Go runtime, not the OS. They’re cheap to create (a few KB of memory) and let you spin up thousands without breaking a sweat. You launch a goroutine with the go keyword.
Here’s a simple example of goroutines handling parallel tasks:
```go
package main

import (
    "fmt"
    "time"
)

func printNumbers(id int) {
    for i := 0; i < 5; i++ {
        fmt.Printf("Goroutine %d: %d\n", id, i)
        time.Sleep(100 * time.Millisecond)
    }
}

func main() {
    for i := 1; i <= 3; i++ {
        go printNumbers(i)
    }
    time.Sleep(1 * time.Second) // Wait for goroutines to finish
}
```
This code runs three goroutines concurrently, each printing numbers. The time.Sleep in main is a crude way to wait (we’ll fix that later with channels). Key point: Goroutines are easy to use and scale well for I/O-bound tasks like network servers.
Rust’s Threads
Rust uses OS threads for concurrency, which are heavier than goroutines (think MBs of memory per thread). You get precise control, but spawning thousands of threads can strain resources. Rust’s standard library provides std::thread for threading.
Here’s a similar example in Rust:
```rust
use std::thread;
use std::time::Duration;

fn print_numbers(id: i32) {
    for i in 0..5 {
        println!("Thread {}: {}", id, i);
        thread::sleep(Duration::from_millis(100));
    }
}

fn main() {
    let mut handles = vec![];
    for i in 1..=3 {
        let handle = thread::spawn(move || {
            print_numbers(i);
        });
        handles.push(handle);
    }
    for handle in handles {
        handle.join().unwrap();
    }
}
```
Rust’s thread::spawn creates a thread, and join ensures the main thread waits for completion. Key point: Rust threads are powerful but resource-intensive, better for CPU-bound tasks where you need fewer, heavier workers.
Comparison Table
Feature | Go (Goroutines) | Rust (Threads) |
---|---|---|
Resource Usage | Lightweight (~KB per goroutine) | Heavier (~MB per thread) |
Scalability | Thousands easily | Limited by OS resources |
Use Case | I/O-bound (e.g., web servers) | CPU-bound (e.g., computations) |
Ease of Use | Simple go keyword | More setup with spawn and join |
Takeaway: Goroutines win for simplicity and scalability in I/O-heavy apps. Rust threads shine when you need control for compute-heavy tasks.
Channels vs. Message Passing: Talking Between Tasks
Go’s Channels
Go uses channels for safe communication between goroutines. Channels are typed, first-class constructs that let you send and receive data without shared memory, avoiding race conditions.
Here’s an example of goroutines coordinating via a channel:
```go
package main

import (
    "fmt"
    "time"
)

func worker(id int, ch chan string) {
    time.Sleep(100 * time.Millisecond)
    ch <- fmt.Sprintf("Worker %d done", id)
}

func main() {
    ch := make(chan string, 3) // Buffered channel
    for i := 1; i <= 3; i++ {
        go worker(i, ch)
    }
    for i := 1; i <= 3; i++ {
        fmt.Println(<-ch)
    }
}
```
The chan type declares a channel, make creates it, and the <- operator sends or receives data. The buffered channel here holds up to three messages, so no worker blocks on send. Key point: Channels make synchronization straightforward, reducing bugs.
Rust’s Message Passing
Rust offers channels via std::sync::mpsc (multiple producer, single consumer). Like Go, Rust channels avoid shared memory, but they require more setup due to ownership rules.
Here’s a Rust equivalent:
```rust
use std::sync::mpsc;
use std::thread;
use std::time::Duration;

fn worker(id: i32, tx: mpsc::Sender<String>) {
    thread::sleep(Duration::from_millis(100));
    tx.send(format!("Worker {} done", id)).unwrap();
}

fn main() {
    let (tx, rx) = mpsc::channel();
    for i in 1..=3 {
        let tx = tx.clone();
        thread::spawn(move || {
            worker(i, tx);
        });
    }
    for _ in 1..=3 {
        println!("{}", rx.recv().unwrap());
    }
}
```
Rust’s mpsc::channel creates a sender (tx) and receiver (rx). Cloning tx lets multiple threads send messages. Key point: Rust’s channels enforce safety through ownership, but they’re less intuitive than Go’s.
Resource: For more on Rust's channels, see the official Rust documentation on message passing.
Synchronization: Keeping Things in Order
Go’s WaitGroups and Mutexes
Go provides sync.WaitGroup for coordinating goroutines and sync.Mutex for protecting shared data. WaitGroups are simpler than channels for basic “wait for completion” tasks.
Example with WaitGroup:
```go
package main

import (
    "fmt"
    "sync"
)

func task(id int, wg *sync.WaitGroup) {
    defer wg.Done()
    fmt.Printf("Task %d running\n", id)
}

func main() {
    var wg sync.WaitGroup
    for i := 1; i <= 3; i++ {
        wg.Add(1)
        go task(i, &wg)
    }
    wg.Wait()
    fmt.Println("All tasks done")
}
```
Key point: WaitGroups are lightweight for synchronization, while mutexes handle shared state (though channels are often preferred).
Rust’s Mutex and Condvar
Rust uses Mutex for locking and Condvar for condition-based waiting. These are lower-level than Go’s tools, aligning with Rust’s control-oriented philosophy.
Example with Mutex:
```rust
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    let counter = Arc::new(Mutex::new(0));
    let mut handles = vec![];
    for i in 1..=3 {
        let counter = Arc::clone(&counter);
        let handle = thread::spawn(move || {
            let mut num = counter.lock().unwrap();
            *num += i;
        });
        handles.push(handle);
    }
    for handle in handles {
        handle.join().unwrap();
    }
    println!("Final count: {}", *counter.lock().unwrap());
}
```
Arc (Atomic Reference Counting) enables safe sharing across threads. Key point: Rust’s synchronization is explicit and safe but requires more code.
Error Handling in Concurrent Code
Go’s Error Simplicity
Go handles errors explicitly with return values, even in concurrent code. Channels often carry errors, keeping things consistent.
Example:
```go
package main

import "fmt"

func riskyTask(id int, ch chan error) {
    if id == 2 {
        ch <- fmt.Errorf("worker %d failed", id)
        return
    }
    ch <- nil // nil signals success
}

func main() {
    ch := make(chan error, 3)
    for i := 1; i <= 3; i++ {
        go riskyTask(i, ch)
    }
    for i := 1; i <= 3; i++ {
        if err := <-ch; err != nil {
            fmt.Println(err)
        }
    }
}
```
Key point: Go’s error handling is predictable, though it can feel repetitive.
Rust’s Result and Panic
Rust uses Result for errors and panic! for unrecoverable failures. In concurrent code, Result integrates with channels or join.
Example:
```rust
use std::sync::mpsc;
use std::thread;

fn risky_task(id: i32, tx: mpsc::Sender<Result<(), String>>) {
    if id == 2 {
        tx.send(Err(format!("Worker {} failed", id))).unwrap();
    } else {
        tx.send(Ok(())).unwrap();
    }
}

fn main() {
    let (tx, rx) = mpsc::channel();
    for i in 1..=3 {
        let tx = tx.clone();
        thread::spawn(move || risky_task(i, tx));
    }
    for _ in 1..=3 {
        match rx.recv().unwrap() {
            Ok(()) => {}
            Err(e) => println!("{}", e),
        }
    }
}
```
Key point: Rust’s type system catches errors early, but it’s more complex.
Performance: Speed and Scalability
Go’s Runtime Advantage
Go’s runtime schedules goroutines efficiently, multiplexing them onto OS threads. This makes Go great for I/O-bound workloads like web servers. However, CPU-bound tasks may hit limits due to the runtime’s overhead.
Rust’s Zero-Cost Abstractions
Rust’s concurrency carries no runtime scheduler or garbage collector, compiling to lean machine code. It excels in CPU-bound workloads like simulations or cryptography. However, spawning one OS thread per task for I/O-heavy workloads can be less efficient, which is the gap async runtimes fill.
Comparison Table:
Workload Type | Go Advantage | Rust Advantage |
---|---|---|
I/O-Bound | Scales with goroutines | Thread overhead limits scale |
CPU-Bound | Runtime adds slight overhead | No runtime, max performance |
Takeaway: Choose Go for network-heavy apps, Rust for compute-heavy ones.
Ecosystem and Libraries
Go’s Simplicity
Go’s standard library (sync, net/http) covers most concurrency needs. External modules like golang.org/x/sync add utilities such as errgroup. Key point: Go’s ecosystem is minimal but sufficient.
Rust’s Flexibility
Rust’s std::sync provides core primitives, but crates like tokio and rayon dominate for async and parallel tasks. Key point: Rust’s ecosystem is richer but requires learning third-party tools.
Resource: Explore the tokio documentation for async concurrency in Rust.
When to Choose What
- Pick Go if you want simplicity and speed of development for I/O-heavy apps (e.g., APIs, microservices). Goroutines and channels get you far with minimal fuss.
- Pick Rust if you need maximum performance and safety for CPU-heavy or low-level systems (e.g., game engines, blockchain). Its ownership model prevents bugs but demands more effort.
Final Table:
Aspect | Go Wins When | Rust Wins When |
---|---|---|
Learning Curve | Quick to learn | Steeper but rewarding |
Safety | Channels reduce bugs | Ownership eliminates races |
Performance | Great for I/O | Best for CPU |
Ecosystem | Simple, built-in tools | Flexible with crates |
In Conclusion
Go and Rust both tackle concurrency well, but they cater to different needs. Go’s goroutines and channels make concurrent code feel like a breeze, perfect for networked apps where simplicity matters. Rust’s threads and ownership give you unmatched control and safety, ideal for performance-critical systems where every cycle counts. Try both with small projects to see what clicks for you.
Have you used Go or Rust for concurrency? Share your thoughts or questions below—I’d love to hear what you’re building!