When working with concurrency in Go, channels are a core part of how goroutines communicate and synchronize. But not all channels behave the same: one of the first design decisions you’ll face is whether to use a buffered or an unbuffered channel.
In this post, we’ll dive deep into the differences between the two, explore how they behave under the hood, and help you decide which one to use and when.
What Is an Unbuffered Channel?
An unbuffered channel is the simplest form of a channel. It has no internal storage, meaning a send operation will block until another goroutine is ready to receive from that channel.
Unbuffered channels act as a synchronization point between two goroutines. The sender pauses at the send line until the receiver is ready, and vice versa. This behavior ensures that both goroutines meet at the channel, making them perfect for building tightly coordinated workflows.
They also allow you to communicate between goroutines without sharing memory. Instead of using a shared variable protected by a sync.Mutex, you can pass data directly through the channel, following Go’s philosophy of:
“Don’t communicate by sharing memory. Share memory by communicating.”
This reduces the complexity of locking, avoids race conditions and makes concurrent code easier to understand and maintain.
package main

import (
	"fmt"
)

func main() {
	// Create a new unbuffered channel:
	ch := make(chan string)

	// Using a new goroutine, send data to the channel:
	go func() {
		ch <- "hello world!"
	}()

	// Read data from the channel:
	msg := <-ch

	// Print received data:
	fmt.Println(msg)
}
Executing this program, we see the following output:
hello world!
What is happening here?
- In the main goroutine, we create a new unbuffered channel called ch of type string.
- We then launch a new goroutine that attempts to send a message to the channel.
- Meanwhile, the main goroutine continues execution and reaches the line where it tries to receive from the channel.
- At that point, both goroutines are paused: the sender is waiting for a receiver and the receiver is waiting for a value to arrive.
- Since a receiver is now ready, the send operation succeeds and the sending goroutine completes.
- The main goroutine receives the message and continues execution.
- Finally, we print the received message in the main goroutine.
As you can see, we’re using the channel not just to pass data between goroutines, but also to synchronize their execution: neither side proceeds until the other is ready. This is the key behavior of unbuffered channels, and it allows goroutines to coordinate safely without sharing memory.
What Is a Buffered Channel?
A buffered channel, unlike an unbuffered one, comes with internal capacity: a limited number of slots where values can be stored temporarily. This means that a send operation won’t block immediately, as long as the buffer isn’t full.
ch := make(chan string, 2) // buffer size: 2
ch <- "first"
ch <- "second"
// ch <- "third" // would block here — buffer is full
In this example, we can send two values into the channel without needing a receiver to read them right away. The third send would block because the channel’s buffer is now full.
How Is This Different?
The key distinction is that unbuffered channels require a receiver to be ready at the exact time the sender tries to send. Otherwise, both block. In contrast, buffered channels allow you to decouple the sender and the receiver to some extent.
This makes buffered channels useful when:
- You have producers that generate data faster than consumers can process.
- You want to prevent blocking in short bursts of activity.
- You need to queue up work temporarily.
However, this decoupling is limited. Once the buffer fills up, the channel behaves just like an unbuffered one: the sender will block until a receiver reads and frees up space.
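To make the "blocks once full" behavior concrete, here’s a minimal sketch using a select with a default case, which is Go’s idiom for a non-blocking send. The trySend helper name is an invention of this example, not a standard API:

```go
package main

import "fmt"

// trySend attempts a non-blocking send and reports whether it succeeded.
func trySend(ch chan string, msg string) bool {
	select {
	case ch <- msg:
		return true
	default:
		return false
	}
}

func main() {
	ch := make(chan string, 2) // room for two values

	fmt.Println(trySend(ch, "first"))  // true: buffer has room
	fmt.Println(trySend(ch, "second")) // true: buffer is now full
	fmt.Println(trySend(ch, "third"))  // false: a plain send would block here
}
```

The default case fires exactly when a plain `ch <- msg` would block, so this makes the buffer boundary observable without deadlocking the program.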
Internal Behavior
Buffered channels are backed by an internal FIFO queue. When you send a value:
- If the buffer has room, the value is enqueued and the send returns immediately.
- If the buffer is full, the sender blocks until a receiver reads a value.
- Receivers dequeue values in the same order they were sent.
So while buffered channels give you more flexibility, they do not eliminate blocking entirely; they just delay it.
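A quick sketch of the FIFO guarantee described above; the drain helper is invented for this example:

```go
package main

import "fmt"

// drain receives n values from ch and returns them in arrival order.
func drain(ch chan int, n int) []int {
	out := make([]int, 0, n)
	for i := 0; i < n; i++ {
		out = append(out, <-ch)
	}
	return out
}

func main() {
	ch := make(chan int, 3)
	ch <- 1
	ch <- 2
	ch <- 3
	// Values come out in exactly the order they went in:
	fmt.Println(drain(ch, 3))
}
```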
Think of a buffered channel like a mailbox:
- You can drop letters (messages) in the mailbox without needing the recipient to be standing there.
- But once the mailbox is full, you have to wait for the recipient to empty it before you can drop more letters.
Unbuffered channels, by contrast, are like hand-delivering the letter: the recipient has to be there at the exact same time.
When to Use Unbuffered Channels
Use unbuffered channels when:
- You need synchronization: a clear “send and wait” behavior.
- You want to make sure that every message is received immediately.
- You’re building pipelines where each stage must process before the next continues.
They make code simpler and easier to reason about, especially in tightly coupled concurrent flows.
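As a small illustration of the pipeline idea, here is a sketch of two stages connected by unbuffered channels, so each stage hands a value to the next only when it is ready. The generate and square stage names are invented for this example:

```go
package main

import "fmt"

// generate emits the given numbers on an unbuffered channel, then closes it.
func generate(nums ...int) <-chan int {
	out := make(chan int)
	go func() {
		for _, n := range nums {
			out <- n // blocks until the next stage is ready to receive
		}
		close(out)
	}()
	return out
}

// square reads from in, squares each value, and forwards it downstream.
func square(in <-chan int) <-chan int {
	out := make(chan int)
	go func() {
		for n := range in {
			out <- n * n
		}
		close(out)
	}()
	return out
}

func main() {
	for v := range square(generate(1, 2, 3)) {
		fmt.Println(v)
	}
}
```

Because every channel here is unbuffered, the generator can never run ahead of the squaring stage: each stage processes a value before the previous one continues.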
When to Use Buffered Channels
Use buffered channels when:
- You want to avoid blocking the sender immediately.
- You expect bursts of messages and want to queue them temporarily.
- You want to reduce contention between producers and consumers.
- You’re building things like worker pools, loggers or event queues.
But beware: larger buffers don’t magically solve problems. You’re just moving the bottleneck elsewhere and you’ll still need to handle full buffers correctly.
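As a rough sketch of the worker-pool use case, the following example fans work out to a few goroutines over a buffered job queue. The runPool helper and the doubling stand-in for "work" are assumptions of this example:

```go
package main

import (
	"fmt"
	"sync"
)

// runPool processes jobs with nWorkers goroutines and returns the results.
func runPool(jobs []int, nWorkers int) []int {
	in := make(chan int, len(jobs))  // buffered: all work can be queued up front
	out := make(chan int, len(jobs)) // buffered: workers never block on results
	var wg sync.WaitGroup

	for w := 0; w < nWorkers; w++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for j := range in {
				out <- j * 2 // stand-in for real work
			}
		}()
	}

	for _, j := range jobs {
		in <- j
	}
	close(in) // no more work: lets the workers' range loops finish
	wg.Wait()
	close(out)

	results := []int{}
	for r := range out {
		results = append(results, r)
	}
	return results
}

func main() {
	fmt.Println(runPool([]int{1, 2, 3, 4}, 2)) // order depends on scheduling
}
```

Note the ownership discipline: the producer closes the job channel, and the results channel is only closed after every worker has exited.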
Common Issues with Channels and Concurrency
Go makes concurrency accessible, but it’s still easy to introduce subtle bugs that only show up under load or after long runtimes. Let’s look at some of the most common issues developers run into when working with goroutines and channels.
Deadlocks
A deadlock happens when goroutines are waiting on each other forever, usually due to a blocked channel operation.
func main() {
	ch := make(chan string)
	ch <- "hello" // blocks forever — no receiver!
}
In this case, there’s no goroutine reading from the channel, so the program blocks indefinitely and the runtime aborts with:
fatal error: all goroutines are asleep - deadlock!
Solution: Make sure every send has a corresponding receiver, or use a buffered channel when appropriate (which is rarely the right fix for a deadlock).
Goroutine Leaks
Goroutines that never exit are known as leaks. They consume memory and scheduling resources indefinitely, often because they’re blocked waiting on something that never arrives.
func worker(ch <-chan int) {
	for {
		data := <-ch // blocks forever if no data is sent and channel is never closed
		fmt.Println(data)
	}
}
If no one sends to ch or if the channel is left open forever, this goroutine never exits.
Solution: Set up cancellation logic (using context.Context for example) and ensure goroutines have a clear exit path.
Not Closing Channels
Failing to close a channel is not always a bug, but in some cases it can prevent receivers from knowing that no more data is coming. If you know you have finished sending on a channel, make sure to close it at the right point.
func main() {
	ch := make(chan int)
	go func() {
		for i := 0; i < 3; i++ {
			ch <- i
		}
		// close(ch) // <- without this, the loop below blocks
	}()
	for v := range ch {
		fmt.Println(v)
	}
}
In this case, range ch will wait forever after the last value unless ch is closed.
Solution: Close channels when no more values will be sent. And remember that only the sender should close the channel, never the receiver.
Writing to a Closed Channel
Once a channel is closed, writing to it will panic:
ch := make(chan int)
close(ch)
ch <- 1 // panic: send on closed channel
Solution: Ensure that you only close a channel once and that no goroutines are still trying to send after it’s been closed.
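One way to make the receiving side robust is the comma-ok form of receive, which reports whether the channel is still open. A small sketch; note that buffered values remain readable even after close:

```go
package main

import "fmt"

func main() {
	ch := make(chan int, 1)
	ch <- 7
	close(ch) // the sender closes: no more values will arrive

	v, ok := <-ch
	fmt.Println(v, ok) // 7 true: the buffered value is still readable

	v, ok = <-ch
	fmt.Println(v, ok) // 0 false: channel closed and drained
}
```

This is also why `for v := range ch` is usually preferable on the receiving side: it handles the closed case for you.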
Anonymous Goroutines
Declaring goroutines directly inside other functions (especially main() or handler functions) binds them to the parent’s scope. This has several implications:
- You can’t test the goroutine independently. You’re forced to test the entire function that contains it, often involving unnecessary setup or dependencies.
- Variable sharing is implicit. It’s very easy to unintentionally capture shared variables like counters, slices or maps, especially in loops, leading to data races.
- Reusability suffers. You can’t reuse that goroutine logic elsewhere, since it’s tied to a specific place in your codebase.
Instead of this:
func main() {
	ch := make(chan string)
	go func(ch chan string) {
		ch <- "hello world!"
	}(ch)
	msg := <-ch
	fmt.Println(msg)
}
You should do this:
func sendGreeting(ch chan string) {
	ch <- "hello world!"
}

func main() {
	ch := make(chan string)
	go sendGreeting(ch)
	msg := <-ch
	fmt.Println(msg)
}
This version is:
- Easier to test: you can call sendGreeting() directly in a test.
- Easier to read and understand.
- Free from hidden scope traps.
In short, anonymous goroutines are fine for quick, one-off operations but for anything non-trivial, extracting them into named functions will make your code more robust, testable and maintainable.
How to Catch These Bugs
Use Go’s built-in tools to catch concurrency issues early:
- go vet: warns about common mistakes like misuse of range and close.
- go run -race: detects race conditions and unsafe concurrent access to memory.
- Logging and timeouts: can help detect stuck goroutines or channels not receiving/sending as expected.
Final Advice
Most concurrency bugs come down to missing coordination: goroutines doing something forever, channels that aren’t fully consumed or the wrong assumptions about send/receive timing.
Design with lifecycle and ownership in mind:
- Who starts a goroutine?
- Who stops it?
- Who owns the channel?
- Who’s responsible for closing it?
Being intentional about these will save you hours of debugging down the line.
Passing the Channel vs Closing Over It
There are two common ways to send data to a channel from a goroutine. While both may look similar, they behave slightly differently in terms of code clarity and function design.
Let’s take a look.
Closing Over the Channel (Capturing from Outer Scope)
ch := make(chan string)
go func() {
	ch <- "hello world!"
}()
msg := <-ch
fmt.Println(msg)
In this example, the anonymous function captures the ch variable from the outer scope. It’s using a closure. This is concise and perfectly valid, especially in small programs or quick tests.
However, it can lead to tighter coupling between the function and the surrounding code. If the function grows or is reused, it will depend on variables defined outside its body.
Passing the Channel as a Parameter
ch := make(chan string)
go func(ch chan string) {
	ch <- "hello world!"
}(ch)
msg := <-ch
fmt.Println(msg)
Here, we’re explicitly passing the channel as a parameter to the anonymous function. This is generally considered more idiomatic and promotes better encapsulation.
Why is this better?
- Clear dependencies: It’s immediately obvious what this function needs to work. No hidden reliance on external variables.
- Easier to test: When logic is extracted into a named function, you can test it in isolation without needing to run the entire surrounding context.
- Avoids scope coupling: Anonymous goroutines declared inline often access outer-scope variables, which leads to unintended sharing of mutable state. One of the most common sources of race conditions and subtle bugs.
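To illustrate the loop-variable point, here is a sketch that passes the loop variable as a parameter so each goroutine works on its own copy (Go 1.22 changed loop-variable scoping so each iteration gets a fresh variable, but passing the value explicitly keeps the dependency clear on any version; the squares helper is invented for this example):

```go
package main

import (
	"fmt"
	"sync"
)

// squares launches one goroutine per value, passing the loop variable
// as a parameter so each goroutine gets its own private copy.
func squares(n int) []int {
	var wg sync.WaitGroup
	out := make(chan int, n) // buffered: goroutines never block on send
	for i := 0; i < n; i++ {
		wg.Add(1)
		go func(k int) { // k is a copy of i at this iteration
			defer wg.Done()
			out <- k * k
		}(i)
	}
	wg.Wait()
	close(out)

	res := []int{}
	for v := range out {
		res = append(res, v)
	}
	return res
}

func main() {
	fmt.Println(squares(3)) // contains 0, 1 and 4, in scheduler order
}
```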
Conclusion
Understanding the difference between buffered and unbuffered channels is essential for writing correct and efficient concurrent code in Go.
- Use unbuffered channels when you need synchronous communication and want goroutines to wait on each other.
- Use buffered channels when you want to decouple sender and receiver timing, allow short-term queuing, or optimize throughput.
But remember: more buffering doesn’t mean better performance. Channels are a coordination tool, not just a message queue. Misusing them can lead to deadlocks, goroutine leaks, and hard-to-find race conditions.
Always design with clarity in mind:
- Who owns the data?
- Who starts the goroutine?
- Who is responsible for reading, writing and closing the channel?
By being intentional with these choices, you’ll avoid most of the common pitfalls that plague concurrent Go code.
In upcoming posts, we’ll explore how to combine channels, contexts and goroutines to build resilient and scalable systems. Stay tuned.
See you next time!
Fede.