Kavya Joshi on July 14, 2017
Liveblog by Jaana B. Dogan (@rakyll)
Kavya writes Go for a living at a start-up in San Francisco. She's primarily a backend/systems developer, but has of late been dabbling in firmware too. Her favorite aspects of being a programmer are reasoning about systems at scale, and delving into the inner workings of sophisticated software. When not programming, she tends to spend her time on rock walls and mountain tops. Before moving to San Francisco to be an Adult, Kavya was at MIT where she got a Bachelor's and Master's in Computer Science.
Channels provide a simple mechanism for goroutines to communicate, and a powerful construct to build sophisticated concurrency patterns. We will delve into the inner workings of channels and channel operations, including how they're supported by the runtime scheduler and memory management systems.
A fair number of us use Go for its concurrency features. Channels are not just useful; they are interesting.
We can implement a simple task queue with just channels.
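As a rough illustration of that idea, here is a minimal task-queue sketch. The Task type, worker function, and channel capacity are invented for this example, not taken from the talk:

```go
package main

import "fmt"

// Task is a hypothetical work item used only for this sketch.
type Task struct {
	ID int
}

// worker drains the task queue until it is closed, then signals done.
func worker(tasks <-chan Task, done chan<- struct{}) {
	for t := range tasks {
		fmt.Println("processing task", t.ID)
	}
	done <- struct{}{}
}

func main() {
	tasks := make(chan Task, 3) // buffered channel acting as the queue
	done := make(chan struct{})

	go worker(tasks, done)

	for i := 0; i < 5; i++ {
		tasks <- Task{ID: i} // enqueue; blocks if the buffer is full
	}
	close(tasks) // no more work; the worker's range loop ends
	<-done       // wait for the worker to finish
}
```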
Channels are interesting because they are goroutine-safe, they store and pass values between goroutines with FIFO semantics, and they can cause goroutines to block and unblock. Even though we understand all of these properties, it's worth taking a second to appreciate them and to ask how they actually work.
This talk is about how channels are built under the hood: the hchan struct, how channel operations work, and how they are supported by the runtime scheduler.
First, you need to create a channel before you can use it. make can create both buffered and unbuffered channels. This talk is primarily about buffered channels.
A buffered channel with a capacity of 3: ch := make(chan Task, 3)
make(chan Task, 3) allocates an hchan struct on the heap, initializes it, and returns a pointer to it. This is why we can pass channels between functions.
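To make that concrete, here is a simplified, user-level analogue of hchan. It is a sketch only: the real struct lives in runtime/chan.go, also tracks queues of waiting senders and receivers, and uses the runtime's own mutex rather than sync.Mutex.

```go
package main

import (
	"fmt"
	"sync"
)

// hchanSketch is a simplified analogue of the runtime's hchan: a circular
// buffer plus book-keeping indices, all guarded by a lock. It is only an
// illustration; the real hchan also holds queues of waiting goroutines.
type hchanSketch struct {
	buf   []interface{} // circular buffer of up to capacity elements
	count int           // number of elements currently buffered
	sendx int           // index the next send writes to
	recvx int           // index the next receive reads from
	lock  sync.Mutex    // protects every field above
}

// makeChanSketch mirrors what make(chan T, n) does conceptually: allocate the
// struct on the heap and hand back a pointer, which is why channel values are
// cheap to pass between functions.
func makeChanSketch(capacity int) *hchanSketch {
	return &hchanSketch{buf: make([]interface{}, capacity)}
}

// trySend enqueues v if there is room, wrapping sendx around the buffer.
// Where this sketch returns false, the real runtime parks the sender instead.
func (c *hchanSketch) trySend(v interface{}) bool {
	c.lock.Lock()
	defer c.lock.Unlock()
	if c.count == len(c.buf) {
		return false
	}
	c.buf[c.sendx] = v
	c.sendx = (c.sendx + 1) % len(c.buf)
	c.count++
	return true
}

func main() {
	ch := makeChanSketch(3)
	for i := 0; i < 4; i++ {
		fmt.Println("send", i, "ok:", ch.trySend(i)) // fourth send finds the buffer full
	}
}
```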
Assume G1 keeps sending and G2 takes a long time to process each task. When the channel's buffer is full, G1's execution is paused. How does pausing work?
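A small runnable sketch of that situation (the Task type and the timings are invented for illustration): the fast sender outruns the slow receiver, and once three tasks sit in the buffer the next send blocks until the receiver frees a slot.

```go
package main

import (
	"fmt"
	"time"
)

// Task stands in for whatever work items the queue carries.
type Task struct{ ID int }

func main() {
	ch := make(chan Task, 3)

	// G1: sends faster than G2 receives. Once the buffer holds three tasks,
	// the next send blocks and the runtime pauses this goroutine.
	go func() {
		for i := 0; i < 6; i++ {
			ch <- Task{ID: i}
			fmt.Println("G1 sent", i)
		}
		close(ch)
	}()

	// G2: a deliberately slow receiver.
	for t := range ch {
		time.Sleep(100 * time.Millisecond)
		fmt.Println("G2 processed", t.ID)
	}
}
```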
When a goroutine needs to be paused, the channel implementation calls into the runtime scheduler to park G1. The scheduler changes G1's state from running to waiting and schedules another goroutine onto the OS thread.
(She also explains what happens when G2 needs to be parked and then resumed again.)
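One way to see that it is the goroutine, not the OS thread, that gets paused is this contrived sketch, pinned to a single P via GOMAXPROCS purely for the demonstration:

```go
package main

import (
	"fmt"
	"runtime"
	"time"
)

func main() {
	runtime.GOMAXPROCS(1) // a single P, so everything shares one running thread

	ch := make(chan int, 1)
	ch <- 1 // fill the buffer so the next send will block

	// This goroutine blocks on a full channel: the scheduler parks it,
	// but the underlying thread stays free to run other goroutines.
	go func() {
		ch <- 2
		fmt.Println("blocked sender resumed")
	}()

	// This goroutine keeps making progress on the same thread while the
	// sender above is parked, then receives to unblock it.
	go func() {
		for i := 0; i < 3; i++ {
			fmt.Println("other work", i)
			time.Sleep(10 * time.Millisecond)
		}
		<-ch
	}()

	time.Sleep(100 * time.Millisecond)
}
```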
When G1 finally comes along to send, the obvious path would be to acquire the lock, put the value in the buffer, and wake G2, which would then have to lock the channel again to read the value out. But the runtime is actually much smarter and makes this less costly: it can copy the value directly to the receiver's stack. G1 writes directly to G2's stack, so no extra lock acquisition is needed on the wake-up path.
On resuming, G2 does not need to acquire the channel lock or manipulate the buffer. This also means one fewer memory copy.
Not in the scope of this talk. The Go runtime is written in Go, and you can read the source code to learn more.
Why are channels implemented the way they are? Two main reasons. Simplicity: a queue with a lock is preferred to a lock-free implementation. Performance: the goroutine wake-up path is lockless, and there are potentially fewer memory copies.
In the implementation of channels, there's an astute trade-off between simplicity and performance.