Bas van den Heuvel
2024-04-24
Channels are first-class citizens
For example, a channel can accept channels of integers (a minimal sketch follows below). This facility enables complex concurrent programming patterns.
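A minimal sketch of a channel that carries channels of integers (the setup is illustrative, not part of the original example):

```go
package main

import "fmt"

func main() {
	// A channel whose elements are themselves channels of integers.
	ch := make(chan chan int, 1)

	inner := make(chan int, 1)
	inner <- 42
	ch <- inner // send a whole channel as a value

	received := <-ch        // receive the inner channel...
	fmt.Println(<-received) // ...and then a value from it: prints 42
}
```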
Multiple clients make requests to a worker on a “public” channel, which acknowledges successful processing on a “private” channel.
```go
package main

import "fmt"
import "time"

type Request struct {
	id  int
	ack chan int
}

func worker(req chan Request) {
	var c Request
	for {
		c = <-req
		fmt.Printf("request received from %d \n", c.id)
		time.Sleep(1 * 1e9)
		fmt.Println("notify")
		c.ack <- 1
	}
}

func client(id int, req chan Request) {
	var ack = make(chan int)
	for {
		c := Request{id, ack}
		req <- c
		<-ack
	}
}

func main() {
	var req = make(chan Request)
	go worker(req)
	go client(1, req)
	client(2, req)
}
```
A more concrete example based on the clients/worker example.
A possible implementation in Go:
```go
package main

import "fmt"
import "time"

const (
	NUMBER_OF_CHAIRS = 8
)

type Request struct {
	id  int
	ack chan int
}

// The worker
func barber(queue chan Request) {
	for {
		req := <-queue
		fmt.Printf("BARBER: Serving customer %d \n", req.id)
		time.Sleep(1 * 1e9)
		fmt.Printf("BARBER: Done with customer %d \n", req.id)
		req.ack <- 1
	}
}

// The clients
func customer(queue chan Request, id int) {
	var ack = make(chan int)
	for {
		fmt.Printf("CUSTOMER: %d wants haircut \n", id)
		req := Request{id, ack}
		queue <- req
		fmt.Printf("CUSTOMER: %d sits on chair \n", id)
		<-ack
		fmt.Printf("CUSTOMER: %d served by barber \n", id)
		time.Sleep(1 * 1e9)
	}
}

func main() {
	var queue = make(chan Request, NUMBER_OF_CHAIRS)
	go customer(queue, 1)
	go customer(queue, 2)
	barber(queue)
}
```
Note: in the following, we consider three channels `ch1`, `ch2` and `ch3`.

First try: receive from `ch1` first and then from `ch2`, in sequence (sketched below). What if there is no sender on `ch1` but there is one on `ch2`? Then the program gets stuck. It gets stuck again if there is no sender on `ch2` but there is one on `ch1`.
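A possible shape of such a sequential "first try" (a reconstructed sketch; the concrete setup in `main` is an assumption):

```go
package main

import "fmt"

// firstTry receives from ch1 first and only then from ch2.
// If no sender is ready on ch1, we block there even if ch2 is ready.
func firstTry(ch1, ch2 chan int) {
	x := <-ch1
	y := <-ch2
	fmt.Println(x, y)
}

func main() {
	ch1 := make(chan int)
	ch2 := make(chan int)
	go func() { ch2 <- 2 }() // only ch2 has a sender
	firstTry(ch1, ch2)       // stuck on ch1; the runtime detects the deadlock
}
```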
The `select` primitive allows simultaneous waiting for multiple events:

```
select {
case x = <-ch1:
	...
case y = <-ch2:
	...
case ch3 <- 1:
	...
	// default and timeout possible
}
```
`select` works as follows:

- `select` blocks if all events (cases) block.
- If one event (case) occurs, the corresponding case is chosen.
- If multiple events (cases) occur, one of the corresponding cases is chosen randomly.
- The remaining cases are no longer available!

The example below demonstrates that `select` supports mixed communication, in that it can manage several send and receive events.
```go
package main

import "fmt"

func sel(ch1, ch2, ch3 chan int) {
	select {
	case x := <-ch1:
		fmt.Printf("\n ?ch1 = %d", x)
	case y := <-ch2:
		fmt.Printf("\n ?ch2 = %d", y)
	case ch3 <- 1:
		fmt.Printf("\n !ch3")
	}
}

// Case selection is "random".
func test1() {
	ch1 := make(chan int)
	ch2 := make(chan int, 1)
	ch3 := make(chan int)
	go func() {
		ch1 <- 1
	}()
	go func() {
		ch2 <- 2
	}()
	go func() {
		<-ch3
	}()
	sel(ch1, ch2, ch3)
}

// Events that were not chosen remain available.
func test2() {
	ch1 := make(chan int)
	ch2 := make(chan int, 1)
	ch3 := make(chan int)
	go func() {
		ch1 <- 1
	}()
	go func() {
		ch2 <- 2
	}()
	go func() {
		<-ch3
	}()
	sel(ch1, ch2, ch3)
	sel(ch1, ch2, ch3)
	fmt.Printf("\n")
}

func main() {
	for {
		test1()
		// test2()
	}
}
```
Consider:
```go
package main

import "fmt"
import "time"

func sel(x time.Duration, a, b chan int) {
	as := 0
	bs := 0
	for {
		select {
		case <-a:
			as++
			fmt.Printf("A(%d/%d)", as, bs)
		case <-b:
			bs++
			fmt.Printf("B(%d/%d)", as, bs)
		}
		time.Sleep(x)
	}
}

func snd(c chan int) {
	for {
		c <- 1
	}
}

func main() {
	a := make(chan int)
	b := make(chan int)
	go snd(a)
	go snd(b)
	sel(1e6, a, b)
}
```
How can we prioritize a case?
`select` in Newsreader

```go
package main

import "fmt"

func reuters(ch chan string) {
	ch <- "REUTERS"
}

func bloomberg(ch chan string) {
	ch <- "BLOOMBERG"
}

func newsReaderWithThreads(reutersCh chan string, bloombergCh chan string) {
	ch := make(chan string)
	go func() {
		y := <-reutersCh
		ch <- y
	}()
	go func() {
		y := <-bloombergCh
		ch <- y
	}()
	x := <-ch
	fmt.Printf("got news from %s \n", x)
}

func newsReaderWithSelect(reutersCh chan string, bloombergCh chan string) {
	var x string
	select {
	case x = <-reutersCh:
	case x = <-bloombergCh:
	}
	fmt.Printf("got news from %s \n", x)
}

func test() {
	reutersCh := make(chan string)
	bloombergCh := make(chan string)
	go reuters(reutersCh)
	go bloomberg(bloombergCh)
	newsReaderWithThreads(reutersCh, bloombergCh)
	newsReaderWithThreads(reutersCh, bloombergCh)
}

func main() {
	test()
}
```
We try to emulate a `select` with helper threads. The idea is that there are always two threads waiting for a message from Reuters or Bloomberg. This message is then forwarded to the Newsreader. See `newsReaderWithThreads`.

Yet, there is a problem if there are multiple Newsreaders. The Newsreader expects only one message (either from Reuters or from Bloomberg). Both messages are fetched from the respective message channel. As only one message is processed, the other message remains unused and is "discarded". Hence, further calls to `newsReaderWithThreads` lead to deadlock.

The situation is different for `newsReaderWithSelect`. Thanks to `select`, only one of the messages is fetched from the Reuters or Bloomberg channels. E.g., if the first call to `newsReaderWithSelect` fetches the Reuters message, then the second call can only fetch the Bloomberg message.
- `select` picks one case randomly.
- If none of the cases occurs, `select` blocks.
- Using a timeout, the `select` can be unblocked (see the barrier example below).
- It is also possible to prevent blocking using `default`.

Consider the following example (sketched below): if none of the first two cases occurs, then the third (`default`) case is selected. Using `default`, events can also be prioritized; see the sleeping barber exercise.
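A possible shape of such a `select` with a `default` case, together with a `default`-based pattern that prioritizes one channel over another (the function names and the setup in `main` are illustrative assumptions):

```go
package main

import "fmt"

// nonBlocking: if neither ch1 nor ch2 is ready, the default case is
// selected immediately and the select does not block.
func nonBlocking(ch1, ch2 chan int) {
	select {
	case x := <-ch1:
		fmt.Println("ch1:", x)
	case y := <-ch2:
		fmt.Println("ch2:", y)
	default:
		fmt.Println("nothing ready")
	}
}

// prioritized prefers channel a: only if a is not immediately ready
// do we wait on a and b together.
func prioritized(a, b chan int) {
	select {
	case <-a:
		fmt.Println("A")
	default:
		select {
		case <-a:
			fmt.Println("A")
		case <-b:
			fmt.Println("B")
		}
	}
}

func main() {
	ch1 := make(chan int, 1)
	ch2 := make(chan int, 1)
	nonBlocking(ch1, ch2) // prints "nothing ready"
	ch2 <- 2
	nonBlocking(ch1, ch2) // prints "ch2: 2"

	a := make(chan int, 1)
	b := make(chan int, 1)
	a <- 1
	b <- 1
	prioritized(a, b) // prints "A": a is preferred even though b is ready
}
```

The barrier example below illustrates the timeout variant mentioned above: three tasks run concurrently, and we wait for all of them to finish, giving up after a timeout.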
```go
package main

import "fmt"
import "time"

func task1() { time.Sleep(1 * 1e9) }
func task2() { time.Sleep(2 * 1e9) }
func task3() { time.Sleep(3 * 1e9) }

func barrier() {
	var ch = make(chan int)
	// run all three tasks concurrently
	go func() {
		task1()
		ch <- 1 // signal done
	}()
	go func() {
		task2()
		ch <- 1
	}()
	go func() {
		task3()
		ch <- 1
	}()
	// collect results concurrently
	timeout := time.After(4 * 1e9)
	for i := 0; i < 3; i++ {
		select {
		case <-ch:
		case <-timeout:
			fmt.Println("timed out")
			return
		}
	}
	fmt.Println("done")
}

func main() {
	barrier()
}
```
We effectively model a counting semaphore
We consider failure scenarios in the context of concurrent programming.

Challenge: non-deterministic execution (in one run the failure occurs, in another it does not).

We consider:

- Deadlock
- Starvation and livelock
- Data race

Methodical approach:

- Observation of program behavior as a trace
- Trace = sequence of events
A deadlock occurs when all threads are blocked.
The Go runtime system recognizes such a situation and aborts.
Consider the following example:
```go
package main

import "fmt"

func snd(ch chan int) {
	var x int = 0
	x++
	ch <- x
}

func rcv(ch chan int) {
	var x int
	x = <-ch
	fmt.Printf("received %d \n", x)
}

func main() {
	var ch chan int = make(chan int)
	go rcv(ch) // R
	go snd(ch) // S
	rcv(ch)    // Main
}
```
We study the possible behavior of the program above. To this end, we use R, S and Main to refer to the corresponding threads.

Program execution consists of events such as sending and receiving on a channel. For events, we use the following notation:

- `ch?`: receiving on channel `ch`
- `ch!`: sending on channel `ch`

Note: events are blocking. Receiving is generally blocking. Sending is blocking if the buffer is full or if we are using a channel without a buffer.

Hence, the question: what is the precise meaning of events? We have the following two options:
- Event `ch?` means that we want to receive on channel `ch`.
- Event `ch?` means that we have received on channel `ch`.

The same holds for event `ch!`. Hence, we use the following notation:

- `pre(ch?)`: wanting to receive on channel `ch`
- `post(ch?)`: having received on channel `ch`
- `pre(ch!)`: wanting to send on channel `ch`
- `post(ch!)`: having sent on channel `ch`

Summarized: `pre` describes the event before the corresponding operation takes place, and `post` describes the event after the corresponding operation has taken place.
We consider a possible program execution expressed as a trace. A trace is a sequence of events and expresses the interleaved execution of individual threads.
Is the trace-based description of program execution related to the state-based description? Yes, both notations/concepts aim to describe (concurrent) program execution. The relationship between the two is somewhat like that between regular expressions and finite state machines.

To represent traces, we use a tabular notation. We write `ch?_1` to refer to the event `ch?` at trace position 1.
|    | R | S | Main |
|----|---|---|------|
| 1. | pre(ch?) | | |
| 2. | | | pre(ch?) |
| 3. | | pre(ch!) | |
| 4. | | post(ch!) | |
| 5. | post(ch?) | | |
In the run above, S communicates with R. This can be read from the trace, because after `pre(ch!)` in S comes a `post(ch!)`, and after `pre(ch?)` in R comes a `post(ch?)`. In case of communication (send-receive), we assume that in the trace the post event of the send always occurs before the post event of the receive.

Threads S and R terminate. Thread Main blocks, because there is no communication partner for `ch?_2`. All threads (here only Main) are blocked. Hence, deadlock!
Consider the following alternative program execution:
|    | R | S | Main |
|----|---|---|------|
| 1. | pre(ch?) | | |
| 2. | | | pre(ch?) |
| 3. | | pre(ch!) | |
| 4. | | post(ch!) | |
| 5. | | | post(ch?) |
In this run, S communicates with Main, and R is blocked. However, since Main terminates, thread R is also terminated. Hence, we do not observe a deadlock.
Consider the following variant of the example above. All channel operations (send/receive) occur in endless loops, so the program does not terminate.
```go
package main

import "fmt"
import "time"

func snd(ch chan int) {
	var x int = 0
	for {
		x++
		ch <- x
		time.Sleep(1 * 1e9)
	}
}

func rcv(ch chan int) {
	var x int
	for {
		x = <-ch
		fmt.Printf("received %d \n", x)
	}
}

func main() {
	var ch chan int = make(chan int)
	go rcv(ch) // R
	go snd(ch) // S
	rcv(ch)    // Main
}
```
A deadlock does not occur. However, it is possible that, for example, Main starves (does not progress), because S and R always communicate with each other. Such a situation is considered starvation.
Concrete trace:
|     | R | S | Main |
|-----|---|---|------|
| 1.  | pre(ch?) | | |
| 2.  | | | pre(ch?) |
| 3.  | | pre(ch!) | |
| 4.  | | post(ch!) | |
| 5.  | post(ch?) | | |
| 6.  | pre(ch?) | | |
| 7.  | | pre(ch!) | |
| 8.  | | post(ch!) | |
| 9.  | post(ch?) | | |
| ... | | | |
We assume that S always communicates with R (and never with Main). Hence, lines 6-9 keep on repeating. Extremely unlikely in practice but theoretically possible.
A livelock describes a situation in which, at any time, at least one thread is not blocked, yet no thread makes progress.
A livelock does not occur in the previous example. We will study livelocks in context of the Dining Philosophers exercise.
A data race describes a situation in which two unprotected, conflicting memory operations (at least one write) occur simultaneously.
Consider the following example:
```go
package main

import "fmt"
import "time"

func main() {
	var x int
	y := make(chan int, 1)
	go func() { // T
		y <- 1
		x++
		<-y
	}()
	x++
	y <- 1
	<-y
	time.Sleep(1 * 1e9)
	fmt.Printf("done \n")
}
```
We write Main to denote the main thread, and T for the other thread. Besides send/receive events, we also consider write/read events. We write `w(x)` to denote a write event on variable `x`, and `r(x)` for a read event.

We consider a possible program execution expressed as a trace. We simplify the operation `x++` to `w(x)`. We do not distinguish pre and post events; all events are post events. Hence, we omit pre and post annotations.
|    | Main | T |
|----|------|---|
| 1. | | y! |
| 2. | | w(x) |
| 3. | w(x) | |
In a program execution (represented as trace), a data race occurs when two conflicting write/read events occur directly after one another. See above.
Consider the following alternative trace:
|    | Main | T |
|----|------|---|
| 1. | | y! |
| 2. | | w(x) |
| 3. | | y? |
| 4. | w(x) | |
In this trace, the data race is no longer visible, because between `w(x)_2` and `w(x)_4` there is now `y?_3`.
However, the trace can be reordered such that the data race does occur. The following reordering is allowed:
|    | Main | T |
|----|------|---|
| 1. | | y! |
| 2. | | w(x) |
| 3. | w(x) | |
| 4. | | y? |
We consider another trace:
|    | Main | T |
|----|------|---|
| 1. | w(x) | |
| 2. | y! | |
| 3. | y? | |
| 4. | | y! |
| 5. | | w(x) |
| 6. | | y? |
In this trace, the data race does not occur.
The problem of reordering traces to detect data races (and more) is a field of research on its own. We will discuss such “dynamic trace analysis” separately.
Overview of Go primitives for concurrent programming through message exchange:

- Multi-threading
- Typed channels
- Synchronous (without buffer) and asynchronous (with buffer) channels
- Non-deterministic choice (`select`)

Theoretical foundations: Communicating Sequential Processes by Sir Tony Hoare.

Related languages:

Typical problems of concurrent programming:

- Deadlock
- Livelock
- Starvation
- Data race
Your task is to implement a publish/subscribe server with multiple example clients.
```go
// Publish/subscribe example, adapted from Russ Cox.
package main

import "fmt"
import "time"
import "strconv"
import "container/list"

/*
In the following, we incrementally develop a solution.
Firstly, a few necessary data structures.
*/

// Every message consists of a "topic" and a "body".
type Message struct {
	topic string
	body  string
}

// Every subscriber registers a "topic" and a "news" channel along which
// messages on the corresponding "topic" can be received.
type Sub struct {
	topic string
	news  chan Message
}

// The server holds two channels: csub, on which subscribers can register,
// and cpub, along which a publisher sends messages.
type Server struct {
	csub chan Sub
	cpub chan Message
}

// Subscriber and Publisher

// A subscriber registers and waits for messages.
func subscriber(server Server, t string) {
	s := Sub{topic: t, news: make(chan Message)}
	server.csub <- s
	for {
		msg := <-s.news
		fmt.Printf("topic %s: \n message %s \n", t, msg.body)
	}
}

// A publisher (here, "slashdot") sends messages along the corresponding channel.
func slashdot(server Server) {
	for {
		m := Message{topic: "slashdot", body: "some news"}
		server.cpub <- m
		time.Sleep(2 * 1e9)
	}
}

/*
The server manages the subscriber list.
At the same time (via `select`), it listens for subscribers and publishers.
A subscriber is simply added to the list.
A message from a publisher is sent to the corresponding subscribers.
*/
func pubSubServer(server Server) {
	subscribers := list.New()
	for {
		select {
		case s := <-server.csub:
			subscribers.PushBack(s)
		case m := <-server.cpub:
			for e := subscribers.Front(); e != nil; e = e.Next() {
				s := (e.Value).(Sub) // type assertion
				if s.topic == m.topic {
					s.news <- m // (B)
				}
			}
		}
	}
}

/* Blocking of the server.
Now, for the question in the exercise: if the server manages all clients in
one thread, the server can block when subscriber clients stop reading messages.
Why?
What could alleviate the problem?
*/

func reuters(server Server) {
	i := 0
	for {
		s := strconv.Itoa(i)
		m := Message{topic: "reuters", body: "some news " + s}
		server.cpub <- m
		time.Sleep(1 * 1e9)
		i++
	}
}

func main() {
	server := Server{csub: make(chan Sub), cpub: make(chan Message)}
	go pubSubServer(server)
	go subscriber(server, "slashdot")
	go subscriber(server, "reuters")
	go slashdot(server)
	reuters(server)
}
```
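Regarding the questions in the comment above: the send at point (B) blocks when a subscriber stops reading from its `news` channel, and the whole server loop then stalls. One possible mitigation, sketched here under the assumption that dropping messages to slow subscribers is acceptable (this is not necessarily the intended sample solution), is a non-blocking send via `select` with `default`:

```go
package main

import "fmt"

// trySend delivers msg on ch only if a receiver is ready (or buffer space
// is available); otherwise it gives up immediately instead of blocking.
func trySend(ch chan string, msg string) bool {
	select {
	case ch <- msg:
		return true
	default:
		return false
	}
}

func main() {
	news := make(chan string) // unbuffered, and nobody is receiving
	ok := trySend(news, "some news")
	fmt.Println(ok) // false: the message is dropped, but we did not block
}
```

Applied at (B), the server would skip subscribers that are not ready. Buffered `news` channels or a separate forwarding goroutine per subscriber are alternative mitigations.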
We consider an implementation of a buffered channel based solely on channels without buffers.
To simplify, we consider a quantified semaphore. That is, we ignore the actual messages. We expect the following signature:
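One plausible shape for this signature (the concrete names and the use of an interface are assumptions, merely to fix ideas):

```go
// Package qsem (name illustrative) sketches a possible interface for the
// quantified semaphore; the actual implementation is the exercise.
package qsem

// QSem is a quantified semaphore. Its internals may only use unbuffered channels.
type QSem interface {
	wait()   // lower the quantity; block if the quantity is zero
	signal() // increase the quantity; block if the quantity equals the initial quantity
}

// newQSem creates a quantified semaphore with the given initial quantity.
func newQSem(quantity int) QSem {
	panic("exercise: implement using only unbuffered channels")
}
```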
Note that you need to define `QSem` yourself. Your implementation should only use "simple" non-buffered channels (otherwise, the exercise is trivial).
Initially, the quantity is set with `newQSem`. Function `wait` lowers the quantity and blocks if the quantity is zero. Function `signal` increases the quantity and blocks if the quantity is equal to the initial quantity. A blocked `wait` is unblocked by a `signal`.

We consider an example with four parallel threads. Two threads execute `wait`, and the other two `signal`. We assume that the quantity is at most 1, where initially the actual quantity is 1 already.
| Quantity | Thread 1 | Thread 2 | Thread 3 | Thread 4 |
|---|---|---|---|---|
| 1 | wait | wait | signal | signal |
|   | R |   |    |   |
| 0 | D |   |    |   |
|   |   | R |    |   |
|   |   | B |    |   |
|   |   |   | R  |   |
|   |   |   | U2 |   |
|   |   |   | D  |   |
|   |   | D |    |   |
|   |   |   |    | R |
| 1 |   |   |    | D |
In the run above:

- `wait` decreases the quantity.
- `signal` increases the quantity; if there is a blocked `wait` thread, this will be signalled to continue.
- The `wait` in thread 2 and the `signal` in thread 3 occur simultaneously.

Access to the actual quantity stored in `QSem` must be protected. To guarantee mutual exclusion for simultaneous `wait` and `signal`, we shall use a mutex (as seen in the previous lecture).
Extend the Sleeping Barber example. To recap, here is the simple version:
```go
package main

import "fmt"
import "time"

const (
	NUMBER_OF_CHAIRS = 8
)

type Request struct {
	id  int
	ack chan int
}

func barber(waitQ chan Request) {
	for {
		req := <-waitQ
		fmt.Printf("BARBER: Serving customer %d \n", req.id)
		time.Sleep(1 * 1e9)
		fmt.Printf("BARBER: Done with customer %d \n", req.id)
		req.ack <- 1
	}
}

func customer(waitQ chan Request, id int) {
	var ack = make(chan int)
	for {
		fmt.Printf("CUSTOMER: %d wants hair cut \n", id)
		req := Request{id, ack}
		waitQ <- req
		fmt.Printf("CUSTOMER: %d sits on chair \n", id)
		<-ack
		fmt.Printf("CUSTOMER: %d served by barber \n", id)
		time.Sleep(1 * 1e9)
	}
}

func main() {
	var waitQ = make(chan Request, NUMBER_OF_CHAIRS)
	go customer(waitQ, 1)
	go customer(waitQ, 2)
	barber(waitQ)
}
```
Another variant of the Sleeping Barber. We give a few example solutions. Try to figure out how this implementation can fail.
```go
// Sleeping barber variant with distinction among blond and red haired customers.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// Barber shall wait for either a group of blonds or reds.
// The quantities for each group are defined by the following constants.
const BLONDS = 2
const REDS = 3

// Sample solution.
func barber(blond chan int, red chan int) {
	seenBlonds := 0
	seenReds := 0
	for {
		// Check if a group has been formed.
		if seenReds == REDS {
			fmt.Printf("\n Cutting reds!")
			seenReds = 0
		}
		if seenBlonds == BLONDS {
			fmt.Printf("\n Cutting blonds!")
			seenBlonds = 0
		}
		// Check for blonds and reds wanting to join the group.
		select {
		case <-blond:
			seenBlonds++
		case <-red:
			seenReds++
		}
	}
}

// Another attempt.
// Any issues?
func barber2(b chan int, r chan int) {
	for {
		select {
		case <-b:
			select {
			case <-b:
				fmt.Println("Working on 2 blond hair customers")
			default:
				b <- 1
				fmt.Println("blond released")
			}
		case <-r:
			select {
			case <-r:
				select {
				case <-r:
					time.Sleep(100 * time.Millisecond)
					fmt.Println("Working on 3 red hair customers")
				default:
					r <- 1
					r <- 1
					fmt.Println("reds released")
				}
			default:
				r <- 1
				fmt.Println("red released")
			}
		}
	}
}

// Customer simulation.
func customerSimulation(ch chan int) {
	x := 0
	for {
		rand.Seed(time.Now().UnixNano())
		n := rand.Intn(4) // n will be between 0 and 3
		// fmt.Printf("Sleeping %d seconds...\n", n)
		time.Sleep(time.Duration(n) * time.Second)
		x++
		ch <- x
	}
}

func testBarber() {
	blond := make(chan int)
	red := make(chan int)
	go customerSimulation(blond)
	go customerSimulation(red)
	barber(blond, red)
}

func testBarber2() {
	blond := make(chan int)
	red := make(chan int)
	go customerSimulation(blond)
	go customerSimulation(red)
	barber2(blond, red)
}

func main() {
	testBarber()
	// testBarber2()
}
```
We consider the problem of the dining philosophers. The order of forks doesn't play a role here. That is, we assume that there are n philosophers sitting at one table, and there are n forks. To eat, a philosopher needs two forks. This is a possible implementation:
```go
package main

import "fmt"
import "time"

func philo(id int, forks chan int) {
	for {
		<-forks
		<-forks
		fmt.Printf("%d eats \n", id)
		time.Sleep(1 * 1e9)
		forks <- 1
		forks <- 1
		time.Sleep(1 * 1e9) // think
	}
}

func main() {
	var forks = make(chan int, 3)
	forks <- 1
	forks <- 1
	forks <- 1
	go philo(1, forks)
	go philo(2, forks)
	philo(3, forks)
}
```
We model the forks as a buffered channel. Every philosopher needs two forks; hence, we read twice from the forks channel.

What kind of problems can we encounter? Exercise: give concrete examples (as traces).
Here is another attempt.
```go
package main

import "fmt"
import "time"

func philo(id int, forks chan int) {
	for {
		<-forks
		select {
		case <-forks:
			fmt.Printf("%d eats \n", id)
			time.Sleep(1 * 1e9)
			forks <- 1
			forks <- 1
			time.Sleep(1 * 1e9) // think
		default:
			forks <- 1
		}
	}
}

func main() {
	var forks = make(chan int, 3)
	forks <- 1
	forks <- 1
	forks <- 1
	go philo(1, forks)
	go philo(2, forks)
	philo(3, forks)
}
```
Consider the following variant.
```go
package main

import "fmt"
import "time"

func philo(id int, forks chan int) {
	for {
		<-forks
		<-forks
		fmt.Printf("%d eats \n", id)
		time.Sleep(1 * 1e9)
		forks <- 1
		forks <- 1
		time.Sleep(1 * 1e9) // think
	}
}

func main() {
	var forks = make(chan int)
	go func() { forks <- 1 }()
	go func() { forks <- 1 }()
	go func() { forks <- 1 }()
	go philo(1, forks)
	go philo(2, forks)
	philo(3, forks)
}
```
Which of the problems you described above can still occur?
Santa repeatedly sleeps until wakened by either all of his nine reindeer, back from their holidays, or by a group of three of his ten elves. If awakened by the reindeer, he harnesses each of them to his sleigh, delivers toys with them and finally unharnesses them (allowing them to go off on holiday). If awakened by a group of elves, he shows each of the group into his study, consults with them on toy R&D and finally shows them each out (allowing them to go back to work).
In general, the following priority rule shall be enforced:
Santa gives priority to the reindeer in the case that there is both a group of elves and a group of reindeer waiting.