Let's explore Go's concurrency slogan:
Do not communicate by sharing memory; instead, share memory by communicating.
Before we go ahead, let's just throw up our hands in desperation and say, "What does that even mean?!"
Go's concurrency model is similar to Unix pipelines and Hoare's Communicating Sequential Processes (CSP). Since we are talking about sharing memory, understanding the memory models of the above may give us an inkling of what's going on.
Ken Thompson (a co-inventor of Go) added pipes to Unix in 1973. A Unix pipe is exactly what the name suggests: it pipes the output of one program into the input of another. Each pipe has its own pipe buffer. A process reads from or writes to the pipe buffer for input or output respectively. What if there are multiple processes reading from and writing to the same pipe?
Something like this:

    ps1 --write--> [ pipe buffer ] --read--> ps2

Here ps1 writes to the pipe, while ps2 reads from it. Don't we need some kind of synchronization between the two processes, so that ps2 reads the pipe only when there is a message to be read? Before we go ahead, let's define our terms: synchronization is an agreement between two processes on a certain sequence of reads and writes; a message is the data being read from or written to a pipe.
If we knew what the agreement was and who the mediator was, we could get a better idea. From the pipe man page:
    If a process attempts to read from an empty pipe, then read(2) will block until data is available. If a process attempts to write to a full pipe (see below), then write(2) blocks until sufficient data has been read from the pipe to allow the write to complete. Non-blocking I/O is possible by using the fcntl(2) F_SETFL operation to enable the O_NONBLOCK open file status flag.
So you see, ps1 is sharing memory with ps2 by communicating a message to it, synchronized as per the agreement mediated by the kernel. If what you are thinking is, "Bogus! How is that sharing memory by communicating? You could just as well say it the other way around. Boo...", consider this: neither ps1 nor ps2 has the book-keeping information to enforce the agreement. The kernel is the book-keeper and the mediator. Therefore, ps1 and ps2 are not the ones sharing memory. Also, to a user, the bird's-eye view would be: "the message itself is the synchronizer".
Communicating Sequential Processes
CSP is a way to describe/model specifications for concurrent patterns and interactions. The constructs from CSP will help us understand Go concurrency patterns even better. Let's take a classic CSP example from Wikipedia:
In the above example, the behavior of the processes VendingMachine and Person depends on the events coin and card. We can either synchronize on both events or on just the coin event; in both cases, the choice is deterministic. But to an external observer who doesn't know about these events (i.e., the Person doesn't make the decision that inserting a card or a coin will lead to delicious chocolate), the choice is nondeterministic. The idea (in the context of Go) is to avoid nondeterminism to reduce complexity.
We see two styles of concurrency:
1. Deterministic: the sequence of actions is well defined.
2. Nondeterministic: the sequence of actions is not defined.
This is where Go excels. It promotes deterministic concurrency by providing a well-defined sequence of actions (a.k.a. synchronization), namely through channels, each with one sender and one receiver.
For an even better understanding, let's dive into the synchronization agreement that Go makes.
Go Memory Model
The "Happens before" model clearly defines this agreement
To specify the requirements of reads and writes, we define happens before, a partial order on the execution of memory operations in a Go program. If event e1 happens before event e2, then we say that e2 happens after e1. Also, if e1 does not happen before e2 and does not happen after e2, then we say that e1 and e2 happen concurrently.
Within a single goroutine, the happens-before order is the order expressed by the program.
Communicating over channels is the best way to follow the above synchronization agreement. Though the sync package provides other, lower-level primitives, Once and WaitGroup are the ones advised for higher-level synchronization.
This approach also elucidates "share memory by communicating". Since the concurrency style is deterministic, the memory book-keeping and read/write enforcement are handled by the runtime. Not only that, the synchronization primitives have been intelligently wrapped up in a higher-level construct: channels. From the programmer's point of view, the message itself is the synchronizer, which reduces complexity and spreads awesomeness.
I hope we now have a better understanding of Go's concurrency model. The following resources can help: