The Concurrency with Python Series:
- Concurrency with Python: Why?
- Concurrency with Python: Threads and Locks
- Concurrency with Python: Functional Programming
- Concurrency with Python: Separating Identity From State
- Concurrency with Python: Actor Models
- Concurrency with Python: CSP and Coroutines
- Concurrency with Python: Hardware-Based Parallelism
- Concurrency with Python: Data-Intensive Architectures
- Concurrency with Python: Conclusion
(Update 07/07/2019): This series was intended as a starting point for a larger discussion around concurrency concepts in Python. However, it turns out writing in-depth technical overviews of concurrency models in Python is considerably more difficult than I anticipated :flushed:
Original estimates averaged about half a week from research to publication per blog post. In actuality, each post has taken about two to three full weeks, at 2-4 hours a day. I've learned a lot, but I'm also ready to try something else to give back to the software community and to grow professionally. Feel free to email me if you have any advice or tips.
This series began in part as an attempt to answer questions about how concurrency models behave when combined together. In doing so, it covered:
- Threads and locks, where developers interface more or less directly with the hardware.
- Functional programming, where source code is written in an idempotent and commutative way so that either the developer or the language can schedule tasks concurrently.
- Separation of identity and state, where the language itself supports data structures from which atomic snapshots can be taken safely.
- Actor models, where developers constrain their code to fit a framework of sending and receiving messages.
- Communicating sequential processes, where channels share information between state machines concurrently executing tasks.
- Hardware-based parallelism, where specialized parallel hardware provides a software framework for moving data from one place to another.
- Data-intensive architectures, where commodity hardware leverages software frameworks and other concurrency models to execute operations on large quantities of data.
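To make the contrast between two of these models concrete, here is a minimal sketch (mine, not taken from the series itself) of the same counting task written both ways using only Python's standard library: first with shared mutable state behind a lock, then CSP-style, with workers that share nothing and communicate over a channel (a `queue.Queue`).

```python
import threading
import queue

# Threads and locks: workers mutate shared state inside a critical section.
counter = 0
lock = threading.Lock()

def locked_increment(n):
    global counter
    for _ in range(n):
        with lock:  # the lock guards the read-modify-write of `counter`
            counter += 1

workers = [threading.Thread(target=locked_increment, args=(1000,)) for _ in range(4)]
for t in workers:
    t.start()
for t in workers:
    t.join()
print(counter)  # 4000

# CSP style: producers share nothing; a single consumer owns the state,
# so no lock is needed.
channel = queue.Queue()

def producer(n):
    for _ in range(n):
        channel.put(1)
    channel.put(None)  # sentinel: this producer is done

producers = [threading.Thread(target=producer, args=(1000,)) for _ in range(4)]
for t in producers:
    t.start()

total, done = 0, 0
while done < 4:
    item = channel.get()
    if item is None:
        done += 1
    else:
        total += item
print(total)  # 4000
```

Both versions compute the same result; the difference is where the coordination lives — in the first, every worker must remember to take the lock, while in the second the channel is the only point of contact between threads.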
As I've worked through this series, I've found that I gained more questions than answers, but also that there are some heuristics you can always apply when designing systems at scale:
Focus on the properties of the language: Fundamental aspects of a programming language, like its type system (e.g. encodings and byte-level definitions of types, type-system richness and extensibility), its attribute defaults (e.g. immutability and identity of data structures), and its control-flow primitives, will strongly affect which concurrency paradigms are appropriate. This is essentially the field of programming language theory, and this series is effectively detailing why it matters.
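As one small illustration of why attribute defaults like immutability matter (my own sketch, not from the series): an immutable record in Python is always a consistent snapshot and can be shared across threads freely, while a mutable structure needs explicit coordination around every compound update.

```python
import threading
from dataclasses import dataclass, replace

# Immutable-by-default record: safe to share; "updates" create new values.
@dataclass(frozen=True)
class Point:
    x: int
    y: int

p = Point(1, 2)
# Any thread can read `p` without locking; a writer publishes a new object.
p2 = replace(p, x=10)  # `p` is untouched, so readers never see a torn update
assert p == Point(1, 2) and p2 == Point(10, 2)

# Mutable-by-default structure: compound updates need a lock, or a reader
# could observe `x` already changed while `y` is not yet changed.
shared = {"x": 1, "y": 2}
shared_lock = threading.Lock()

def move(dx, dy):
    with shared_lock:
        shared["x"] += dx
        shared["y"] += dy

move(9, 0)
assert shared == {"x": 10, "y": 2}
```

Languages that make the first style the default (as Clojure does) push developers toward snapshot-friendly concurrency; languages that default to the second leave that discipline to the programmer.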
Generally, decisions that can be formally proved are better than those that cannot. Mark Seemann published a great blog post giving this reason for his move from object-oriented to functional languages: functional languages relate to category theory, while object-oriented languages relate to design patterns, which are more of a heuristic. Along that axis of exploration, learning how languages implement process calculi may shed light on how tightly a language implements a concurrency model.
The simplest abstractions (to the hardware) are the ones that thrive: Originally, I thought the simplest abstractions to the developer would be the ones that thrive. If you gave a monkey a typewriter and the ability to write code, it could produce pretty much any program in that language, and imagining software that way makes it easy to see how most of it is just bugs with a few happy paths sprinkled in. Hence, simple abstractions that are easy to grok should provide the most reliable guarantees of correctness, the smallest organizational overhead, and the happiest developers. Right?
Not necessarily. While strongly opinionated languages like Erlang or Clojure may attract the best developers, the languages with broad corporate support are the ones that give developers lots of access and power. While threading/locking code is difficult to write, using threads and locks instead of actors might eke out a tiny performance gain, which may result in millions of dollars of savings at scale, which results in continued corporate investment into the language that made it possible. Hence, the positive spiral of success.
Concurrency models, as with all software models, come with different tradeoffs: Many of these models do not play nicely with each other, and require a service layer or other intermediary in order to communicate. In addition, many newcomers to a language come from different engineering backgrounds, and since learning a language well enough to be production-ready is hard enough already, tying a particular concurrency model to a language can push the learning curve past the threshold of feasibility.
This is likely why most general-purpose languages and toolchains don't recommend a single concurrency model, but instead offer a wide suite of options and make them work together. With a "way out", each model loses much of its strong guarantees, but developers gain a lot of flexibility in shipping results.
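Python itself is a good example of this "suite of options" approach: the standard library provides an explicit bridge between its coroutine world (`asyncio`) and its thread world rather than forcing a single model. A minimal sketch, assuming some blocking function you can't rewrite as a coroutine:

```python
import asyncio
import time
from concurrent.futures import ThreadPoolExecutor

def blocking_io(n):
    """Stand-in for legacy blocking code that can't be made async."""
    time.sleep(0.1)
    return n * n

async def main():
    loop = asyncio.get_running_loop()
    with ThreadPoolExecutor() as pool:
        # Bridge the two models: run the blocking calls on worker threads,
        # then await their results from coroutine land.
        futures = [loop.run_in_executor(pool, blocking_io, n) for n in range(4)]
        return await asyncio.gather(*futures)

print(asyncio.run(main()))  # [0, 1, 4, 9]
```

The event loop gives up the strict single-threaded guarantees of pure `asyncio` code here, but in exchange the developer gets to mix models and ship, which is exactly the tradeoff described above.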