Hypothetical Labs » Blog Archive » Part 1: Introducing riak_core
riak_core is a single OTP application that provides all the services necessary to write a modern, well-behaved distributed application. riak_core began as part of Riak. Since the code turned out to be generally useful for building all kinds of distributed applications, we decided to refactor and separate the core bits into their own codebase to make them easier to reuse.
Distributed systems are complex, and some of that complexity shows in the number of features riak_core provides. Rather than dive deeply into code, I'm going to separate the features into broad categories and give an overview of each.
Node Liveness & Membership
riak_core_node_watcher is the process responsible for tracking the status of nodes within a riak_core cluster. It uses net_kernel to efficiently monitor many nodes. riak_core_node_watcher can also take a node out of the cluster programmatically. This is useful in situations where a brief node outage is necessary but you don't want to stop the server software completely.
riak_core_node_watcher also provides an API for advertising and locating services around the cluster. This is useful in clusters where certain nodes provide a specialized service, such as a CUDA compute node, which is used by other nodes in the cluster.
riak_core_node_watch_events cooperates with riak_core_node_watcher to generate events based on node activity, such as nodes joining or leaving the cluster. Interested parties can register callback functions that will be invoked as events occur.
Partitioning & Distributing Work
riak_core uses a master/worker configuration on each node to manage the execution of work units. Consistent hashing determines which target node(s) a request is sent to, and the master process on each node farms the request out to the actual workers. riak_core calls the worker processes vnodes; the coordinating process is the vnode_master.
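To make the consistent hashing step concrete, here is a minimal sketch in the spirit of riak_core's 2^160 SHA-1 ring. The module and function names are hypothetical, not the riak_core API, and the round-robin ownership rule is a stand-in for riak_core's real claim logic, which lives in the ring state.

```erlang
%% Hypothetical sketch of consistent hashing over a 2^160 ring.
%% Not the riak_core API; names are invented for illustration.
-module(chash_sketch).
-export([partition_for/2, owner/3]).

-define(RINGTOP, 1 bsl 160).  %% size of the SHA-1 hash space

%% Map a binary key to one of NumPartitions equal slices of the ring.
partition_for(Key, NumPartitions) ->
    <<HashInt:160/integer>> = crypto:hash(sha, Key),
    HashInt div (?RINGTOP div NumPartitions).

%% Pick the node that owns a key. Partition ownership here is simple
%% round-robin across the node list -- a simplification of the
%% ownership data riak_core actually stores in its ring.
owner(Key, NumPartitions, Nodes) ->
    P = partition_for(Key, NumPartitions),
    lists:nth((P rem length(Nodes)) + 1, Nodes).
```

Because the key is hashed rather than the node list, adding a node only changes ownership of the partitions reassigned to it, which is what lets riak_core rebalance incrementally.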
The partitioning and distribution logic inside riak_core also handles hinted handoff when required. Hinted handoff occurs as a result of a node failure or outage. To ensure availability, most clustered systems will use operational nodes in place of down nodes. When the down node comes back, the cluster needs to migrate the data from its temporary home on the substitute nodes to the data's permanent home on the restored node. This process is called hinted handoff and is managed by components inside riak_core. riak_core also migrates partitions to new nodes when they join the cluster, so that work continues to be evenly partitioned across all cluster members.
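The "hint" in hinted handoff comes from how substitute nodes are chosen: when a primary is down, a later candidate stands in for it and remembers which primary it is covering for. The sketch below is a hypothetical simplification of that selection step (it assumes enough reachable candidates remain and is not the riak_core API).

```erlang
%% Hypothetical sketch of substituting "fallback" nodes for down
%% primaries -- the seed of hinted handoff. Not the riak_core API.
-module(preflist_sketch).
-export([preflist/3]).

%% Candidates: partition owners in ring order. UpNodes: currently
%% reachable nodes. N: desired number of replicas. Down primaries are
%% skipped and later up candidates stand in; each substitute is tagged
%% with the primary it covers for, which is the "hint" used to hand
%% the data back when that primary returns.
preflist(Candidates, UpNodes, N) ->
    {Primaries, Rest} = lists:split(N, Candidates),
    Down = [P || P <- Primaries, not lists:member(P, UpNodes)],
    Up = [{P, primary} || P <- Primaries, lists:member(P, UpNodes)],
    UpRest = [R || R <- Rest, lists:member(R, UpNodes)],
    Fallbacks = lists:zip(lists:sublist(UpRest, length(Down)), Down),
    Up ++ [{F, {fallback_for, P}} || {F, P} <- Fallbacks].
```

With candidates [a, b, c, d, e], node b down, and N = 3, the sketch yields a and c as primaries plus d tagged as the fallback for b; when b recovers, the data written to d under that tag is handed off to b.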
riak_core_vnode_master starts all the worker vnodes on a given node and routes requests to the vnodes as the cluster runs.
riak_core_vnode is an OTP behavior wrapping all the boilerplate logic required to implement a vnode. Application-specific vnodes need to implement a handful of callback functions in order to participate in handoff sessions and receive work units from the master.
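A trimmed skeleton of such an application vnode is sketched below. The callback shapes follow riak_core_vnode, though the exact callback set varies by riak_core version; the behaviour attribute is commented out so the sketch compiles standalone, and the dict-backed store is purely illustrative.

```erlang
%% Sketch of an application-specific vnode. With riak_core on the code
%% path you would uncomment the behaviour attribute and implement the
%% remaining handoff callbacks the behaviour requires.
-module(kv_vnode_sketch).
%% -behaviour(riak_core_vnode).
-export([init/1, handle_command/3, is_empty/1, terminate/2]).

-record(state, {partition, store = dict:new()}).

%% Each vnode is started for one partition of the ring.
init([Partition]) ->
    {ok, #state{partition = Partition}}.

%% Work units arrive from the vnode master as commands.
handle_command({put, K, V}, _Sender, State = #state{store = S}) ->
    {reply, ok, State#state{store = dict:store(K, V, S)}};
handle_command({get, K}, _Sender, State = #state{store = S}) ->
    {reply, dict:find(K, S), State}.

%% Handoff asks whether this vnode still holds data worth migrating.
is_empty(State = #state{store = S}) ->
    {dict:size(S) =:= 0, State}.

terminate(_Reason, _State) ->
    ok.
```

The important point is the division of labor: the behaviour owns the process machinery and handoff protocol, while application code only decides what a command means against its own state.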
Cluster State
A riak_core cluster stores global state in a ring structure. The state information is transferred between nodes in the cluster in a controlled manner to keep all cluster members in sync. This process is referred to as "gossiping".
riak_core_ring is the module used to create and manipulate the ring state data shared by all nodes in the cluster. Ring state data includes items like partition ownership and cluster-specific ring metadata. Riak KV stores bucket metadata in the ring metadata, for example.
riak_core_ring_manager manages the cluster ring for a node. It is the main entry point for application code accessing the ring, via riak_core_ring_manager:get_my_ring/1, and it also keeps a persistent snapshot of the ring in sync with the current ring state.
riak_core_gossip manages the ring gossip process and ensures the ring is generally consistent across the cluster.
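The heart of any gossip round is reconciliation: a node receives a peer's copy of the ring and must decide which version survives. riak_core tracks ring ancestry with vector clocks; the toy below collapses that to a single version counter, so it only shows the shape of the exchange, not riak_core's actual merge logic.

```erlang
%% Toy illustration of gossip-style reconciliation. Real riak_core
%% compares rings using vector clocks; this sketch uses a plain
%% version counter purely for illustration.
-module(gossip_sketch).
-export([reconcile/2]).

%% A "ring" here is {Version, Ownership}. On receiving a gossiped
%% ring, keep whichever is newer; on a tie, keep the local copy.
reconcile({LocalV, _} = Local, {RemoteV, _}) when LocalV >= RemoteV ->
    Local;
reconcile(_Local, Remote) ->
    Remote.
```

Because every node periodically repeats this exchange with random peers, an updated ring spreads through the cluster without any central coordinator, which is why the ring is only "generally" (eventually) consistent.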
Over the next several months I'm going to cover the process of building a real application in a series of posts to this blog, where each post covers some aspect of system building with riak_core. All of the application's source will be published under the Apache 2 license and shared via a public repo on GitHub.
And what type of application will we build? Since the goal of this series is to illustrate how to build distributed systems using riak_core, and also to satisfy my own technical curiosity, I've decided to build a distributed graph database. A graph database should provide enough use cases to really exercise riak_core while not obscuring the core learning experience in tons of complexity.
Thanks to Sean Cribbs and Andy Gross for providing helpful review and feedback.