QuickCheck for Paxos

This is the second post in Paxos Made Complete, a series highlighting design decisions when building a system based on Paxos and the State Machine Approach.  Code for the series is available in the GitHub repository.  Today we will finish the Classic Paxos implementation by embedding it in a small simulator, then generating tests using QuickCheck.

Fuzzing with Event Streams

Critical infrastructure requires thorough testing, and distributed computing is an especially challenging domain to test.  The technique I have found most valuable is deterministic simulations driven by a generated event stream.  For small networks, I can simulate a complete multi-node system on a single machine and fuzz it with a stream of random events.  The deterministic nature of the simulator allows me to write the event stream to a file and replay it later for regression testing.
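Because the runs are deterministic, persisting and replaying a captured stream can be as simple as Show and Read.  Here is a minimal sketch of that workflow; the helper names and file handling are illustrative, and it assumes the Event type defined below derives Show and Read:

-- A minimal record/replay sketch (illustrative, not from the repo).
-- Assumes Event derives Show and Read.
recordEvents :: FilePath -> [Event] -> IO ()
recordEvents path events = writeFile path (show events)

replayEvents :: FilePath -> IO [Event]
replayEvents path = read <$> readFile path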

For Paxos, this means defining some interesting events which can occur in the real world, then writing a simulator which accepts a stream of these events and interprets them by calling our Paxos function(s).  Testing becomes the process of generating a random stream of events and running them in the simulator, checking assertions all the while.

Since the entire state of the global system is available locally, it is trivial to check global properties between each event, ensuring correctness at all times.  Another benefit to this technique is the exact event which causes the inconsistency is discovered immediately.

For this blog series, I chose a set of events rich enough to expose weaknesses in the protocol, but small enough not to be a burden to implement.  Here’s what I chose:

data Event
  = Deliver           -- Deliver the next outstanding message
  | Duplicate         -- Duplicate a message and add it to the network
  | Drop              -- Delete a message from the network
  | Shift             -- Rearrange message delivery order
  | Tick NodeID       -- Send Paxos a Timeout event (failure suspicion)
  | Req NodeID Value  -- Request a value to be added to the log
  deriving (Show, Read, Eq)  -- Show/Read let QuickCheck print counterexamples and let us replay saved streams

Deliver, Drop, and Duplicate should be self-explanatory.  Shift rearranges the message ordering to simulate packets delayed in the network.

Tick and Req may require a bit more explanation:

Tick Event

In production Paxos environments, there must be a signal to cause a Leader to send the Prepare message.  This signal is often derived from a Leader Election protocol.  The leader election is driven by timeouts and other inputs which combine to form a suspicion of node failure, causing the election process to commence.

The Tick event skips all of that – we declare that this node believes itself to be the Leader and will attempt to gain control of the voting.  This is a conservative strategy; placing Tick events at the right spots in the event stream covers every possible leader-election outcome, along with a variety of situations which we hope would never happen in the real world, but which Paxos should handle correctly anyway.

Req Event

In typical implementations there are two entities outside Paxos: a client application which requests values to be added to the log, and an under-consensus application consuming the values from the log.  For example, a web app sending database requests to the cloud, and the database applying the updates in the order specified by the Paxos log.

The Req event simulates the client’s request to add values to the log.  There are so many design decisions related to client semantics, expected behaviors, and control flow that I’ll have to leave them for a future post.  For now, we will assume that clients resend requests if they don’t receive a response for a while.

QuickCheck

QuickCheck is a library which generates random instances of a data type and feeds them into assertions to check a program’s properties.  If it finds a failed assertion, it will automatically shrink the input, looking for the shortest/smallest example which still triggers the failure.  The result is an input which causes the assertion to fail: a counter-example to the assertion.

This makes QuickCheck particularly valuable for debugging; it will not just tell you an error has occurred, but offer the actual inputs needed to recreate the failure.  Haskell’s pure functional semantics have the added benefit of determinism, so these inputs can be stored as regression test cases without the often-complex setup code required to test imperative programs.
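As a tiny illustration (not from the repo), here is a deliberately false property.  QuickCheck finds a random counterexample, then shrinks it down to a minimal one, typically a two-element list such as [0,1]:

-- A deliberately false property: reversing a list is not the identity.
prop_reverseIsIdentity :: [Int] -> Bool
prop_reverseIsIdentity xs = reverse xs == xs

-- Running 'quickCheck prop_reverseIsIdentity' reports a failure together
-- with a shrunk counterexample such as [0,1].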

In order to fuzz Paxos with QuickCheck, we must supply a few functions to teach QuickCheck how to walk through our data types and what (interesting) values to insert in each location.  Here are a few examples for the types we defined last time:

instance Arbitrary NodeID where
   arbitrary = elements [NodeA,NodeB,NodeC]

instance Arbitrary Value where
   arbitrary = elements [ValueA,ValueB,ValueC]

instance Arbitrary Event where
   arbitrary = do
      node <- arbitrary
      val  <- arbitrary
      frequency
        [ (50, pure Deliver)
        , (40, pure Drop)
        , (20, pure (Tick node))
        , (20, pure (Req node val))
        , (5,  pure Shift)
        , (5,  pure Duplicate)
        ]
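A quick way to sanity-check the frequency weights (an optional aside, assuming Event derives Show) is to have QuickCheck print a handful of generated values:

import Test.QuickCheck (Gen, arbitrary, sample)

-- Print some randomly generated events to eyeball the distribution.
checkGenerator :: IO ()
checkGenerator = sample (arbitrary :: Gen Event)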

Once QuickCheck is set up for our types, it’s time to define the properties we want to check.  A property is any deterministic predicate.  In Haskell, that means just about any function returning a boolean!

The Consistency Property

The main property of Paxos is its freedom from inconsistency: every correct node reaches the same decision, and decisions never change once made.  Our job is to encode this property as a function which takes a list of events and returns True if Paxos held the property, or False if an inconsistency was detected.

My tactic is similar to the method described in “Paxos Made Live”: execute random events against a fresh network, simulating an unstable period, then run a series of “stabilizing” events to drive the protocol to completion.  If the nodes disagree on the result, an inconsistency has occurred.  If we carefully select the stabilizing events such that they are the minimum possible events to complete the protocol, we can also detect “live-lock” situations (since any remaining undecided nodes would be “out of spec”).
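To make that concrete, here is a rough sketch of what a stabilizing stream could look like.  This is hypothetical: the repo’s stabilize is carefully minimal, whereas this sketch simply drains the queue with a generous bound:

-- Hypothetical stabilizing stream: one final leader suspicion, then
-- enough Delivers to drain any plausible message backlog.  The repo's
-- version is deliberately minimal; this merely illustrates the idea.
stabilize :: [Event]
stabilize = Tick NodeA : replicate 100 Deliver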

I’ll skip most of the infrastructure needed to get the simulator running.  Here are the important parts:

type Network = ([(From,To,Message)], Map NodeID Node)

...
handleEvent :: Network -> Event -> Network
-- Tick and Req inject a message regardless of the queue's contents
handleEvent (msgs, net)     (Tick to)    = networkDeliver (From to) (To to) Timeout msgs net
handleEvent (msgs, net)     (Req to val) = networkDeliver (From to) (To to) (Request val) msgs net
-- The remaining events all require an outstanding message
handleEvent ([], net)       _            = ([], net)
-- Always deliver messages to ourselves, regardless of the event
handleEvent (((From from),(To to),msg):msgs, net) _ | from == to = networkDeliver (From from) (To to) msg msgs net
handleEvent ((from,to,msg):msgs, net) Deliver      = networkDeliver from to msg msgs net
handleEvent (_:msgs, net)   Drop         = (msgs, net)
handleEvent (m:ms, net)     Duplicate    = (m:m:ms, net)
handleEvent (m:ms, net)     Shift        = (ms ++ [m], net)

...
runNetwork :: Network -> [Event] -> Network
runNetwork net events = Prelude.foldl handleEvent net events
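Since handleEvent is a pure step function, the earlier idea of checking global properties between every event falls out naturally.  Here is a sketch; the invariant itself is assumed to be supplied by the caller:

-- Sketch: Prelude.scanl exposes every intermediate Network, and requiring
-- the invariant to hold after each step pinpoints the offending event.
runNetworkChecked :: (Network -> Bool) -> Network -> [Event] -> Bool
runNetworkChecked invariant net events =
  Prelude.all invariant (Prelude.scanl handleEvent net events)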

...
prop_ConsensusAchieved :: Network -> [Event] -> Bool
prop_ConsensusAchieved network events =
  let unstable    = runNetwork network $ (Tick NodeA):events
      (_,stable)  = stableEvents unstable stabilize
      allVals     = extractStableValues stable
  in Prelude.notElem NoValue (IntMap.elems allVals)

The function “prop_ConsensusAchieved” is the property we want to test.  It takes a list of Events (generated by QuickCheck) and executes them using handleEvent against a Network containing simulated nodes and a message queue.  Once the “unstable” network has run, we offer a stream of “stabilizing” events, after which we extract each node’s values and compare them.  If this leaves us with an inconsistency or no agreed value, the property has failed.

Finally, we can invoke QuickCheck on this property:

quickCheckWith stdArgs { maxSuccess = 1000 } prop_ConsensusAchieved
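If a run fails, QuickCheck prints the random seed it used, and the run can be replayed deterministically through the replay field of Args.  The seed and size below are placeholders for whatever a failing run reported:

import Test.QuickCheck.Random (mkQCGen)

-- Replay a specific failing run: substitute the seed and size from the
-- failure report (42 and 10 here are placeholders, not real values).
recheck :: IO ()
recheck = quickCheckWith stdArgs { replay = Just (mkQCGen 42, 10) } prop_ConsensusAchieved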

Check out the GitHub repository for the complete source code.

Design Decisions

  • I’ve used a technique often described in Paxos literature where all outgoing messages are also sent to ourselves for processing.  This keeps the logic for each Role separate and avoids code pollution and duplication.
    • The handleEvent function hides one important feature: messages sent to ourselves are always delivered.  This overrides the Drop, Duplicate, and Shift events.  As written, this still consumes an event from the queue.  A better technique would be to immediately deliver the message to ourselves, although this poses the risk of an infinite loop of self-delivery (see the sketch after this list).
  • The selection of events is somewhat arbitrary, specifically Duplicate and Shift, which I wanted to remain deterministic even if the intervening events were to generate a different number of output messages into the network queue.  The semantics I selected meet that criterion, but I’m not totally satisfied with them.
  • The design presented here is slightly more complex than necessary in preparation for an upcoming step which will allow us to test multiple different versions of Paxos against each other on the same event stream(s).
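As noted above, eager self-delivery is worth sketching.  This is hypothetical code, not from the repo; a depth bound guards against the infinite self-delivery loop mentioned in the list:

-- Hypothetical sketch of eager self-delivery: keep consuming the first
-- message whose sender equals its receiver, up to a bound n that guards
-- against an infinite self-delivery loop.
drainSelf :: Int -> Network -> Network
drainSelf 0 net = net
drainSelf n (msgs, nodes) =
  case Prelude.break (\((From f),(To t),_) -> f == t) msgs of
    (_, [])                             -> (msgs, nodes)
    (before, ((From f),(To t),m):after) -> drainSelf (n - 1) (networkDeliver (From f) (To t) m (before ++ after) nodes)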

Next Steps

Next time we will modify Classic Paxos into Multi-Paxos, and it’ll blow your mind just how trivial the changes are!  From there we will begin examining efficiency by optimizing the structures and message formats to streamline the trivial implementation.
