{"id":37126,"date":"2012-04-25T13:40:23","date_gmt":"2012-04-25T13:40:23","guid":{"rendered":"http:\/\/rafaelfajardo.com\/portfolio\/blake-matheny-tumblr-firehose-the-gory-details\/"},"modified":"2012-04-25T13:40:23","modified_gmt":"2012-04-25T13:40:23","slug":"blake-matheny-tumblr-firehose-the-gory-details","status":"publish","type":"post","link":"https:\/\/rafaelfajardo.com\/portfolio\/blake-matheny-tumblr-firehose-the-gory-details\/","title":{"rendered":"Blake Matheny: Tumblr Firehose &#8211; The Gory Details"},"content":{"rendered":"<p><a href='http:\/\/tumblr.mobocracy.net\/post\/21756118310\/tumblr-firehose-the-gory-details'>Blake Matheny: Tumblr Firehose &#8211; The Gory Details<\/a><\/p>\n<div class=\"link_description\">\n<p><a class=\"tumblr_blog\" href=\"http:\/\/tumblr.mobocracy.net\/post\/21756118310\/tumblr-firehose-the-gory-details\">mobocracy<\/a>:<\/p>\n<blockquote>\n<p>Back in December I started putting some thought into the tumblr firehose. While the initial launch was covered\u00a0<a href=\"http:\/\/engineering.tumblr.com\/post\/21276808338\/tumblr-firehose\">here<\/a>, and the business stuff surrounding it was covered by places like\u00a0<a href=\"http:\/\/techcrunch.com\/2012\/04\/17\/gnip-syndicates-tumblr-firehose\/\">techcrunch<\/a>\u00a0and\u00a0<a href=\"http:\/\/allthingsd.com\/20120417\/tumblr-gets-a-data-firehose\/\">AllThingsD<\/a>, not much has been said about the technical details.<\/p>\n<p>First, some back story. I knew in December that a product need for the firehose was upcoming and had simultaneously been spending a fair amount of time thinking about the general tumblr activity stream. In particular I had been toying quite a bit with trying to figure out a reasonable real-time processing model that would work in a heterogenous environment like the one at Tumblr. 
I had also been quite closely following some of the exciting work being done at LinkedIn by Jay Kreps and others on\u00a0<a href=\"http:\/\/incubator.apache.org\/kafka\/\">Kafka<\/a>\u00a0and\u00a0<a href=\"http:\/\/www.slideshare.net\/dtunkelang\/databus-a-system-for-timelineconsistent-lowlatency-change-capture\">Databus<\/a>, by Eric Sammer from Cloudera on\u00a0<a href=\"http:\/\/incubator.apache.org\/projects\/flume.html\">Flume<\/a>, and by Nathan Marz from Twitter on\u00a0<a href=\"https:\/\/github.com\/nathanmarz\/storm\">Storm<\/a>.<\/p>\n<p>I had talked with some of the engineers at Twitter about their firehose and knew some of the challenges they had overcome in scaling it. I spent some time reading their fantastic\u00a0<a href=\"https:\/\/dev.twitter.com\/docs\/streaming-api\/methods\">documentation<\/a>\u00a0and after reviewing some of these systems came up with the system I actually wanted to build, much of it heavily influenced by the great work being done by other people. My \u2018ideal\u2019 firehose, from the consumer\/client side, had the following properties:<\/p>\n<ul>\n<li>Usable via\u00a0<code>curl<\/code><\/li>\n<li>Allows a client to \u2018rewind\u2019 the stream in case of missed events or maintenance<\/li>\n<li>If a client disconnects, it should pick up the stream where it left off<\/li>\n<li>Client concurrency\/parallelism, e.g. multiple consumers getting unique views of the stream<\/li>\n<li>Near real-time is good enough (under 1s from the time an event is emitted until it is consumed)<\/li>\n<\/ul>\n<p>From an event emitter (or producer) perspective, we simply wanted an elastic backend that could grow and shrink based on latency and persistence requirements.<\/p>\n<p>What we ended up with accomplishes all of these goals and ended up being fairly simple to implement. We took the best of many worlds (a bit of kafka, a bit of finagle, some flume influences) and created the whole thing in about 10 days. 
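The resume-on-disconnect property in the wishlist above can be sketched in a few lines. This is purely illustrative: the endpoint URL and parameter names in the comment are hypothetical, not the documented firehose API, and the helper name is mine.

```python
import time

# The post describes an offset given as Oldest, Newest, or a number of
# seconds back from the current UTC time.  A reconnecting client can
# compute that seconds-back offset from the last event it processed:
def resume_offset(last_event_unixtime, now=None):
    """Seconds to rewind so a reconnecting client resumes at (or just
    before) the last event it successfully processed."""
    if now is None:
        now = time.time()
    return max(0, int(now - last_event_unixtime))

# A consumer might then reconnect with something like (hypothetical URL):
#   curl -u user:pass 'https://firehose.example.com/stream?application_id=myapp&offset=90'
```

Because the offset is relative to "now", a client that was down for 90 seconds simply asks for the stream starting 90 seconds back and replays what it missed.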
The internal name for this system is Parmesan, which is both a cheese and an Arrested Development character (Gene Parmesan, PI).<\/p>\n<p>The system is composed of 4 primary components:<\/p>\n<ul>\n<li>A ZooKeeper cluster, used for coordinating Kafka as well as stream checkpoints<\/li>\n<li>Kafka, which is used for message persistence and distribution<\/li>\n<li>A Thrift process, written with scala\/finagle, which the tumblr application talks to<\/li>\n<li>An HTTP process, written with scala\/finagle, which consumers talk to<\/li>\n<\/ul>\n<p>The Tumblr application makes a Thrift RPC call containing event data to parmesan. These RPC calls take about 5ms on average, and the client will retry unless it gets a success message back. Parmesan batches these events and uses Kafka to persist them to disk every 100ms. This functionality is all handled by the Thrift side of the parmesan application. We also implemented a very simple custom message serialization format so that parmesan could completely avoid any kind of message serialization\/deserialization overhead. This had a dramatic impact on GC time (the serialization change wasn\u2019t made until it was needed), which in turn had a significant impact on average connection latency.<\/p>\n<p>On the client side, any standard HTTP client works and requires (besides a username and password) an application ID and an optional offset. The offset is used for determining where in the stream to start reading from, and is specified either as Oldest (7 days ago), Newest (from right now), or an offset in seconds from the current time in UTC. Up to 16 clients with the same application ID can connect, each viewing a unique partition of the activity stream. Stream partitioning allows you to parallelize your consumption without seeing duplicates. 
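One common way to get this kind of duplicate-free partitioning, up to 16 consumers sharing one application ID, each seeing a disjoint slice, is consistent key hashing. A minimal sketch under that assumption; the post doesn't say how Parmesan actually assigns partitions, and every name here is mine:

```python
import hashlib

MAX_CONSUMERS = 16  # the post's stated per-application-ID connection limit

def partition_for(event_key: str, num_consumers: int) -> int:
    """Map an event to exactly one of num_consumers disjoint partitions
    by hashing a stable per-event key.  Each consumer then sees a
    unique, non-overlapping view of the stream."""
    if not 1 <= num_consumers <= MAX_CONSUMERS:
        raise ValueError("between 1 and 16 consumers per application ID")
    # md5 is used only as a cheap, stable hash (Python's built-in hash()
    # is salted per-process, so it would not partition consistently).
    digest = hashlib.md5(event_key.encode("utf-8")).digest()
    return int.from_bytes(digest[:4], "big") % num_consumers

# Consumer i of n would then process only events where
# partition_for(key, n) == i, skipping the rest.
```

Because every key maps to exactly one partition, the consumers collectively see the whole stream with no duplicates, which is what makes parallel consumption safe.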
This is a great feature if, for instance, you took your app down for maintenance and want to quickly catch back up in the stream.<\/p>\n<p>Kafka doesn\u2019t easily (natively) support this style of rewinding, so we just persist stream offsets to ZooKeeper. That is, periodically clients with a specific application ID will say, \u201cHey, at this unixtime I saw a message which had this internal Kafka offset\u201d. By periodically persisting this data to ZooKeeper, we can \u2018fake\u2019 this rewind functionality in a way that is useful, but imprecise (we basically have to estimate where in the Kafka log to start reading from).<\/p>\n<p>We use 4 \u2018queue class\u2019 (tumblr speak for a box with 72GB of RAM and 2 mirrored disks) machines, capable of supporting roughly 100k messages per second each, to support the entire stream. Those 4 machines provide a message backlog of 1 week, allowing clients to drop into the stream anywhere in the past week.<\/p>\n<p>As I mentioned on\u00a0<a href=\"https:\/\/twitter.com\/#!\/bmatheny\/status\/159878707513798656\">twitter<\/a>, I\u2019m quite proud of the software and the team behind it. Many thanks to\u00a0<a href=\"http:\/\/derekg.org\/\">Derek<\/a>,\u00a0<a href=\"http:\/\/strle.tumblr.com\/\">Danielle<\/a>\u00a0and\u00a0<a href=\"http:\/\/wkmacura.tumblr.com\/\">Wiktor<\/a>\u00a0for help and feedback.<\/p>\n<p>If you\u2019re interested in this kind of distributed systems work, we\u2019re\u00a0<a href=\"http:\/\/www.tumblr.com\/jobs\">hiring<\/a>.<\/p>\n<\/blockquote>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>Blake Matheny: Tumblr Firehose &#8211; The Gory Details mobocracy: Back in December I started putting some thought into the tumblr firehose. While the initial launch was covered\u00a0here, and the business stuff surrounding it was covered by places like\u00a0techcrunch\u00a0and\u00a0AllThingsD, not much has been said about the technical details. First, some back story. 
I knew in December [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"link","meta":{"_jetpack_memberships_contains_paid_content":false,"footnotes":"","jetpack_publicize_message":"","jetpack_publicize_feature_enabled":true,"jetpack_social_post_already_shared":true,"jetpack_social_options":{"image_generator_settings":{"template":"highway","default_image_id":0,"font":"","enabled":false},"version":2}},"categories":[],"tags":[1539],"class_list":["post-37126","post","type-post","status-publish","format-link","hentry","tag-emergent-digital-practices","post_format-post-format-link"],"jetpack_publicize_connections":[],"jetpack_featured_media_url":"","jetpack_shortlink":"https:\/\/wp.me\/p6PWot-9EO","jetpack_sharing_enabled":true,"_links":{"self":[{"href":"https:\/\/rafaelfajardo.com\/portfolio\/wp-json\/wp\/v2\/posts\/37126","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/rafaelfajardo.com\/portfolio\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/rafaelfajardo.com\/portfolio\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/rafaelfajardo.com\/portfolio\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/rafaelfajardo.com\/portfolio\/wp-json\/wp\/v2\/comments?post=37126"}],"version-history":[{"count":0,"href":"https:\/\/rafaelfajardo.com\/portfolio\/wp-json\/wp\/v2\/posts\/37126\/revisions"}],"wp:attachment":[{"href":"https:\/\/rafaelfajardo.com\/portfolio\/wp-json\/wp\/v2\/media?parent=37126"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/rafaelfajardo.com\/portfolio\/wp-json\/wp\/v2\/categories?post=37126"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/rafaelfajardo.com\/portfolio\/wp-json\/wp\/v2\/tags?post=37126"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}