Kafka

Introduction

Apache Kafka is a scalable and durable distributed logging buffer.

Administration

See the Kafka Administration page for administration tips and documentation.

Kafka Clusters

WMF runs four Kafka clusters: analytics-eqiad, jumbo-eqiad, main-eqiad, and main-codfw. We run Confluent's Kafka distribution.

analytics (eqiad)

analytics-eqiad is the original Kafka install at WMF. (It was originally referred to as just eqiad.) It consists of 6 brokers inside the Analytics VLAN. It originally served webrequest logs and various other high volume analytics data. Most of its uses have been moved to jumbo-eqiad (described below). As of 2018-07, it still hosts the MediaWiki-produced Avro topics for CirrusSearchRequestSet and ApiAction logs. Removal of these is blocked on this ticket.

This cluster is slated to be decommissioned as soon as we can remove all usage from it.

Brokers: kafka_clusters.eqiad.

jumbo (eqiad)

The Kafka jumbo cluster is the replacement for the Kafka analytics cluster. The bulk of the data on this cluster is from webrequests. This cluster is also used directly for Analytics EventLogging, Discovery Analytics, statsv and EventStreams. Much of the data here is imported into Hadoop using Camus. Data from main clusters is mirrored to this cluster for Analytics purposes.

This cluster is intended to be used for high volume Analytics data, as well as other non-production-critical services. If you are building a production level (e.g. paging on a holiday) service that uses Kafka, you should use the main Kafka clusters. Almost all topics from the main clusters are mirrored here via MirrorMaker instances running on each of the broker nodes. These MirrorMaker instances are called main-eqiad_to_jumbo-eqiad, and as such they mirror topics from the main-eqiad cluster.

Brokers: kafka_clusters.jumbo-eqiad.

main (eqiad and codfw)

The 'main' Kafka clusters in eqiad and codfw are mirrors of each other. The 'main' clusters should be used for low volume critical production services. Main is currently used directly by EventBus and change-propagation.

Each main Kafka cluster consists of 3 brokers. eventlogging-service-eventbus, the EventBus HTTP produce service endpoint, is colocated with main Kafka brokers.

MirrorMaker instances run on each broker node and consume from the remote datacenter. On main-eqiad, these instances are called main-codfw_to_main-eqiad, and on main-codfw, they are called main-eqiad_to_main-codfw.

Only topics prefixed by the appropriate datacenter name are mirrored between the two main clusters. That is, only eqiad.* topics are mirrored from main-eqiad to main-codfw, and only codfw.* topics are mirrored from main-codfw to main-eqiad.
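
As an illustrative sketch (not the actual MirrorMaker whitelist configuration, which is a regex in puppet), the rule amounts to a prefix check on topic names:

 def is_mirrored(topic, source_dc):
     # A topic is mirrored out of a main cluster only if it is
     # prefixed with that cluster's datacenter name.
     return topic.startswith(source_dc + '.')

 # eqiad.* topics flow main-eqiad -> main-codfw; codfw.* topics do not.
 assert is_mirrored('eqiad.mediawiki.recentchange', 'eqiad')
 assert not is_mirrored('codfw.mediawiki.recentchange', 'eqiad')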

Brokers: kafka_clusters.main-eqiad and kafka_clusters.main-codfw.

Webrequest logs

varnishkafka is installed on frontend varnishes. It sends webrequest logs to the jumbo Kafka brokers.

Kafkatee is a replacement for udp2log that consumes from Kafka instead of from the udp2log firehose. It runs on oxygen, where it consumes, samples, and filters webrequest logs into files for easy grepping and troubleshooting. Fundraising also runs an instance of Kafkatee that feeds webrequest logs into banner analysis logic.

Monitoring

In Labs

Kafka is puppetized so that arbitrary clusters can be spun up in labs. Here's how.

You'll need a running Zookeeper and Kafka broker. These instructions show how to set up a single node Zookeeper and Kafka on the same host.

Create a new Jessie labs instance. In this example we have named our new instance 'kafka1' and it is in the 'analytics' project. Thus the hostname is kafka1.analytics.eqiad.wmflabs. Wait for the instance to spawn and finish its first puppet run. Make sure you can log in.

Edit hiera data for your project and set the following:

zookeeper_cluster_name: my-zookeeper-cluster
zookeeper_clusters:
  my-zookeeper-cluster:
    hosts:
      kafka1.analytics.eqiad.wmflabs: "1"

kafka_clusters:
  my-kafka-cluster:
    zookeeper_cluster_name: my-zookeeper-cluster
    brokers:
      kafka1.analytics.eqiad.wmflabs:
        id: "1"

Go to the configure instance page for your new instance, and check the following boxes to include needed classes:

  • role::zookeeper::server
  • role::kafka::simple::broker

Run puppet on your new instance. Fingers crossed and you should have a new Kafka broker running.

To verify, log into your instance and run

 kafka topics --create --topic test --partitions 2 --replication-factor 1
 kafka topics --describe

If this succeeds, you will have created a topic in your new single node Kafka cluster.

Kafka clients usually take a list of brokers and/or a zookeeper connect string in order to work with Kafka. In this example, those would be:

  • broker list: kafka1.analytics.eqiad.wmflabs:9092
  • zookeeper connect: kafka1.analytics.eqiad.wmflabs:2181/kafka/my-kafka-cluster

Note that the zookeeper connect URL contains a path that ends with your Kafka cluster name. Substitute whatever you named your cluster in your hiera config (my-kafka-cluster in this example).
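
As a quick connectivity check, here is a minimal sketch using the kafka-python client library (an assumption; any Kafka client will do). Modern clients only need the broker list; the zookeeper connect string is used by older consumers and admin tooling.

 from kafka import KafkaConsumer

 # Broker list from the example above.
 consumer = KafkaConsumer(bootstrap_servers='kafka1.analytics.eqiad.wmflabs:9092')

 # Should include the 'test' topic created earlier.
 print(consumer.topics())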

How do I ...

Produce/Consume to Kafka

The easiest way is to use kafkacat, which can be run in consumer or producer mode.

Consume

stat1007$ kafkacat -C -b kafka-jumbo1001.eqiad.wmnet:9092 -t test

Produce

stat1007$ cat test_message.txt
Hola Mundo
stat1007$ cat test_message.txt | kafkacat -P -b kafka-jumbo1001.eqiad.wmnet:9092 -t test
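
The same can be done from Python. Here is a minimal sketch using the kafka-python library (an assumption; the Avro example below uses the same library), producing to and consuming from the test topic on the jumbo cluster:

 from kafka import KafkaProducer, KafkaConsumer

 broker = 'kafka-jumbo1001.eqiad.wmnet:9092'

 # Produce: equivalent of kafkacat -P.
 producer = KafkaProducer(bootstrap_servers=broker)
 producer.send('test', b'Hola Mundo')
 producer.flush()

 # Consume: equivalent of kafkacat -C. consumer_timeout_ms makes the
 # loop exit after 10 seconds without new messages.
 consumer = KafkaConsumer('test',
                          bootstrap_servers=broker,
                          auto_offset_reset='earliest',
                          consumer_timeout_ms=10000)
 for message in consumer:
     print(message.value.decode('utf-8'))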

Consume Avro data from Kafka

 from kafka import KafkaConsumer
 import avro.schema
 import avro.io
 import io

 # Consume Avro-encoded messages from the topic.
 consumer = KafkaConsumer('mediawiki_CirrusSearchRequestSet',
                          group_id='my_group',
                          bootstrap_servers=['kafka-jumbo1001:9092'])

 schema_path = "/home/madhuvishy/avro-kafka/CirrusSearchRequestSet.avsc"
 schema = avro.schema.parse(open(schema_path).read())

 for msg in consumer:
     # Each message value is Avro binary data; decode it with the schema.
     # DatumReader assumes messages were written with this same schema.
     bytes_reader = io.BytesIO(msg.value)
     decoder = avro.io.BinaryDecoder(bytes_reader)
     reader = avro.io.DatumReader(schema)
     data = reader.read(decoder)
     print(data)

See also