Event Platform


The Wikimedia Event Platform exists to enable software and systems engineers to build loosely coupled software systems with event-driven architectures. Systems built this way can be easily distributed and decoupled, allowing for inherent horizontal scalability and versatility with respect to yet-unknown use cases.

[Figure: Event Platform architecture diagram]

Philosophy and data model

In typical web stacks, when data changes occur, those changes are reflected as an update to a state store. For example, if a user renames a Wikipedia page, that page's record in a database table would have its title field updated with the new value. Modeling data this way works if all you care about is the current state. But by keeping only the latest state, we discard a huge piece of useful information: the history of changes.

Instead of updating a database when a state change happens, we can choose to model the state change as what it is: an 'event'. An event is something happening at a specific time, e.g. 'page id 123 was renamed from title_A to title_B by user X at 2020-06-25T12:34:00Z'. If we keep the history of all events, we can always use them to recreate the current state, as well as the state at any point in the past. An event-based data model decouples the occurrence of an event from any downstream state changes. Multiple consumers can be notified of the event's occurrence, which enables engineers to build new systems based on the events without interfering with the producers or other consumers of those events.
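To make this concrete, here is a minimal Python sketch of replaying an event history to reconstruct state. The event shape, field names, and helper function are hypothetical illustrations, not an actual Event Platform schema or API:

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class PageRenamed:
        # A hypothetical page-rename event: a fact about something that
        # happened at a specific time, never updated in place.
        page_id: int
        old_title: str
        new_title: str
        performer: str
        dt: str  # ISO 8601 UTC timestamp; lexical order matches time order

    # The full history of rename events for page 123 (illustrative data).
    history = [
        PageRenamed(123, "title_A", "title_B", "user X", "2020-06-25T12:34:00Z"),
        PageRenamed(123, "title_B", "title_C", "user Y", "2021-01-10T08:00:00Z"),
    ]

    def title_as_of(events, as_of):
        # Replay the event history in time order to reconstruct the page
        # title at any point in time; replaying everything yields the
        # current state.
        title = None
        for e in sorted(events, key=lambda e: e.dt):
            if e.dt <= as_of:
                title = e.new_title
        return title

    print(title_as_of(history, "2020-12-31T23:59:59Z"))  # title_B
    print(title_as_of(history, "2021-06-01T00:00:00Z"))  # title_C (current)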

Event Platform generic concepts

event
    A strongly typed and schemaed piece of data, usually representing something happening at a definite time, e.g. 'revision create', 'user button click', 'page view', etc.
event schema (AKA 'schema')
    The datatype of an event. The event schema describes the data model of any given event and is used for validation upon receipt of events, as well as for data storage integration (an event schema can be mapped to an RDBMS schema, etc.). An event schema is just like a programming data type.
event stream (AKA 'stream')
    A continuous collection of events (loosely) ordered by time.
event bus
    A publish and subscribe (PubSub) message queue that events are initially produced to and consumed from. WMF uses Apache Kafka. (Note: 'event bus' is used here as a generic term and does not refer to the MediaWiki EventBus extension.)
event producer
    Producers create events and produce them to specific event streams in the event bus.
event consumer
    Consumers consume events from specific event streams in the event bus.
event intake service
    An HTTP service that accepts events, validates them against their schema, and produces them to a specific event stream in the event bus. WMF uses EventGate. (A sketch of this validate-and-produce flow follows this list.)
event stream processing
    Any software that consumes, transforms, and produces event streams. This includes simple event processing, as well as complex event processing and stateful stream processing. It is usually done with a distributed framework of some kind, e.g. Apache Flink, Apache Spark, or Kafka Streams, but also includes simpler homegrown technologies like Change-Propagation.
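
To illustrate how these concepts fit together, here is a minimal Python sketch of the validate-then-produce flow performed by an event intake service. The schema, event, and endpoint URL below are hypothetical placeholders; this mimics the general behavior of a service like EventGate, not its actual API:

    import jsonschema  # third-party: pip install jsonschema
    import requests    # third-party: pip install requests

    # A hypothetical JSONSchema for a page-rename stream. Real Event
    # Platform schemas are versioned and stored in schema repositories.
    page_rename_schema = {
        "type": "object",
        "required": ["meta", "page_id", "old_title", "new_title"],
        "properties": {
            "meta": {
                "type": "object",
                "required": ["stream", "dt"],
                "properties": {
                    "stream": {"type": "string"},
                    "dt": {"type": "string"},
                },
            },
            "page_id": {"type": "integer"},
            "old_title": {"type": "string"},
            "new_title": {"type": "string"},
        },
    }

    event = {
        "meta": {"stream": "mediawiki.page-rename", "dt": "2020-06-25T12:34:00Z"},
        "page_id": 123,
        "old_title": "title_A",
        "new_title": "title_B",
    }

    # An intake service validates each incoming event against its stream's
    # schema; invalid events are rejected instead of entering the event bus.
    jsonschema.validate(instance=event, schema=page_rename_schema)

    # A producer simply POSTs events to the intake service over HTTP; the
    # URL below is a placeholder, not a real intake endpoint.
    requests.post("https://eventintake.example.org/v1/events", json=[event])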

Background reading

History

The first event platform at WMF was EventLogging. This system originally used ZeroMQ to transport messages between its various services, but was later improved to use Kafka. It was built to collect client-side performance measurements and click-tracking data, and was also adopted in our mobile apps. It used a Meta-Wiki (meta.wikimedia.org) namespace to store its schemas, which were referenced by stable revision ID to validate incoming events before producing them to the primary ZeroMQ/Kafka topics for consumption.

The original ZeroMQ-based EventLogging reached a scaling limit in 2014. It ran on a single server without automatic failover and did not support running multiple eventlogging-processor servers concurrently, sustaining around 1000 events per second.[1] That year, EventLogging intake was migrated to Kafka, which let eventlogging-processor instances scale horizontally across multiple servers, with Kafka naturally distributing traffic and failing over as needed.

EventLogging was designed for instrumenting features for telemetry: tracking interactions and recording measurements to give us insight into how features are used by real users. The system had no built-in support for responding to incoming events or taking actions based on them; such larger pipelines had to be built by using the python-eventlogging client and deploying a separate microservice based on a minimal template.

In 2015, an effort was made to extend EventLogging beyond its analytics focus to event-driven pipelines from and to production systems. This effort was dubbed "EventBus" and culminated in three new components: the EventBus extension for MediaWiki, the mediawiki/event-schemas Git repository, and eventlogging-service-eventbus. eventlogging-service-eventbus was the internal frontend that accepted HTTP POSTs; it validated events against more tightly controlled production schemas and produced them to Kafka. EventBus was used to build the ChangeProp service. We originally intended to merge the analytics and production uses of EventLogging.

In 2018, we started the "Modern Event Platform" program, which took up EventBus's original goal of unifying analytics and production events, and aimed to rebuild other parts of WMF's event processing stack using open source (rather than homegrown) components where possible. The EventLogging python codebase was specific to WMF and MediaWiki, which made this unification harder, so we decided to build a new, more generic and extensible JSONSchema event intake service, eventually named EventGate.

In 2019, EventGate, along with other Modern Event Platform components, replaced eventlogging-service-eventbus; it is also intended to eventually replace the 'analytics' deployment of EventLogging services (e.g. eventlogging-processor).

Today, events logged via EventLogging no longer go to a MySQL database; instead, data analysts can find the data in Hadoop, which allows a much greater volume of events to be stored.

In 2022, the "Event Platform Value Stream" working group was created and tasked with working on the Stream Connectors and Stream Processing components. Evolution of these components is being driven by work to improve the ability to externalize the current state of MediaWiki pages using event streams.

References

  1. Event Infrastructure at WMF (2007-2018), Andrew Otto (WMF-restricted)