The Analytics Data Lake (ADL), or the Data Lake for short, is a large, analytics-oriented repository of data about Wikimedia projects (in industry terms, a data lake). It includes:
- Traffic data
  - Webrequest, pageviews, and unique devices
- Edits data
  - Historical data about revisions, pages, and users (e.g. MediaWiki History)
- Content data
  - Wikitext (latest & historical) and Wikidata entities
- Events data
  - EventLogging, EventBus, and event stream data (raw, refined, sanitized)
- ORES scores
  - Machine learning predictions (available as events as of 2020-02-27)
Some of these datasets (such as webrequest) are only available in Hive, while others (such as pageviews) are also available as data cubes, usually in a more aggregated form.
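For instance, a Hive query over the pageview data might look like the following sketch. The `wmf.pageview_hourly` table and field names are shown for illustration; check the current schema documentation before relying on them.

```sql
-- Daily pageview totals for English Wikipedia in January 2023.
-- Filtering on the year/month partition columns keeps the scan small.
SELECT year, month, day, SUM(view_count) AS views
FROM wmf.pageview_hourly
WHERE project = 'en.wikipedia'
  AND year = 2023 AND month = 1
GROUP BY year, month, day
ORDER BY day;
```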
You can query the Data Lake with three SQL engines (Presto, Hive, and Spark), which you can access through several different routes:
- Superset has a graphical SQL editor where you can run Presto queries
- Hue has a graphical SQL editor where you can run Hive queries
- Custom code on one of the analytics clients (the easiest way to do this is to use our Jupyter service)
All three engines also have command-line programs which you can use on one of the analytics clients. This is probably the least convenient way, but if you want to use it, consult the engine's documentation page.
Differences between the SQL engines
For the most part, Presto, Hive, and Spark work the same way, but they have some differences in SQL syntax and processing power.
- Spark and Hive use `STRING` as the keyword for string data, while Presto uses `VARCHAR`.
- In Spark and Hive, you use the `SIZE` function to get the length of an array, while in Presto you use `CARDINALITY`.
- In Spark and Hive, double quoted text (like `"foo"`) is interpreted as a string, while in Presto it is interpreted as a column name. It's easiest to use single quoted text (like `'foo'`) for strings, since all three engines interpret it the same way.
- Spark and Hive have a `CONCAT_WS` ("concatenate with separator") function, but Presto does not.
- Presto has no `FIRST` and `LAST` functions.
- If you need to use a keyword like `DATE` as a column name, you use backticks (`` `date` ``) in Spark and Hive, but double quotes (`"date"`) in Presto.
Data Lake datasets which are available in Hive are stored in the Hadoop Distributed File System (HDFS), usually in the Parquet file format. The Hive metastore is a centralized repository for metadata about these data files, and all three SQL query engines we use (Presto, Spark SQL, and Hive) rely on it.
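For example, you can inspect the metadata the metastore holds for a table (storage location, file format, partition columns) from Hive; the `wmf.webrequest` table is used here as an illustration:

```sql
-- Show metastore metadata for a table: columns, partitions,
-- HDFS location, and the file format (e.g. Parquet).
DESCRIBE FORMATTED wmf.webrequest;
```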
The Analytics cluster, which consists of Hadoop servers and related components, provides the infrastructure for the Data Lake.