Data Platform/Systems/Kerberos


Hadoop by default does not ship with a strong authentication mechanism for users and daemons. In order to enable its "Secure mode", an external authentication service must be plugged in, and the only compatible one is Kerberos.

When enabled, users and daemons will need to authenticate to our Kerberos service before being able to use Hadoop. Please read the next sections for more information about what to do.

High level overview



This diagram gives a high-level overview of how Kerberos authentication affects users. First of all, notice that the Hadoop cluster is the only part of the infrastructure that will be configured to use Kerberos. The red lines show which systems are required to authenticate with Kerberos in order to use Hadoop:

  • Druid, since its deep storage is Hadoop HDFS. Please note that this does not mean that Druid will require Kerberos authentication from its users, only that Druid itself needs to authenticate before fetching data from HDFS. This means that Superset and Turnilo dashboards will keep working as before, without changes.
  • Users on the Analytics Clients. Anyone who uses a tool on the clients that interacts with Hadoop (such as Oozie, Hive, Spark, Jupyter Notebooks) will need to authenticate via Kerberos.

Will my files on HDFS change when Kerberos is enabled?

Nothing will change, including file permissions. Kerberos allows users to prove their identities before accessing or modifying a file, nothing more.

How do I...

Authenticate via Kerberos

Run the kinit command, enter your password, and then execute any command (spark, etc.). This step is important: if you skip it, basically anything you use will report rather obscure error messages. The kinit command grants you a so-called Kerberos TGT (Ticket Granting Ticket), which is then used to authenticate you to the various services and hosts.

The ticket lasts for 2 days, so you do not need to run kinit every time, only when the ticket expires. You can inspect the status of your ticket via klist.

If you are using the analytics client nodes (stat100x) then the Kerberos Auto-Renewal Service means that you are only required to enter your password once every 14 days.

The first time kinit is executed it will ask you to change the temporary password that you should have received via email when requesting access (if not, please see the "Get a password for Kerberos" section below).
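
A minimal sketch of the authentication flow on an analytics client (the username and hostname are placeholders):

# Obtain a Kerberos TGT; you will be prompted for your Kerberos password
myuser@stat1007:~$ kinit
Password for myuser@WIKIMEDIA:

# Inspect the cached ticket and its expiry/renewal times
myuser@stat1007:~$ klist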

Renewing a Ticket

Your Kerberos ticket expires after 2 days, but within this period it can be renewed and its lifetime extended by another 2 days using the command kinit -R.

The maximum lifetime of a Kerberos ticket is currently configured to be 14 days, after which it cannot be renewed and you will have to request a new ticket with kinit and enter your Kerberos password.
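
For example, a quick sketch of the renewal flow:

# Renew the existing ticket for another 2 days (no password needed)
kinit -R

# Check the new expiry time
klist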

Kerberos Auto-Renewal Service

In order to make life easier for users, the analytics client nodes (stat100[4-9]) have a system in place that will automatically renew your Kerberos ticket every day, up to the maximum permissible lifetime of the ticket.

Once you have run kinit for the first time and have a valid ticket, the next time you SSH into the same host a periodic task will be created for you that will renew your ticket every day at midnight UTC.

You will see a message like this:

You have a valid Kerberos ticket.
Creating automatic Kerberos ticket renewal service.

On subsequent logins you will see a message like this:

You have a valid Kerberos ticket.
Your automatic Kerberos ticket renewal service is also active on this host.

The result is that on these hosts you only have to enter your Kerberos password every 14 days, as opposed to every 2 days.

Get a password for Kerberos

Please create a Phabricator task with the "Data Engineering" tag to request a Kerberos identity. Check that:

  • Your shell username is in analytics-privatedata-users.
  • Your shell username and email are listed in the task's description.

If you have any doubt, feel free to ask the Data Engineering team on Libera.Chat in #wikimedia-analytics or via email. You'll receive an email containing a temporary password, which you'll be required to change during your first authentication (see the section above).

FAQ:

  • This is really annoying, can't we just use LDAP or something similar to avoid another password?
    • We tried really hard, but for a number of technical reasons the integration would be complicated and cumbersome for Data Engineering and SRE to maintain. There might be some changes in the future, but for now we'll have to deal with another password to remember.

Reset my password for Kerberos?

File a task with the "Data Engineering" tag and we'll reset it for you; there is no self-service.

Run a command as a system user?

The option that is currently available is a Kerberos keytab: a file, readable only by its owner, that holds the credentials used to authenticate to Kerberos. We use keytabs for daemons/services, and we plan to provide them to users who need to run periodic jobs. The major drawbacks are:

  • Our security bar is lowered a bit, since being able to SSH to a host becomes sufficient to access HDFS (as opposed to also knowing a password). This is more or less the current scheme, so not a big deal, but we have to keep it in mind.
  • The keytab needs to be generated for every host that needs this automation, and it also needs to be regenerated and re-deployed whenever the user changes their password (this doesn't happen for daemons, of course). This is currently not automated, and it requires a ping to Data Engineering every time.


The solution that we are currently working on is the following: every user in the analytics-privatedata-users POSIX group will be able to sudo to (impersonate) the analytics-privatedata user, which in turn will be able to read a keytab on some hosts (likely stat1007 and notebook1003) and authenticate to Kerberos without a password. In order to hide complexity and help users we created the kerberos-run-command tool:

# No kinit done, hence no credentials for my user
elukey@stat1007:~$ klist
klist: No credentials cache found (filename: /tmp/krb5cc_13926)

# As expected, a simple ls will fail
elukey@stat1007:~$ hdfs dfs -ls /
[..cut..]
java.io.IOException: Failed on local exception: java.io.IOException: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]; Host Details : local host is: "stat1007.eqiad.wmnet/10.64.5.32"; destination host is: "analytics1029.eqiad.wmnet":8020;
[..cut..]

# First attempt to use the kerberos-run-command 
elukey@stat1007:~$ kerberos-run-command analytics-privatedata hdfs dfs -ls /
The user keytab that you are trying to use (/etc/security/keytabs/analytics/analytics-privatedata.keytab) doesn't exist or it isn't readable from your user, aborting...

# The problem is that the keytab is not readable by all
# analytics-privatedata-users members directly, but they
# have to sudo first:
elukey@stat1007:~$ sudo -u analytics-privatedata kerberos-run-command analytics-privatedata hdfs dfs -ls /
Found 5 items
drwxr-xr-x   - hdfs hadoop          0 2019-06-20 06:00 /system
drwxrwxrwt   - hdfs hdfs            0 2019-11-14 15:03 /tmp
drwxr-xr-x   - hdfs hadoop          0 2019-10-25 16:24 /user
drwxr-xr-x   - hdfs hdfs            0 2019-01-17 13:42 /var
drwxr-xr-x   - hdfs hadoop          0 2019-06-25 13:46 /wmf

# It worked! Interesting question: does my user have credentials now? Let's check...
elukey@stat1007:~$ klist
klist: No credentials cache found (filename: /tmp/krb5cc_13926)

# Why not? Because only the analytics-privatedata user has:
elukey@stat1007:~$ sudo -u analytics-privatedata klist
Ticket cache: FILE:/tmp/krb5cc_498
Default principal: analytics-privatedata/stat1007.eqiad.wmnet@WIKIMEDIA

Valid starting       Expires              Service principal
11/14/2019 15:03:33  11/15/2019 01:03:33  krbtgt/WIKIMEDIA@WIKIMEDIA
	renew until 11/15/2019 15:03:33

# This may seem confusing at first, but it makes sense, since we had to sudo
# to be able to read the keytab.
# Corollary: the analytics-privatedata user is not a replacement for your
# kerberos authentication, only a convenient way to run recurrent jobs via
# cron or similar.

NOTE: currently kerberos-run-command doesn't support scripts, only executables. The workaround is to have the user you are sudoing as run kinit via a simple kerberos-run-command invocation, after which you can run commands as that user relying on the cached ticket, as in the sketch below.
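
A sketch of this workaround, reusing the commands from the note above (the spark2-sql invocation is only illustrative):

# Have the 'analytics' user kinit via its keytab by running any simple executable
sudo -u analytics kerberos-run-command analytics hdfs dfs -ls /

# Subsequent commands run as 'analytics' reuse the cached ticket,
# so scripts/wrappers like spark2-sql now work as well
sudo -u analytics spark2-sql ...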

Submit a job to Oozie or Hive with a different user than mine

Hive (server) and Oozie (server) now run as Hadoop proxies, namely they are entitled to run jobs as other users. They of course ask users for authentication credentials before doing any action on their behalf. After a kinit you are able to provide those credentials (which are handled transparently by the hive/oozie client tools), but if you need to submit as another user, then you'll need to follow the section above and use kerberos-run-command.
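
For example, a sketch of submitting a Hive query as the analytics system user (the query file path is hypothetical):

# Authenticate as 'analytics' with its keytab and run a Hive query on its behalf
sudo -u analytics kerberos-run-command analytics hive -f /path/to/query.hql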

Use JDBC with Hive Server

Use the following connection string (adding any custom parameters that you need, of course):

jdbc:hive2://analytics-hive.eqiad.wmnet:10000/default;principal=hive/analytics-hive.eqiad.wmnet@WIKIMEDIA

The principal=hive/analytics-hive.eqiad.wmnet@WIKIMEDIA part may look weird at first, since we'd expect to put our own credentials there. With JDBC you need to provide the identity of the target Kerberos principal, not yours (yours is picked up automatically from the credentials cache), in order to instruct Hive to use Kerberos. See the Hive docs for more info.
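
For example, with beeline (a sketch, assuming you have already run kinit):

# Connect to the Hive 2 server using the Kerberos-enabled JDBC string
beeline -u "jdbc:hive2://analytics-hive.eqiad.wmnet:10000/default;principal=hive/analytics-hive.eqiad.wmnet@WIKIMEDIA"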

Check the Yarn Resource Manager's UI

Nothing has changed for the Yarn UI!

Check Hue

Nothing changed for Hue!

Use Hive

The hive CLI is compatible with Kerberos, even though it uses an old protocol (connecting to the Hive Metastore and HDFS directly). The beeline command line uses the Hive 2 server via JDBC and is also compatible with Kerberos. You just need to authenticate as described above and then run the tool on an-tool1006.eqiad.wmnet.
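
A minimal sketch on an-tool1006.eqiad.wmnet (assuming beeline is already configured to use the connection string from the JDBC section above):

# Authenticate first, then run either client as usual
kinit
hive
beeline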

Use Spark 2

On stat100[4,5,7] and notebook100[3,4], authenticate via kinit and then use the Spark shell as you are used to (see the sketch after the list of limitations below). There are currently some limitations:

  • spark2-thriftserver requires the hive keytab, which is only present on an-coord1001, so when running on client nodes it will return the following error: org.apache.hive.service.ServiceException: Unable to login to kerberos with given principal/keytab
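
A minimal sketch on one of the clients listed above (spark2-shell is assumed to be the interactive shell wrapper alongside spark2-sql):

# Authenticate, then start a Spark session as usual
kinit
spark2-shell    # interactive shell
spark2-sql      # SQL command line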

Use Jupyterhub (SWAP replica)

You can authenticate to Kerberos by running kinit in the Terminal window. Please remember that this is needed only once every 24h, not every time.

Administration

Main article: Data Engineering/Systems/Kerberos/Administration

Check the status of the HDFS Namenodes and Yarn Resource Managers

Most of the commands are the same, but of course to authenticate as the hdfs user you'll need to use a keytab:

sudo -u hdfs kerberos-run-command hdfs /usr/bin/yarn rmadmin -getServiceState an-master1001-eqiad-wmnet

sudo -u hdfs kerberos-run-command hdfs /usr/bin/hdfs haadmin -getServiceState an-master1002-eqiad-wmnet