You will need lots of accounts, memberships and other secret keys to become a real, productive member of the Analytics team. Here's an overview of things you should do in your first week. Please update this document as you go along! And last but not least, the most important thing: welcome to the Analytics team!
This section covers the first logistical steps to effectively join the Wikimedia Foundation staff. Take your time to explore and look around; don't rush!
Wikimedia tech employee orientation
Starting point for each new Wikimedia employee: https://office.wikimedia.org/wiki/New_tech_employee_orientation
Your manager and Wikimedia's IT department will help you open several accounts, including your work email. Right after this step, you will be able to communicate and participate in the day-to-day discussions between staff members. Be patient, and don't be scared by the huge amount of information and emails that you'll receive!
Reading mailing lists is important. All projects we build or use are open source, and like most open-source projects, they have communities that come together on mailing lists. There is much knowledge to be gained there.
Once you have a Wikimedia e-mail address, you should subscribe to these mailing lists:
- analytics@ (Archive)
- analytics-alerts - needs a phabricator task like https://phabricator.wikimedia.org/T123141
- Please request access to:
For an overview of all available mailing lists see https://lists.wikimedia.org/mailman/listinfo
- Optionally you may want to read archives or subscribe to the following mailing lists:
If there are mailing lists you want to read without subscribing you may consider using the following gateways:
Most of our communication happens on IRC, so you should set up an IRC nick:
- Install an IRC client -- ask team members for recommendations (some options are quassel, irssi, pidgin, xchat, or textual/adium if you're on a Mac)
- Follow the instructions on https://meta.wikimedia.org/wiki/IRC/Cloaks to request an IRC cloak
- Connect to #wikimedia-analytics on Freenode
- Other channels you might be interested in:
#wikimedia-cloud, #wikimedia-operations, #wikimedia-office
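Once your client is installed, connecting and joining the team channel follows the same pattern in most clients. In irssi, for example, it looks like this (a sketch; replace your_nick with the nick you registered):

```
/connect chat.freenode.net
/nick your_nick
/join #wikimedia-analytics
```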
Make sure you have an employee account and that you can use the office wiki; your office wiki user will be given to you once you get your Wikimedia e-mail address.
Please buy a high-quality headset -- your colleagues will love you for this. For more tips see https://office.wikimedia.org/wiki/Office_IT/Projects/Telepresence
We are part of a movement with a unique culture. It's worth taking the time to read a bit about how our biggest project works. This policy could be a useful start, as it introduces the core concepts from a concrete point of view: https://en.wikipedia.org/wiki/Wikipedia:Biographies_of_living_persons
Please follow the next subsections (order matters!) to get permissions for various fundamental services. Access to Production will be covered in a separate section.
Cloud VPS is a cluster of virtual machines. Access is completely decoupled from production and different ssh keys should be used.
Cloud VPS is not production, but we have several tools hosted on the cluster. Accessing Cloud VPS requires a Wikimedia developer account:
- Create account
- Log in
- You need to set up ssh keys
- Upload your public SSH key. Please keep in mind that Cloud VPS is a testing environment, so this SSH key should only be used for testing; if you need access to machines in the production cluster, your SSH key must be different (see the section below about production access).
- Configure your ~/.ssh/config with bastion hosts
- Ask someone in the team to add you to the relevant projects in Cloud VPS.
- Get familiar with the Cloud VPS environment: how to use the Horizon interface to spin up nodes, remove nodes, etc.
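Since the Cloud VPS key and the production key must be different key pairs, a common approach is to generate two separate pairs up front. A sketch (the file names are just a convention, not a requirement):

```shell
mkdir -p ~/.ssh
# Key pair used ONLY for Cloud VPS (testing). Use a real passphrase in
# practice; -N '' (empty passphrase) is only to keep this sketch short.
ssh-keygen -t ed25519 -N '' -f ~/.ssh/id_cloudvps -C "cloud-vps key"
# A SEPARATE key pair used ONLY for production:
ssh-keygen -t ed25519 -N '' -f ~/.ssh/id_prod -C "production key"
```

You then upload only the Cloud VPS public key (`~/.ssh/id_cloudvps.pub` in this sketch) through the developer account interface.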
https://phabricator.wikimedia.org is the Phabricator instance that we use. Follow this page to log in for the first time (please use the sunflower icon as suggested by the tutorial to leverage the single sign-on).
- Create an account
- Log in
- Gerrit is the code review workflow we use, built on top of Git
- Log in to Gerrit using your Wikimedia developer account credentials.
- To verify everything works, clone a repo from https://gerrit.wikimedia.org/r/#/admin/projects/?filter=analytics using SSH.
- Take a look at how to deal with Gerrit in different work scenarios: http://etherpad.wikimedia.org/p/analytics-gerrit
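The verification clone above can be sketched as follows (analytics/refinery is just one example project from that list, <username> is your developer account shell name, and 29418 is Gerrit's usual SSH port):

```
git clone ssh://<username>@gerrit.wikimedia.org:29418/analytics/refinery
```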
Accessing production infrastructure
With great power comes great responsibility. Please do read Wikimedia's SSH access guidelines carefully and familiarize yourself with your new SSH config before proceeding. Moreover, we manage very sensitive data, so please read Analytics/Data access to familiarize yourself with our procedures.
Shell access to Wikimedia cluster and production infrastructure
Tickets are filed for the ops team to see and need to be approved by a manager (example: https://phabricator.wikimedia.org/T96053).
You will also need to receive and acknowledge a legal disclaimer about data deletion. This is an important legal requirement, for which we need to ping Legal every time someone gains access to data with sudo permissions.
Talk with Andrew Otto about how to submit your SSH public key. You will likely need to proxy your SSH connection through a known machine to access some of the hosts above. You should not use the same SSH key for Cloud VPS (testing) and the stat1 machines (production).
The easiest approach is to ask a team member for their .ssh/config file and copy the proxy setup.
Please keep in mind that different processes are required to access production machines (stat1) and testing machines (Cloud VPS).
Sample ssh config
See SSH access#SSH configuration for sample SSH config. If you're in the analytics team you will probably SSH into both Cloud VPS and Production, so add relevant config for both in your ~/.ssh/config file.
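As a rough sketch only -- the bastion host names below are placeholders, the key file names are a hypothetical convention, and the authoritative values live at SSH access#SSH configuration -- a combined config separating the two environments might look like:

```
# ~/.ssh/config -- sketch; replace placeholders with real values.

# Cloud VPS (testing): its own key, proxied through the Cloud VPS bastion.
Host *.wmflabs.org
    User <shell-username>
    IdentityFile ~/.ssh/id_cloudvps
    ProxyJump <cloud-vps-bastion>

# Production (e.g. stat1 machines): a DIFFERENT key and a different bastion.
Host *.wmnet
    User <shell-username>
    IdentityFile ~/.ssh/id_prod
    ProxyJump <production-bastion>
```

ProxyJump requires a reasonably recent OpenSSH; older versions use a ProxyCommand line instead. The key point is that the two environments never share a key or a bastion.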
Once you have your SSH setup in place and your credentials have been approved by Ops (using the Phabricator task created before), you will be able to explore the Analytics infrastructure. Please start from Analytics and check the instructions for projects, for example:
Talk with the people of your team on IRC about their work and pointers to their projects, so you will get a more precise idea about who does what. Be patient, it will take a while to get a good overall picture!
Add the Analytics Team Calendar to your default view. Someone (we can all manage sharing) should go to https://calendar.google.com and add you:
- My Calendars -> Settings
- Click Team Analytics -> Share This Calendar
- Add the new person
As far as equipment goes you will need a good development machine.
Minimum machine specs:
- >=4GB RAM
- i7 >= 2.4 GHz quad-core or better
- 300GB disk
Recommended machine specs:
- >=8GB RAM
- i7 >= 2.4 GHz quad-core or better
- 300GB disk
At first sight you might think these specs are not required, but you will have to run VMs and use Vagrant to re-create various environments (sometimes with multiple nodes), so you will need some hardware for that.
You could consider creating accounts for:
The machines we deploy on run Ubuntu, so it is more convenient to have Ubuntu, or another UNIX-based operating system, installed on your development machine; it will considerably facilitate your work. You may choose any other Linux distribution you're familiar with.
A Mac is also a perfectly viable choice.
This is a collection of things you might find useful in your work.
You may find the following tools useful for syncing files between your local machine and remote machines (one-way or two-way). You can also mount remote directories as if they were local directories:
IDEs and editors
For Java development, use whatever IDE you feel comfortable with. Eclipse is the IDE du jour, but you might also want to look at IDEA. For remote development you may find vim useful (or a combination of a sync tool and your favorite editor/IDE). Other editors you might find useful include Sublime Text and Emacs.
You may find the following tools useful to search through configuration files or code:
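Plain grep already covers most of this; for instance, a recursive, case-insensitive search with line numbers (the file and setting name below are made up for illustration):

```shell
# Create a sample config file to search (sketch paths only).
mkdir -p /tmp/grep-demo
echo "retention_days = 90" > /tmp/grep-demo/settings.cfg

# -r recurses into directories, -n prints line numbers, -i ignores case.
grep -rni "retention" /tmp/grep-demo
```

Tools like ack or ag follow the same pattern but skip VCS directories and binaries by default.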
It may be useful to familiarize yourself with Vagrant and Puppet so you can recreate smaller environments/conditions on your machine to test the software you're developing or contributing to.
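A minimal Vagrantfile wiring Vagrant to Puppet looks roughly like this (a sketch: the box name and manifest paths are examples, not what our repos actually use):

```ruby
# Vagrantfile -- minimal sketch; box and paths are illustrative only.
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/trusty64"      # pick a box matching the cluster OS
  config.vm.provision "puppet" do |puppet|
    puppet.manifests_path = "manifests"  # directory holding your manifests
    puppet.manifest_file  = "site.pp"    # entry-point manifest to apply
  end
end
```

Running `vagrant up` then boots the VM and applies the manifest, which is how you can reproduce a production-like node locally.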
General Overview: What is Analytics doing back there?
Our most recent talk that gives you an overview of the stack:
Talks recommended by other members of the Analytics team:
- The Paramecium Talk, Aaron Halfaker
- Kafka @ Wikimedia foundation, Andrew Otto
- Hadoop and Beyond. An overview of Analytics infrastructure
- What happens when you type el.wikipedia.org (overview of the setup of wikipedia by our SRE team)