Wikimetrics
Administration
Logs
Wikimetrics now logs to systemd. Logs are available through journalctl:
sudo journalctl -u uwsgi-wikimetrics-web --since "1 hour ago"
sudo journalctl -u wikimetrics-queue --since "1 hour ago"
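To follow a service's logs live while debugging, the standard journalctl follow flag works too:
# Stream new log lines as they arrive (Ctrl-C to stop)
sudo journalctl -u uwsgi-wikimetrics-web -f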
Logs from uwsgi are at:
/var/log/uwsgi/app
IP tables
These are not in puppet and need to be rerun every time the instance is rebooted:
iptables -t nat -I OUTPUT -p tcp --destination s1.labsdb --dport mysql -j DNAT --to-destination 10.64.20.12:3306
iptables -t nat -I OUTPUT -p tcp --destination 192.168.99.2 --dport mysql -j DNAT --to-destination 10.64.37.4:3306
iptables -t nat -I OUTPUT -p tcp --destination s3.labsdb --dport mysql -j DNAT --to-destination 10.64.37.5:3306
iptables -t nat -I OUTPUT -p tcp --destination 192.168.99.4 --dport mysql -j DNAT --to-destination 10.64.37.4:3307
iptables -t nat -I OUTPUT -p tcp --destination 192.168.99.5 --dport mysql -j DNAT --to-destination 10.64.37.4:3308
iptables -t nat -I OUTPUT -p tcp --destination s6.labsdb --dport mysql -j DNAT --to-destination 10.64.37.5:3307
iptables -t nat -I OUTPUT -p tcp --destination s7.labsdb --dport mysql -j DNAT --to-destination 10.64.37.5:3308
iptables -t nat -I OUTPUT -p tcp --destination c1.labsdb --dport mysql -j DNAT --to-destination 10.64.37.4:3309
iptables -t nat -I OUTPUT -p tcp --destination s2.labsdb --dport mysql -j DNAT --to-destination 10.64.37.4:3309
iptables -t nat -I OUTPUT -p tcp --destination c3.labsdb --dport mysql -j DNAT --to-destination 10.64.37.4:3309
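To check that the rules are in place after rerunning them (standard iptables listing, nothing wikimetrics-specific):
# Show the OUTPUT chain of the NAT table with numeric addresses
sudo iptables -t nat -L OUTPUT -n --line-numbers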
Local setup
Madhu set up Docker! Here's how to set up Wikimetrics locally. I had some crufty old version of the Docker Python tooling and had to do this first:
sudo pip install --upgrade docker-compose
sudo pip uninstall docker-py; sudo pip uninstall docker; sudo pip install docker
After that, the instructions from the README work in a fresh checkout of wikimetrics from gerrit. The README also explains what's running and how to work with this development environment.
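The README has the exact commands, but bringing the environment up is typically along these lines (a sketch, assuming a docker-compose.yml at the repo root; the service names match the list below):
# Build and start all services in the background
docker-compose up -d
# Tail the logs of one service, e.g. the web frontend
docker-compose logs -f web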
Essentially Wikimetrics has the following pieces:
- db: a mysql database that has tables to track users of wikimetrics, the cohorts they have defined, and the reports they ran on those cohorts.
- queue: a celery instance that listens for async tasks from the wikimetrics website. These can be validating users in a cohort or running reports.
- redis: a redis instance that stores the results of a celery job.
- scheduler: a celery scheduler that wikimetrics uses like a more flexible cron to run daily jobs (for recurring reports).
- web: a Flask website that serves the user interface and report results.
- migrations: wikimetrics uses SQLAlchemy Alembic to version control the database schema. There are good instructions in the README for updating your local database and generating new migrations (see the sketch after this list).
- tests: wikimetrics has too many tests. We used to aim for 100% coverage, and coverage is still fairly high. I wouldn't rely on the tests alone to catch bugs, but if you break a test, something is probably wrong, so they're useful.
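For the migrations item above, here's a minimal sketch of the usual Alembic workflow; the README is authoritative, and wikimetrics may wrap these commands in its own scripts:
# Apply all pending migrations to your local database
alembic upgrade head
# Generate a new migration after changing the SQLAlchemy models
alembic revision --autogenerate -m "short description of the schema change"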
To do anything useful, you'll have to add some users to the test databases, because wikimetrics validates cohorts and your cohorts would otherwise have no valid users. The Docker setup creates "enwiki", "dewiki", and "wiki" as test databases.
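As an illustration only, adding a user could look something like this; the host, credentials, and required columns all depend on the schema the Docker setup loads, so treat every name here as an assumption:
# Hypothetical example: insert a minimal row into enwiki's user table
mysql -h 127.0.0.1 -u root enwiki \
  -e "INSERT INTO user (user_name, user_registration) VALUES ('TestUser', '20150101000000');"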
Labs setup
There's a Wikimetrics Labs project with the following instances:
- wikimetrics-test: total playground, we used it to try anything crazy
- wikimetrics-staging: semi-stable, meant to beta test features and send links to folks like Amanda
- wikimetrics-01: "production" box that the actual metrics.wmflabs.org points to, via a web proxy set up in Horizon
Deploying wikimetrics
You can use this repo to deploy wikimetrics: https://github.com/wikimedia/analytics-wikimetrics-deploy. Check the README there for what is covered. You'll probably need admin rights in the Wikimetrics project to do this. For the production deploy, you'll see there's a secrets folder with a submodule in it that points to the private puppet repo. This is where passwords for "production" wikimetrics live, along with OAuth secrets and other stuff like that.
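Because the secrets folder is a git submodule, remember to initialize it when cloning the deploy repo (standard git usage; the submodule fetch will fail without access to the private puppet repo):
git clone https://github.com/wikimedia/analytics-wikimetrics-deploy.git
cd analytics-wikimetrics-deploy
# Pull in the secrets submodule
git submodule update --init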