Portal:Toolforge/Admin/Harbor
This page contains information about the Toolforge Harbor deployment.
Deployment layout
Currently Harbor is deployed using a mix of puppet and manual steps, on top of docker-compose.
Puppet takes care of setting up the docker environment, creating the base directory structure, and laying down the basic configuration.
From there, the manual steps are to generate the docker-compose file and start the application (and to set the admin and robot users/passwords).
Currently the whole installation is done in a single VM named <project>-harbor-<num> (ex. tools-harbor-1.tools.eqiad1.wikimedia.cloud). You can see the upstream docs of the procedure here.
Inside that VM, all the data and config is under /srv/ops/harbor, where there's:
- docker-compose.yml: with the generated docker-compose configuration
- harbor.yml: with the actual configuration, managed by puppet
- prepare script: generates all the config files from the harbor.yml file. You have to run it manually to regenerate the docker-compose file when harbor.yml changes (see the example after the listing below).
- data directory: this is actually a mounted cinder volume, and holds all the data for the harbor containers (ex. the images pushed to harbor).
- common directory: this contains some configuration files for the docker containers, generated by the prepare script.
The database is hosted in Trove, a different one for each project (tools/toolsbeta).
root@tools-harbor-1:/srv/ops/harbor# ls /srv/ops/harbor
common data docker-compose.yml harbor.yml prepare
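For example, after harbor.yml changes (e.g. via a puppet run), a minimal sketch of regenerating and applying the configuration, assuming everything lives under /srv/ops/harbor as described above:
root@tools-harbor-1:/srv/ops/harbor# ./prepare            # regenerate docker-compose.yml and common/ from harbor.yml
root@tools-harbor-1:/srv/ops/harbor# docker-compose up -d # recreate any containers whose configuration changed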
Running components
You can check the status of all the components by running:
root@tools-harbor-1:/srv/ops/harbor# docker-compose ps
Name Command State Ports
----------------------------------------------------------------------------------------------------------------
harbor-core /harbor/entrypoint.sh Up (healthy)
harbor-exporter /harbor/entrypoint.sh Up
harbor-jobservice /harbor/entrypoint.sh Up (healthy)
harbor-log /bin/sh -c /usr/local/bin/ ... Up (healthy) 127.0.0.1:1514->10514/tcp
harbor-portal nginx -g daemon off; Up (healthy)
nginx nginx -g daemon off; Up (healthy) 0.0.0.0:80->8080/tcp, 0.0.0.0:9090->9090/tcp
redis redis-server /etc/redis.conf Up (healthy)
registry /home/harbor/entrypoint.sh Up (healthy)
registryctl /home/harbor/start.sh Up (healthy)
This installation is exposed to the world using a CloudVPS proxy.
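To quickly check the installation from outside the VM, a hedged sketch using Harbor's health endpoint (part of the Harbor v2 REST API; the hostname is the tools one from the Links section below):
curl -s https://tools-harbor.wmcloud.org/api/v2.0/health   # every component should report "healthy"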
Logs
Currently the logs are only available on the docker side (plain docker logs, not docker-compose logs). For example:
root@tools-harbor-1:/srv/ops/harbor# docker logs --tail 10 harbor-core
2023-10-25T09:20:49Z [INFO] [/pkg/task/dao/execution.go:471]: scanned out 2471 executions with outdate status, refresh status to db
...
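As a convenience, a hedged sketch to tail the recent logs of all Harbor containers at once (plain docker, as noted above):
root@tools-harbor-1:~# for c in $(docker ps --format '{{.Names}}'); do echo "== $c =="; docker logs --tail 5 "$c" 2>&1; done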
Credentials
As is common for puppet-managed projects, you can find the actual secrets under the /etc/puppet/private hiera directory tree on the project puppetmaster host (ex. tools-puppetmaster-02.tools.eqiad1.wikimedia.cloud).
You can also find all the admin and robot credentials in /srv/ops/harbor/harbor.yml.
There are a bunch of credentials related to the Harbor setup.
- admin account (defaults to admin/Harbor12345; see each project's puppetmaster for the actual value).
- tekton robot (for pipelines; it injects docker images built by users).
- builds-api (for project creation, when a build is started for the first time).
- maintain-harbor (uses its credentials to ensure policies are set).
- some tool admins' personal admin accounts.
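A hedged example of looking the secrets up directly on the host; the exact key names in harbor.yml differ between Harbor versions, so grep broadly:
root@tools-harbor-1:~# grep -iE 'password|secret' /srv/ops/harbor/harbor.yml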
Procedures
Restart services
For this you can just run docker-compose restart, which will stop and start the processes (it will not recreate the containers though).
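If only a single component is misbehaving, you can also restart it on its own; a hedged sketch, assuming the service is named core in the generated docker-compose.yml (check the service names there, they differ from the container names shown by docker-compose ps):
docker-compose restart core   # restart one service; the name must match docker-compose.yml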
Refresh containers (pull new images)
For a full container recreation, you have to stop and start the whole setup, like:
docker-compose down # bring everything down
docker-compose up -d # bring everything up again
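Note that down/up only recreates containers from the image tags referenced in docker-compose.yml; if the tags did not change, a hedged extra step to force fetching newer images:
docker-compose pull   # fetch the images referenced in docker-compose.yml
docker-compose up -d  # recreate the containers on the freshly pulled images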
Quota management
Quota here refers to how much storage a project is entitled to for its artifacts. There are two types of quotas in Harbor:
- The default project quota
- Individual project quotas
Both types are currently manually managed, meaning you have to log into the Harbor UI as a user with admin permissions. Once logged in, navigate to Administration -> Project Quotas through the menu on the left.
Changing the default project quota
Click EDIT at the top of the screen where it says Default quota space per project. Select a number and a unit, or type in -1 for unlimited quota (don't). Note that any changes to the default quota apply only to projects created after the change.
Changing individual project quotas
Click the box on the left side of the project you want to change the quota for, then click on the edit button above the list of projects. Project quotas can only be edited one at a time, not in batch. Again, use -1 if you want to indicate unlimited quota, although if you think that a project might need unlimited quota, maybe think again.
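Quotas can also be managed through the Harbor v2 REST API; a hedged sketch for looking up and updating a single project's quota (admin credentials required; the quota id and the example size are placeholders):
curl -s -u 'admin:<password>' 'https://tools-harbor.wmcloud.org/api/v2.0/quotas?page_size=100'   # find the quota id of the project
curl -s -u 'admin:<password>' -X PUT 'https://tools-harbor.wmcloud.org/api/v2.0/quotas/<quota id>' \
    -H 'Content-Type: application/json' \
    -d '{"hard": {"storage": 8589934592}}'   # new limit in bytes (8 GiB here); -1 means unlimited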
Links
https://toolsbeta-harbor.wmcloud.org/harbor/projects
https://tools-harbor.wmcloud.org/harbor/projects
Upgrade Harbor
Toolsbeta
- SSH into the toolsbeta-harbor instance
- Harbor is at /srv/ops/harbor
- Create a backup of the /srv/ops/harbor/data/* folders inside the /srv/ops/harbor/data/ directory, which is a volume mount. Make sure there's enough space left on the volume before you attempt this. Also, remove any older backups if there are any, to avoid including them in the new backup:
> tar -cvzf /srv/ops/harbor/data/backup.tar.gz -C data .
- Grep the database password from /srv/ops/harbor/harbor.yml
- Backup the Harbor database (PostgreSQL on Trove) to /srv/harbor_backup:
> pg_dump -h ttg4ncgzifw.svc.trove.eqiad1.wikimedia.cloud -p 5432 -U harbor -d harbor -F c -f <backup file path>
- Get the latest Harbor release package (the 'online' install version) from https://github.com/goharbor/harbor/releases and extract it.
- Compare the puppet puppet/modules/profile/templates/toolforge/harbor/prepare.erb file with the newly generated prepare script and reconcile the differences if any (leave the "GENERATED BY PUPPET" comment).
- Migrate harbor.yml to the new schema by running:
# test this command
> docker run -it --rm -v /:/hostfs goharbor/prepare:v<version here> migrate -i /srv/ops/harbor/harbor.yml -o /srv/ops/harbor/harbor.yml.new
- Copy the content of this new harbor.yml.new into the body of the puppet puppet/modules/profile/templates/toolforge/harbor/harbor-docker.yaml.epp template. Make sure you leave any template bits as they were, e.g. the robot account stuff at the bottom of the file. Also, replace the hostnames, passwords and such with the corresponding variables from the top of the file.
- Push the puppet patch to gerrit
- SSH into toolsbeta-puppetserver and navigate to the puppet modules directory:
user@toolsbeta-puppetserver-1:~$ sudo -i
root@toolsbeta-puppetserver-1:/etc/puppet# cd /srv/git/operations/puppet/modules
root@toolsbeta-puppetserver-1:/srv/git/operations/puppet/modules#
- Cherry-pick the commit
- Back on the toolsbeta-harbor instance, force a puppet run to apply the cherry-picked commit
- Send a log message to #wikimedia-cloud that toolsbeta harbor is about to go down
- Disable puppet
- Run ./prepare
- Refresh the harbor installation to pull the new images: docker-compose down and docker-compose up -d
- Check that the UI works, that you can build an image and pull it, start a webservice from it, and that maintain-harbor still works as expected (see the verification sketch after this list)
- Reenable puppet once the patch has been merged (after upgrading harbor on tools)
- Clean up any artifacts, leaving only the most recent backups
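A hedged verification sketch for the check step above (the /api/v2.0/health and /api/v2.0/systeminfo endpoints are part of the Harbor v2 API; jq availability is an assumption):
curl -s https://toolsbeta-harbor.wmcloud.org/api/v2.0/health                            # all components should report "healthy"
curl -s https://toolsbeta-harbor.wmcloud.org/api/v2.0/systeminfo | jq .harbor_version   # should match the new release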
Tools
- Backup the data on tools-harbor similarly to how you did it on toolsbeta-harbor (a database restore sketch, in case a rollback is needed, follows this list). Again, you will have to retrieve the password from /srv/ops/harbor/harbor.yml. The tools trove instance is at 5ujoynvlt5c.svc.trove.eqiad1.wikimedia.cloud
- Merge the puppet patch.
- Force a puppet run on the tools-harbor instance.
- Run the /srv/ops/harbor/prepare script.
- Refresh the harbor installation to pull the new images: docker-compose down and docker-compose up -d
- Again, check that Harbor is up and works as expected
- Delete any backups to free space
- Update and improve these instructions :)
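In case you need to roll back, a hedged sketch of restoring the database from the custom-format dump taken with pg_dump -F c above (stop Harbor first; restoring the data directory and the old images may also be needed):
docker-compose down   # stop Harbor before touching the database
pg_restore -h 5ujoynvlt5c.svc.trove.eqiad1.wikimedia.cloud -p 5432 -U harbor -d harbor --clean <backup file path>
docker-compose up -d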
Development
Local setup
If you want to try Harbor in a local setup (e.g. on your laptop), try Toolforge lima-kilo.