Portal:Toolforge/Admin/Harbor

From Wikitech

This page contains information about the Toolforge Harbor deployment.

Deployment layout

Currently Harbor is deployed partially via Puppet and partially via manual steps, using docker-compose.

Puppet takes care of setting up the docker environment, creating the base directory structure, and the basic configuration. From there, the manual steps are to generate the docker-compose file, start the application, and set the admin and robot users/passwords.

Currently the whole installation is done in a single VM named <project>-harbor-<num> (ex. tools-harbor-1.tools.eqiad1.wikimedia.cloud). You can see the upstream docs of the procedure here.

Inside that VM, all the data and config is under /srv/ops/harbor, which contains:

  • docker-compose.yml: with the generated docker-compose configuration
  • harbor.yml: with the actual configuration, managed by puppet
  • prepare script: generates all the config files from the harbor.yml file. You have to run it manually to regenerate the docker-compose file whenever harbor.yml changes.
  • data directory: this is actually a mounted cinder volume, and holds all the data for the harbor containers (ex. the images pushed to harbor).
  • common directory: this contains some configuration files for the docker containers, generated by the prepare script.

The database is hosted in Trove, a different one for each project (tools/toolsbeta).

root@tools-harbor-1:/srv/ops/harbor# ls /srv/ops/harbor
common  data  docker-compose.yml  harbor.yml  prepare

Running components

You can check the status of all the components by running:

root@tools-harbor-1:/srv/ops/harbor# docker-compose ps
      Name                     Command                  State                          Ports                    
----------------------------------------------------------------------------------------------------------------
harbor-core         /harbor/entrypoint.sh            Up (healthy)                                               
harbor-exporter     /harbor/entrypoint.sh            Up                                                         
harbor-jobservice   /harbor/entrypoint.sh            Up (healthy)                                               
harbor-log          /bin/sh -c /usr/local/bin/ ...   Up (healthy)   127.0.0.1:1514->10514/tcp                   
harbor-portal       nginx -g daemon off;             Up (healthy)                                               
nginx               nginx -g daemon off;             Up (healthy)   0.0.0.0:80->8080/tcp, 0.0.0.0:9090->9090/tcp
redis               redis-server /etc/redis.conf     Up (healthy)                                               
registry            /home/harbor/entrypoint.sh       Up (healthy)                                               
registryctl         /home/harbor/start.sh            Up (healthy)
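
To spot broken components quickly, you can filter that output. A minimal sketch, assuming the tabular docker-compose (v1) output format shown above, where the first two lines are the header and separator:

```shell
# Print the names of services that are not in an "Up" state.
# Expects `docker-compose ps` (v1 tabular format) output on stdin.
unhealthy_components() {
  awk 'NR > 2 && NF && $0 !~ /Up/ { print $1 }'
}

# Usage: docker-compose ps | unhealthy_components
```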

This installation is exposed to the world using a CloudVPS proxy.

Logs

Currently the logs are only available on the docker side (not through docker-compose, just docker). For example:

root@tools-harbor-1:/srv/ops/harbor# docker logs --tail 10 harbor-core
2023-10-25T09:20:49Z [INFO] [/pkg/task/dao/execution.go:471]: scanned out 2471 executions with outdate status, refresh status to db
...

Credentials

As is common for Puppet-managed projects, you can find the actual secrets under the /etc/puppet/private hiera directory tree of the project's puppetmaster host (ex. tools-puppetmaster-02.tools.eqiad1.wikimedia.cloud).

You can also find all the admin and robot credentials under /srv/ops/harbor/harbor.yml.

There are a bunch of credentials related to the Harbor setup.

  • admin account (defaults to admin/Harbor12345; see the project's puppetmaster for the specific one).
  • tekton robot (used by the build pipelines to push docker images built by users).
  • builds-api (for project creation, when a build is started for the first time).
  • maintain-harbor robot (used to ensure policies are set).
  • some tool admins' personal admin accounts.
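
To grab a credential from the command line instead of opening the file, here is a small sketch. The key names (top-level harbor_admin_password, and password under the database section) follow the upstream harbor.yml template; verify them against the actual file before relying on this:

```shell
# Extract passwords from a harbor.yml-style file.
# Key names assume the upstream harbor.yml layout.

admin_password() {
  # Top-level `harbor_admin_password:` key.
  awk '$1 == "harbor_admin_password:" { print $2; exit }' "$1"
}

db_password() {
  # `password:` key nested under the top-level `database:` section.
  awk '/^database:/ { in_db = 1; next }
       /^[^ #]/     { in_db = 0 }
       in_db && $1 == "password:" { print $2; exit }' "$1"
}

# Usage: db_password /srv/ops/harbor/harbor.yml
```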

Procedures

Restart services

For this you can just run docker-compose restart, which will stop and start the processes (it will not recreate the containers though).

Refresh containers (pull new images)

For a full container recreation, you have to stop and start the whole setup:

docker-compose down  # bring everything down

docker-compose up -d  # bring everything up again

Upgrade Harbor

Toolsbeta

  1. SSH into the toolsbeta-harbor instance.
  2. Harbor is at /srv/ops/harbor.
  3. Create a backup of the folders under /srv/ops/harbor/data/ inside that same directory (it is a volume mount, so the backups stay on the cinder volume).
  4. Grep the database password from /srv/ops/harbor/harbor.yml.
  5. Back up the Harbor database (PostgreSQL on Trove):
    > pg_dump -h ttg4ncgzifw.svc.trove.eqiad1.wikimedia.cloud -p 5432 -U harbor -d harbor -F c -f <backup file name>
    
  6. Get the latest Harbor release package (the 'online' install version) from https://github.com/goharbor/harbor/releases and extract it.
  7. Compare the puppet prepare.erb file with the newly generated prepare script and reconcile the differences, if any (leave the "GENERATED BY PUPPET" comment in place).
  8. Migrate harbor.yml to the new schema by running:
    # test this command
    > docker run -it --rm -v /:/hostfs goharbor/prepare:v<version here> migrate -i /srv/ops/harbor/harbor.yml -o /srv/ops/harbor/harbor.yml.new
    
  9. Copy the content of this new harbor.yml.new into the body of puppet harbor.yml.epp. Make sure you leave any template bits as they were, e.g. the robot account stuff at the bottom of the file. Also, replace the hostnames, passwords and such with the corresponding variables from the top of the file.
  10. Push the puppet patch to gerrit.
  11. SSH into toolsbeta-puppetmaster and navigate to the puppet modules directory
    dcaro@toolsbeta-puppetmaster-04:~$ sudo -i
    
    root@toolsbeta-puppetmaster-04:~# cd /etc/puppet
    
    root@toolsbeta-puppetmaster-04:/etc/puppet# ls -l
    ...
    lrwxrwxrwx 1 root root   38 Jun 11  2020 modules -> /var/lib/git/operations/puppet/modules
    ...
    
    root@toolsbeta-puppetmaster-04:/etc/puppet# cd /var/lib/git/operations/puppet/modules
    
    root@toolsbeta-puppetmaster-04:/var/lib/git/operations/puppet/modules#
    
  12. Cherry-pick the commit.
  13. Back on the toolsbeta-harbor instance, force a puppet run to apply the cherry-picked commit.
  14. Send a log message to #wikimedia-cloud announcing that toolsbeta harbor is about to go down.
  15. Disable puppet.
  16. Run ./prepare.
  17. Refresh the harbor installation to pull the new images: docker-compose down and then docker-compose up -d.
  18. Enable puppet.
  19. Check that the UI works, that you can build an image and pull it, etc.
  20. Remove the data folder backups to free space.
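
The backup steps above (copying the data folders and dumping the database) can be sketched as shell functions. This is a hedged sketch, not a tested runbook: the Trove hostname is the toolsbeta one from step 5, and DB_PASSWORD is assumed to already hold the password grepped from harbor.yml:

```shell
# Sketch of the toolsbeta backup steps (data folders + database dump).

backup_filename() {
  # Timestamped custom-format dump name, e.g. harbor-db-20231025-0920.dump
  printf 'harbor-db-%s.dump' "$(date +%Y%m%d-%H%M)"
}

backup_harbor_data() {
  # Copy each data folder next to itself, so the backup stays on the
  # cinder volume. Defaults to the path from the layout section.
  data_dir="${1:-/srv/ops/harbor/data}"
  for d in "$data_dir"/*/; do
    cp -a "${d%/}" "${d%/}.bak"
  done
}

backup_harbor_db() {
  # Dump the Trove-hosted Postgres database. DB_PASSWORD must hold the
  # password grepped from harbor.yml.
  PGPASSWORD="$DB_PASSWORD" pg_dump \
    -h ttg4ncgzifw.svc.trove.eqiad1.wikimedia.cloud -p 5432 \
    -U harbor -d harbor -F c -f "$(backup_filename)"
}
```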

Tools

  1. Back up the data on tools-harbor similarly to how you did it on toolsbeta-harbor. Again, you will have to retrieve the password from /srv/ops/harbor/harbor.yml. The tools Trove instance is at 5ujoynvlt5c.svc.trove.eqiad1.wikimedia.cloud.
  2. Merge the puppet patch.
  3. Force a puppet run on the tools-harbor instance.
  4. Run the /srv/ops/harbor/prepare script.
  5. Refresh the harbor installation to pull the new images: docker-compose down and then docker-compose up -d.
  6. Again, check that Harbor is up and works as expected.
  7. Delete any backups to free space.
  8. Update and improve these instructions :)
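
Part of the "check that Harbor is up" step can be automated with a quick smoke check. A sketch under some assumptions: the /api/v2.0/health endpoint and its JSON shape (a top-level "status" field that is "healthy" when all components are up) come from the upstream Harbor v2 API, the response is assumed to be compact JSON, and the hostname is whatever the CloudVPS proxy exposes for the project:

```shell
# Crude post-upgrade smoke check against Harbor's health API.
harbor_is_healthy() {
  # Reads the health JSON on stdin; succeeds only if the first (top-level)
  # "status" field in the document is "healthy".
  [ "$(grep -o '"status":"[a-z]*"' | head -n1)" = '"status":"healthy"' ]
}

# Usage (HARBOR_HOST is a placeholder for the proxied hostname):
#   curl -fsS "https://$HARBOR_HOST/api/v2.0/health" | harbor_is_healthy && echo OK
```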

Development

Local setup

If you want to try Harbor in a local setup (e.g. on your laptop), consider using Toolforge lima-kilo.


See also