Toolforge Workgroup meeting 2022-11-15 notes
- Andrew Bogott
- Arturo Borrero Gonzalez
- Bryan Davis
- David Caro
- Francesco Negri
- Nicholas Skaggs
- Raymond Olisaemeka
- Seyram Komla Sapaty
- Slavina Stefanova
- Taavi Väänänen
- Did we finish discussing a looser approach to accepting changes to the Toolforge Docker images?
- Idea to rewrite parts of toolforge-cli in golang (from python)
- Discussion: How decisions affecting Toolforge are made/communicated. Taavi mentioned a couple of times that he's learnt about some decisions by chance.
- Decision request / Enhancement proposal: https://phabricator.wikimedia.org/T320667 Toolforge Kubernetes component workflow improvements.
- Which buildpacks to allow users to use, and how (install any Debian package? etc.)
- Welcome! In the past few months, informal toolforge meetings have been taking place, as well as discussions occurring in the WMCS weekly meetings.
- Toolforge is a complex topic to discuss, and a complex system to work on.
- Regular meetings will help ensure feedback and agenda can be discussed.
- Meeting is open to the community, thank you for joining!
- Toolforge roots especially are most welcome.
Did we finish discussing a looser approach to accepting changes to the Toolforge Docker images?
- Truetype fonts being added to PHP image.
- Shall we accept community patches into the base images for Toolforge?
- E.g., someone wants to add a Debian package to the base Docker image; do we accept the patch and rebuild the images, within reasonable limits?
DC: It depends. In general, no, not all patches. Can’t install everything for everyone. Things that unblock language / accessibility / security, yes. Buildpacks will make this easier, *soon*
AB: Do we have a policy around this? Can what DC shared be documented?
DC: Unknown, but we should add something. Toolforge roots could add it. Add a contributing file to the repo (link to a wiki?)
AB: What if the proposed change is to migrate an existing tool from grid to kubernetes?
A: Hard to discuss in isolation. Want the existing containers to go away, rather than be added to over time. In the past, we’ve said no, wait and we’ll have a better solution for you. This violates our policy to migrate from the grid ASAP, but we don’t want to risk buildpack adoption.
FN: Not a problem with either adding packages or waiting. Do we know how many specific requests we have for additional packages, right now?
BD: There’s a phabricator tag and board
K: Sharing numbers from the grid migration board. 16 projects have reached out already that need image changes. How many of those are for C#/mono? Unsure.
TV: Would like to keep it minimal. Today, we don’t have the infrastructure to constantly rebuild new images.
AB: Not blocking harbor on buildpacks?
DC: Harbor will replace docker registry. Can migrate images later, also plug it into gitlab CI to make it easier.
FN: Do we want to find a way to say ‘yes’ to some things? Or have a policy that is firmly ‘no’?
TV: Yes, we should allow some things. E.g., saying no to a headless browser, but considering things like make if requested.
AB: Sounds like we agree on considering requests, but generally saying no.
DC: Which buildpacks should we allow people to use? Maintain our own? Upstream only? Which upstream(s)? (This might be a different topic)
TV: Different topic. Would be curious to see buildpacks before forming an opinion
AB: Should be a way to include arbitrary debian packages in a buildpack
NS: We should also consider allowing upstream package manager packages
AB: Allowing anything becomes like build-your-own-container, right?
BD: Buildpacks are defining this, right? Adding things to these seems out of scope and moves away from rebuilding upstream buildpacks
NS: In a buildpack world, as operators, we would build images, not users
DC: Yes, you can define inside projects the dependencies you may need. There's a difference between which buildpacks you are running and what you are passing to them. We control which buildpacks can be built/run, but users control what is needed inside those images.
FN: What are we worried about? Too many resources? What are we trying to prevent?
- "encourage" running open source only
- being able to upgrade the base image easily (buildpacks are built in a way that you can rebase an already built image on top of a new base one without rebuilding it completely)
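To make the operator/user split above concrete: with Cloud Native Buildpacks, each tool could carry a project descriptor declaring what it needs, while the builder (and therefore the buildpack set) stays under operator control. A rough, hypothetical `project.toml` sketch; the builder URL and buildpack id are illustrative, not real Toolforge values:

```toml
# Hypothetical project descriptor (Cloud Native Buildpacks project.toml).
# Builder image and buildpack ids are made-up examples.
[_]
schema-version = "0.2"
id = "my-tool"

[io.buildpacks]
# Operators control which builder (and thus which buildpacks) may run:
builder = "example-harbor.wmcloud.org/toolforge/builder:latest"

# Users pick from the operator-approved buildpack set:
[[io.buildpacks.group]]
id = "heroku/python"
```

Because buildpack-built images keep base layers separate, operators could later rebase an already-built image onto a patched base (e.g. with `pack rebase`) without a full rebuild, which is the upgrade property mentioned above.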
AB: Perhaps buildpacks can be the discussion point for the next Toolforge Workgroup meeting
Idea to rewrite parts of toolforge-cli in golang (from python)
AB: Today we have toolforge jobs, could also add toolforge build, and maybe later toolforge ws (webservices). A common interface could sit behind these. Currently in python; considering rewriting in golang
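The subcommand layout described here could be sketched with argparse; a hypothetical illustration only, the real toolforge-cli's structure and flags may differ:

```python
import argparse


def build_parser() -> argparse.ArgumentParser:
    # Hypothetical sketch of a common `toolforge` entry point with
    # per-service subcommands; names mirror the meeting discussion,
    # not the real CLI.
    parser = argparse.ArgumentParser(prog="toolforge")
    sub = parser.add_subparsers(dest="command", required=True)

    jobs = sub.add_parser("jobs", help="manage Toolforge jobs")
    jobs.add_argument("action", choices=["run", "list", "delete"])

    build = sub.add_parser("build", help="build an image from source")
    build.add_argument("source", help="git URL of the tool to build")

    # `toolforge ws` (webservices) could be wired in the same way later.
    return parser


if __name__ == "__main__":
    args = build_parser().parse_args(["build", "https://example.org/tool.git"])
    print(args.command, args.source)
```

Each backend (jobs, build) stays behind one wrapper command, which is the "common interface" idea; rewriting in golang would change the implementation, not this shape.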
DC: Currently project exists in toolforge beta. We're considering restructuring buildpacks service in a different way than in the past.
TV: are we talking about the "wrapper" part or about the build service specific part?
DC: Yes, but not needed. Wanted to rebuild the toolforge build service to be an API, closer to the toolforge jobs service. Would enable controlling a bit more where and how users build things. Originally thought to build it in golang, as k8s integration would be easier. Also wanted to add golang experience. Tried to build something in golang as a test. Both the decision and the POC were delayed. However, we can still chat about it now.
DC: Golang has some advantages around speed, including startup time. Slavina might have more opinions / thoughts
SS: The wish to learn and explore golang drove the thinking just as much as thinking golang was the best language choice. In order to contribute upstream to things like buildpacks, golang knowledge is needed
AB: Want to learn golang, like having a reason. The community is probably more familiar with python than golang. Rust is also emerging
FN: Nice to have python knowledge and python consistency. Python is probably good enough for what we are trying to do. That said, it would be useful for others and the community to know more golang. Undecided, I see both points. However, strict technical needs don’t suggest a big advantage to using golang
TV: Since k8s uses go, we should have knowledge in go. For a small project like this it makes sense to use go. Write k8s-related things in go because the SDKs are much better
DC: Basically the points raised are why we are doing this: small project, learn go, webhooks are in golang, SDKs are better in golang, keep k8s things in one language (golang).
DC: Not actively rewriting the client. Not going to happen soon, unless someone else wants to do it now.
TV: CLI wrapper and backend would be nice to detangle and split into different repos.
DC: Yes, that was the plan.
Discussion: How decisions affecting Toolforge are made/communicated. Taavi mentioned a couple of times that he's learnt about some decisions by chance.
AB: Some decisions were made without larger discussion. Need to think about potential communication avenues. Lots of public channels: mailing lists, IRC, phab, etc. But also private channels: 1:1 chats, video calls, slack. How should decisions involving toolforge be made and communicated?
DC: As part of this, the toolforge build service team started discussing how to better share work. Lots of face to face interactions. Proposed writing updates similar to Toolhub on mediawiki: give updates on decisions, etc. This could help, but only for the toolforge build service. We don’t have a coordinated effort for toolforge as a whole. Would be interesting to share ideas before they become decisions.
TV: Notice things like patches that do things, or a test doing something. But no phabricator task describes what is changing or why. Would like to see this on a ticket. Would also want to give input as decisions are being made. Difference in hearing about something happening versus participating and maybe changing the outcome.
DC: Makes sense. Any specific examples?
TV: The golang cli is an example. Just the patch, no discussion.
AB: Following that example, what could have been done differently? Email and patch? IRC and patch?
TV: Yes, seeing conversation somewhere. Someone didn’t just say ‘let’s rewrite this’
NS: Taavi is not the only one who’s experienced this
BD: Slack has caused some rift in discussion; discussions are happening outside of IRC. That said, different people are on the team, with different tools, etc.
DC: Fairly sure this didn’t get discussed in slack either. More of a face to face conversation. Would be interesting to consider open slack channels?
BD: IT won’t allow open slack channels
DC: Trying to make sure tasks are created is something that came from the retrospective
AB: Action items?
A: The hard part is deciding what granularity we want. People fix bugs often. With toolforge, nothing is publicly communicated. A volunteer's work isn’t always shared. Staff versus volunteers can differ on work that affects a single project versus all projects. What’s the line for wanting to know everything?
TV: Hard to define. I want to know more than the average toolforge user wants. What is called a decision versus not isn't the line. For example, the prometheus metrics work: yes, we want to monitor everything, but I would want to know when everything is actually being monitored.
AB: Announcing it’s done is something we do. But that’s different than announcing status along the way
Decision request / Enhancement proposal: https://phabricator.wikimedia.org/T320667 Toolforge Kubernetes component workflow improvements.
AB: We are creating things and adding technical debt. Not a gitops or similar approach, just a couple of cookbooks. The main step was deciding to use deploy.sh. We should define things more, as we are using k8s more often.
TV: Want to understand what we are deploying. Proposing using helmfile to deploy. The proposal is long; hopefully folks have read it
DC: Yes, good idea. Some thoughts on implementation. Harbor should help a lot. Can play in toolsbeta. Both images and helm charts can be deployed there. Would put everything in helm, have one place
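The helmfile approach could look roughly like this; a sketch only, where the repository URL, chart names, and versions are all made up for illustration:

```yaml
# Hypothetical helmfile.yaml: one declarative list of every Toolforge
# k8s component, with charts pulled from Harbor. All names/URLs here
# are illustrative assumptions, not real values.
repositories:
  - name: toolforge
    url: https://example-harbor.wmcloud.org/chartrepo/toolforge

releases:
  - name: jobs-api
    namespace: jobs-api
    chart: toolforge/jobs-api
    version: 1.2.3   # pinned, so deploys are reproducible
  - name: builds-api
    namespace: builds-api
    chart: toolforge/builds-api
    version: 0.4.0
```

Running `helmfile apply` would then converge the cluster to whatever this file declares, giving the "one place" DC describes instead of the ad-hoc deploy.sh step.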
AB: At some point if we want to solve this problem, we will need to allocate time to fix technical debt. Can’t be done in spare time. Needs focus and attention, even incrementally.
TV: Yes, agreed. It will take time to do it. Definitely prioritize this in future.
AB: Welcome to give feedback on this meeting and format.
BD: Useful, thank you for creating!
DC: Likewise, thank you!
- Perhaps have a buildpacks-dedicated meeting next time?