Nova Resource:Admin-monitoring/SAL

2024-02-13

  • 09:41 arturo: deleting all leaked instances by hand (11 VMs)

2023-01-26

  • 09:26 taavi: deleting instances leaked by T327980

2023-01-10

  • 12:39 arturo: delete all VMs and restart the fullstack service on cloudcontrol1005 - T326626

2022-10-26

  • 07:41 taavi: deleted leaked nova-fullstack VMs

2022-10-07

  • 09:03 arturo: restarted nova-fullstack.service on cloudcontrol1005 (T320232)
  • 08:49 arturo: cleaning up a bunch of leaked VMs in "BUILD" status (T320232)

2022-06-22

  • 16:52 taavi: remove some failed nova-fullstack instances now that DNS problems are hopefully fixed

2022-05-10

  • 20:57 balloons: cleanup fullstack test instances after cloudcontrol reboots

2022-05-06

  • 13:27 andrewbogott: removed a few leaked VMs to forestall a weekend page

2021-07-15

  • 20:09 balloons: remove stale/errored fullstack tests

2021-06-08

  • 09:36 dcaro: actually, there are several different errors; will open tasks for each of them
  • 09:31 dcaro: there's a bunch of nova-fullstack VMs in error state because network allocation timed out, even though neutron logged a "successfully plugged vif" message; cleaning up for now

2021-04-24

  • 16:19 arturo: deleting 2 leaked VMs by hand: 6aefef6f-0723-499d-895f-314f4804c377 | fullstackd-20210424153344 and af8bc9bd-ea0a-4789-b8dd-cf5cf96c31cc | fullstackd-20210424074938 (puppet check step timed out)

2019-10-17

  • 14:30 jeh: cleaning up failed nova-fullstack VMs related to the puppet CA (T234332)

2019-07-02

  • 07:27 jeh: clearing out leaked instances that are active and accessible

2019-03-14

  • 17:05 gtirloni: Updated fullstackd to use debian-9.8

2018-02-09

  • 00:53 bd808: Added Arturo Borrero Gonzalez, BryanDavis, and Bstorm as admins
  • 00:50 bd808: Removed Yuvipanda at user request (T186289)

2017-09-27

  • 05:36 madhuvishy: Clear all fullstackd instances because the project is full and the fullstack tests are failing

2017-06-07

  • 16:20 andrewbogott: clearing out leaked instances — most are up and accessible and fine — this seems to be a timing issue of some sort

2017-03-29

  • 19:03 andrewbogott: updated image, cleared out old stuck instances