nova-specs/efe79ed0625ce0f01d6dd0d1d1c...

{
"comments": [
{
"unresolved": false,
"key": {
"uuid": "ef67e337_b5febb04",
"filename": "/PATCHSET_LEVEL",
"patchSetId": 6
},
"lineNbr": 0,
"author": {
"id": 11604
},
"writtenOn": "2023-01-02T05:47:47Z",
"side": 1,
"message": "im not actully back form pto until thursday but there is one thing i think we should simplifi.\n ",
"revId": "efe79ed0625ce0f01d6dd0d1d1c112297373050a",
"serverId": "4a232e18-c5a9-48ee-94c0-e04e7cca6543"
},
{
"unresolved": false,
"key": {
"uuid": "f71836e7_ea4ef19e",
"filename": "/PATCHSET_LEVEL",
"patchSetId": 6
},
"lineNbr": 0,
"author": {
"id": 4393
},
"writtenOn": "2023-01-09T22:16:13Z",
"side": 1,
"message": "Soft -1 but I think setting a flag to trigger configdrive regen would be better.",
"revId": "efe79ed0625ce0f01d6dd0d1d1c112297373050a",
"serverId": "4a232e18-c5a9-48ee-94c0-e04e7cca6543"
},
{
"unresolved": true,
"key": {
"uuid": "0f3a72aa_a8f4260a",
"filename": "specs/2023.1/approved/update-userdata.rst",
"patchSetId": 6
},
"lineNbr": 61,
"author": {
"id": 11604
},
"writtenOn": "2023-01-02T05:47:47Z",
"side": 1,
"message": "at the compute agent perhaps but the work flow at the api should be identical.",
"revId": "efe79ed0625ce0f01d6dd0d1d1c112297373050a",
"serverId": "4a232e18-c5a9-48ee-94c0-e04e7cca6543"
},
{
"unresolved": true,
"key": {
"uuid": "0ab082be_78db82c6",
"filename": "specs/2023.1/approved/update-userdata.rst",
"patchSetId": 6
},
"lineNbr": 61,
"author": {
"id": 33634
},
"writtenOn": "2023-01-09T15:22:42Z",
"side": 1,
"message": "Yep, will update the wording here.",
"parentUuid": "0f3a72aa_a8f4260a",
"revId": "efe79ed0625ce0f01d6dd0d1d1c112297373050a",
"serverId": "4a232e18-c5a9-48ee-94c0-e04e7cca6543"
},
{
"unresolved": true,
"key": {
"uuid": "67977a08_fee5ff7a",
"filename": "specs/2023.1/approved/update-userdata.rst",
"patchSetId": 6
},
"lineNbr": 66,
"author": {
"id": 11604
},
"writtenOn": "2023-01-02T05:47:47Z",
"side": 1,
"message": "im not sure if we need this complexity.\n\nhow are you going to know if the instnqace is using config drive.\n\nare you going to check the instance system metadata to see if the vm is booted with config drive.\n\nthis is an option but it would requrie two code path to update the userdata.\n\nits proably better to just alway make the rpc call passing the user-data\nbut have the compute agent skip regenerating the config drive if the vms does not have a config drive.",
"revId": "efe79ed0625ce0f01d6dd0d1d1c112297373050a",
"serverId": "4a232e18-c5a9-48ee-94c0-e04e7cca6543"
},
{
"unresolved": false,
"key": {
"uuid": "33acc66d_89d4a7a7",
"filename": "specs/2023.1/approved/update-userdata.rst",
"patchSetId": 6
},
"lineNbr": 66,
"author": {
"id": 33634
},
"writtenOn": "2023-01-09T15:22:42Z",
"side": 1,
"message": "Yes, the thought was to query the instance\u0027s system metadata to check whether it uses a config drive, like in the initial implementation.\n\nComplexity-wise, I agree that it would be cleaner to just have one code path for that. If there are no implications with calling the RPC method every time (e. g. performance, dependence on RPC / updated agents), and no one else disagrees, I guess I can update the spec accordingly.",
"parentUuid": "67977a08_fee5ff7a",
"revId": "efe79ed0625ce0f01d6dd0d1d1c112297373050a",
"serverId": "4a232e18-c5a9-48ee-94c0-e04e7cca6543"
},
{
"unresolved": false,
"key": {
"uuid": "d97c339a_4ec59c04",
"filename": "specs/2023.1/approved/update-userdata.rst",
"patchSetId": 6
},
"lineNbr": 66,
"author": {
"id": 4393
},
"writtenOn": "2023-01-09T22:16:13Z",
"side": 1,
"message": "I agree that it\u0027s more work to do the configdrive thing. It seems nice from the user\u0027s perspective, but I\u0027m not sure it\u0027s really worth it. Instances using config drives could also be looking at the metadata server, and it would be unfortunate to force those to go through a reboot cycle just to make the API call work.",
"parentUuid": "33acc66d_89d4a7a7",
"revId": "efe79ed0625ce0f01d6dd0d1d1c112297373050a",
"serverId": "4a232e18-c5a9-48ee-94c0-e04e7cca6543"
},
{
"unresolved": true,
"key": {
"uuid": "ce9be335_dfded81e",
"filename": "specs/2023.1/approved/update-userdata.rst",
"patchSetId": 6
},
"lineNbr": 93,
"author": {
"id": 4393
},
"writtenOn": "2023-01-09T22:16:13Z",
"side": 1,
"message": "I think what I\u0027d prefer is to just set a flag that the compute manager will check on next boot. Like, we could shove a flag into system_metadata that says \"config drive is dirty\". Then whenever we go to start an instance, we check that flag and regenerate like we normally would on hard reboot, initial boot, or whatever.\n\nDoing it in the context of an RPC call means it needs to complete within the timeout of an http (and rpc) call. Also, unless you provide a lock to prevent it from happening more than once, you could be adding a sort of DoS attack vector for the user to take against the compute.\n\nSetting the flag and letting it be regenerated on next boot puts like activities together.",
"revId": "efe79ed0625ce0f01d6dd0d1d1c112297373050a",
"serverId": "4a232e18-c5a9-48ee-94c0-e04e7cca6543"
},
{
"unresolved": true,
"key": {
"uuid": "8d52e24c_c23a58e0",
"filename": "specs/2023.1/approved/update-userdata.rst",
"patchSetId": 6
},
"lineNbr": 93,
"author": {
"id": 33634
},
"writtenOn": "2023-01-23T16:16:13Z",
"side": 1,
"message": "Correct me if I\u0027m wrong, but wasn\u0027t that your proposal to use RPCs in the first place (https://review.opendev.org/c/openstack/nova-specs/+/863884/1..6/specs/2023.1/approved/update-userdata.rst#100)? Could you further elaborate on this?\n\nBy \"regenerate like we normally would\" you mean calling already existing methods? So we would not need to create a new RPC method just for config drive regen?",
"parentUuid": "ce9be335_dfded81e",
"revId": "efe79ed0625ce0f01d6dd0d1d1c112297373050a",
"serverId": "4a232e18-c5a9-48ee-94c0-e04e7cca6543"
},
{
"unresolved": true,
"key": {
"uuid": "4a4e84ad_7cec60c2",
"filename": "specs/2023.1/approved/update-userdata.rst",
"patchSetId": 6
},
"lineNbr": 93,
"author": {
"id": 4393
},
"writtenOn": "2023-01-23T18:05:15Z",
"side": 1,
"message": "What I said is that if we\u0027re going to do it as part of the reboot, it needs to be a flag to the RPC call so that we\u0027re not subverting the RPC interface\u0027s versioning scheme, effectively making a call we expect to have certain characteristics not represented by the version we asked for, not knowing if the other side is actually new enough or not. That\u0027s the thing I was concerned about there.\n\nMaking a *new* synchronous RPC call that must do substantial IO on the destination machine within the timeout of an HTTP request (and the MQ timeout for that matter) is also problematic for other reasons, as described above.\n\nIf you use a flag on the instance, which has the implication on \"next boot\" then we do not need to make the synchronous call. The flag is also not specifically related to any RPC call that fails to cover it in the versioning scheme, so we\u0027re not subverting that. Finally, you can check the service version of the host the instance is on (and yes, bump it for this feature) to make sure that it will honor it at next boot before you tell the HTTP requester that \"yes, I will schedule that to be done.\"",
"parentUuid": "8d52e24c_c23a58e0",
"revId": "efe79ed0625ce0f01d6dd0d1d1c112297373050a",
"serverId": "4a232e18-c5a9-48ee-94c0-e04e7cca6543"
},
{
"unresolved": true,
"key": {
"uuid": "b3e3d5ef_2f54c2fb",
"filename": "specs/2023.1/approved/update-userdata.rst",
"patchSetId": 6
},
"lineNbr": 93,
"author": {
"id": 33634
},
"writtenOn": "2023-02-07T18:00:05Z",
"side": 1,
"message": "Ok, that makes sense.\n\nSo the proposal would be to still use a flag (like config_drive_dirty in the initial implementation) and additionally check during the HTTP call if the RPC version of the compute host that instance is currently living on is new enough. We would still require instances to be stopped. Whenever the instance is started again, compute driver checks the flag and regenerates config drive if True. To my understanding, this approach would basically replace the last step in the spec\u0027s current version.\n\nSince this is yet another approach that differs from what we agreed on after Zed\u0027s feature freeze, I would be happy to hear other opinions on this @Sean @gibi @Sylvain.",
"parentUuid": "4a4e84ad_7cec60c2",
"revId": "efe79ed0625ce0f01d6dd0d1d1c112297373050a",
"serverId": "4a232e18-c5a9-48ee-94c0-e04e7cca6543"
},
{
"unresolved": true,
"key": {
"uuid": "90fe4173_4be24840",
"filename": "specs/2023.1/approved/update-userdata.rst",
"patchSetId": 6
},
"lineNbr": 93,
"author": {
"id": 11604
},
"writtenOn": "2024-04-07T05:29:55Z",
"side": 1,
"message": "@dan didnt you ahve an issue with use updating the",
"parentUuid": "b3e3d5ef_2f54c2fb",
"revId": "efe79ed0625ce0f01d6dd0d1d1c112297373050a",
"serverId": "4a232e18-c5a9-48ee-94c0-e04e7cca6543"
}
]
}
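
A minimal sketch of the flag-based flow the reviewers converge on above: the API call persists the new user data, marks the config drive dirty in system_metadata, and checks the compute service version, while the compute side regenerates the drive on the next start. Every name below (config_drive_dirty, MIN_SERVICE_VERSION, host_service_version, and so on) is a hypothetical stand-in, not an actual Nova API.

# Hypothetical sketch of the flag-based approach discussed above; none of
# these names correspond to real Nova code.
from dataclasses import dataclass, field

MIN_SERVICE_VERSION = 65  # assumed minimum compute service version that honours the flag


@dataclass
class Instance:
    host: str
    user_data: str = ""
    system_metadata: dict = field(default_factory=dict)


def host_service_version(host: str) -> int:
    # Stand-in for looking up the nova-compute service version of the host.
    return 66


def update_user_data(instance: Instance, new_user_data: str) -> None:
    """API side: persist the new user data and flag the config drive as dirty."""
    if host_service_version(instance.host) < MIN_SERVICE_VERSION:
        # The compute host would not honour the flag on next boot, so reject
        # the request rather than silently leave a stale config drive behind.
        raise RuntimeError("compute host too old to regenerate the config drive")
    instance.user_data = new_user_data
    # No synchronous RPC call: just record that a rebuild is needed.
    instance.system_metadata["config_drive_dirty"] = "True"


def start_instance(instance: Instance) -> None:
    """Compute side: rebuild the config drive on the next start if flagged."""
    if instance.system_metadata.pop("config_drive_dirty", None):
        # Reuse the same regeneration path as initial boot or hard reboot.
        print(f"regenerating config drive for instance on {instance.host}")
    print("powering on instance")


if __name__ == "__main__":
    inst = Instance(host="compute-1")
    update_user_data(inst, "#cloud-config\npackages: [htop]")
    start_instance(inst)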