devstack-plugin-ceph/8df7e358f8b76b905ca3dbca43e...

{
"comments": [
{
"unresolved": true,
"key": {
"uuid": "dc2dbfae_3fabcb9d",
"filename": ".zuul.yaml",
"patchSetId": 50
},
"lineNbr": 160,
"author": {
"id": 4393
},
"writtenOn": "2023-08-22T21:44:00Z",
"side": 1,
"message": "This should be voting, no? We have jobs that depend on this that were voting in the past and got broken because this isn\u0027t voting for the repo in which it is defined. We\u0027ve since make it nonvoting while we wait, but we\u0027d definitely like to see this go back.\n\nIs there something we\u0027re waiting for? If it\u0027s soak time, perhaps we could queue up a patch behind this to make it voting that we can merge soon? And if not, let\u0027s make it voting now :)",
"revId": "8df7e358f8b76b905ca3dbca43e9dba454d8f1ff",
"serverId": "4a232e18-c5a9-48ee-94c0-e04e7cca6543"
},
{
"unresolved": true,
"key": {
"uuid": "d1f4c1bd_7adcc795",
"filename": ".zuul.yaml",
"patchSetId": 50
},
"lineNbr": 160,
"author": {
"id": 16643
},
"writtenOn": "2023-08-22T22:02:59Z",
"side": 1,
"message": "Ack; thanks for sharing the history on this. I agree, if voting jobs depend on this, we should keep this voting. My objective with asking for it to be \"non-voting\" is to match what we had in this repo - but i do realize we probably switched it to non-voting to buy time to fix it in the first place :) \n\nI think we give this a bit of soaktime and then switch to voting. We could do this around Bobcat rc; wdyt?\n\nIn that time; i\u0027ll do a deprecation patch for the package based deploy that will flip this deployment mode on by default (\"CEPHADM_DEPLOY\" will default to True) and write an email to the ML.",
"parentUuid": "dc2dbfae_3fabcb9d",
"revId": "8df7e358f8b76b905ca3dbca43e9dba454d8f1ff",
"serverId": "4a232e18-c5a9-48ee-94c0-e04e7cca6543"
},
{
"unresolved": true,
"key": {
"uuid": "b55dda40_77c2e702",
"filename": ".zuul.yaml",
"patchSetId": 50
},
"lineNbr": 160,
"author": {
"id": 4393
},
"writtenOn": "2023-08-22T22:14:22Z",
"side": 1,
"message": "Yeah, we had a job based on this in nova which was voting, and it took us a while to figure out how we could be broken and the plugin repo was not... 😊\n\n\nCool, yep, if you\u0027ll do that and we can merge it around rc time, that sounds good to me, thanks!",
"parentUuid": "d1f4c1bd_7adcc795",
"revId": "8df7e358f8b76b905ca3dbca43e9dba454d8f1ff",
"serverId": "4a232e18-c5a9-48ee-94c0-e04e7cca6543"
},
{
"unresolved": false,
"key": {
"uuid": "295ebc3a_4e8032c3",
"filename": ".zuul.yaml",
"patchSetId": 50
},
"lineNbr": 160,
"author": {
"id": 16643
},
"writtenOn": "2023-09-07T20:16:35Z",
"side": 1,
"message": "Ashley and I spoke about this; we\u0027re quite close to RC1.. so we agreed to convert this to voting right away..",
"parentUuid": "b55dda40_77c2e702",
"revId": "8df7e358f8b76b905ca3dbca43e9dba454d8f1ff",
"serverId": "4a232e18-c5a9-48ee-94c0-e04e7cca6543"
},
{
"unresolved": false,
"key": {
"uuid": "08956e27_eb150f29",
"filename": "/PATCHSET_LEVEL",
"patchSetId": 50
},
"lineNbr": 0,
"author": {
"id": 16643
},
"writtenOn": "2023-08-11T17:00:36Z",
"side": 1,
"message": "recheck\n\nJob Timed out; a bunch of 404s in the logs.. lets see how the rerun fares",
"revId": "8df7e358f8b76b905ca3dbca43e9dba454d8f1ff",
"serverId": "4a232e18-c5a9-48ee-94c0-e04e7cca6543"
},
{
"unresolved": false,
"key": {
"uuid": "646b6651_af7ab382",
"filename": "/PATCHSET_LEVEL",
"patchSetId": 50
},
"lineNbr": 0,
"author": {
"id": 16643
},
"writtenOn": "2023-08-11T19:24:04Z",
"side": 1,
"message": "LGTM; thanks for the changes Ashley!",
"revId": "8df7e358f8b76b905ca3dbca43e9dba454d8f1ff",
"serverId": "4a232e18-c5a9-48ee-94c0-e04e7cca6543"
},
{
"unresolved": false,
"key": {
"uuid": "546740d5_cfead194",
"filename": "/PATCHSET_LEVEL",
"patchSetId": 50
},
"lineNbr": 0,
"author": {
"id": 4393
},
"writtenOn": "2023-08-22T21:44:00Z",
"side": 1,
"message": "Looks like this is working and that the subnode is connected to ceph as expected, so that\u0027s good. I see in the history that Goutham asked for the job to be non-voting, but I\u0027d like to know the reason why. If we can make it voting I think we should, and if not, let\u0027s please queue up that change so it\u0027s on the radar and we can queue up changes in other projects that depend on it.\n\nOne question about the get_or_create change that can probably be just fixed with a comment (or maybe removed altogether). Those two points aside, I\u0027m good, thanks.",
"revId": "8df7e358f8b76b905ca3dbca43e9dba454d8f1ff",
"serverId": "4a232e18-c5a9-48ee-94c0-e04e7cca6543"
},
{
"unresolved": true,
"key": {
"uuid": "8c984f41_5f4d645d",
"filename": "devstack/lib/cephadm",
"patchSetId": 50
},
"lineNbr": 292,
"author": {
"id": 4393
},
"writtenOn": "2023-08-22T21:44:00Z",
"side": 1,
"message": "Hmm, the only difference here is \"get_or_create\" vs \"get\" (other than the default perms). Is there some reason why we need to do this? If so, can you add a comment about why? This is a pretty (darn) thick compound command here so it\u0027d be good to know why we need to do this.\n\nPerhaps it\u0027s because the remote machine may reach this point first and it would create the key before the main machine would? Even still, I\u0027d expect `get_or_create` to work as it describes.",
"revId": "8df7e358f8b76b905ca3dbca43e9dba454d8f1ff",
"serverId": "4a232e18-c5a9-48ee-94c0-e04e7cca6543"
},
{
"unresolved": true,
"key": {
"uuid": "4246a6ff_deab3389",
"filename": "devstack/lib/cephadm",
"patchSetId": 50
},
"lineNbr": 292,
"author": {
"id": 25402
},
"writtenOn": "2023-08-24T07:57:33Z",
"side": 1,
"message": "Hi Dan/Asheley,\nI think in general, in a multinode context (but even when the Ceph cluster\nis external), the `create_keys` function should be run under a `REMOTE_CEPH` `if`\ncondition to reflect the following:\n\n1. `REMOTE_CEPH \u003d false`: I\u0027m deploying the Ceph cluster (and I actually call the\n`bootstrap` function), hence I should create the keys according to the enabled_services\n\n2. `REMOTE_CEPH \u003d True`: my cluster is already deployed elsewhere, ceph.conf and keyrings\nare supposed to be copied from the \"ansible layer\", I also get the FSID by running\n`cat $CEPH_CONFIG ...` on L37; I have all the information in place, and I think\nwe have no reason to regenerate keys with wrong info.\nFor similar reasons, by simply calling \"get\" we\u0027re trying to \"export\" in the\ntarget location keyring information that are already present in the node.\n\nUnless we only copy ceph.conf omitting keyrings (which I don\u0027t think so given we\u0027re\npassing the admin keyring to this function), L300 should be in a form like:\n\n```\nif [[ \"$REMOTE_CEPH\" \u003d \"False\" ]]; then\n add_pools\n create_keys\n```\nand this function restored to the original content.",
"parentUuid": "8c984f41_5f4d645d",
"revId": "8df7e358f8b76b905ca3dbca43e9dba454d8f1ff",
"serverId": "4a232e18-c5a9-48ee-94c0-e04e7cca6543"
},
{
"unresolved": true,
"key": {
"uuid": "a4486ecc_1f0b6794",
"filename": "devstack/lib/cephadm",
"patchSetId": 50
},
"lineNbr": 292,
"author": {
"id": 4393
},
"writtenOn": "2023-08-24T13:54:37Z",
"side": 1,
"message": "Yep, agree, it would be better to make it explicit which node does that work. I originally had written something similar but couldn\u0027t resolve why `get_or_create` wouldn\u0027t *technically* work. However, better to be explicit and deterministic, IMHO, unless there\u0027s some reason we can\u0027t.",
"parentUuid": "4246a6ff_deab3389",
"revId": "8df7e358f8b76b905ca3dbca43e9dba454d8f1ff",
"serverId": "4a232e18-c5a9-48ee-94c0-e04e7cca6543"
},
{
"unresolved": true,
"key": {
"uuid": "ef707e46_898bcbad",
"filename": "devstack/lib/cephadm",
"patchSetId": 50
},
"lineNbr": 292,
"author": {
"id": 16643
},
"writtenOn": "2023-09-07T18:03:04Z",
"side": 1,
"message": "ack; the get_or_create was failing because of \"missing\" OSD caps.. so the logic here was incorrect in a sense.. so i\u0027m okay with us dropping this and relying on the ansible layer to handle the conf and keyring copy in the CI jobs... on local devstacks, we should probably add a note that the /etc/ceph directory must contain the necessary config and keys..",
"parentUuid": "a4486ecc_1f0b6794",
"revId": "8df7e358f8b76b905ca3dbca43e9dba454d8f1ff",
"serverId": "4a232e18-c5a9-48ee-94c0-e04e7cca6543"
},
{
"unresolved": true,
"key": {
"uuid": "78e74e8f_763dba63",
"filename": "devstack/lib/cephadm",
"patchSetId": 50
},
"lineNbr": 292,
"author": {
"id": 16643
},
"writtenOn": "2023-09-07T18:03:04Z",
"side": 1,
"message": "ack;",
"parentUuid": "a4486ecc_1f0b6794",
"revId": "8df7e358f8b76b905ca3dbca43e9dba454d8f1ff",
"serverId": "4a232e18-c5a9-48ee-94c0-e04e7cca6543"
},
{
"unresolved": true,
"key": {
"uuid": "b5e5446c_cfcaec00",
"filename": "devstack/lib/cephadm",
"patchSetId": 50
},
"lineNbr": 654,
"author": {
"id": 25402
},
"writtenOn": "2023-08-24T07:57:33Z",
"side": 1,
"message": "```\nif [[ \"$REMOTE_CEPH\" \u003d \"False\" ]]; then\n add_pools\n create_keys\nfi\n```",
"revId": "8df7e358f8b76b905ca3dbca43e9dba454d8f1ff",
"serverId": "4a232e18-c5a9-48ee-94c0-e04e7cca6543"
}
]
}
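
A minimal bash sketch of the approach the reviewers converge on above, assuming the `REMOTE_CEPH` variable and the `add_pools`/`create_keys` helpers in devstack/lib/cephadm behave as described in the thread; the `client.glance` name, caps, and keyring path are hypothetical examples, not taken from the patch.

```
# Sketch only: the node that bootstraps the cluster (REMOTE_CEPH=False)
# creates pools and keys; a node pointing at a remote cluster relies on the
# ceph.conf and keyrings copied in by the ansible layer and does not
# regenerate them.
if [[ "$REMOTE_CEPH" = "False" ]]; then
    add_pools
    create_keys
fi

# The "get_or_create" vs "get" distinction discussed at devstack/lib/cephadm
# L292 (client name, caps, and output path are hypothetical examples):

# Creates the key with the given caps if it does not exist yet, otherwise
# returns the existing entry (and errors out if the existing caps differ):
ceph auth get-or-create client.glance \
    mon 'allow r' \
    osd 'allow rwx pool=images' \
    -o /etc/ceph/ceph.client.glance.keyring

# Only exports a key that already exists on the cluster:
ceph auth get client.glance -o /etc/ceph/ceph.client.glance.keyring
```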