{
"comments": [
{
"unresolved": false,
"key": {
"uuid": "f2ec6240_8b2b3422",
"filename": "/PATCHSET_LEVEL",
"patchSetId": 1
},
"lineNbr": 0,
"author": {
"id": 34860
},
"writtenOn": "2024-01-23T14:53:33Z",
"side": 1,
"message": "I need to look again to complete.",
"revId": "8487cb0bd57caf9f881457b38a5e77a6e88733cd",
"serverId": "4a232e18-c5a9-48ee-94c0-e04e7cca6543"
},
{
"unresolved": false,
"key": {
"uuid": "a5549c6e_346d9fb9",
"filename": "/PATCHSET_LEVEL",
"patchSetId": 1
},
"lineNbr": 0,
"author": {
"id": 11604
},
"writtenOn": "2024-01-24T18:42:44Z",
"side": 1,
"message": "this spec is primary intended to serve as a discussion point.\n\nwe likely wont merge this as is or if we do ill refactor it to capture the use-cases and requirement without the proposed implementation \n\nfeedback is welcome and i can adjust the proposed changes section fi we want to develop this further but this proposal is not intended to preempt the discussion of \n\n\"is this something that is in the scope of nova to do as a project\"\n\ni just want to set expectations for reviewers that if we were to accept this we would need a separate spec for a specific release with details of how to do this.\n\ni shoudl also note that i was proposing this against backlog as i intentionally don\u0027t believe this shoudl be done in the Dalmation 2024.2 cycle.\n\nwe have alot of other work in flight in nova and this in my view would be an E (2025.1) or later cycle feature",
"revId": "8487cb0bd57caf9f881457b38a5e77a6e88733cd",
"serverId": "4a232e18-c5a9-48ee-94c0-e04e7cca6543"
},
{
"unresolved": true,
"key": {
"uuid": "a05c6d2e_cc6caa07",
"filename": "specs/backlog/nova-dynamic-cpus.rst",
"patchSetId": 1
},
"lineNbr": 22,
"author": {
"id": 34860
},
"writtenOn": "2024-01-23T14:53:33Z",
"side": 1,
"message": "resources",
"range": {
"startLine": 22,
"startChar": 41,
"endLine": 22,
"endChar": 49
},
"revId": "8487cb0bd57caf9f881457b38a5e77a6e88733cd",
"serverId": "4a232e18-c5a9-48ee-94c0-e04e7cca6543"
},
{
"unresolved": true,
"key": {
"uuid": "68584b6c_ec5fabc9",
"filename": "specs/backlog/nova-dynamic-cpus.rst",
"patchSetId": 1
},
"lineNbr": 22,
"author": {
"id": 8864
},
"writtenOn": "2024-01-24T16:32:54Z",
"side": 1,
"message": "divide",
"range": {
"startLine": 22,
"startChar": 26,
"endLine": 22,
"endChar": 32
},
"revId": "8487cb0bd57caf9f881457b38a5e77a6e88733cd",
"serverId": "4a232e18-c5a9-48ee-94c0-e04e7cca6543"
},
{
"unresolved": true,
"key": {
"uuid": "75c1c5b4_19d7127c",
"filename": "specs/backlog/nova-dynamic-cpus.rst",
"patchSetId": 1
},
"lineNbr": 25,
"author": {
"id": 34860
},
"writtenOn": "2024-01-23T14:53:33Z",
"side": 1,
"message": ". Shared",
"range": {
"startLine": 25,
"startChar": 39,
"endLine": 25,
"endChar": 48
},
"revId": "8487cb0bd57caf9f881457b38a5e77a6e88733cd",
"serverId": "4a232e18-c5a9-48ee-94c0-e04e7cca6543"
},
{
"unresolved": false,
"key": {
"uuid": "24c73649_32e70477",
"filename": "specs/backlog/nova-dynamic-cpus.rst",
"patchSetId": 1
},
"lineNbr": 25,
"author": {
"id": 34860
},
"writtenOn": "2024-01-23T14:53:33Z",
"side": 1,
"message": "this is confgured in nova.conf and not as flavor property.\n\n`[compute]\ncpu_dedicated_set\u003d2-17\ncpu_shared_set\u003d18-47`",
"range": {
"startLine": 25,
"startChar": 74,
"endLine": 25,
"endChar": 88
},
"revId": "8487cb0bd57caf9f881457b38a5e77a6e88733cd",
"serverId": "4a232e18-c5a9-48ee-94c0-e04e7cca6543"
},
{
"unresolved": true,
"key": {
"uuid": "eff59a98_97fa637d",
"filename": "specs/backlog/nova-dynamic-cpus.rst",
"patchSetId": 1
},
"lineNbr": 27,
"author": {
"id": 34860
},
"writtenOn": "2024-01-23T14:53:33Z",
"side": 1,
"message": "While",
"range": {
"startLine": 27,
"startChar": 0,
"endLine": 27,
"endChar": 5
},
"revId": "8487cb0bd57caf9f881457b38a5e77a6e88733cd",
"serverId": "4a232e18-c5a9-48ee-94c0-e04e7cca6543"
},
{
"unresolved": true,
"key": {
"uuid": "4aae7c7e_876aa8a3",
"filename": "specs/backlog/nova-dynamic-cpus.rst",
"patchSetId": 1
},
"lineNbr": 28,
"author": {
"id": 34860
},
"writtenOn": "2024-01-23T14:53:33Z",
"side": 1,
"message": "can we put a new line here, to separate it from above para.",
"range": {
"startLine": 28,
"startChar": 30,
"endLine": 28,
"endChar": 41
},
"revId": "8487cb0bd57caf9f881457b38a5e77a6e88733cd",
"serverId": "4a232e18-c5a9-48ee-94c0-e04e7cca6543"
},
{
"unresolved": true,
"key": {
"uuid": "ce995e8e_e5c4cdbf",
"filename": "specs/backlog/nova-dynamic-cpus.rst",
"patchSetId": 1
},
"lineNbr": 37,
"author": {
"id": 34860
},
"writtenOn": "2024-01-23T14:53:33Z",
"side": 1,
"message": "but, from user perspective, the dedicated cpus are might be exclusive ones for vm1 and user may not want to share them with others (for any reason) even they are idle.",
"revId": "8487cb0bd57caf9f881457b38a5e77a6e88733cd",
"serverId": "4a232e18-c5a9-48ee-94c0-e04e7cca6543"
},
{
"unresolved": false,
"key": {
"uuid": "bdf2ab8d_2c28ed46",
"filename": "specs/backlog/nova-dynamic-cpus.rst",
"patchSetId": 1
},
"lineNbr": 37,
"author": {
"id": 34860
},
"writtenOn": "2024-01-23T14:53:33Z",
"side": 1,
"message": "from given example:\n\ncompute host has 8 cpus, \n1 instance i.e vm1 has 4 dedicated cpus.\nother 4 instances has shared cpus 4 cpus.\n\nnow, in case when vm1 is idle (or not using its 4 dedicated cpus fully), other instances can not use them either.\n\nand this is problem, we have more resource but we may not able to use them when required.",
"revId": "8487cb0bd57caf9f881457b38a5e77a6e88733cd",
"serverId": "4a232e18-c5a9-48ee-94c0-e04e7cca6543"
},
{
"unresolved": true,
"key": {
"uuid": "f336163f_5fd42b01",
"filename": "specs/backlog/nova-dynamic-cpus.rst",
"patchSetId": 1
},
"lineNbr": 45,
"author": {
"id": 34860
},
"writtenOn": "2024-01-23T14:53:33Z",
"side": 1,
"message": "I",
"range": {
"startLine": 45,
"startChar": 15,
"endLine": 45,
"endChar": 17
},
"revId": "8487cb0bd57caf9f881457b38a5e77a6e88733cd",
"serverId": "4a232e18-c5a9-48ee-94c0-e04e7cca6543"
},
{
"unresolved": true,
"key": {
"uuid": "bda6b583_beae311d",
"filename": "specs/backlog/nova-dynamic-cpus.rst",
"patchSetId": 1
},
"lineNbr": 51,
"author": {
"id": 34860
},
"writtenOn": "2024-01-23T14:53:33Z",
"side": 1,
"message": "I",
"range": {
"startLine": 51,
"startChar": 51,
"endLine": 51,
"endChar": 52
},
"revId": "8487cb0bd57caf9f881457b38a5e77a6e88733cd",
"serverId": "4a232e18-c5a9-48ee-94c0-e04e7cca6543"
},
{
"unresolved": false,
"key": {
"uuid": "a3c46911_cdd97589",
"filename": "specs/backlog/nova-dynamic-cpus.rst",
"patchSetId": 1
},
"lineNbr": 62,
"author": {
"id": 34860
},
"writtenOn": "2024-01-23T14:53:33Z",
"side": 1,
"message": "okay, this answers my above question/doubt.\nconfigure those hosts only, where we can have non exclusive vms, which is mostly.",
"range": {
"startLine": 62,
"startChar": 66,
"endLine": 62,
"endChar": 81
},
"revId": "8487cb0bd57caf9f881457b38a5e77a6e88733cd",
"serverId": "4a232e18-c5a9-48ee-94c0-e04e7cca6543"
},
{
"unresolved": true,
"key": {
"uuid": "08674c67_12999c23",
"filename": "specs/backlog/nova-dynamic-cpus.rst",
"patchSetId": 1
},
"lineNbr": 62,
"author": {
"id": 34860
},
"writtenOn": "2024-01-23T14:53:33Z",
"side": 1,
"message": "will cpu_dedicated_set be subset of cpu_dynamic_set, how operator will mention in nova.conf ?",
"range": {
"startLine": 62,
"startChar": 66,
"endLine": 62,
"endChar": 81
},
"revId": "8487cb0bd57caf9f881457b38a5e77a6e88733cd",
"serverId": "4a232e18-c5a9-48ee-94c0-e04e7cca6543"
},
{
"unresolved": true,
"key": {
"uuid": "63942788_1703c9c5",
"filename": "specs/backlog/nova-dynamic-cpus.rst",
"patchSetId": 1
},
"lineNbr": 63,
"author": {
"id": 34860
},
"writtenOn": "2024-01-23T14:53:33Z",
"side": 1,
"message": "coexistence",
"range": {
"startLine": 63,
"startChar": 60,
"endLine": 63,
"endChar": 70
},
"revId": "8487cb0bd57caf9f881457b38a5e77a6e88733cd",
"serverId": "4a232e18-c5a9-48ee-94c0-e04e7cca6543"
},
{
"unresolved": true,
"key": {
"uuid": "31f2315e_65e2b8d0",
"filename": "specs/backlog/nova-dynamic-cpus.rst",
"patchSetId": 1
},
"lineNbr": 64,
"author": {
"id": 34860
},
"writtenOn": "2024-01-23T14:53:33Z",
"side": 1,
"message": "coexistence",
"revId": "8487cb0bd57caf9f881457b38a5e77a6e88733cd",
"serverId": "4a232e18-c5a9-48ee-94c0-e04e7cca6543"
},
{
"unresolved": true,
"key": {
"uuid": "dea31614_0539c957",
"filename": "specs/backlog/nova-dynamic-cpus.rst",
"patchSetId": 1
},
"lineNbr": 69,
"author": {
"id": 34860
},
"writtenOn": "2024-01-23T14:53:33Z",
"side": 1,
"message": "capacity",
"range": {
"startLine": 69,
"startChar": 2,
"endLine": 69,
"endChar": 9
},
"revId": "8487cb0bd57caf9f881457b38a5e77a6e88733cd",
"serverId": "4a232e18-c5a9-48ee-94c0-e04e7cca6543"
},
{
"unresolved": true,
"key": {
"uuid": "d3a4f5db_27ac96bc",
"filename": "specs/backlog/nova-dynamic-cpus.rst",
"patchSetId": 1
},
"lineNbr": 172,
"author": {
"id": 9708
},
"writtenOn": "2024-01-23T17:09:16Z",
"side": 1,
"message": "This means that with cpu_overhead_set some of the hypervisor cpus needs to be dedicated to the overhead and therefore cannot be used for vcpus. This will decreased the vcpu capacity of the hypervisor at least from scheduling perspective. I\u0027m not sure deployers will be happy with this capacity reduction.",
"revId": "8487cb0bd57caf9f881457b38a5e77a6e88733cd",
"serverId": "4a232e18-c5a9-48ee-94c0-e04e7cca6543"
},
{
"unresolved": true,
"key": {
"uuid": "4da83910_6d28effc",
"filename": "specs/backlog/nova-dynamic-cpus.rst",
"patchSetId": 1
},
"lineNbr": 172,
"author": {
"id": 11604
},
"writtenOn": "2024-01-24T16:16:58Z",
"side": 1,
"message": "i will reword this\n\nthe specific thing i want to prevent is overlap bettween cpu_overhead_set\nand cpu_dedicated_set or cpu_dynmaic_set\n\noverlap with cpu_shared_set is actully fine since cpu_overhead_set shoudl default to cpu_shared_set if not defiend.\n\nwhat i would like to be able to express more simpley is i have a host which only has dedicated cpu and the emulatort thread shoudl not run on those cores\n\ntoday you have to use cpu_shared_set to express that and then reserve all cores in it.\n\nmy expecation is that cpu_overhead_set woudl actully just be the cores you reserve for the host os when you are using cpu_dynmaic_set so it shoudl not require extra capstity\n\ni just want a clean way to enable the emulator threads to be pined to cores outside of cpu_dynmaic_set when the instance requst cpu_policy\u003ddedicated with threads_policy\u003dshared.\n\nwe coudl get the same effect by supproting cpu_shared_set and cpu_dynmaic_set and ensuring they do not overlap. that could be supproted its just more complicated but that is a viald alternitive.",
"parentUuid": "d3a4f5db_27ac96bc",
"revId": "8487cb0bd57caf9f881457b38a5e77a6e88733cd",
"serverId": "4a232e18-c5a9-48ee-94c0-e04e7cca6543"
},
{
"unresolved": true,
"key": {
"uuid": "b4e587d2_4351a44a",
"filename": "specs/backlog/nova-dynamic-cpus.rst",
"patchSetId": 1
},
"lineNbr": 187,
"author": {
"id": 9708
},
"writtenOn": "2024-01-23T17:09:16Z",
"side": 1,
"message": "I think this is the main change. And there are dragons here. Any time we tried to represent the same physical resource as two, independently consumable logical resources in placement we ended up introducing bugs. I don\u0027t say we cannot do this correctly but I do say that this is a dangerous move.\n\nE.g. \n* we deprecated support for configuring both the PF and its VFs via device_spec.\n* we are fixing bugs where a PGPU can provide different but mutually exclusive VGPU types.",
"revId": "8487cb0bd57caf9f881457b38a5e77a6e88733cd",
"serverId": "4a232e18-c5a9-48ee-94c0-e04e7cca6543"
},
{
"unresolved": true,
"key": {
"uuid": "582c0eaa_fd2194e9",
"filename": "specs/backlog/nova-dynamic-cpus.rst",
"patchSetId": 1
},
"lineNbr": 261,
"author": {
"id": 9708
},
"writtenOn": "2024-01-23T17:09:16Z",
"side": 1,
"message": "While you try to limit the fragmentation of the pin-able cpu cores via packing floating cores such fragmentation will manifest eventually via the creation and deletion of VMs with different flavors.\n\nE.g.:\n3 physical cores, allocation ratio 2.0. pcore_id : available_vcore_count \u003d {0:2, 1:2, 2:2}\n1. boot a VM with 2 vcpus -\u003e mapped to 0, 1, {0:1, 1:1, 2:2}\n2. boot a VM with 1 vcpu -\u003e mapped to 0 (0 is full now), {0:0, 1:1, 2:2}\n3. boot a VM with 2 vcpus -\u003e mapped to 1, 2 (as 0 is full but 1 still have a vcpu), {0:0, 1:0, 2:1} \n4. delete the VM with the 1 vcpu allocation. {0:1, 1:0, 2:1}\nAt this point we have a total allocation of 4 vcpus but it is spread to all 3 pcores. So logically we have space for a pinned core, but due to fragmentation we don\u0027t have the ability to allocate it. \n\nI don\u0027t think there is a fragmentation free algorithm. So either we live with the fact of fragmentation, or do some dynamic re-packing of the floating vcpus.",
"revId": "8487cb0bd57caf9f881457b38a5e77a6e88733cd",
"serverId": "4a232e18-c5a9-48ee-94c0-e04e7cca6543"
},
{
"unresolved": true,
"key": {
"uuid": "10cca819_1a6f08b4",
"filename": "specs/backlog/nova-dynamic-cpus.rst",
"patchSetId": 1
},
"lineNbr": 261,
"author": {
"id": 11604
},
"writtenOn": "2024-01-24T16:16:58Z",
"side": 1,
"message": "you are correct that the packing behavior was an attempt to minimise fragmentation and also a correctness requireemtn to ensure that dedicated and shared cores dont overlap.\n\ni have some tought on how to adress fragmentation but i woudl prefer to do that as a followup.\n\nbascially there are two relitivly simple things we coudl do with pros and cons.\nfirst we coudl make hard reboot repin the cores of just that vm and compact them whne an instance is hard rebooted. that woudl invovle doing an RP update in palcement and a db update on hard reboot but that not terrible.\n\nsecond on a host reboot we can effectivly use teh defrag on hardreboot to ensure\nthat as each instnace is started again it will lazally or egerly defrag the allocations.\n\ni could add this as a note in the spec for future work or we could bring it into scope proper. there may be other aprpoches to this but this is the lest problematic in the context of action on one instance not impacting others.",
"parentUuid": "582c0eaa_fd2194e9",
"revId": "8487cb0bd57caf9f881457b38a5e77a6e88733cd",
"serverId": "4a232e18-c5a9-48ee-94c0-e04e7cca6543"
},
{
"unresolved": true,
"key": {
"uuid": "ecc28e0a_05b68e17",
"filename": "specs/backlog/nova-dynamic-cpus.rst",
"patchSetId": 1
},
"lineNbr": 319,
"author": {
"id": 4393
},
"writtenOn": "2024-01-24T15:50:34Z",
"side": 1,
"message": "How did this jump from 0 to 4 here?",
"revId": "8487cb0bd57caf9f881457b38a5e77a6e88733cd",
"serverId": "4a232e18-c5a9-48ee-94c0-e04e7cca6543"
},
{
"unresolved": true,
"key": {
"uuid": "49a55dfa_dd2af664",
"filename": "specs/backlog/nova-dynamic-cpus.rst",
"patchSetId": 1
},
"lineNbr": 319,
"author": {
"id": 11604
},
"writtenOn": "2024-01-24T16:16:58Z",
"side": 1,
"message": "we claimed on phsical core form the dynamic set for use as a dedicated core\n\nsince the allcoation ratio is 4.0 we need to increase by 4\n\n\"\"\"\ngiven share cpus allow over subscription\nwhen a host core is reserved for dedicated use, the vcpu reserved is increased by\n(1 * allocation_ratio) instead of 1.\n\"\"\"\n\ni noted this in the explaint after on line 359",
"parentUuid": "ecc28e0a_05b68e17",
"revId": "8487cb0bd57caf9f881457b38a5e77a6e88733cd",
"serverId": "4a232e18-c5a9-48ee-94c0-e04e7cca6543"
},
{
"unresolved": true,
"key": {
"uuid": "4b138557_1a3d4474",
"filename": "specs/backlog/nova-dynamic-cpus.rst",
"patchSetId": 1
},
"lineNbr": 445,
"author": {
"id": 9708
},
"writtenOn": "2024-01-23T17:09:16Z",
"side": 1,
"message": "This is probably not relevant but I noticed that in the proposed change if a host has available PCPUs then it will always have available VCPUs as a PCPU can be used as one or more VCPUs. However not vice versa. So there might be hosts with available VCPUs but no available PCPUs any more.",
"revId": "8487cb0bd57caf9f881457b38a5e77a6e88733cd",
"serverId": "4a232e18-c5a9-48ee-94c0-e04e7cca6543"
},
{
"unresolved": true,
"key": {
"uuid": "6eb7958f_9fd6c905",
"filename": "specs/backlog/nova-dynamic-cpus.rst",
"patchSetId": 1
},
"lineNbr": 445,
"author": {
"id": 11604
},
"writtenOn": "2024-01-24T16:16:58Z",
"side": 1,
"message": "yes its true thats a side effect of teh fact all cores not used as dedicated cpus currenlty can be used as shared cpus. \n\nin a future release i coudl see us supproting\n\ncpu_dedicated_set and cpu_dynmic_set provided they did not overlap to allow you to have a pool of dedeicated cpus and some that coudl be shared or dedicated.\n\nit may also make sense to supprot cpu_dynmic_set with cpu_shared_set or even all 3\nprovided they do not overlap with each other.\n\nfor simplictiy of reasoning i wanted to start without condiering how we could make those 3 sets fo cpu resouce interact but your observation is correct.",
"parentUuid": "4b138557_1a3d4474",
"revId": "8487cb0bd57caf9f881457b38a5e77a6e88733cd",
"serverId": "4a232e18-c5a9-48ee-94c0-e04e7cca6543"
},
{
"unresolved": true,
"key": {
"uuid": "580557b4_df01d708",
"filename": "specs/backlog/nova-dynamic-cpus.rst",
"patchSetId": 1
},
"lineNbr": 455,
"author": {
"id": 4393
},
"writtenOn": "2024-01-24T15:50:34Z",
"side": 1,
"message": "I think the ghost of jaypipes would say that this means you\u0027re using it wrong (i.e. because you\u0027re representing one set of hardware with twice the inventory). Personally I think any plan that goes down that road is a regression from where we\u0027ve been trying to get to with placement and our usage of it.",
"revId": "8487cb0bd57caf9f881457b38a5e77a6e88733cd",
"serverId": "4a232e18-c5a9-48ee-94c0-e04e7cca6543"
},
{
"unresolved": true,
"key": {
"uuid": "8c3bad9f_462624ad",
"filename": "specs/backlog/nova-dynamic-cpus.rst",
"patchSetId": 1
},
"lineNbr": 455,
"author": {
"id": 11604
},
"writtenOn": "2024-01-24T16:16:58Z",
"side": 1,
"message": "you are correct and he would be too however that is the main tension of this spec.\n\neither you can have static partioning of resouce on the host and in palcment or both can be dynmci\n\nif we go the dynmaic route and we do not have a new resocue class then there must be reporting of both.\n\nwe need to deicied if we are ok with that or not and that will inform if this feature can be supported in nova or not.",
"parentUuid": "580557b4_df01d708",
"revId": "8487cb0bd57caf9f881457b38a5e77a6e88733cd",
"serverId": "4a232e18-c5a9-48ee-94c0-e04e7cca6543"
},
{
"unresolved": true,
"key": {
"uuid": "68f5599a_86df508f",
"filename": "specs/backlog/nova-dynamic-cpus.rst",
"patchSetId": 1
},
"lineNbr": 465,
"author": {
"id": 9708
},
"writtenOn": "2024-01-23T17:09:16Z",
"side": 1,
"message": "I don\u0027t see a way that placement would return a wrong allocation candidate due to the proposed change as the inventory of both logical resources are correctly tracked via changing the reserved value.",
"revId": "8487cb0bd57caf9f881457b38a5e77a6e88733cd",
"serverId": "4a232e18-c5a9-48ee-94c0-e04e7cca6543"
},
{
"unresolved": true,
"key": {
"uuid": "026952c2_325f54dc",
"filename": "specs/backlog/nova-dynamic-cpus.rst",
"patchSetId": 1
},
"lineNbr": 465,
"author": {
"id": 4393
},
"writtenOn": "2024-01-24T15:50:34Z",
"side": 1,
"message": "Maybe I\u0027m missing something, but this seems like it\u0027s representing the same resources with double the inventory. Won\u0027t conductor/scheduler be over-allocating instances for the compute, especially if there\u0027s a flood of boot requests? Put another way, before the compute has a chance to twiddle the inventory to reflect reality after each boot, anything looking at only the placement inventory will not give an accurate picture.",
"parentUuid": "68f5599a_86df508f",
"revId": "8487cb0bd57caf9f881457b38a5e77a6e88733cd",
"serverId": "4a232e18-c5a9-48ee-94c0-e04e7cca6543"
},
{
"unresolved": true,
"key": {
"uuid": "0e7e25ad_eda9110d",
"filename": "specs/backlog/nova-dynamic-cpus.rst",
"patchSetId": 1
},
"lineNbr": 465,
"author": {
"id": 11604
},
"writtenOn": "2024-01-24T16:16:58Z",
"side": 1,
"message": "that woudl be no diffent then today. today the numa toplogy filter can accpet the host but it can be invlaid by the time the instance reaches it becasue in the intervening time another instance has booted on it.\n\nso yes but its not a regression form how it works today.\n\nthis race exist cor pCPUs and hugepages today.",
"parentUuid": "026952c2_325f54dc",
"revId": "8487cb0bd57caf9f881457b38a5e77a6e88733cd",
"serverId": "4a232e18-c5a9-48ee-94c0-e04e7cca6543"
},
{
"unresolved": true,
"key": {
"uuid": "254e47ea_25dcb9e7",
"filename": "specs/backlog/nova-dynamic-cpus.rst",
"patchSetId": 1
},
"lineNbr": 465,
"author": {
"id": 4393
},
"writtenOn": "2024-01-24T16:28:18Z",
"side": 1,
"message": "There\u0027s a race today, yes, but it\u0027s present because placement has *correctly* identified a host with capacity, but without an appropriate topology, right? Expanding the race to include not only the potential for a topology mis-fit, but also a fundamental oversubscription of a set of resources that can\u0027t actually be oversubscribed.",
"parentUuid": "0e7e25ad_eda9110d",
"revId": "8487cb0bd57caf9f881457b38a5e77a6e88733cd",
"serverId": "4a232e18-c5a9-48ee-94c0-e04e7cca6543"
},
{
"unresolved": true,
"key": {
"uuid": "43e628af_21e5f5f2",
"filename": "specs/backlog/nova-dynamic-cpus.rst",
"patchSetId": 1
},
"lineNbr": 477,
"author": {
"id": 4393
},
"writtenOn": "2024-01-24T15:50:34Z",
"side": 1,
"message": "I think this paragraph is really saying \"well, we can\u0027t be positive that an instance will fit, making it less accurate (or more lossy) isn\u0027t bad.\" I don\u0027t think I agree.",
"revId": "8487cb0bd57caf9f881457b38a5e77a6e88733cd",
"serverId": "4a232e18-c5a9-48ee-94c0-e04e7cca6543"
},
{
"unresolved": true,
"key": {
"uuid": "66712783_15d0671f",
"filename": "specs/backlog/nova-dynamic-cpus.rst",
"patchSetId": 1
},
"lineNbr": 477,
"author": {
"id": 11604
},
"writtenOn": "2024-01-24T16:16:58Z",
"side": 1,
"message": "not just that. im also calling out that today \"we can\u0027t be positive that an instance will fit\" and this is not makign that worse just different.",
"parentUuid": "43e628af_21e5f5f2",
"revId": "8487cb0bd57caf9f881457b38a5e77a6e88733cd",
"serverId": "4a232e18-c5a9-48ee-94c0-e04e7cca6543"
},
{
"unresolved": true,
"key": {
"uuid": "e363e7d7_a7c0eb0b",
"filename": "specs/backlog/nova-dynamic-cpus.rst",
"patchSetId": 1
},
"lineNbr": 477,
"author": {
"id": 4393
},
"writtenOn": "2024-01-24T16:28:18Z",
"side": 1,
"message": "Okay, I think it makes it worse ;)",
"parentUuid": "66712783_15d0671f",
"revId": "8487cb0bd57caf9f881457b38a5e77a6e88733cd",
"serverId": "4a232e18-c5a9-48ee-94c0-e04e7cca6543"
},
{
"unresolved": true,
"key": {
"uuid": "fd191732_8cc85566",
"filename": "specs/backlog/nova-dynamic-cpus.rst",
"patchSetId": 1
},
"lineNbr": 487,
"author": {
"id": 4393
},
"writtenOn": "2024-01-24T15:50:34Z",
"side": 1,
"message": "It also means the compute can get hammered by multiple scheduler/conductor processes asking \"does this fit?\" during a massive boot request storm.",
"revId": "8487cb0bd57caf9f881457b38a5e77a6e88733cd",
"serverId": "4a232e18-c5a9-48ee-94c0-e04e7cca6543"
},
{
"unresolved": true,
"key": {
"uuid": "56b6f216_e1180280",
"filename": "specs/backlog/nova-dynamic-cpus.rst",
"patchSetId": 1
},
"lineNbr": 487,
"author": {
"id": 11604
},
"writtenOn": "2024-01-24T16:16:58Z",
"side": 1,
"message": "yep again that is a probelm with the current implemation we have to day so this wont make that any worse",
"parentUuid": "fd191732_8cc85566",
"revId": "8487cb0bd57caf9f881457b38a5e77a6e88733cd",
"serverId": "4a232e18-c5a9-48ee-94c0-e04e7cca6543"
},
{
"unresolved": true,
"key": {
"uuid": "b41154b7_116bc906",
"filename": "specs/backlog/nova-dynamic-cpus.rst",
"patchSetId": 1
},
"lineNbr": 487,
"author": {
"id": 4393
},
"writtenOn": "2024-01-24T16:28:18Z",
"side": 1,
"message": "Perhaps I\u0027m forgetting something, but where does the scheduling operation in conductor call to the compute to see if something will fit before it has been assigned/allocated? My reading of this paragraph is that we would move the actual allocation step to the compute and have conductor call to compute to have _it_ do the allocation step synchronously before conductor considers an instance schedule complete.",
"parentUuid": "56b6f216_e1180280",
"revId": "8487cb0bd57caf9f881457b38a5e77a6e88733cd",
"serverId": "4a232e18-c5a9-48ee-94c0-e04e7cca6543"
},
{
"unresolved": true,
"key": {
"uuid": "16428a35_fe33b75d",
"filename": "specs/backlog/nova-dynamic-cpus.rst",
"patchSetId": 1
},
"lineNbr": 499,
"author": {
"id": 4393
},
"writtenOn": "2024-01-24T15:50:34Z",
"side": 1,
"message": "I think this approach would require fully exposing all the compute details so that a conductor can make that decision right?",
"revId": "8487cb0bd57caf9f881457b38a5e77a6e88733cd",
"serverId": "4a232e18-c5a9-48ee-94c0-e04e7cca6543"
},
{
"unresolved": true,
"key": {
"uuid": "e4cb2608_a89154d3",
"filename": "specs/backlog/nova-dynamic-cpus.rst",
"patchSetId": 1
},
"lineNbr": 499,
"author": {
"id": 11604
},
"writtenOn": "2024-01-24T16:16:58Z",
"side": 1,
"message": "we have this code extracted into the hardware.py module and its already run in the schduler via the numa toplogy filter and on teh comptue so we have all the info we need in the cell db curerntly.\n\nso i think we have the abstraction in place to do this in the context of cpus hugepages, numa aware vswtichs and pci devices",
"parentUuid": "16428a35_fe33b75d",
"revId": "8487cb0bd57caf9f881457b38a5e77a6e88733cd",
"serverId": "4a232e18-c5a9-48ee-94c0-e04e7cca6543"
},
{
"unresolved": true,
"key": {
"uuid": "3a92441b_b1330e9a",
"filename": "specs/backlog/nova-dynamic-cpus.rst",
"patchSetId": 1
},
"lineNbr": 511,
"author": {
"id": 4393
},
"writtenOn": "2024-01-24T15:50:34Z",
"side": 1,
"message": "\"cpus\" ?",
"range": {
"startLine": 511,
"startChar": 71,
"endLine": 511,
"endChar": 80
},
"revId": "8487cb0bd57caf9f881457b38a5e77a6e88733cd",
"serverId": "4a232e18-c5a9-48ee-94c0-e04e7cca6543"
},
{
"unresolved": true,
"key": {
"uuid": "bf600b3e_a1d2b306",
"filename": "specs/backlog/nova-dynamic-cpus.rst",
"patchSetId": 1
},
"lineNbr": 511,
"author": {
"id": 11604
},
"writtenOn": "2024-01-24T16:16:58Z",
"side": 1,
"message": "yep that shoudl be cpus",
"parentUuid": "3a92441b_b1330e9a",
"range": {
"startLine": 511,
"startChar": 71,
"endLine": 511,
"endChar": 80
},
"revId": "8487cb0bd57caf9f881457b38a5e77a6e88733cd",
"serverId": "4a232e18-c5a9-48ee-94c0-e04e7cca6543"
},
{
"unresolved": true,
"key": {
"uuid": "9971e461_543ce7cf",
"filename": "specs/backlog/nova-dynamic-cpus.rst",
"patchSetId": 1
},
"lineNbr": 570,
"author": {
"id": 4393
},
"writtenOn": "2024-01-24T15:50:34Z",
"side": 1,
"message": "I\u0027m not sure how I feel about changing this. For a single-vendor cloud, Nova is just IaaS and multi-tenancy constraints like this are just \"limitations\" to the operator. I agree that we don\u0027t do this sort of thing today, and that doing in the future is a fundamental change in how nova behaves (both from the multitenancy concern _and_ the PoV that we \"don\u0027t do orchestration.\").",
"revId": "8487cb0bd57caf9f881457b38a5e77a6e88733cd",
"serverId": "4a232e18-c5a9-48ee-94c0-e04e7cca6543"
},
{
"unresolved": true,
"key": {
"uuid": "571b1904_b7580d35",
"filename": "specs/backlog/nova-dynamic-cpus.rst",
"patchSetId": 1
},
"lineNbr": 917,
"author": {
"id": 34860
},
"writtenOn": "2024-01-23T14:53:33Z",
"side": 1,
"message": "did you meant it for 2024.2 - D ?\n\nit says specs/backlog and I can\u0027t find it in here.\nhttps://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_16e/906205/1/check/openstack-tox-docs/16e6b35/docs/specs/backlog/index.html",
"range": {
"startLine": 917,
"startChar": 7,
"endLine": 917,
"endChar": 21
},
"revId": "8487cb0bd57caf9f881457b38a5e77a6e88733cd",
"serverId": "4a232e18-c5a9-48ee-94c0-e04e7cca6543"
}
]
}
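
For reference, a minimal nova.conf sketch of the host CPU partitioning debated in the comments above, assuming the proposed options would keep the same CPU-list syntax as the existing [compute] cpu_dedicated_set / cpu_shared_set options. cpu_dynamic_set and cpu_overhead_set are names used only in this spec discussion, not options that exist in Nova today, and the overlap rules between the sets are still unresolved in the thread; they are shown commented out.

    [compute]
    # Existing options: cores pinned 1:1 to guest vCPUs vs. cores that are
    # shared and may be oversubscribed (values quoted from the review comment).
    cpu_dedicated_set = 2-17
    cpu_shared_set = 18-47
    # Proposed in the spec under review (hypothetical): cores that could be
    # claimed as either dedicated or shared at boot time, and cores reserved
    # for host OS / emulator-thread overhead outside the dynamic set.
    # cpu_dynamic_set = 18-47
    # cpu_overhead_set = 0-1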