We copied some code/frameworks from Nova but forgot to add
the original file information; this adds it back.
Change-Id: I310d94fc334b6e72dc5eafa316f1b4842066ccbe
This adds a check before constructing the member_of parameter to
avoid passing an empty aggregates list to placement.
It also changes server group scheduling to only fetch aggregates
that have an affinity_zone set.
Change-Id: Ic282d5afcad5fa78fdb029e874c78ac10cd73153
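A minimal sketch of the guard described above, with sample data; the helper and parameter names are assumptions for illustration, not Mogan's actual code:

    def member_of_param(aggregate_uuids):
        """Return a placement ``member_of`` value, or None for no aggregates.

        An empty ``member_of=in:`` is not a valid placement query, so the
        caller must omit the parameter entirely when the list is empty.
        """
        if not aggregate_uuids:
            return None
        return 'in:' + ','.join(aggregate_uuids)

    # During server group scheduling, only aggregates that define an
    # affinity_zone are considered (sample data below).
    aggregates = [{'uuid': 'agg-1', 'affinity_zone': 'az1'},
                  {'uuid': 'agg-2', 'affinity_zone': None}]
    candidates = [a['uuid'] for a in aggregates if a['affinity_zone']]

    params = {'resources': 'CUSTOM_GOLD:1'}
    value = member_of_param(candidates)
    if value is not None:
        params['member_of'] = value
    print(params)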
Currently we support listing compute nodes and aggregate nodes
by name, but server show always returns the node UUID, which
makes admins have to go to Ironic to map between name and UUID,
so this changes it to return the node name instead.
Change-Id: Ia0e5122aa29c4de2e870c5b71c420de49530ba92
Closes-Bug: #1715036
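A hypothetical illustration of the lookup this enables; the mapping and function are invented for the example:

    # Resolve the Ironic node name once when building the server show
    # response, instead of exposing the raw node UUID.
    nodes_by_uuid = {'11111111-2222-3333-4444-555555555555': 'compute-node-1'}

    def node_for_display(node_uuid):
        # Fall back to the UUID when the name is unknown.
        return nodes_by_uuid.get(node_uuid, node_uuid)

    print(node_for_display('11111111-2222-3333-4444-555555555555'))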
This change allows the scheduler to select nodes with the server
group specified in schedule_hints.
Partially Implements: bp support-schedule-hints
Change-Id: Idecd5e5f470eee12de0088b2332874b43f9dae52
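A hedged example of what such a request might look like; the schedule_hints key layout is an assumption based on the commit text, not the exact Mogan API:

    # The scheduler pulls the server group out of schedule_hints and only
    # considers nodes that satisfy the group's policy.
    request = {
        'name': 'test-server',
        'schedule_hints': {'group': 'group-uuid-1'},  # sample value
    }
    group = request.get('schedule_hints', {}).get('group')
    print('scheduling against server group:', group)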
Extract the aggregates information from the flavor and pass it
to placement along with the resources filters.
Partially Implements: bp node-aggregate
Change-Id: I96eeb7ab6636fa241e7d82bbddad8d70791c1f43
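A rough sketch, assuming the flavor carries both a resources filter and an aggregates list; the dict layout is hypothetical:

    def placement_params(flavor):
        # Combine the flavor's resources filter with its aggregates, so a
        # single placement query restricts candidates on both.
        params = {'resources': flavor['resources']}
        aggregates = flavor.get('aggregates') or []
        if aggregates:
            params['member_of'] = 'in:' + ','.join(aggregates)
        return params

    print(placement_params({'resources': 'CUSTOM_GOLD:1',
                            'aggregates': ['agg-1', 'agg-2']}))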
This removes the aggregate from its resource providers in placement
when the aggregate itself is removed.
Partially Implements: bp node-aggregate
Change-Id: Ia80a92ea8e19650b7b53d4d771e848079039aa28
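An in-memory sketch of the cleanup this describes; StubPlacement and its methods are stand-ins, not the real placement client:

    class StubPlacement:
        """Stub standing in for placement; the method names are invented."""
        def __init__(self):
            self.rp_aggregates = {'rp-1': {'agg-1', 'agg-2'},
                                  'rp-2': {'agg-1'}}

        def get_provider_aggregates(self, rp_uuid):
            return set(self.rp_aggregates.get(rp_uuid, set()))

        def put_provider_aggregates(self, rp_uuid, aggregates):
            self.rp_aggregates[rp_uuid] = set(aggregates)

    def remove_aggregate(placement, aggregate_uuid, rp_uuids):
        # Detach the aggregate from every resource provider before the
        # aggregate record itself is deleted, so placement holds no
        # stale membership.
        for rp_uuid in rp_uuids:
            aggs = placement.get_provider_aggregates(rp_uuid)
            aggs.discard(aggregate_uuid)
            placement.put_provider_aggregates(rp_uuid, aggs)

    placement = StubPlacement()
    remove_aggregate(placement, 'agg-1', ['rp-1', 'rp-2'])
    print(placement.rp_aggregates)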
This change ensures that a resource provider's allocations are deleted
before the resource provider itself is deleted. It also avoids updating
the inventory of a rp if the rp (node) has already been consumed.
Change-Id: I507176834f8247d112c8e6d969656d1a4dded6d7
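A small sketch of the ordering constraint, using a stub client with invented method names:

    class StubPlacement:
        """Stub placement client; the method names are invented."""
        def __init__(self):
            self.allocations = {'rp-1': {'consumer-1': {'CUSTOM_GOLD': 1}}}
            self.providers = {'rp-1': {}}

        def delete_allocations(self, rp_uuid):
            self.allocations.pop(rp_uuid, None)

        def has_allocations(self, rp_uuid):
            return bool(self.allocations.get(rp_uuid))

        def delete_provider(self, rp_uuid):
            # The real placement rejects provider deletion while
            # allocations remain, hence the ordering below.
            assert not self.has_allocations(rp_uuid)
            self.providers.pop(rp_uuid, None)

    def delete_resource_provider(placement, rp_uuid):
        # Delete allocations first, then the provider itself.
        placement.delete_allocations(rp_uuid)
        placement.delete_provider(rp_uuid)

    def update_inventory(placement, rp_uuid, inventory):
        # Skip the update when the node is already consumed.
        if placement.has_allocations(rp_uuid):
            return False
        return True

    placement = StubPlacement()
    delete_resource_provider(placement, 'rp-1')
    print(placement.providers)  # {} -- the provider is gone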
* list nodes for the specified aggregates
* add node to an aggregate
* remove node from an aggregate
Change-Id: I78ce523c7ba26860e02dcde84bcb84a6cc0c60ee
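An in-memory sketch of the three operations above; this models the behavior only, not Mogan's actual API layer:

    class AggregateNodes:
        """Illustrative membership store for aggregate/node operations."""
        def __init__(self):
            self._members = {}  # aggregate uuid -> set of node names

        def list_nodes(self, aggregate_uuid):
            return sorted(self._members.get(aggregate_uuid, set()))

        def add_node(self, aggregate_uuid, node):
            self._members.setdefault(aggregate_uuid, set()).add(node)

        def remove_node(self, aggregate_uuid, node):
            self._members.get(aggregate_uuid, set()).discard(node)

    aggs = AggregateNodes()
    aggs.add_node('agg-1', 'node-1')
    aggs.add_node('agg-1', 'node-2')
    aggs.remove_node('agg-1', 'node-2')
    print(aggs.list_nodes('agg-1'))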
This adds support for listing node names from resource providers,
which will only be available to admins.
Partially Implements: bp node-aggregate
Change-Id: I414bf176302fc076288e6a6fbfd88a7090541622
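A hypothetical policy gate illustrating the admin-only restriction; the context dict and function are invented for the example:

    def list_node_names(context, resource_providers):
        # Node names are infrastructure details, so only admin
        # contexts may list them.
        if not context.get('is_admin'):
            raise PermissionError('admin context required')
        return sorted(rp['name'] for rp in resource_providers)

    print(list_node_names({'is_admin': True},
                          [{'name': 'node-1'}, {'name': 'node-2'}]))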
This changes the scheduler to use the placement API to filter nodes
by the resource class specified in the flavor, and cleans up the
filters and weighers in the scheduler.
Change-Id: I89ad443f553510da6daf289b83f3c30d9d546ace
Partially Implements: bp track-resources-using-placement
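A sketch of the flavor-to-placement mapping, assuming the CUSTOM_* naming convention commonly used for Ironic resource classes; the function is illustrative:

    def resource_class_to_placement_resources(resource_class):
        # Normalize an Ironic resource class into a placement
        # ``resources`` query for one unit of that class.
        normalized = resource_class.upper().replace('-', '_')
        return {'resources': 'CUSTOM_%s:1' % normalized}

    # With placement answering "which nodes can host this flavor?",
    # the old in-process filters and weighers become unnecessary.
    print(resource_class_to_placement_resources('baremetal-gold'))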
This change imports the placement service, reports node resources to
placement, and uses it as the sole node resource provider service for
Mogan's scheduler.
Co-Authored-By: Zhenguo Niu <Niu.ZGlinux@gmail.com>
Change-Id: I1cb91a10cfc8139687ad44167f33f45fd3c08e5c
Partially Implements: bp track-resources-using-placement
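A minimal sketch of the reporting step, assuming one resource provider per node carrying a single unit of its custom resource class; the client and its methods are stand-ins:

    class StubPlacement:
        """Stub placement client; the method names are invented."""
        def __init__(self):
            self.providers = {}

        def ensure_resource_provider(self, uuid, name):
            self.providers.setdefault(uuid, {'name': name, 'inventory': {}})

        def set_inventory(self, uuid, inventory):
            self.providers[uuid]['inventory'] = inventory

    def report_node(placement, node):
        # One resource provider per Ironic node, with a single unit of
        # the node's custom resource class: baremetal is all-or-nothing.
        placement.ensure_resource_provider(node['uuid'], node['name'])
        placement.set_inventory(node['uuid'], {
            'CUSTOM_' + node['resource_class'].upper(): {
                'total': 1, 'min_unit': 1, 'max_unit': 1,
            },
        })

    placement = StubPlacement()
    report_node(placement, {'uuid': 'uuid-1', 'name': 'node-1',
                            'resource_class': 'GOLD'})
    print(placement.providers)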
Currently, we use the node_type property to link nodes with flavors,
but the Ironic node's resource_class field is meant for exactly that,
so we can just use it instead of adding a new node property.
Change-Id: I535606ce05299037a7bd5aacdcd7869ca5f855c4
Closes-Bug: #1695819
Select all available nodes before starting to create servers. If there
are not enough nodes for the requested instances, abort the process.
Closes-Bug: #1691699
Change-Id: I041927ed056859f30829b5fdcd0f7b486832424a
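A tiny sketch of the all-or-nothing selection; the function name and error type are illustrative:

    def schedule_all_or_abort(available_nodes, num_instances):
        # Abort early rather than creating a partial set of servers.
        if len(available_nodes) < num_instances:
            raise RuntimeError('not enough nodes: need %d, have %d'
                               % (num_instances, len(available_nodes)))
        return available_nodes[:num_instances]

    print(schedule_all_or_abort(['node-1', 'node-2', 'node-3'], 2))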
This change adds a __repr__ method that records a summary of a node
during scheduling rather than the full object. It helps trace the
scheduling process for debugging.
Closes-Bug: #1691711
Change-Id: I3460ef2e5a129be7f08671d162ecf26cb2b7bbbb
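A hedged example of the kind of __repr__ this describes; the Node fields are invented for illustration:

    class Node:
        """Illustrative scheduling view of a node with a debug-friendly repr."""
        def __init__(self, uuid, name, resource_class):
            self.uuid = uuid
            self.name = name
            self.resource_class = resource_class

        def __repr__(self):
            # A compact summary reads far better in scheduler logs than
            # the default <Node object at 0x...> form.
            return ('Node(uuid=%s, name=%s, resource_class=%s)'
                    % (self.uuid, self.name, self.resource_class))

    print(Node('uuid-1', 'node-1', 'gold'))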
As we have already split the scheduler out of the engine service, the
retry information can no longer be saved there, so move the logic to
the engine to make it work again.
Change-Id: If0ddc339ea705889fc52a84ed190df136abca650
We have saved node ports information to the DB, so change to use it
instead of issuing an extra API request.
Change-Id: I5b9183a1afdcb1e9444a88dfa50e1c31bf54e805
For the multi-instance creation scenario, we need to schedule all
requested instances first and then run the flow for each separately,
so it is better to move this task out of the flow and leave the
OnFailureReschedule task in it.
Change-Id: If576003cd6a2db0dd90e2ee5cdf0b0cb00f6da63